You know what makes me cringe? When a professor complains about his students not paying attention in class “because they’re on their computers [dramatic pause] facebooking!”
My instinctive response is to ask:
Do you know they’re on Facebook and not working on an essay, checking their email, or watching sports? Don’t presume to know what your students are doing when they’re not entranced by your presentation.
And just why do you think that is, anyway? Why don’t they feel the need to be engaged with the concepts you’re lecturing about? Hint: it probably has something to do with the “*lecturing*” part.
Why do you believe laptops and smartphones in class are evil?
I don’t actually say these things, though. That would be bad for recruiting faculty into committing their time and energy to transforming their instructor-centered lectures into student-centered instruction.
Instead, I just grimace, shake my head a bit, and say, “…” Honestly, I don’t know what to say to spark the conversation that is the first step toward changing their misconceptions about computers and smartphones in the classroom.
I have a vision of what I’d like to see in university classes when it comes to technology:
I want every student so engaged with the material and actively constructing their own understanding that they have neither the time nor the desire to disengage to check their smartphones, or
I want to see everyone using their smartphones and laptops for learning: googling things, running simulations, writing a googledoc with the rest of the class, tweeting the expert in the field, finding a Pinterest collection,…
That’s a long way from a grimace and a head shake. What I need are the words, concepts and tools that can bring technology into education in an effective and efficient way.
Which is why I’m so excited about #etmooc. It’s a massive, open, online course (mooc) about educational technology and media, starting in January 2013. I’m interested in the content and tools we’ll be exploring. (Psst — and secretly, I’m interested in watching how the thing runs. If there’s anyone that can figure out how to make a mooc effective, it’s Alec Couros @courosa and the team he’s assembled.)
Each participant (there are over 1200 of us now) will be using their own blog to post reflections, opinions, and whatever else the course has in store for us. I’ll be tagging all my posts with etmooc so they’re easier to find.
As I’ve mentioned before, the folks at i>clicker lent me a set of the new i>clicker2 clickers. I had a chance to try them out this week when I filled in for an “Astro 101” instructor. I sure learned a lot in that 50 minutes!
Just to refresh your memory, the i>clicker2 (or “ic2” as it’s also called, which is great because the “>” in “i>clicker2” is messing up some of my HTML) unit has the usual A, B, C, D, E buttons for submitting answers to multiple-choice questions. These new clickers (and receiver and software) also allow for numeric answers and alphanumeric answers. That last feature is particularly interesting because it allows instructors to ask ranking or chronological questions. In the old days, like last week, you could display 5 objects, scenarios or events and ask the student to rank them. But you had to adapt the answers because you have only 5 choices. Something like this:
Rank these [somethings] I, II, III, IV and V from [one end] to [the other]:
A) I, II, V, III, IV
B) II, I, IV, III, V
C) IV, III, V, I, II
D) III, I, II, IV, V
E) V, II, I, III, IV
These are killer questions for the students. What are they supposed to do? Work out the ranking on the side and then check that their ranking is in your list? What if their ranking isn’t there? Or game the question and work through each of the choices you give and say “yes” or “no”? There is so much needed to get the answer right besides understanding the concept.
That’s what’s so great about the ic2 alphanumeric mode. I asked this question about how the objects in our Galaxy appear to be moving relative to us:
(Allow me a brief astronomy lesson. At this point in writing this post, I think it’ll be important later. Oh well, can’t hurt, right?)
The stars in our Galaxy orbit around the center. The Galaxy isn’t solid, though. Each star moves along its own path, at its own speed. At this point in the term [psst! we’re setting this up so the students will appreciate what the observed, flat rotation curve means: dark matter] there is a clear pattern: the farther the star is from the center of the Galaxy, the slower its orbital speed. That means stars closer to the center than us are moving faster and will “pass us on the inside lane.” When we observe them, they’re moving away from us. Similarly, we’re moving faster than objects farther from the center than we are, so we’re catching up to the ones ahead of us. Before we pass them, we observe them getting closer to us. That means the answer to my ranking question is EDCAB. Notice that location C is the same distance from the center of the Galaxy as us, so it’s moving at the same speed as us. Therefore, we’re not moving towards or away from C — it’s the location where we cross from approaching (blueshifted) to receding (redshifted).
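If you like, the inside-lane/outside-lane picture can be sketched as a toy calculation. Everything numeric below is hypothetical — the rotation-curve formula, the radii, the speeds are numbers I made up for illustration. The only feature that matters is that orbital speed falls with distance from the center:

```python
# Toy "race track" picture of differential galactic rotation.
# Each star circles the center in its own lane; in this pre-dark-matter
# setup, orbital speed falls with radius (a declining rotation curve).
def orbital_speed(r_kpc):
    return 300.0 / r_kpc ** 0.5   # hypothetical declining curve, km/s

r_sun = 8.0                        # our (assumed) distance from the center, kpc
v_sun = orbital_speed(r_sun)

for lane, r in [("inside lane", 5.0), ("our lane", 8.0), ("outside lane", 12.0)]:
    dv = orbital_speed(r) - v_sun  # speed relative to us
    if dv > 0:
        motion = "pulls away from us (redshifted)"
    elif dv < 0:
        motion = "we catch up to it (blueshifted)"
    else:
        motion = "keeps pace with us (no shift)"
    print(f"{lane:12s} r = {r:4.1f} kpc  dv = {dv:+6.1f} km/s  -> {motion}")
```

In a real galaxy the observed line-of-sight velocity also depends on the geometry of where you look, but this toy version captures the pattern the ranking question is after: faster inside lanes recede, slower outside lanes approach, and our own lane keeps pace.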
As usual, I displayed the question, gave the students time to think, and then opened the poll. Students submit a 5-character word like “ABCDE”. The ic2 receiver cycles through the top 3 answers so the instructor can see what the students are thinking without revealing the results to the students.
I saw that there was one popular answer with a couple of others, so I decided enough students got the question right that think-pair-share wouldn’t be necessary, and displayed the results:
In hindsight, I think I jumped the gun on that because, and here’s what I’ve been trying to get to in this post, I was unprepared to analyze the results of the poll. I did think far enough ahead to write down the correct answer, EDCAB, in big letters on my lesson plan. But what do the other answers tell us about the students’ grasp of the concept?
In a good multiple-choice question, you know why each correct choice is correct (yes, there can be more than one correct choice) and why each incorrect choice is incorrect. When a student selects an incorrect choice, you can diagnose which part of the concept they’ve missed. The agile instructor can get students to think-pair-share to reveal, and hopefully correct, their misunderstanding.
I’m sure that agility is possible with ranking tasks. But I hadn’t anticipated it. So I did the best I could on the fly and said something like,
Good, many of you recognized that the objects farther from the center are moving slower, so we’re moving toward them. And away from the stars closer to the center than us.
[It was at this moment I realized I had no idea what the other answers meant!]
Uh, I notice almost everyone put location C at the middle of the list – good. It’s at the same distance and same speed as us, so we’re not moving away from or towards C.
Oh, and ABCDE? You must have ranked them in the opposite order, not the way I clumsily suggested in the question. [Which, you might notice, is not true. Oops.]
[And the other 15% who entered something else? Sorry, folks…]
Uh, okay then, let’s move on…
What am I getting at here? First, these ranking tasks are awesome. Every answer is valid. None of that “I hope my answer is on the list…” And there’s no short-circuiting the answer by giving the students 5 choices, risking them gaming the answer by working backwards. I know there are lots of Astro 101 instructors already using ranking tasks, probably because of the great collection of tasks available at the University of Nebraska-Lincoln, but using them in class typically means distributing worksheets, possibly collecting them, perhaps asking one of those “old-fashioned” ranking task clicker questions. All that hassle is gone with ic2.
But it’s going to take re-training on the part of the instructor to be prepared for the results. In principle, there are 5! = 120 different 5-character words the students can enter. Now, of course, you don’t have to anticipate what each of the 119 incorrect answers means. But here are my recommendations:
Work out the ranking order ahead of time and write it down, in big letters, where you can see it. It might be easy to remember, “the right answer to this question is choice B” but it’s not easy to remember, “the correct ranking is EDCAB.”
Work out the ranking if the students rank in the opposite order. That could be because they misread the question or the question wasn’t clear. Or it could diagnose their misunderstanding. For example, if I’d asked them to rank the locations from “most-redshifted” to “most-blueshifted”, the opposite order could mean they’re mixing up red- and blue-shift.
Think about the common mistakes students make on this question and work out the rankings. And write those down, along with the corresponding mistakes.
Nothing like hindsight: set up the question so the answer isn’t just 1 swap away from ABCDE. If you had no idea what the answer was, wouldn’t you enter ABCDE?
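If you’re curious about those numbers, here’s a quick sanity check, a minimal sketch using only Python’s standard library. It confirms the 5! = 120 count and counts how many rankings sit just one swap away from the lazy default “ABCDE” (the helper name `one_swap_away` is mine, not anything from the clicker software):

```python
from itertools import permutations

letters = "ABCDE"
rankings = ["".join(p) for p in permutations(letters)]
print(len(rankings))  # -> 120 possible 5-letter rankings

def one_swap_away(word, base="ABCDE"):
    """True if `word` is `base` with exactly two letters exchanged."""
    diffs = [i for i, (a, b) in enumerate(zip(word, base)) if a != b]
    return (len(diffs) == 2
            and word[diffs[0]] == base[diffs[1]]
            and word[diffs[1]] == base[diffs[0]])

near_default = [w for w in rankings if one_swap_away(w)]
print(len(near_default))  # -> 10 rankings are a single swap from ABCDE
```

So 10 of the 120 rankings are a single swap from ABCDE; if your correct answer is one of them, a student who just nudges the default can land on it without ever working out the ranking — which is exactly why I’d now build the question so the answer isn’t in that neighborhood.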
I hope to try, and write about, some other types of questions with my collection of ic2 clickers. I’ve already tried a demo where students enter their predictions using the numeric mode. But that’s the subject for another post…
Do you use ranking tasks in your class, with ic2, paper, or something else? What advice can you offer that will help the instructor be more prepared and agile?
The other day, I participated in a webinar run by Stephanie Chasteen (@sciencegeekgirl on Twitter. If you don’t follow her, you should.) It was called “Teaching faculty about effective clicker use” and the goal was to help us plan and carry out meetings where we train faculty members to use peer instruction and clickers. Did you catch that subtle difference: it was not about how to use clickers (though Stephanie can teach you that, too.) Rather, this webinar was aimed at instructional support people tasked with training their colleagues to use peer instruction. It was a train-the-trainers webinar. And it was right up my alley because I’m learning to do that.
And if you think that’s getting meta-, just you wait…
In the midst of reminding us about peer instruction, Stephanie listed characteristics of effective professional development. She gave us the bold words; the interpretations are mine:
collaborative: it’s about sharing knowledge, experiences, ideas, expertise
active: we need to do something, not just sit and listen (or not!)
discipline-oriented: If we want to be able to share, we need some common background. I want to understand what you’re talking about. And I hope you give a damn about what I’m talking about. Coming from the same discipline, like physics or astronomy or biology, is a good start.
instructor-driven: I take this to mean “facilitated”. That is, there’s someone in charge who drives the activity forward.
respectful: So open to interpretation. Here’s my take: everyone in the room should have the opportunity to contribute. And not via the approach, “well if you’ve got something to say, speak up, dammit!” It takes self-confidence and familiarity and…Okay, it takes guts to interrupt a colleague or a conversation to interject your own opinion. Relying on people to do that does not respect their expertise or the time they’ve invested by coming to the meeting.
sustained over time: We’d never expect our students to learn concepts after one exposure to new material. That’s why we give pre-reading and lectures and peer instruction and homework and midterms and…So we shouldn’t expect instructors to transform their teaching styles after one session of training. It requires review and feedback and follow-up workshops and…
Alright, time to switch to another stream for a moment. They’ll cross in a paragraph or two.
I’ve got a big box of shiny new i>clicker2 clickers to try out. I’m pretty excited. I’m also pretty sure the first thing instructors will say is, “What’s with all the new buttons? I thought these things were supposed to be simple! Damn technology being shoved down our [grumble] [grumble] [grumble]” I want to be able to reply,
Yes, there are more buttons on the i>clicker2. But let me show you an amazing clicker question you can use in your [insert discipline here] classroom…
Can you feel the streams coming together? Just one more to add:
My CWSEI colleagues and I frequently meet with instructors and other faculty members. We dance a delicate dance between telling instructors what to do, drawing out their good and bad experiences, and getting them to discover for themselves what could work (psst: making them think they thought of it themselves). Their time is valuable, so when we meet, we need to get things done. We need to run short, effective episodes of professional development. It’s not easy. If only there were a way to practice…
A-ha! Our weekly meetings should be effective professional development led by one of us getting some practice at facilitating. The streams have crossed. I’ll run the next meeting following Stephanie’s advice, modeling Stephanie’s advice, to gather questions so I’ll be able to run an effective workshop on taking advantage of the new features of the i>clicker2. It’s a meta-meeting. Or a meta-meta-meeting?
It’s not like I made any of this up, or like I couldn’t have found it by talking with people whose job is professional development. Well, I guess I did kind of talk with Stephanie. But there’s a lot to be said for figuring it out for yourself. Or at least starting to figure it out for yourself, failing, and then recognizing and appreciating what the expert has to say.
And you’ve read enough for now. Watch for another post about how it went.