
Preparing for our peer instruction workshop

It’s Sunday morning. On Tuesday, I’ll be running an all-morning-and-maybe-into-the-afternoon workshop in my department, Physics and Astronomy, at UBC. My science education colleagues and I, all part of the Carl Wieman Science Education Initiative, are working hard to be proactive, rather than reactive, when it comes to transforming the way we (that is, my teaching colleagues, faculty, university, WTH go for it, post-secondary educators) teach science.

The workshop I’m running with my colleague Cynthia Heiner (@cynheiner on Twitter) is about effective peer instruction. Er, think-pair-share. No, clickers. Or…

That’s the first thing I thought carefully about before putting this workshop together (originally for the CWSEI end-of-year conference last April): the title.

Most of the world calls this learner-centered instructional technique think-pair-share (TPS): pose a multiple-choice question, get students to individually choose an answer, and then have them pair up to discuss with each other why they made those choices. Eric Mazur branded it, or at least popularized it, as peer instruction (PI). My university, like many others, runs these episodes using clickers. So, what to call this workshop? I made a choice and have diligently stuck with it:

Effective Peer Instruction using Clickers

i>clicker classroom response system

My colleagues are calling this a “clicker workshop” but I don’t want to give it that label. You see, about half of the 20 people who have registered are grad students. I’m thrilled! One way to transform science education is to train the next generation of instructors. And when they head off into the rest of the world after graduation, some will get academic jobs that include teaching. And some won’t have clickers: they’ll be forced to use – gasp! – coloured voting cards.

Many instructors use these coloured ABCD cards instead of clickers.

Like a lot of instructors do. Successfully. I don’t want these eager new faculty members to ever think, “Oh, I can do clickers but you guys don’t have them, so I guess I’ll just lecture.” So, this workshop is about effective peer instruction. Sure, it’s customized to using i>clickers to collect and assess the students’ votes, but the goal of the workshop is learning how to “choreograph” an episode of peer instruction so it maximizes student participation, engagement and learning.

To be honest, I’m pretty confident about the content of the workshop. I’ve spent a lot of time with, and talking to, Ed Prather and his team from the Center for Astronomy Education at the University of Arizona. And I consider myself fortunate to have regular conversations, 140 characters at a time, with @derekbruff, @RogerFreedman, @RobertTalbert, @jossives, @Patrick_M_Len, @etacar11, @astrocarrie and other tweeps using peer instruction and other learner-centered instructional strategies.

If there’s one aspect of the workshop, and peer instruction, that I don’t feel I have a good handle on, it’s clicker points. With i>clickers, the system records who voted, not just how many chose A, B, C, D or E, so it is simple to reward clicks with points that contribute to each student’s marks. There are lots of options: a point for any click, a point for picking the right answer, both, points only if there is a second vote, no points,… It’s an over-constrained problem with too many competing and complementary factors:

  • students will participate if they get marks
  • unless they perceive the marks are simply for attendance
  • giving too many (any?) marks for right answers inhibits students from listening to their own ideas, relying instead on their supposedly “smarter” neighbours
  • if students engage and contribute to the class, shouldn’t they be rewarded?
  • effective peer instruction promotes learning and success on exams – isn’t that reward enough?
  • what about the voting card people? They can’t give points but they’re successful.
  • Or are they? Everyone in the field is well aware of “card fade”, the drop in participation throughout the term as students (and the instructor?) lose their enthusiasm for voting.
  • a million other reasons and arguments…
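
To make those options concrete, here is a minimal sketch, in Python, of how a grading script might tally points under a few of these policies. The vote-record format, the policy names and the function are my own illustration for this post, not the actual i>clicker or i>grader software:

    # Hypothetical sketch of tallying clicker points under different policies.
    # The data format and policy names are illustrative assumptions, not the
    # real i>clicker/i>grader software.
    from collections import defaultdict

    def tally_points(votes, answer_key, policy="participation"):
        """votes: list of (student_id, question_id, choice) tuples.
        answer_key: dict mapping question_id -> correct choice.
        policy: "participation" (a point for any click),
                "correctness" (a point only for the right answer),
                or "both" (one point for clicking, one more if correct)."""
        points = defaultdict(int)
        for student, question, choice in votes:
            if policy in ("participation", "both"):
                points[student] += 1  # reward the click itself
            if policy in ("correctness", "both") and choice == answer_key.get(question):
                points[student] += 1  # reward the right answer
        return dict(points)

    # Example: one question whose correct answer is "C"
    votes = [("s1", "q1", "C"), ("s2", "q1", "B")]
    print(tally_points(votes, {"q1": "C"}, policy="both"))  # {'s1': 2, 's2': 1}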

Yeah, I’m struggling. But I took a big step towards clarity last week because of a post by my friend @jossives, “So long clicker participation points”, and a comment by @brianwfrank:

I think, for an instructor who is new to running discussions among and with students in lecture, it’s pretty much fine to use points for “clicking”, especially as a safety net…. Ultimately, I think the direction an instructor should likely head is away from points for clicking

I really like that, and it’s the approach I’m going to promote at the workshop. What Brian says echoes my conversation with Ed Prather last week when he said, roughly, if you’re really worried about your policy for handing out clicker marks, you’ve already missed the boat. You have to convince your students that peer instruction promotes learning and success, and keep reminding them, and then “walk the walk” by putting nearly identical assessments on their homework and exams. Ed, never one to mince words, concluded, “If you’re unwilling to do that, then you can worry about points.” I added, “unwilling, or unable…” Ed can get full participation in his 800-student (yes, eight zero zero) astronomy classes because he has incredible “presence” in the room. Some instructors, especially new ones, struggle with keeping their students focused. Throw in a new teaching technique that the new instructor is still learning, and you can’t blame the students for disengaging. So, use clicker points to reward students’ effort for a few terms, until you’re so confident with peer instruction that you don’t need that “safety net.”

There’s one last component of the workshop that I’m nervous about: getting the participants to authentically participate.

  • veteran clicker users: I don’t want them to just fall back into their usual routine. I want them to genuinely try new things, like not opening the clicker poll until the students are prepared or (and this one has had the biggest backlash already) turning to the screen and modeling how to answer the questions, perhaps by “acting out” some of the concepts.

    Theatre of Dionysus (by nrares on flickr CC)
  • newcomers: effective peer instruction choreography takes some “performance”. You’ve got to put yourself out there and lead the episode. I have to create an environment where the grad students don’t feel like they’re making fools of themselves in front of the faculty.

This will take some gentle yet firm cajoling at the beginning of the workshop. I think I’ll ask the veterans to model our choreography for the benefit of the others, especially the newcomers, so they get a clear experience of the workshop.

Alright, T-45 hours until the workshop. Tomorrow will be full of last-minute details and working out the choreography of our choreography workshop with my co-presenter, Cynthia. Those of you following me on Twitter at @polarisdotca will be the first to hear how it went. The rest of you: 1) why aren’t you on Twitter? and 2) you’ll have to wait for a follow-up post.

Why should I use peer instruction in my class?

Image: "Lecture Hall," uniinnsbruck, Flickr (CC)

[Update (June 16): Lead author Zdeslav Hrepic pointed me to a follow-up book chapter [PDF] where he and the study co-authors describe using tablet-PCs to counter the problems uncovered in their study. Thanks, Z.]

I’m sure we’ve all heard it from skeptical instructors: Why should I use peer instruction in my class? In response, we often cite Hake’s 6000-student study or the new UBC study by my colleagues Louis, Ellen and Carl. These are still pretty abstract, though: if you use interactive, learner-centered instruction, you can expect your students to get a better grasp of the concepts.

“Sure, but why?” the instructors ask. “Why does it work?”

I just read a paper that can help answer that question. I ran across it while following a discussion about the Khan Academy videos and whether or not they are good tools for learning. This paper by Hrepic, Zollman and Rebello (2007) asks students in an introductory physics course and physics experts (with M.Sc. and Ph.D. degrees) to watch a 15-minute video of a renowned physics educator presenting a topic in physics.

The researchers do a series of pre- and post-tests and interviews with the students and experts to compare their understanding of the concepts covered (or not) in the video. There were some significant differences. A couple stick in my head: (1) Students recalled learning about concepts that were not presented in the video. (2) Only students who already knew the correct answers on the pre-test were able to infer the concepts from the video (that is, the questions were not explicitly answered in the video). The students who did not know the concepts beforehand were unable to make those inferences. Like I said, there are significant differences between what the instructor thinks a lecture covers and what the students think it covers.

The paper nicely gives us some suggestions to counter this problem. Below each one, I’ve added my thoughts about how to use peer instruction to do just that.

Making inferences: Experts make more inferences than students. And only students who already know the concepts can infer them from the lecture. Therefore, instructors need to be cautious about relying on students to fill in the blanks.

Some of the best peer instruction questions are the conceptual questions where the answer is not simple recall. No traxoline here, please. Questions that rely on students making inferences are excellent for promoting discussion because it’s likely students will interpret the question differently, make different assumptions and come to different conclusions. <soapbox> All the more reason that students need to first answer clicker questions on their own so they’re prepared to share their inferences. </soapbox>

Prior knowledge: Students’ prior knowledge influences what they perceive and can “distort” their recollection of what the lecturer says. Therefore, it’s essential that the instructor has some idea of what the students already know (particularly their misconceptions) before presenting new material.

A few introductory clicker questions will reveal the students’ prior knowledge. Sure, maybe these are simple recall questions that won’t generate a lot of discussion. But the students’ responses will inform the agile instructor, who can then tailor the instruction.

Continuous feedback about students’ understanding: The trail the instructor blazes through the concepts and the path the students follow often diverge during a lecture. The instructor should be continuously gathering and reacting to feedback from the students about their understanding so the instructor can shepherd the students back on track.

Observant instructors can gather critical feedback from the discussions that occur during peer instruction or the students’ answers on in-class worksheets like the Lecture-Tutorials popular in introductory “Astro 101” classes and other hybrids of the Washington Tutorials. Rather than waiting weeks until after the midterm or final exam to find out students totally missed Concept X, the instructor can discover it within minutes of introducing the topic. Minutes, not weeks! The agile instructor can immediately revisit the difficult concepts. Immediately, not weeks later or never!

I’m much more confident I can answer the skeptical instructor now. “Why should I use clickers in my classroom?” Because they give the students and you the ability to assess the current level of understanding of the concepts. Current, right now, before it’s too late and the house of cards you’re so carefully building comes crashing down.

CWSEI End of Year Conference

Every April, at the end of the “school year” at UBC, the Carl Wieman Science Education Initiative (CWSEI) holds a 1-day mini-conference to highlight the past year’s successes. This year, Acting-Director Sarah Gilbert did a great job organizing the event. (Director CW, himself, is on leave to the White House.) It attracted a wide range of people, from UBC admin to department heads, interested and involved faculty, Science Teaching and Learning Fellows (STLFs) like myself, and grad students interested in science education. The only people not there, I think, were the undergraduate students themselves. Given that the event was held on the first day after exams finished, at the beginning of 4 months of freedom, I’m not surprised at all there weren’t any undergrads. I know I wouldn’t have gone to something like this back when I was an undergrad.

Part 1: Overview and Case Studies

The day started with an introduction and overview by Sarah, followed by 4 short “case studies” where 4 faculty members who are heavily involved in transforming their courses shared their stories.

Georg Rieger talked about how adding one more activity to his Physics 101 classes made a huge difference. He’s been using peer instruction with i>Clickers for a while and noticed poor student success on the summative questions he asked after explaining a new concept. He realized students don’t understand a concept just because he told them about it, no matter how eloquent or enthusiastic he was. So he tried something new — he replaced his description with worksheets that guided the students through the concept. It didn’t take a whole lot longer for the students to complete the worksheets compared to listening to him but they had much greater success on the summative clicker questions. The students, he concluded, learn the concepts much better when they engage and generate the knowledge themselves. Nice.

Susan Allen talked about the lessons she learned in a large, 3rd-year oceanography class and how she could apply them in a small, 4th-year class. Gary Bradfield showed us a whole bunch of student-learning data he and my colleague Malin Hansen have collected in an ecology class (Malin’s summer job is to figure out what it all means). Finally, Mark MacLean described his approach to working with the dozen or so instructors teaching an introductory Math course, only 3 of whom had any prior teaching experience. His breakthrough was writing “fresh sheets” (he made the analogy to a chef’s specials of the week) for the instructors that outlined the coming week’s learning goals, instructional materials, tips for teaching that content, and resources (including all the applicable questions in the textbook). The instructors give the students the same fresh sheet, minus the instructional tips. [Note: these presentations will appear on the CWSEI website shortly and I’ll link to them.]

Part 2: Posters

All of my STLF colleagues and I were encouraged to hang a poster about a project we’d been working on. Some faculty and grad students who had stories to share about science education also put up posters.

My poster was a timeline for a particular class in the introductory #astro101 course I work on. The concept being covered was the switch from the Ptolemaic (Earth-centered) Solar System to the Copernican (Sun-centered) Solar System. The instructor presented the Ptolemaic model, described how it worked, and asked the students to make a prediction based on the model (a prediction that does not match the observations, hence the need to change models). The students didn’t get it. But he forged on to the Copernican model, explained how it worked, and asked them to make a prediction (which is consistent with the observations, now). They didn’t get that either. About a minute after the class ended, the instructor looked at me and said, “Well that didn’t work, did it?” I suggested we take a Mulligan, a CTRL-ALT-DEL, and do it again the next class. Only different this time. That was Monday. On Tuesday, we recreated the content, switching from an instructor-centered lecture to a student-centered sequence of clicker questions and worksheets. On Wednesday, we ran the “new” class. It took the same amount of time and the student success on the same prediction questions was off the chart! (Yes, they were the same questions. Yes, they could have remembered the answers. But I don’t think a change from 51% correct on Monday to 97% on Wednesday can be attributed entirely to memory.)

Perhaps the most interesting part of the poster, for me, was coming up with the title. The potential parallel between Earth/Sun-centered and instructor/student-centered caught my attention (h/t to @snowandscience for making the connection). With the help of my tweeps, I wrestled with the analogy, finally coming to a couple of conclusions. One, the instructor-centered class is like the Sun-centered Solar System (with the instructor as the Sun):

  • the instructor (Sun) sits front and center in complete control while “illuminating” the students (planets), especially the ones close by.
  • the planets have no influence on the Sun,…
  • very little interaction with each other,…
  • and no ability to move in different directions.

As I wrote on the poster, “the Copernican Revolution was a triumph for science but not for science education.” I really couldn’t come up with a Solar System model for a student-centered classroom, where students are guided but have “agency” (thanks, Sandy), that is, the free will to choose to move (and explore) in their own directions. In the end, I came up with this (yes, it’s a mouthful but someone stopped me later to compliment me specifically on the title):

Shifting to a Copernican model of the Solar System
by shifting away from a Copernican model of teaching

Part 3: Example class

When we were organizing the event, Sarah thought it would be interesting to get an actual instructor to present an actual “transformed” class, one that could highlight for the audience (especially the on-the-fence-about-not-lecturing instructors) what you can do in a student-centered classroom. I volunteered the astronomy instructor I was working with, and he agreed. So Harvey (and I) recreated a lecture he gave about blackbody radiation. I’d kept a log of what happened in class so we didn’t have to do much. In fact, the goal was to make it as authentic as possible. The class, both the original and the demo class, had a short pre-reading, peer instruction with clickers (h/t to Adrian at CTLT for loaning us a class set of clickers), the blackbody curves Lecture-Tutorial worksheet from Prather et al. (2008), and a demo with a pre-demo prediction question.

Totally rocked, both times. Both audiences were engaged, clicked their clickers, had active discussions with peers, and did NOT get all the questions and the prediction correct.

At the CWSEI event, we followed the demonstration with a long, question-and-answer “autopsy” of the class. Lots of great questions (and answers) from the full spectrum of audience members, from novice to experienced instructors. Also some helpful questions (and answers) from Carl, who surprised us by coming back to Vancouver for the event.

Canadian Space Agency (CSA) or Agence spatiale canadienne (ASC) logo

To top it off, we made the class even more authentic by handing out a few Canadian Space Agency stickers to audience members who asked good questions, just like we do in the real #astro101 class. You should have seen the glee in their eyes. And the “demo” students went all metacognitive on us (as they did in the real class, eventually) and started telling Harvey and me who asked sticker-worthy questions!

Part 4: Peer instruction workshop

The last event of the day was a pair of workshops. One was about creating worksheets for use in class. The other, which I led, was called “Effective Peer Instruction Using Clickers.” (I initially suggested “Clicking it up to Level 2” but we soon switched to the better title.) The goal was to help clicker-using instructors take better advantage of peer instruction. So many times I’ve witnessed teachable moments lost because of poor clicker “choreography,” that is, conversations cut off, or not even started, because of how the instructor presents the question or handles the votes, and other things. Oh, and crappy questions to start with.

I didn’t want this to be about clickers because there are certainly ways to do peer instruction without clickers. And I didn’t want it to be a technical presentation about how to hook an i>clicker receiver to your computer and how to use igrader to assign points.

Between attending Center for Astronomy Education peer instruction workshops myself, which follow the “situated apprentice” model described by Prather and Brissenden (2008), my conversations with @derekbruff and the #clicker community, and my own experience using and mentoring the use of clickers at UBC, I easily had enough material to fill a 90-minute workshop. My physics colleague @cynheiner did colour commentary (“Watch how Peter presents the question. Did he read it out loud?…”) while I did a few model peer instruction episodes.

After these demonstrations, we carefully went through the choreography I was following, explaining the pros and cons. There was lots of great discussion about variations. Then the workshop turned to how to handle some common voting scenarios. Here’s one slide from the deck (which will be linked shortly).

I’d planned on having the workshop participants get into small groups, create a question and then present it to the class. If we’d had another 30 minutes, we could have pulled that off. Between starting late (the previous session went long) and it being late on a Friday afternoon, we cut off the workshop there. Left them hanging, wanting to come back for Part II. Yeah, that’s what we were thinking…

End-of-Year Events

Sure, it’s hard work putting together a poster. And a demo lecture. And a workshop. But it was very good for sharing what the CWSEI is doing, especially the demo class. And I’ll be using the peer instruction workshop again. And it was a great way to celebrate a year’s work. And then move on to the next one.

Does your group hold an event like this? What do you find works?
