
My brief encounter with iclicker2 ranking tasks

As I’ve mentioned before, the folks at i>clicker lent me a set of the new i>clicker2 clickers. I had a chance to try them out this week when I filled in for an “Astro 101” instructor. I sure learned a lot in that 50 minutes!

(image: Peter Newbury)

Just to refresh your memory, the i>clicker2 (or “ic2” as it’s also called, which is great because the “>” in “i>clicker2” is messing up some of my HTML) unit has the usual A, B, C, D, E buttons for submitting answers to multiple-choice questions. These new clickers (and receiver and software) also allow for numeric and alphanumeric answers. That last feature is particularly interesting because it lets instructors ask ranking or chronological questions. In the old days, like last week, you could display 5 objects, scenarios, or events and ask the students to rank them. But you had to adapt the answers because you have only 5 choices. Something like this:

Rank these [somethings] I, II, III, IV and V from [one end] to [the other]:

A) I, II, V, III, IV
B) II, I, IV, III, V
C) IV, III, V, I, II
D) III, I, II, IV, V
E) V, II, I, III, IV

These are killer questions for the students. What are they supposed to do? Work out the ranking on the side and then check that their ranking is in your list? What if their ranking isn’t there? Or game the question by working through each of the choices you give and saying “yes” or “no”? So much besides understanding the concept goes into getting the answer right.

That’s what’s so great about the ic2 alphanumeric mode. I asked this question about how the objects in our Galaxy appear to be moving relative to us:

The alphanumeric mode of the ic2 allows instructors to easily ask ranking tasks like this one about the rotation of the Galaxy.

(Allow me a brief astronomy lesson. At this point in writing this post, I think it’ll be important later. Oh well, can’t hurt, right?)

The stars in our Galaxy orbit around the center. The Galaxy isn’t solid, though. Each star moves along its own path, at its own speed. At this point in the term [psst! we’re setting this up so the students will appreciate what the observed, flat rotation curve means: dark matter] there is a clear pattern: the farther a star is from the center of the Galaxy, the slower its orbital speed. That means stars closer to the center than us are moving faster and will “pass us on the inside lane.” When we observe them, they’re moving away from us. Similarly, we’re moving faster than objects farther from the center than we are, so we’re catching up to the ones ahead of us. Before we pass them, we observe them getting closer to us. That means the answer to my ranking question is EDCAB. Notice that location C is the same distance from the center of the Galaxy as us, so it’s moving at the same speed we are. Therefore, we’re not moving towards or away from C: it’s the location where we cross from approaching (blueshifted) to receding (redshifted).
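To make the “farther means slower” pattern concrete, here’s a toy calculation of my own (not part of the lesson): treat the enclosed mass of the Galaxy as a single central point, so circular orbital speed follows the Keplerian relation v = sqrt(GM/r). The mass value below is rough and purely illustrative.

```python
import math

# Toy model: approximate the inner Galaxy as if most of its mass sat at
# the center, so orbital speed follows v = sqrt(G*M/r): farther out is slower.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 2.0e41      # rough enclosed mass in kg (illustrative only)
kpc = 3.086e19  # meters per kiloparsec

def orbital_speed(r_m):
    """Circular Keplerian orbital speed (m/s) at radius r_m meters."""
    return math.sqrt(G * M / r_m)

# Radii in kpc; 8 kpc is roughly the Sun's distance from the Galactic center.
for r in (4, 8, 12):
    print(f"r = {r:2d} kpc -> v = {orbital_speed(r * kpc) / 1000:.0f} km/s")
```

The speeds fall off with radius, which is the pattern the question exploits. (The punchline for later in the term, of course, is that real galaxies don’t obey this: their rotation curves are flat, hence dark matter.)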

As usual, I displayed the question, gave the students time to think, and then opened the poll. Students submit a 5-character word like “ABCDE”. The ic2 receiver cycles through the top 3 answers so the instructor can see what the students are thinking without revealing the results to the students.

I saw that there was one popular answer along with a couple of others, so I decided enough students had gotten the question right that think-pair-share wouldn’t be necessary, and displayed the results:

Students' answers for the galaxy rotation ranking task. The first bar, EDCAB, is correct. But what do the others tell you about the students' grasp of the concept?

In hindsight, I think I jumped the gun there because (and here’s what I’ve been trying to get to in this post) I was unprepared to analyze the results of the poll. I did think far enough ahead to write down the correct answer, EDCAB, in big letters on my lesson plan. But what do the other answers tell us about the students’ grasp of the concept?

In a good multiple-choice question, you know why each correct choice is correct (yes, there can be more than one correct choice) and why each incorrect choice is incorrect. When a student selects an incorrect choice, you can diagnose which part of the concept they’ve missed. The agile instructor can get students to think-pair-share to reveal, and hopefully correct, their misunderstanding.

I’m sure that agility is possible with ranking tasks. But I hadn’t anticipated it. So I did the best I could on the fly and said something like,

Good, many of you recognized that the objects farther from the center are moving slower, so we’re moving toward them. And away from the stars closer to the center than us.

[It was at this moment I realized I had no idea what the other answers meant!]

Uh, I notice almost everyone put location C at the middle of the list – good. It’s at the same distance and same speed as us, so we’re not moving away from or towards C.

Oh, and ABCDE? You must have ranked them in the opposite order, not the way I clumsily suggested in the question. [Which, you might notice, is not true. Oops.]

[And the other 15% who entered something else? Sorry, folks…]

Uh, okay then, let’s move on…

What am I getting at here? First, these ranking tasks are awesome. Every answer is valid. None of that “I hope my answer is on the list…” And there’s no short-circuiting the task by giving the students 5 choices and risking that they game it by working backwards through your list. I know there are lots of Astro 101 instructors already using ranking tasks, probably because of the great collection of tasks available at the University of Nebraska-Lincoln, but using them in class typically means distributing worksheets, possibly collecting them, or perhaps asking one of those “old-fashioned” ranking-task clicker questions. All that hassle is gone with ic2.

But it’s going to take re-training on the part of the instructor to be prepared for the results. In principle, there are 5! = 120 different 5-character words the students can enter. Now, of course, you don’t have to anticipate what each of the 119 incorrect answers means. But here are my recommendations:

  1. Work out the ranking order ahead of time and write it down, in big letters, where you can see it. It might be easy to remember, “the right answer to this question is choice B” but it’s not easy to remember, “the correct ranking is EDCAB.”
  2. Work out the ranking if the students rank in the opposite order. That could be because they misread the question or the question wasn’t clear. Or it could diagnose their misunderstanding. For example, if I’d asked them to rank the locations from “most-redshifted” to “most-blueshifted”, the opposite order could mean they’re mixing up red- and blue-shift.
  3. Think about the common mistakes students make on this question and work out the rankings. And write those down, along with the corresponding mistakes.
  4. Nothing like hindsight: set up the question so the answer isn’t just 1 swap away from ABCDE. If you had no idea what the answer was, wouldn’t you enter ABCDE?
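The recommendations above amount to a little pre-class analysis you could even script. Here’s a minimal sketch in Python; everything in it is my own invention (the helper names are hypothetical, not part of the i>clicker software). It tallies the submitted rankings, flags the correct and reversed orders, and flags answers within one swap of ABCDE, the likely “no idea” entry.

```python
from collections import Counter

BASELINE = "ABCDE"  # the default "I have no idea" entry

def within_one_swap(answer, baseline=BASELINE):
    """True if answer equals baseline or is one transposition away from it."""
    diffs = [i for i, (a, b) in enumerate(zip(answer, baseline)) if a != b]
    if not diffs:
        return True
    if len(diffs) != 2:
        return False
    i, j = diffs
    return answer[i] == baseline[j] and answer[j] == baseline[i]

def analyze_rankings(correct, submissions):
    """Return (answer, count, notes) tuples, most common first."""
    tally = Counter(s.strip().upper() for s in submissions)
    report = []
    for answer, count in tally.most_common():
        notes = []
        if sorted(answer) != sorted(correct):
            notes.append("invalid entry")
        elif answer == correct:
            notes.append("correct")
        elif answer == correct[::-1]:
            notes.append("reversed ranking: misread question, or concept mix-up?")
        if within_one_swap(answer):
            notes.append("within one swap of ABCDE: possibly a guess")
        report.append((answer, count, notes))
    return report

# Made-up submissions for the galaxy question (correct answer: EDCAB).
polls = ["EDCAB", "EDCAB", "BACDE", "ABCDE", "EDCAB", "BACDE"]
for answer, count, notes in analyze_rankings("EDCAB", polls):
    print(answer, count, "; ".join(notes))
```

A nice side effect of running this on EDCAB: its exact reverse is BACDE, which is itself only one swap from ABCDE, illustrating recommendation 4 above.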

I hope to try, and write about, some other types of questions with my collection of ic2 clickers. I’ve already tried a demo where students enter their predictions using the numeric mode. But that’s the subject for another post…

Do you use ranking tasks in your class, with ic2, on paper, or in some other way? What advice can you offer to help the instructor be more prepared and agile?

Effective professional development, Take 1

The other day, I participated in a webinar run by Stephanie Chasteen (@sciencegeekgirl on Twitter. If you don’t follow her, you should.) It was called “Teaching faculty about effective clicker use,” and the goal was to help us plan and carry out meetings where we train faculty members to use peer instruction and clickers. Did you catch that subtle difference? It was not about how to use clickers (though Stephanie can teach you that, too). Rather, this webinar was aimed at instructional support people tasked with training their colleagues to use peer instruction. This was a train-the-trainers webinar. And it was right up my alley because I’m learning to do just that.

And if you think that’s getting meta-, just you wait…

In the midst of reminding us about peer instruction, Stephanie listed the characteristics of effective professional development. The bold words are hers; the interpretation is mine:

  • collaborative: it’s about sharing knowledge, experiences, ideas, expertise
  • active: we need to do something, not just sit and listen (or not!)
  • discipline-oriented: If we want to be able to share, we need some common background. I want to understand what you’re talking about. And I hope you give a damn about what I’m talking about. Coming from the same discipline, like physics or astronomy or biology, is a good start.
  • instructor-driven: I take this to mean “facilitated”. That is, there’s someone in charge who drives the activity forward.
  • respectful: So open to interpretation. Here’s my take: everyone in the room should have the opportunity to contribute. And not via the approach, “well if you’ve got something to say, speak up, dammit!” It takes self-confidence and familiarity and…Okay, it takes guts to interrupt a colleague or a conversation to interject your own opinion. Relying on people to do that does not respect their expertise or the time they’ve invested by coming to the meeting.
  • research-based: One of the pillars of the Carl Wieman Science Education Initiative (CWSEI) that I’m part of at UBC, and the Science Education Initiative at the University of Colorado where Stephanie comes from, is a commitment to research-based instructional strategies. We care about the science of teaching and learning.
  • sustained over time: We’d never expect our students to learn concepts after one exposure to new material. That’s why we give pre-reading and lectures and peer instruction and homework and midterms and…So we shouldn’t expect instructors to transform their teaching styles after one session of training. It requires review and feedback and follow-up workshops and…

Alright, time to switch to another stream for a moment. They’ll cross in a paragraph or two.

(image: Peter Newbury)

I’ve got a big box of shiny new i>clicker2 clickers to try out. I’m pretty excited. I’m also pretty sure the first thing instructors will say is, “What’s with all the new buttons? I thought these things were supposed to be simple! Damn technology being shoved down our [grumble] [grumble] [grumble]” I want to be able to reply

Yes, there are more buttons on the i>clicker2. But let me show you an amazing clicker question you can use in your [insert discipline here] classroom…

 

Good plan. Okay, let’s see: Clickers? Check. Amazing clicker questions? D’oh!

We use a lot of peer instruction here at UBC, and there are CWSEI support people like me in Math, Chemistry, Biology, Statistics, Earth and Ocean Sciences, and Computer Science. If anyone can brainstorm a few good questions, it’s this crew. And guess what? We get together for 90-minute meetings every week.

Can you feel the streams coming together? Just one more to add:

My CWSEI colleagues and I frequently meet with instructors and other faculty members. We dance a delicate dance between telling instructors what to do, drawing out their good and bad experiences, and getting them to discover for themselves what could work (psst: making them think they thought of it themselves). Their time is valuable, so when we meet, we need to get things done. We need to run short, effective episodes of professional development. It’s not easy. If only there were a way to practice…

A-ha! Our weekly meetings should be effective professional development led by one of us getting some practice at facilitating. The streams have crossed. I’ll run the next meeting following Stephanie’s advice, modeling Stephanie’s advice, to gather questions so I will be able to run an effective workshop on taking advantage of the new features of the i>clicker2. It’s a meta-meeting. Or a meta-meta-meeting?

It’s not like I made any of this up, or like I couldn’t have found it by talking with people whose job is professional development. Well, I guess I did kind of talk with Stephanie. But there’s a lot to be said for figuring it out for yourself. Or at least starting to figure it out for yourself, failing, and then recognizing and appreciating what the expert has to say.

And you’ve read enough for now. Watch for another post about how it went.

Click it up a notch with i>clicker2

As some of you may have heard, i>clicker is coming out with new hardware. UBC Classroom Services is already installing the new i>clicker2 receiver in many classrooms. I’ve been working with them to design a holder that mounts the receiver on the desktop so the receiver is 1) secure and 2) visible.

iclicker2 receiver and UBC-designed mount
New i>clicker2 receiver mounted on classroom desktop with a base designed at UBC. The base swivels 359 degrees so the instructor can see the distribution from either side of the podium. That's a USB port on the base, where you plug in the chip with the i>clicker software and your class data. (Photo: Peter Newbury)
The i>clicker2 has more options, allowing for alpha-numeric responses in addition to the usual A thru E choices. (Image from iclicker.com)

This new receiver is fully compatible with the current i>clicker clickers, the simple, white A-E clickers we know and love.

No surprise, along with the new receiver comes a new i>clicker2 clicker.

Hold it, hold it! Don’t have a fit! Yes, there are more buttons and that seems to explicitly contradict i>clicker’s advertised simplicity. The first time I saw it, yes, I, er, had a fit.

However, I’ve since had a long chat with my colleague Roger Freedman (follow him on Twitter @RogerFreedman ) at UCSB. He’s a great educator, textbook author, avid clicker user, and i>clicker2 guinea pig. In his opinion, which I sincerely trust, the i>clicker2 opens up new and powerful avenues for peer instruction. His favourite is ranking tasks, which can be implemented without those awkward clicker questions with choices like A) 1>2=3>4 B) 1=2>3=4 …

Here’s the thing(s):

  • instructors could use the i>clicker2 to revert back to ineffective peer instruction questions
  • i>clicker2 opens up new options for peer instruction
  • they’re coming (though UBC has not declared when)

Conclusion: Let’s be proactive and prepared to train instructors when the i>clicker2 arrives.

The first step (after finishing your fit) is figuring out what the new clicker can do. And that’s the reason for this post. In 30 minutes – er, make that 11 minutes – I’ll be heading to a demo. The rest of this post will be written shortly…

(Image CC Pedro Moura Pinheiro on flickr)

It’s 3 hours later. I’m E X C I T E D! The demo with Roberto and Shannon was, well, they had a wide spectrum of audience members, from people who had never held a clicker before to experienced users. I had a great chat with them afterwards, though. Details below, but first, some nice features of the i>clicker2 unit:

Click to enlarge. (Images: Peter Newbury)
  • (left) When you turn on the i>clicker2, it flashes the ID number. No more problems with the sticker getting rubbed off (though Roberto assures us they have better stickers now.)
  • (center) There are only 2 batteries (but still 200 hrs of use). See those 2 little sticky-outty things at the top? They’re rubber feet to stop the clicker from sliding off the desk. Nice touch.
  • (right) There’s a metal post for a lanyard. Good idea.

I won’t go into all the details about the features of the software. There are lots. You can take the tour at iclicker.com.

The hard part

Those of us who have been using i>clickers for peer instruction have gotten pretty ingenious about asking good, discussion-promoting questions even though we’re limited to choices A–E. It’s going to take some thinking and discussion to figure out how to take advantage of the expanded capabilities of the i>clicker2. Ranking tasks are a great start: students can easily enter a string of letters like BCDEA to rank items. It’s going to take some testing. Which leads to…

The great part

Roberto and Shannon are going to lend me a class set of i>clicker2’s for the term! Eighty clickers to try out in the classes I work in. Suh-weet!

I was chatting with my friend Warren (@warcode) afterwards. He said, “When you asked Roberto to show you what i>clicker2 can do that i>clicker can’t, his response was, essentially, ‘Here’s a set of clickers. You tell us.’ ”

Challenge accepted! Stay tuned!

 
