Category: astro 101

My brief encounter with iclicker2 ranking tasks

As I’ve mentioned before, the folks at i>clicker lent me a set of the new i>clicker2 clickers. I had a chance to try them out this week when I filled in for an “Astro 101” instructor. I sure learned a lot in that 50 minutes!

(image: Peter Newbury)

Just to refresh your memory, the i>clicker2 (or “ic2” as it’s also called, which is great because the “>” in “i>clicker2” is messing up some of my HTML) unit has the usual A, B, C, D, E buttons for submitting answers to multiple-choice questions. These new clickers (and receiver and software) also allow for numeric answers and alphanumeric answers. That last feature is particularly interesting because it allows instructors to ask ranking or chronological questions. In the old days, like last week, you could display 5 objects, scenarios or events and ask the student to rank them. But you have to adapt the answers because you have only 5 choices. Something like this:

Rank these [somethings] I, II, III, IV and V from [one end] to [the other]:

A) I, II, V, III, IV
B) II, I, IV, III, V
C) IV, III, V, I, II
D) III, I, II, IV, V
E) V, II, I, III, IV

These are killer questions for the students. What are they supposed to do? Work out the ranking on the side and then check that their ranking is in your list? What if their ranking isn't there? Or game the question by working through each of the choices you give and saying "yes" or "no"? So much besides understanding the concept goes into getting the answer right.

That’s what’s so great about the ic2 alphanumeric mode. I asked this question about how the objects in our Galaxy appear to be moving relative to us:

The alphanumeric mode of the ic2 allows instructors to easily ask ranking tasks like this one about the rotation of the Galaxy.

(Allow me a brief astronomy lesson. At this point in writing this post, I think it’ll be important later. Oh well, can’t hurt, right?)

The stars in our Galaxy orbit around the center. The Galaxy isn’t solid, though. Each star moves along its own path, at its own speed. At this point in the term [psst! we’re setting this up so the students will appreciate what the observed, flat rotation curve means: dark matter] there is a clear pattern: the farther the star is from the center of the Galaxy, the slower its orbital speed. That means stars closer to the center than us are moving faster and will “pass us on the inside lane.” When we observe them, they’re moving away from us. Similarly, we’re moving faster than objects farther from the center than we are, so we’re catching up to the ones ahead of us. Before we pass them, we observe them getting closer to us. That means the answer to my ranking question is EDCAB. Notice that location C is the same distance from the center of the Galaxy as us, so it’s moving at the same speed as us. Therefore, we’re not moving towards or away from C — it’s the location where we cross from approaching (blueshifted) to receding (redshifted).
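The inside-lane/outside-lane argument can be checked numerically with a toy model. This is only a sketch, assuming circular orbits and an invented Keplerian-style rotation curve (the declining curve students believe in at this point in the term, not the Galaxy's real, flat one); the radii and angles are made up for illustration.

```python
import math

# Toy model: circular orbits with an assumed, declining rotation curve
# v(R) = v0 * sqrt(R0 / R). The numbers v0 and R0 are roughly the Sun's
# speed and galactocentric distance, used here only to set a scale.
v0, R0 = 220.0, 8.0  # km/s, kpc

def position(R, phi0, t):
    """(x, y) of a star on a circular orbit; angle grows at omega = v(R)/R."""
    omega = v0 * math.sqrt(R0 / R) / R
    phi = phi0 + omega * t
    return (R * math.cos(phi), R * math.sin(phi))

def radial_trend(R, phi0, dt=1e-4):
    """Change in our distance to a star over a short time step.
    Positive: the star is receding (redshifted); negative: approaching."""
    d = lambda t: math.dist(position(R0, 0.0, t), position(R, phi0, t))
    return d(dt) - d(0.0)

print(radial_trend(6.0, 0.2))   # inner star ahead of us: > 0, receding
print(radial_trend(10.0, 0.2))  # outer star ahead of us: < 0, approaching
print(radial_trend(8.0, 0.2))   # same radius as us: ~0, no shift
```

The signs reproduce the pattern in the lesson: a faster, inner star that has passed us pulls away, we gain on a slower, outer star ahead of us, and a star sharing our orbit (like location C) shows no shift at all.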

As usual, I displayed the question, gave the students time to think, and then opened the poll. Students submit a 5-character word like “ABCDE”. The ic2 receiver cycles through the top 3 answers so the instructor can see what the students are thinking without revealing the results to the students.

I saw that there was one popular answer along with a couple of others, so I decided enough students had gotten the question right that think-pair-share wouldn’t be necessary, and displayed the results:

Students' answers for the galaxy rotation ranking task. The first bar, EDCAB, is correct. But what do the others tell you about the students' grasp of the concept?

In hindsight, I think I jumped the gun on that because, and here’s what I’ve been trying to get to in this post, I was unprepared to analyze the results of the poll. I did think far enough ahead to write down the correct answer, EDCAB, in big letters on my lesson plan. But what do the other answers tell us about the students’ grasp of the concept?

In a good, multiple-choice question, you know why each correct choice is correct (yes, there can be more than one correct choice) and why each incorrect choice is incorrect. When a student selects an incorrect choice, you can diagnose which part of the concept they’ve missed. The agile instructor can get students to think-pair-share to reveal, and hopefully correct, their misunderstanding.

I’m sure that agility is possible with ranking tasks. But I hadn’t anticipated it. So I did the best I could on the fly and said something like,

Good, many of you recognized that the objects farther from the center are moving slower, so we’re moving toward them. And away from the stars closer to the center than us.

[It was at this moment I realized I had no idea what the other answers meant!]

Uh, I notice almost everyone put location C at the middle of the list – good. It’s at the same distance and same speed as us, so we’re not moving away from or towards C.

Oh, and ABCDE? You must have ranked them in the opposite order, not the way I clumsily suggested in the question. [Which, you might notice, is not true. Oops.]

[And the other 15% who entered something else? Sorry, folks…]

Uh, okay then, let’s move on…

What am I getting at here? First, these ranking tasks are awesome. Every answer is valid. None of that “I hope my answer is on the list…” And there’s no short-circuiting the answer by giving the students 5 choices, risking them gaming the answer by working backwards. I know there are lots of Astro 101 instructors already using ranking tasks, probably because of the great collection of tasks available at the University of Nebraska-Lincoln, but using them in class typically means distributing worksheets, possibly collecting them, perhaps asking one of those “old-fashioned” ranking task clicker questions. All that hassle is gone with ic2.

But it’s going to take re-training on the part of the instructor to be prepared for the results. In principle, there are 5! = 120 different 5-character words the students can enter. Now, of course, you don’t have to anticipate what each of the 119 incorrect answers means. But here are my recommendations:

  1. Work out the ranking order ahead of time and write it down, in big letters, where you can see it. It might be easy to remember, “the right answer to this question is choice B” but it’s not easy to remember, “the correct ranking is EDCAB.”
  2. Work out the ranking if the students rank in the opposite order. That could be because they misread the question or the question wasn’t clear.  Or it could diagnose their misunderstanding. For example, if I’d asked them to rank the locations from “most-redshifted” to “most-blueshifted”, the opposite order could mean they’re mixing up red- and blue-shift.
  3. Think about the common mistakes students make on this question and work out the rankings. And write those down, along with the corresponding mistakes.
  4. Nothing like hindsight: set up the question so the answer isn’t just 1 swap away from ABCDE. If you had no idea what the answer was, wouldn’t you enter ABCDE?
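To put numbers on those 120 possibilities, here's a quick sketch using Python's itertools. The one-swap check is a hypothetical diagnostic of my own, not anything the ic2 software computes:

```python
from itertools import permutations

locations = "ABCDE"
all_rankings = ["".join(p) for p in permutations(locations)]
print(len(all_rankings))  # 5! = 120 possible answers

def one_swap_away(word, base="ABCDE"):
    """True if word is base with exactly two positions exchanged."""
    diffs = [i for i in range(len(base)) if word[i] != base[i]]
    return (len(diffs) == 2
            and word[diffs[0]] == base[diffs[1]]
            and word[diffs[1]] == base[diffs[0]])

one_swap = [w for w in all_rankings if one_swap_away(w)]
print(len(one_swap))  # C(5,2) = 10 rankings are a single swap from ABCDE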

I hope to try, and write about, some other types of questions with my collection of ic2 clickers. I’ve already tried a demo where students enter their predictions using the numeric mode. But that’s the subject for another post…

Do you use ranking tasks in your class, with ic2 or paper or something else? What advice can you offer that will help the instructor be more prepared and agile?

Peer instruction is worth the effort

Most blog posts, articles or books with a title like this would go on to describe the positive impact of peer instruction on student learning. I even write those kinds of posts, myself.

This one is different, though, because it’s not about peer instruction being worth the effort by (and for) the students. This one is about how it’s worth the effort by (and for) the instructor.

In my job with the Carl Wieman Science Education Initiative, I sometimes work closely with one instructor for an entire 4-month term, helping to transform traditional (read, “lecture”) science classes into productive, learner-centered environments. One of the common features of these transformations is the introduction and then effective implementation of peer instruction. At UBC, we happen to use i>clickers to facilitate this but the technology does not define the pedagogy.

Early in the transformation, my CWSEI colleagues and I have to convince the instructor that they should be using peer instruction. A common response is,

I hear that good clicker questions take soooo much time to prepare. I just don’t have that time to spend.

So, is that true, or is it a common misconception that we need to dispel?

Here’s my honest answer: Yes, transforming your instructor-centered lectures into interactive, student-centered classes takes considerable effort. It feels just like teaching a new course using the previous instructor’s deck of ppt slides.

What about the second time you teach it, though?

A year ago, in September 2010, I was embedded in an introductory astronomy course. The instructor and I put in the effort, her a lot more than me, to transform the course. By December, we were exhausted. Today, one year later, she’s teaching the same course.

My, what a difference a year can make.

This morning I asked her to tell me about how much time she spends preparing her classes this term, compared to last year. We’re not talking about making up homework assignments or exams or answering email or debugging the course management system or… Just the time spent getting ready for class. This year she spends about 1 hour preparing for her 1-hour classes. That prep time consists of

  • a lot of re-paginating last year’s ppt decks because they’re not quite in sync. Today’s Class_6 is the end of last year’s Class_5 plus the beginning of last year’s Class_6, so it needs a new intro, reminders, and learning goals slide.
  • tweaking the peer instruction questions, perhaps based on feedback we got last time (students didn’t understand the question, no one chose a particular choice so find a better distractor, and so on). The “Astro 101” community is lucky to have a great collection of peer instruction questions at ClassAction. Many of these have options where you can select bigger, longer, faster, cooler to create isomorphic questions. It takes time to review those options and pick ones which best match the concept being covered.
  • looking ahead, like every instructor, to the next couple of classes to see what needs to be emphasized to prepare the students.

“And how,” I asked, “does that compare to last year?”

Between the two of us (I was part of the instructional team, recall) we probably spent 4-5 hours preparing each hour of class. In case you’ve lost the thread, let me repeat that:

Last year: 4-5 hours per hour in class.
This year: 1 hour.

“And do you spend those 3-4 hours working on other parts of the course?”

Nope. Those 3-4 hours per class, times 3 classes per week, add up to about 10 hours a week that are now used for the other parts of being a professor.

Is incorporating peer instruction into classes worth the effort? Yes, absolutely. For both the students and the instructors.

Phases of the Moon

Understanding the phases of the Moon is one of just a handful of concepts that you’ll find in every introductory, general-education “Astro 101” course. “Understanding”, of course, is a terrible description of learning. We have a much more specific learning goal:

After this activity, you [the student] will be able to

  • use the geometry of the Sun, Earth and Moon to illustrate the phases of the Moon and to predict the Moon’s rise and set times
  • illustrate the geometry of the Sun, Earth and Moon during lunar and solar eclipses, and explain why there are not eclipses every month

Everyone who teaches moon phases, from K-16, has their own favourite approach and apparatus. We get 30-40 students for a 50-minute period in our lab, a time meant for targeting concepts that are better learned in a hands-on environment. Our activity is built around a remarkable, 10-second experience: Students hold a styrofoam ball at arm’s length in a darkened room with one, bright, central light source. They do a pirouette, watching the pattern of light and shadow on the “Moon”.  Ooohs. Aaaahs. Lightbulbs going off. Truly a golden moment.

This page contains materials for what we do for the other 49 minutes and 50 seconds of the lab.

Equipment

Each group of 3 students gets 2 styrofoam balls, one Earth and one Moon. As the picture shows, we divide the Moon in half and write “NEAR” and “FAR” on the hemispheres. On the Earth ball, we draw the Equator, meridians at 0, 90, 180, 270 degrees longitude (which are 6 hours of daily rotation apart) and dashed meridians on the 45’s (3 hours of rotation apart.) A small sticker represents the observer and the cardinal points help students remember which way to spin the Earth to mimic the daily rotation.
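The meridian spacing maps onto clock time through the Earth's rotation rate; a trivial sketch of the arithmetic:

```python
# Earth rotates 360 degrees in 24 hours, i.e. 15 degrees per hour,
# so the angular spacing of the meridians converts directly to hours.
DEG_PER_HOUR = 360 / 24

for spacing in (90, 45):
    hours = spacing / DEG_PER_HOUR
    print(f"{spacing} degrees of longitude = {hours:.0f} hours of rotation")
```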

At the center of the lab sits “the Sun”. This is a really bright lightbulb (150 W or more) on an equipment stand. To prevent light from scattering off the floor and ceiling, we built aluminum foil “baffles” that sit above and below the light. They allow only a thin disk of light to shine into the room.  The light bulb is set at the students’ shoulder level so that when they hold the Moon at arm’s length, the styrofoam ball naturally goes into the light.

Materials

Instructor’s Guide

After running this activity for several terms, we realized there is a lot for the teaching assistants to do and say to keep the activity running. Those instructions eventually found their way into this instructor’s guide.

Credit

Unless credit is given explicitly, all documents, graphics and images are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.  This work is supported by the Carl Wieman Science Education Initiative.

Your feedback, comments, suggestions

If you use the materials here and find an alternate approach, tweak or extension, please share it by leaving a comment.  Thanks!
