
A misconception about extrasolar planets

A couple of weeks ago in the introductory “Astro 101” class I work in, the instructor and I confirmed that many students hold a certain misconception. I was, and still am, pretty excited about this little discovery in astronomy education. If my conversations over the following few days had turned out differently, I probably would be writing it up for publication in the Astronomy Education Review. Maybe I still will. But for now, here’s my story.

Our search for life in the Universe and the flood of results from the Kepler Mission have made the discovery of extrasolar planets an exciting and relevant topic for introductory “Astro 101” courses and presentations to the general public. Instructors, students, presenters and audiences latch onto “the transit method” of detection because it is so intuitive: when an extrasolar planet passes between us and its star, the planet temporarily blocks some starlight and we detect a dip in the brightness of the star. The period and shape of the dips in the record of the star’s brightness encode the characteristics of the planet.

When an extrasolar planet passes between us and its star (when it “transits” the star) we detect a dip in the brightness of the star. (Kepler/NASA image)

Our students do a nice 50-minute, hands-on lab about how to decode these “light curves”, which I hope to share at the ASP 2011 conference (#ASP2011 on Twitter) in July [Update: Exploring Transiting Extrasolar Planets in your Astronomy Lab, Classroom, or Public Presentation]. In a class following this lab, the instructor posed the following think-pair-share clicker question. We wanted to assess whether the students remembered that the size of the dip is proportional to the area of the star blocked by the planet’s disk, which scales as the square of the planet-to-star diameter ratio:

Clicker question to assess the students’ grasp of the transit method of detecting extrasolar planets.

The bars in this histogram record the number of students who chose (from left to right) A to E:

Students’ responses for (left to right) choices A to E to extrasolar planets clicker question.

About 60% of the class chose answers (C and E) with a 1% drop in brightness, the correct drop, and about 40% chose answers (B and D) with a 10% drop. This second group didn’t remember the “proportional to area” property. So, not stunning results, but certainly a good candidate for pairing and sharing.
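For the record, here is the arithmetic the question is probing, sketched in a few lines of Python (the factor-of-10 diameter ratio is the one from the question; the snippet itself is purely for illustration):

# The fraction of starlight blocked is the ratio of the planet's disk area
# to the star's disk area, which is the square of the diameter ratio.
def transit_depth(d_planet, d_star):
    return (d_planet / d_star) ** 2

# Star 10 times the planet's diameter: (1/10)^2 = 0.01, a 1% dip, not 10%.
print(transit_depth(1.0, 10.0))  # 0.01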

The misconception

What is stunning, though, and the source of my excitement, is that 97% of the class thinks you can see a black spot moving across the star. Which is not true! We only detect the drop in the brightness of the star. We can’t even see the disk of the star, let alone a tiny black spot!

Okay, okay before you jump to the students’ defence, let me (with the help of my great CAPER Team colleagues) jump to the students’ defence:

    1. The question says, “…by observing it pass in front of the distant star.” Of course the students are going to say we see a dark spot – that’s what we just told them! Perhaps I should be worried about the 3% who didn’t read the question properly.
    2. The question is vague about what we mean by “size.” Diameter? Area? Volume? Mass? “The star’s diameter is 10 times bigger than the planet’s diameter” is a much better question stem.
    3. My colleague Aaron Price points out

Astronomers may not see a “dot” crossing the star right now, but they can see something comparable. Through speckle imaging, radial topography and optical interferometry we have been able to see starspots for decades. CHARA’s recent direct observations of a disk of dust moving across epsilon Aurigae shows what is being done right now in interferometric direct imaging. I predict within 10 years we’ll have our first direct image of a “dot” in transit across another star.

    4. Aaron, Kendra Sibbernsen and I all agree that the word “see” in “What would you see?” is too vague. The question I wanted to ask should have used “observe” or “detect”. Kendra suggested we write “A) a dark spot visibly passing in front of the star” and perhaps follow up the question with this one to poke explicitly at the potential misconception:

With current technology, can astronomers resolve the dark spot of an extrasolar planet on the disk of a star when it is in transit? (T/F)

Was there a misconception?

Did the students reveal a misconception about transiting extrasolar planets? Nope, not at all. It’s not like they took the information we gave them, mixed it with their own preconceived notions and produced an incorrect explanation. Instead, they answered with the information they’d been given.

A teachable moment

It seems that we’re not being careful enough in how we present the phenomenon of transiting extrasolar planets. But as it turns out, this is a teachable moment about creating models to help us visualize something (currently) beyond our reach. We observe variations in the brightness of the star. We then create a model in our mind’s eye — a large, bright disk for the star and a small, dark disk for the planet — that helps us explain the observations.

This is a very nice model, in fact, because it can be extended to explain other, more subtle aspects of transiting extrasolar planets, like a theoretical bump, not dip, in the brightness when the planet is passing behind the star and we detect extra starlight reflected off the planet. The model also explains these beautiful Rossiter-McLaughlin wiggles in the star’s radial velocity (Doppler shift) curve as the extrasolar planet blocks first the side of the star spinning towards us and then the side spinning away from us.

These wiggles in the radial velocity curve are caused by the Rossiter-McLaughlin effect (from Winn, Johnson et al. 2006, ApJL)

Want to help?

If you’re teaching astronomy, you can help us by asking your students this version, written by Kendra, and letting me know what happens.

An extrasolar planet passes in front of its star as seen from the Earth. The star’s diameter is 10 times bigger than the planet’s diameter. What do astronomers observe when this happens?

A)  a dark spot visibly passing across the disk of the star
B)  a 10% dip in the brightness of the star
C)  a 1% dip in the brightness of the star
D) A and B
E) A and C

In conclusion

I don’t think this qualifies as a misconception, not like the belief that the seasons are caused by changes in the distance between the Earth and the Sun. We just need to be more careful when we teach our students about extrasolar planets. And in more carefully explaining the dips in the light curve, we have an opportunity to discuss the advantages and disadvantages of using models to visualize phenomena beyond our current abilities. That’s a win-win situation.

Thanks to my CAPER Team colleagues Aaron, Kendra and Donna Governor for the thoughtful conversations and the many #astro101 tweeps womanastronomer, erinleeryan, uoftastro, jossives, shanilv and more who were excited for me, and then patient with me, as I figured this out.

Another day of agile teaching

The prof I’m working with in our introductory #astro101 class at UBC surprised me today. I thought he was sabotaging a teachable moment when in fact, he pulled one of the most “agile” moves he’s made yet. Here’s the story:

Today is March 21, 2011, the first full day of Spring. The vernal equinox occurred yesterday, March 20 at 4:21 PDT. The instructor, let’s call him H, started today’s class with a clicker question:

The correct answer is A) but I fully expected a bunch of students to vote B), confusing the “going North” and “going South” for the Sun’s motion along the ecliptic.

The students thought, then voted. H looked at the results and said (I’m paraphrasing from memory),

The correct answer is A. 70% of you said that…

Oh, no, I thought to myself. He just gave away the answer and the success rate – only 70%, not terrific – and totally short-circuited the teachable moment that comes via peer instruction.

That thought took about 1 second, of course, so it was all over by the time H continued with

…Very few of you said B, C, or D and 30% said E. Let me show you one slide and then I’ll come back to the super moon.

The "super Moon" as seen from Vancouver. (Credit @gmarkham, used with permission.)

You see, there was another event this past weekend. The full Moon occurred near perigee, the point in the Moon’s orbit around the Earth when it is closest. This means we had a full Moon, closer than usual, so it appeared bigger. Super, even. Oh, and it was.

So, here I was, getting alarmed that H was missing the opportunity for the students who voted A) to convince the students who voted B) to change their answers. But that’s not what happened at all. Hardly anyone voted B. They either knew the right answer A) or were more interested in the astronomy-in-real-life super Moon event. And H agilely, er, with great agility, confirmed the correct answer and followed up with something 30% of the students cared about. He talked about the full Moon, how it was 14% bigger and 29% brighter. Not twice as big – don’t believe everything you hear on TV. That’s slightly bigger and closer than usual but not much. And no, the super Moon did not cause the earthquake in Japan.

Wow. I was impressed. He had the whole thing planned out but tailored his response based on theirs. Cool.

What about you? What teaching have you done, witnessed or experienced that shows agility?

Going over the exam

How often have you heard your fellow instructors lament,

I don’t know why I bother with comments on the exams or even handing them back – students don’t go over their exams to see what they got right and wrong, they just look at the mark and move on.

If you often say or think this, you might want to ask yourself, What’s their motivation for going over the exam, besides “It will help me learn…”? But that’s the topic for another post.

In the introductory gen-ed astronomy class I’m working in, we gave a midterm exam last week. We dutifully marked it, which was simple because the midterm exam was multiple-choice answered on Scantron cards. And calculated the average. And fixed the scoring on a couple of questions where the question stem was ambiguous (when you say “summer in the southern hemisphere,” do you mean June or do you mean when it gets hot?). And we moved on.

Hey, wait a minute! Isn’t that just what the students do — check the mark and move on?

Since I have the data, every student’s answer to every question, via the Scantron and already in Excel, I decided to “go over the exam” to try to learn from it.

(Psst: I just finished wringing some graphs out of Excel and I wanted to start writing this post before I got distracted by, er, life so I haven’t done the analysis yet. I can’t wait to see what I write below!)

Besides the average (23.1/35 questions or 66%) and standard deviation (5.3/35 or 15%), I created a histogram of the students’ choices for each question. Here is a selection of questions which, as you’ll see further below, are spread across the good-to-bad scale.

Question 9: You photograph a region of the night sky in March, in September, and again the following March. The two March photographs look the same but the September photo shows 3 stars in different locations. Of these three stars, the one whose position shifts the most must be

A) farthest away
B) closest
C) receding from Earth most rapidly
D) approaching Earth most rapidly
E) the brightest one

Students' choices for Question 9. The correct answer is B.

Question 16: What is the shape of the shadow of the Earth, as seen projected onto the Moon, during a lunar eclipse?

A) always a full circle
B) part of a circle
C) a straight line
D) an ellipse
E) a lunar eclipse does not involve the shadow of the Earth

Students' choices for Question 16. The correct answer is B.

Question 25: On the vernal equinox, compare the number of daytime hours in 3 cities, one at the north pole, one at 45 degrees north latitude and one at the equator.

A) 0, 12, 24
B) 12, 18, 24
C) 12, 12, 12
D) 0, 12, 18
E) 18, 18, 18

Students' answers to Question 25. The correct answer is C.

How much can you learn from these histograms? Quite a bit. Question 9 is too easy and we should use our precious time to better evaluate the students’ knowledge. The “straight line” choice on Question 16 should be replaced with a better distractor – no one “fell for” that one.  I’m a bit alarmed that 5% of the students think that the Earth’s shadow has nothing to do with eclipses but then again, that’s only 1 in 20 (actually, 11 in 204 students – aren’t data great!)  We’re used to seeing these histograms because in class, we have frequent think-pair-share episodes using i>clickers and use the students’ vote to decide how to proceed. If these were first-vote distributions in a clicker question, we wouldn’t do Question 9 again but we’d definitely get them to pair and share for Question 16 and maybe even Question 25. As I’ve written elsewhere, a 70% “success rate” can mean only about 60% of the students chose the correct answer for the right reasons.
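If you want to make these per-question tallies yourself, here is a minimal sketch of the counting step, assuming, purely for illustration, that the Scantron data have been exported to a CSV file with one row per student and one column per question:

import csv
from collections import Counter

# Hypothetical layout: responses.csv has a header row (Q1, Q2, ...) and one
# row per student, each cell holding the letter that student bubbled in.
with open("responses.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Tally the A-E choices for each question; these counts are the histograms.
for question in rows[0].keys():
    counts = Counter(row[question] for row in rows)
    print(question, {choice: counts.get(choice, 0) for choice in "ABCDE"})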

I decided to turn it up a notch by following some advice I got from Ed Prather at the Center for Astronomy Education. He and his colleagues analyze multiple-choice questions using the point-biserial correlation coefficient. I’ll admit it – I’m not a statistics guru, so I had to look that one up. Wikipedia helped a bit, so did this article and Bardar et al. (2006). Normally, a correlation coefficient tells you how two variables are related. A favourite around Vancouver is the correlation between property crime and distance to the nearest Skytrain station (with all the correlation-causation arguments that go with it). With point-biserial correlation, you can look for a relationship between students’ test scores and their success on a particular question (this is the “dichotomous variable”, with only two values: 0 (wrong) and 1 (right)). It allows you to speculate on things like,

  • (for high correlation) “If they got this question, they probably did well on the entire exam.” In other words, that one question could be a litmus test for the entire test.
  • (for low correlation) “Anyone could have got this question right, regardless of whether they did well or poorly on the rest of the exam.” Maybe we should drop that question since it does nothing to discriminate or resolve the student’s level of understanding.

I cranked up my Excel worksheet to compute the coefficient, usually called ρpb or ρpbis:

ρpb = [(μ+ − μx) / σx] × √(p / q)

where μ+ is the average test score for all students who got this particular question correct, μx is the average test score for all students, σx is the standard deviation of all test scores, p is the fraction of students who got this question right and q = (1 − p) is the fraction who got it wrong. You compute this coefficient for every question on the test. The key step in my Excel worksheet, after giving each student a 0 or 1 for each question they answered, was the AVERAGEIF function: for each question I computed

=AVERAGEIF(B$3:B$206,"=1",$AL3:$AL206)

where, for example, Column B holds the 0 and 1 scores for Question 1 and Column AL holds the exam marks. This function takes the average of the exam scores only for those students (rows) who have got a “1” on Question 1. At last then, the point-biserial correlation coefficients for each of the 35 questions on the midterm, sorted from lowest to highest:

Point-biserial correlation coefficient for the 35 multiple-choice questions in our astronomy midterm, sorted from lowest to highest. (Red) limits of very weak to strong (according to the APEX dissertations article) and also the (green) "desirable" range of Bardar et al. are shown.

First of all, ooo shiney! I can’t stand the default graphics settings of Excel (and PowerPoint) but with some adjustments, you can produce a reasonable plot. Not that this one is perfect, but it’s not bad. Gotta work on the labels and a better way to represent the bands of “desirable”, “weak”, etc.
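If you would rather script the calculation than build it in Excel, here is a minimal Python sketch of the same computation, assuming, again purely for illustration, that you have a list of 0/1 scores for one question and each student’s total exam mark:

from statistics import mean, pstdev

def point_biserial(item_scores, exam_scores):
    # item_scores: 0 or 1 per student for one question
    # exam_scores: each student's total mark on the exam
    p = mean(item_scores)              # fraction who got the question right
    q = 1 - p
    mu_plus = mean(x for s, x in zip(item_scores, exam_scores) if s == 1)
    mu_x = mean(exam_scores)
    sigma_x = pstdev(exam_scores)      # standard deviation of all exam marks
    return (mu_plus - mu_x) / sigma_x * (p / q) ** 0.5

# Tiny made-up example: 5 students, one question
print(point_biserial([1, 0, 1, 1, 0], [30, 18, 27, 25, 20]))  # ≈ 0.92

Loop that over the 35 questions and you get the same numbers as the AVERAGEIF approach above.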

Back to going over the exam, how did the questions I included above fare? Question 9 has a weak, not desirable, coefficient of just 0.21. That suggests anyone could get this question right (or, equivalently, no one could get it wrong). It does nothing to discriminate or distinguish high-performing students from low-performing students. Question 16, with ρpb = 0.37, is in the desirable range – just hard enough to begin to separate the high- and low-performing students. Question 25 is one of the best on the exam, I think.

In case you’re wondering, Question 6 (with the second highest ρpb) is a rather ugly calculation. It discriminated between high- and low-performing students but, personally, I wouldn’t include it – it doesn’t match the more conceptual learning goals, IMHO.

I was pretty happy with this analysis (and my not-such-a-novice-anymore skills in Excel and statistics). I should have stopped there. But like a good scientist making sure every observation is consistent with the theory, I looked at Question 26, the one with the highest point-biserial correlation coefficient. I was shocked, alarmed even. The most discriminating question on the test was this?

Question 26: What is the phase of the Moon shown in this image?

A) waning crescent
B) waxing crescent
C) waning gibbous
D) waxing gibbous
E) third quarter

It’s waning gibbous, by the way, and 73% of the students knew it. That’s a lame, Bloom’s taxonomy Level 1, memorization question. Damn. To which my wise and mentoring colleague asked, “Well, what was the exam really testing, anyway?”

Alright, perhaps I didn’t get the result I wanted. But that’s not the point of science. Of this exercise.  I definitely learned a lot by “going over the exam”, about validating questions, Excel, statistics and WordPress. And perhaps made it easier for the next person, shoulders of giants and all that…
