
Going over the exam

How often have you heard your fellow instructors lament,

I don’t know why I bother with comments on the exams or even handing them back – students don’t go over their exams to see what they got right and wrong, they just look at the mark and move on.

If you often say or think this, you might want to ask yourself, What’s their motivation for going over the exam, besides “It will help me learn…”? But that’s the topic for another post.

In the introductory gen-ed astronomy class I’m working on, we gave a midterm exam last week. We dutifully marked it, which was simple because the midterm was multiple-choice, answered on Scantron cards. And calculated the average. And fixed the scoring on a couple of questions where the question stem was ambiguous (when you say “summer in the southern hemisphere,” do you mean June or do you mean when it gets hot?). And we moved on.

Hey, wait a minute! Isn’t that just what the students do — check the mark and move on?

Since I have the data – every student’s answer to every question, via the Scantron cards and already in Excel – I decided to “go over the exam” to try to learn from it.

(Psst: I just finished wringing some graphs out of Excel and I wanted to start writing this post before I got distracted by, er, life so I haven’t done the analysis yet. I can’t wait to see what I write below!)

Besides the average (23.1/35 questions or 66%) and standard deviation (5.3/35 or 15%), I created a histogram of the students’ choices for each question. Here is a selection of questions which, as you’ll see further below, span the good-to-bad scale.

Question 9: You photograph a region of the night sky in March, in September, and again the following March. The two March photographs look the same but the September photo shows 3 stars in different locations. Of these three stars, the one whose position shifts the most must be

A) farthest away
B) closest
C) receding from Earth most rapidly
D) approaching Earth most rapidly
E) the brightest one

Students' choices for Question 9. The correct answer is B.

Question 16: What is the shape of the shadow of the Earth, as seen projected onto the Moon, during a lunar eclipse?

A) always a full circle
B) part of a circle
C) a straight line
D) an ellipse
E) a lunar eclipse does not involve the shadow of the Earth

Students' choices for Question 16. The correct answer is B.

Question 25: On the vernal equinox, compare the number of daytime hours in 3 cities, one at the north pole, one at 45 degrees north latitude and one at the equator.

A) 0, 12, 24
B) 12, 18, 24
C) 12, 12, 12
D) 0, 12, 18
E) 18, 18, 18

Students' answers to Question 25. The correct answer is C.

How much can you learn from these histograms? Quite a bit. Question 9 is too easy and we should use our precious time to better evaluate the students’ knowledge. The “straight line” choice on Question 16 should be replaced with a better distractor – no one “fell for” that one. I’m a bit alarmed that 5% of the students think that the Earth’s shadow has nothing to do with eclipses but then again, that’s only 1 in 20 (actually, 11 of 204 students – aren’t data great!) We’re used to seeing these histograms because in class, we have frequent think-pair-share episodes using i>clickers and use the students’ votes to decide how to proceed. If these were first-vote distributions in a clicker question, we wouldn’t do Question 9 again but we’d definitely get them to pair and share for Question 16 and maybe even Question 25. As I’ve written elsewhere, a 70% “success rate” can mean only about 60% of the students chose the correct answer for the right reasons.
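(An aside for anyone who wants to make these histograms without fighting Excel: here’s a minimal sketch in Python with pandas and matplotlib. The file name and the Q1..Q35 column layout are hypothetical stand-ins for however your Scantron data comes out.)

```python
# Tally and plot one question's answer choices from Scantron responses.
# "responses.csv" and the Q1..Q35 column names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

# One row per student, columns Q1..Q35 holding the letter each student chose
responses = pd.read_csv("responses.csv")

question = "Q16"
counts = (responses[question]
          .value_counts()
          .reindex(list("ABCDE"), fill_value=0))  # keep choices A-E in order

counts.plot(kind="bar")
plt.xlabel("Answer choice")
plt.ylabel("Number of students")
plt.title(f"Students' choices for {question}")
plt.tight_layout()
plt.show()
```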

I decided to turn it up a notch by following some advice I got from Ed Prather at the Center for Astronomy Education. He and his colleagues analyze multiple-choice questions using the point-biserial correlation coefficient. I’ll admit it – I’m not a statistics guru, so I had to look that one up. Wikipedia helped a bit; so did this article and Bardar et al. (2006). Normally, a correlation coefficient tells you how two variables are related. A favourite around Vancouver is the correlation between property crime and distance to the nearest Skytrain station (with all the correlation-causation arguments that go with it). With point-biserial correlation, you can look for a relationship between students’ test scores and their success on a particular question (this is the “dichotomous” variable, with only two values: 0 for wrong and 1 for right). It allows you to speculate on things like:

  • (for high correlation) “If they got this question, they probably did well on the entire exam.” In other words, that one question could be a litmus test for the entire test.
  • (for low correlation) “Anyone could have got this question right, regardless of whether they did well or poorly on the rest of the exam.” Maybe we should drop that question since it does nothing to discriminate or resolve the student’s level of understanding.

I cranked up my Excel worksheet to compute the coefficient, usually called ρ_pb or ρ_pbis:

$$\rho_{pb} = \frac{\mu_+ - \mu_x}{\sigma_x}\,\sqrt{\frac{p}{q}}$$

where μ_+ is the average test score for all students who got this particular question correct, μ_x is the average test score for all students, σ_x is the standard deviation of all test scores, p is the fraction of students who got this question right and q = (1 − p) is the fraction who got it wrong. You compute this coefficient for every question on the test. The key step in my Excel worksheet, after giving each student a 0 or 1 for each question they answered, was the AVERAGEIF function: for each question I computed

=AVERAGEIF(B$3:B$206,"=1",$AL3:$AL206)

where, for example, Column B holds the 0 and 1 scores for Question 1 and Column AL holds the exam marks. This function takes the average of the exam marks only for those students (rows) who got a “1” on Question 1. At last, then, the point-biserial correlation coefficients for each of the 35 questions on the midterm, sorted from lowest to highest:

Point-biserial correlation coefficients for the 35 multiple-choice questions in our astronomy midterm, sorted from lowest to highest. (Red) limits of very weak to strong (according to the APEX dissertations article) and the (green) "desirable" range of Bardar et al. are also shown.

First of all, ooo shiny! I can’t stand the default graphics settings of Excel (and PowerPoint) but with some adjustments, you can produce a reasonable plot. Not that this one is perfect, but it’s not bad. Gotta work on the labels and find a better way to represent the bands of “desirable”, “weak”, etc.
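(And if you’d rather script the whole calculation than wrangle AVERAGEIF, here’s a sketch of the same computation in Python with pandas. The file name and Q1..Q35 column layout are hypothetical stand-ins for my Excel columns; scipy.stats.pointbiserialr would give you essentially the same numbers, along with a p-value.)

```python
# Sketch: point-biserial correlation for every question, mirroring the
# Excel AVERAGEIF approach. "item_scores.csv" and its Q1..Q35 columns
# (0 = wrong, 1 = right, one row per student) are hypothetical.
import numpy as np
import pandas as pd

scores = pd.read_csv("item_scores.csv")
total = scores.sum(axis=1)        # each student's exam mark
mu_x = total.mean()               # average test score, mu_x
sigma_x = total.std(ddof=0)       # SD of all test scores, sigma_x

rho_pb = {}
for col in scores.columns:
    p = scores[col].mean()                     # fraction who got it right
    mu_plus = total[scores[col] == 1].mean()   # the AVERAGEIF step
    rho_pb[col] = (mu_plus - mu_x) / sigma_x * np.sqrt(p / (1 - p))

print(pd.Series(rho_pb).sort_values())         # lowest to highest, as plotted
```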

Back to going over the exam: how did the questions I included above fare? Question 9 has a weak, not desirable, coefficient of just 0.21. That suggests anyone could get this question right (or equivalently, anyone could get it wrong), regardless of how they did on the rest of the exam. It does nothing to discriminate or distinguish high-performing students from low-performing students. Question 16, with ρ_pb = 0.37, is in the desirable range – just hard enough to begin to separate the high- and low-performing students. Question 25 is one of the best on the exam, I think.

In case you’re wondering, Question 6 (with the second-highest ρ_pb) is a rather ugly calculation. It discriminated between high- and low-performing students but personally, I wouldn’t include it – it doesn’t match the more conceptual learning goals, IMHO.

I was pretty happy with this analysis (and my not-such-a-novice-anymore skills in Excel and statistics). I should have stopped there. But like a good scientist making sure every observation is consistent with the theory, I looked at Question 26, the one with the highest point-biserial correlation coefficient. I was shocked, alarmed even. The most discriminating question on the test was this?

Question 26: What is the phase of the Moon shown in this image?

A) waning crescent
B) waxing crescent
C) waning gibbous
D) waxing gibbous
E) third quarter

It’s waning gibbous, by the way, and 73% of the students knew it. That’s a lame, Bloom’s taxonomy Level 1, memorization question. Damn. To which my wise and mentoring colleague asked, “Well, what was the exam really testing, anyway?”

Alright, perhaps I didn’t get the result I wanted. But that’s not the point of science. Or of this exercise. I definitely learned a lot by “going over the exam” – about validating questions, Excel, statistics and WordPress. And perhaps I made it easier for the next person – shoulders of giants and all that…

Constructing your own knowledge is not "edu-babble"

First, a disclosure: I’d love to pepper this posting with links to journal articles here, there and everywhere. But the truth is, if I try to do that, I’ll never get it written. If only I had a massive library of refs in my head like some of my colleagues. So here goes the “I’ll add refs later” version.

On February 8, the Vancouver Sun published a column by Michael Zwaagstra entitled “Purdue University study confronts edu-babble” (Hat-tip to @chrkennedy.)

<raised>hackles</raised>

The lead paragraph concludes

Instead of telling students what they need to learn, teachers should encourage them to construct their own understanding of the world around them. The progressive approach to education is far more useful to students than the mindless regurgitation of mere facts.”

A reasonable philosophy. One I agree with, in fact. And no, I didn’t forget to copy the opening quotation mark. It was omitted. Maybe that’s Vancouver Sun style. Or maybe it’s to hide the fact that this paragraph is a strawman about to be knocked down by the author, who begins his actual column with

“Anyone involved in education knows these types of edu-babble statements are often heard in teacher-training institutions. Education professors continually push teachers to move away from traditional methods of instruction.”

The author goes on to cite a new study in Sciencexpress (20 Jan 2011), now in Science (11 Feb 2011), by Jeffrey D. Karpicke and Janell R. Blunt, “Retrieval Practice Produces More Learning than Elaborative Studying with Concept Mapping.” Let me describe that research first and then come back to how Zwaagstra presented it.

Karpicke and Blunt did quite a nice study comparing, among other things, the final test scores of four groups of students:

  1. study-once: students studied the text in 1 study session
  2. repeated study: students studied the text in 4 consecutive study sessions
  3. elaborative studying with concept mapping: after instruction on how to create a concept map, students created concept maps of the concepts in the text. This activity plays the role of “constructing their own knowledge” in the journal article and Zwaagstra’s newspaper column.
  4. retrieval practice: students studied the text in one study session, then practiced retrieval by trying to recall as much as they could. Then they restudied and recalled a second time. The authors made sure the students in this group and the concept mapping group had the same time-on-task.

When these learning activities were complete, all students wrote the same short-answer test, which contained both “verbatim” questions testing knowledge stated in the text and “inference” questions that required students to assemble various facts. On both types of questions, the retrieval practice group scored the highest, followed by the repeated study, concept mapping and study-once groups, and the retrieval practice scores were statistically significantly higher than the others. The article goes on to describe how they replicated the study, with similar results.

Hmm, interesting result. I wonder… no, sorry, back to the Vancouver Sun article.

Fine. Studying helps students succeed on tests. No one would argue against that. And concept mapping certainly has its strengths, but it is just one approach to “constructing your own understanding.”

Zwaagstra uses the Purdue result to support the practice of testing students regularly on content knowledge. No problem with that. And to claim that provinces abandoning standardized testing are falling prey to an “anti-testing mantra”. Hmm, not sure about that. And that learner-centered instruction is “edu-babble”. Okay, that pissed me off.

I’m relieved to say I wasn’t the only one, based on the handful of RTs and replies I received from @cpm5280, @mcshanahan, @ScientificChick, @chrkennedy, @sparkandco and @derekbruff, all tweeps whose opinions I value.

Right – everyone is entitled to their opinion. Zwaagstra is sharing his, just like I’m sharing mine. But wait, this isn’t an opinion piece – it’s a newspaper report.

Well, in fact, a friend tells me the online Vancouver Sun just tacks “Vancouver Sun” credentials onto the author. At the bottom of the article, we discover Mr. Zwaagstra is a research fellow with the Frontier Centre for Public Policy, a “think-tank” [their quotes] supporting Canada’s prairie provinces. So this is not an objective piece of journalism about a new result in education research. It’s an opinion piece written on behalf of the Frontier Centre to support their philosophy. The Vancouver Sun should have made that a lot clearer. And did they really have to use the most sensational word in the entire story, “edu-babble”, in the headline? How about a little less tabloid next time, huh? In hindsight, maybe that pissed me off just as much as Zwaagstra’s lampooning of decades of education research and practice.

So, I’ll stay vigilant for stories which misrepresent science. But in the end, I’ll also follow Derek Bruff’s advice.

Evidence of Learning in Astronomy

Throughout the Sep–Dec 2010 term, I worked with an astronomy instructor to create a more learner-centered classroom. As I described elsewhere, we spent just over one third of the instructional time on interactive activities: think-pair-share using clickers, lecture-tutorial worksheets, ranking tasks and a couple of predict-observe demonstrations. And it resulted in a learning gain of 0.42 on a particular assessment tool, the Light and Spectroscopy Concept Inventory (LSCI). That means the students learned 42% of the concepts they didn’t already know at the beginning of the term. That’s not bad – we’re pretty happy with it.
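(In case that “learning gain” number is unfamiliar: I’m assuming here the usual Hake-style normalized gain, computed from the class’s pre- and post-test scores as

$$\langle g \rangle = \frac{\mathrm{post}\% - \mathrm{pre}\%}{100\% - \mathrm{pre}\%}$$

so ⟨g⟩ = 0.42 means the class closed 42% of the gap between its pre-test score and a perfect score.)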

So, students can learn in a learner-centered classroom. But maybe they can learn in a more traditional classroom, too.

We don’t have LSCI data from previous years (note to self: think ahead! Collect standardized assessment data on classes before attempting any transformations!). To investigate if transforming the class makes any difference, we re-used, word-for-word, a handful of questions from the same instructor’s 2008 Final Exam (pre-transformation) on this term’s Final Exam: 10 multiple-choice questions and 4 longer-answer questions. We made sure the questions assessed concepts we covered in 2010, in line with the learning goals.

I extracted students’ marks on these 14 questions from the 2010 exams (N=144). And from the old, 2008 exams (N=107), being sure to re-mark the longer-answer questions using the 2010 rubric. (Note to self #2: buy aspirin on the way home.)

What were we hoping for? A significant increase in student success in the transformed, learner-centered course.

How I wish I could report that’s what we found. But I can’t. Because we didn’t. Here are the results:

Students' scores on questions used on both the 2008 and 2010 Final Exams in the introductory astronomy course, ASTR 311. Error bars are standard error of the mean.

There is no significant difference in student success on the 10 multiple-choice questions. Their scores on the entire exams are also the same, though the exams are not identical – only about 1/4 of the 2008 exam was re-used in 2010. Nevertheless, these nearly identical exam scores suggest the populations of students in 2008 and 2010 are about the same. There are differences in the 4 long-answer questions: the 2008 students did better than their 2010 counterparts.
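(For the curious, here’s roughly how one might reproduce those error bars and test a “no significant difference” claim. This is only a sketch – the marks below are random placeholders, not our actual 2008/2010 data.)

```python
# Sketch: means with standard error of the mean (our error bars) and a
# two-sample t-test for one re-used question. The marks are random
# placeholders, NOT the actual 2008/2010 data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
marks_2008 = rng.normal(6.5, 2.0, size=107)   # N = 107 students in 2008
marks_2010 = rng.normal(6.3, 2.0, size=144)   # N = 144 students in 2010

for year, marks in (("2008", marks_2008), ("2010", marks_2010)):
    sem = marks.std(ddof=1) / np.sqrt(len(marks))   # standard error of mean
    print(f"{year}: {marks.mean():.2f} +/- {sem:.2f} (SEM) out of 10")

# Welch's t-test: does the difference in means reach significance?
t, p = stats.ttest_ind(marks_2008, marks_2010, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```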

Two things jumped out at me:

  1. Why did they do so much better on the long-answer questions? I said we used the same marking rubric but we didn’t use the same markers. A team of teaching assistants marked the 2010 exams; I (re-)marked the 2008 exams. The long-answer questions are worth 10 marks, so a little more (or less) generosity in marking – half a mark here, half a mark there – could make a difference. I really need to get the same TAs to re-mark the 2008 exams. Yeah, like that’s gonna happen voluntarily. Hmm, unless there’s potential for a publication in the AER…
  2. Why, oh why, didn’t they do better this year? Even if we omit the suspicious long-answer marks and look only at the multiple-choice questions, there is no difference. Did we fail?

No, it’s not a failure. The instructor reduced her lecturing time by 35%. We asked the students to spend 35% of their time teaching themselves. And it did no harm. The instructor enjoyed this class much more than in 2008. We had consistent 75% attendance (it was much lower by the end of the term in 2008) and students were engaged in the material each and every class. I think that’s a success.

The next step in this experiment is to look for retention. There is evidence in physics (see Pollock & Chasteen, “Longer term impacts…” here) that students who engage with the material and generate their own knowledge retain it longer. With that in mind, I hope to re-test these 2010 students with the LSCI in about 3 months, after they’ve had a term to forget everything. Or maybe not…
