Category: research

Evidence of Learning in Astronomy

Throughout the Sep-Dec 2010 term, I worked with an astronomy instructor to create a more learner-centered classroom. As I described elsewhere, we spent just over one third of the instructional time on interactive activities: think-pair-share with clickers, lecture-tutorial worksheets, ranking tasks and a couple of predict-observe demonstrations. The result was a learning gain of 0.42 on a particular assessment tool, the Light and Spectroscopy Concept Inventory (LSCI). That means the students learned 42% of the concepts they didn't already know at the beginning of the term. That's not bad; we're pretty happy with it.
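(The post describes the gain only in words; written out, the usual normalized-gain formula it corresponds to looks like this. The notation is mine, and I'm assuming the gain is computed from class-average percentage scores.)

```latex
% Normalized learning gain, computed from class-average pre- and post-test
% scores expressed as percentages. A gain of g = 0.42 means the class learned
% 42% of what it did not already know at the start of the term.
\[
  g \;=\; \frac{\langle \mathrm{post} \rangle - \langle \mathrm{pre} \rangle}{100\% - \langle \mathrm{pre} \rangle}
\]
```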

So, students can learn in a learner-centered classroom. But maybe they can learn in a more traditional classroom, too.

We don't have LSCI data from previous years (note to self: think ahead! Collect standardized assessment data on classes before attempting any transformations!). To investigate whether transforming the instruction makes any difference, we re-used, word for word, a handful of questions from the same instructor's 2008 final exam (pre-transformation) on this term's final exam: 10 multiple-choice questions and 4 longer-answer questions. We made sure the questions assessed the concepts we covered in 2010, in line with the learning goals.

I extracted students' marks on these 14 questions from the 2010 exams (N=144) and from the old 2008 exams (N=107), being sure to re-mark the longer-answer questions using the 2010 rubric. (Note to self #2: buy aspirin on the way home.)

What were we hoping for? A significant increase in student success in the transformed, learner-centered course.

How I wish I could report that’s what we found. But I can’t. Because we didn’t. Here are the results:

Students' scores on questions used on both the 2008 and 2010 final exams in the introductory astronomy course, ASTR 311. Error bars are the standard error of the mean.

There is no significant difference in student success on the 10 multiple-choice questions. The scores on the entire exams are also the same, even though the exams are not identical: only about one quarter of the 2008 exam was re-used in 2010. Nevertheless, these nearly identical exam scores suggest the populations of students in 2008 and 2010 are about the same. There are differences on the 4 long-answer questions, though: the 2008 students did better than their 2010 counterparts.
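To be concrete about what "no significant difference" means operationally, here is a minimal sketch of how the two years could be compared on a single re-used question. The file names, one-score-per-line format and the choice of Welch's t-test are my assumptions for illustration, not the analysis actually reported in the post.

```python
# Sketch only: compare 2008 vs 2010 scores on one re-used exam question.
import numpy as np
from scipy import stats

scores_2008 = np.loadtxt("q01_2008.txt")  # per-student scores on this question
scores_2010 = np.loadtxt("q01_2010.txt")

def mean_and_sem(x):
    """Mean and standard error of the mean (the error bars in the figure)."""
    return x.mean(), x.std(ddof=1) / np.sqrt(len(x))

m08, sem08 = mean_and_sem(scores_2008)
m10, sem10 = mean_and_sem(scores_2010)

# Welch's t-test (no equal-variance assumption): is the year-to-year
# difference on this question statistically significant?
t_stat, p_value = stats.ttest_ind(scores_2008, scores_2010, equal_var=False)
print(f"2008: {m08:.2f} ± {sem08:.2f}  2010: {m10:.2f} ± {sem10:.2f}  p = {p_value:.3f}")
```

Repeating something like this for each of the 14 questions would reproduce the kind of comparison shown in the figure.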

Two things jumped out at me:

  1. Why did they do so much better on the long-answer questions? I said we used the same marking rubric, but we didn't use the same markers. A team of teaching assistants marked the 2010 exams; I (re)marked the 2008 exams. The long-answer questions are worth 10 marks, so a little more (or less) generosity in marking, half a mark here and half a mark there, could make a difference. I really need to get the same TAs to re-mark the 2008 exams. Yeah, like that's gonna happen voluntarily. Hmm, unless there's potential for a publication in the AER…
  2. Why, oh why, didn’t they do better this year? Even if we omit the suspicious long-answer marks and look only at the multiple-choice questions, there is no difference. Did we fail?

No, it's not a failure. The instructor reduced her lecturing time by 35%. We asked the students to spend 35% of their time teaching themselves, and it did no harm. The instructor enjoyed this class much more than in 2008. We had consistent 75% attendance (it was much lower by the end of the term in 2008) and students were engaged with the material each and every class. I think that's a success.

The next step in this experiment is to look for retention. There is evidence in physics (see Pollock & Chasteen, "Longer term impacts…") that students who engage with the material and generate their own knowledge retain it longer. With that in mind, I hope to re-test these 2010 students with the LSCI in about 3 months, after they've had a term to forget everything. Or maybe not…

But did they learn anything?

The course transformations I work on through the Carl Wieman Science Education Initiative (CWSEI) in Physics and Astronomy at UBC are based on a 3-pillared approach:

  1. figure out what students should learn (by writing learning goals)
  2. teach those concepts with research-based instructional strategies
  3. assess whether they learned the concepts in (1) via the strategies in (2)

Now that we've reached the end of the term, I'm working on Step 3. I'm mimicking the assessment described by Prather, Rudolph, Brissenden and Schlingman, "A national study assessing the teaching and learning of introductory astronomy. Part I. The effect of interactive instruction," Am. J. Phys. 77(4), 320-330 (2009). They looked for a relationship between the normalized learning gain on a particular assessment tool, the Light and Spectroscopy Concept Inventory (LSCI), and the fraction of class time spent on interactive, learner-centered activities. They collected data from 52 classes at 31 institutions across the U.S.

The result is not a clear "more interaction = higher learning gain" relationship, as one might naively expect. It's a bit more subtle:

Learning gain on the LSCI versus Interactive Assessment Score, essentially the fraction of class time spent on interactive instruction. Each point represents one class with at least 25 students (Prather et al., 2009). Our UBC result from the Sep-Dec 2010 term is shown in green.

The key finding is this: to get learning gains above 0.30 (meaning that over the course of the term, the students learn 30% of the material they didn't know coming in; not a bad target), classes must be at least 0.25, or 25%, interactive. In other words, if your class is less than 25% interactive, you are unlikely to get learning gains above 0.30, at least as measured by this particular tool.

Notice it does not say that highly interactive classes guarantee learning: there are plenty of highly interactive classes with low learning gains.

Back in September, I started recording how much time we spent on interactive instruction in our course, ASTR 311. Between think-pair-share clicker questions, Lecture-tutorial worksheets and other types of worksheets, we spent about 35% of total class time on interactive activities.

We ran the LSCI as a pre-test in early September, long before we'd talked about light and spectroscopy, and again as a post-test at the end of October, after the students had seen the material in class and in a 1-hour hands-on spectroscopy lab. The learning gain across 94 matched pairs of tests (that is, using the pre- and post-test scores only for students who wrote both tests) came out to 0.42. Together, these statistics put our class nicely in the upper end of the study. They certainly support the 0.30/25% result.
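For concreteness, here is a minimal sketch of the matched-pairs gain calculation. The file name and CSV layout are hypothetical, and I'm assuming the gain is computed from class-average scores (averaging individual students' gains is another common convention; the post doesn't say which was used).

```python
# Sketch only: assumes a CSV of matched pairs (one row per student who wrote
# both tests) with columns student, pre, post; scores in percent, no header.
import numpy as np

data = np.genfromtxt("lsci_matched.csv", delimiter=",",
                     names=("student", "pre", "post"))

pre = data["pre"].mean()    # class-average pre-test score (%)
post = data["post"].mean()  # class-average post-test score (%)

# Normalized gain: fraction of the initially unknown material that was learned.
gain = (post - pre) / (100.0 - pre)
print(f"N = {len(data)}, <pre> = {pre:.1f}%, <post> = {post:.1f}%, g = {gain:.2f}")
```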

Cool.

Okay, so they learned something.  How come?

The next step is to compare student performance before and after this term's course transformation. We don't have LSCI data from previous years, but we do have old exams. On this term's final exam, we purposely re-used a number of questions from the pre-transformation exam. I just need to collect some data, which means re-marking the old final exam using this year's marking scheme. Ugh. That's the subject of a future post…
