But did they learn anything?

The course transformations I work on through the Carl Wieman Science Education Initiative (CWSEI) in Physics and Astronomy at UBC are based on a 3-pillared approach:

  1. figure out what students should learn (by writing learning goals)
  2. teach those concepts with research-based instructional strategies
  3. assess whether they learned (1) via (2)

Now that we’ve reached the end of the term, I’m working on Step 3. I’m mimicking the assessment described by Prather, Rudolph, Brissenden and Schlingman, “A national study assessing the teaching and learning of introductory astronomy. Part I. The effect of interactive instruction,” Am. J. Phys. 77(4), 320-330 (2009) [link to PDF]. They looked for a relationship between the normalized learning gain on a particular assessment tool, the Light and Spectroscopy Concept Inventory [PDF], and the fraction of class time spent on interactive, learner-centered activities. They collected data from 52 classes at 31 institutions across the U.S.

The result is not a clear “more interaction = higher learning gain” trend, as one might naively expect. It’s a bit more subtle:

Learning gain on the LSCI vs. interactive assessment score, essentially the fraction of class time spent on interactive instruction. Each point represents one class with at least 25 students (Prather et al., 2009). Our UBC result from the Sep-Dec 2010 term is shown in green.

The key finding is this: In order to get learning gains above 0.30 (which means that over the course of the term, the students learn 30% of the material they didn’t know coming in) — and 0.30 is not a bad target — classes must be at least 0.25 or 25% interactive.  In other words, if your class is less than 25% interactive, you are unlikely to get learning gains (yes, as measured by this particular tool) above 30%.
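
To make the 0.30 concrete: the normalized learning gain used here is the class’s improvement expressed as a fraction of the maximum possible improvement, computed from the class-average pre- and post-test percentages:

\[
\langle g \rangle = \frac{\langle \mathrm{post} \rangle - \langle \mathrm{pre} \rangle}{100\% - \langle \mathrm{pre} \rangle}
\]

For example, a class that averages 40% on the pre-test and 58% on the post-test has \(\langle g \rangle = (58 - 40)/(100 - 40) = 0.30\): the students learned 30% of what they didn’t already know coming in.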

Notice it does not say that highly interactive classes guarantee learning — there are plenty of highly-interactive classes with low learning gain.

Back in September, I started recording how much time we spent on interactive instruction in our course, ASTR 311. Between think-pair-share clicker questions, Lecture-tutorial worksheets and other types of worksheets, we spent about 35% of total class time on interactive activities.

We ran the LSCI as a pre-test in early September, long before we’d talked about light and spectroscopy, and again as a post-test at the end of October, after the students had seen the material in class and in a 1-hour hands-on spectroscopy lab. The learning gain across 94 matched pairs of tests (that is, using the pre- and post-test scores only for students who wrote both tests) came out to 0.42. Together, these statistics put our class nicely in the upper end of the study. They certainly support the 0.30/25% result.
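
For the curious, here is a minimal sketch of that matched-pairs calculation. The scores and student IDs below are made up for illustration (the real analysis used the 94 matched LSCI tests), and the gain is computed from the class-average pre- and post-test scores:

```python
# Toy matched-pairs normalized-gain calculation (hypothetical data).
pre_scores = {"s001": 35.0, "s002": 50.0, "s003": 40.0}   # pre-test scores, %
post_scores = {"s001": 60.0, "s002": 75.0, "s004": 55.0}  # post-test scores, %

# Keep only students who wrote both tests ("matched pairs").
matched = pre_scores.keys() & post_scores.keys()

pre_mean = sum(pre_scores[s] for s in matched) / len(matched)
post_mean = sum(post_scores[s] for s in matched) / len(matched)

# Normalized gain: fraction of the initially unknown material that was learned.
gain = (post_mean - pre_mean) / (100.0 - pre_mean)
print(f"{len(matched)} matched pairs, normalized gain = {gain:.2f}")
```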

Cool.

Okay, so they learned something.  How come?

The next step is to compare student performance before and after this term’s course transformation. We don’t have LSCI data from previous years, but we do have old exams. On this term’s final exam, we purposely re-used a number of questions from the pre-transformation exam. I just need to collect some data – which means re-marking last year’s final exam using this year’s marking scheme. Ugh. That’s the subject of a future post…

Wasn’t expecting Him in class

In the #astro101 class I’m working on, we just reached the “what is life” section. Great timing, considering the new @NASA astrobiology discovery of a bacterium that, unlike every other living creature, uses arsenic instead of phosphorus in its DNA.

We were going to have a PPT slide that listed 4 “generally agreed-upon” characteristics of life:

Four “generally agreed-upon” characteristics of “life”. Kind of a boring PPT slide for such an interesting topic, no?

<Yawn> I suggested to the course instructor that we turn it into a #clicker question, to get the students to think critically about each characteristic and then compare them to what they think “life” means:

The same content posed as a clicker question to, er, lure the students into thinking about each characteristic.

I intentionally added the last choice “E) other ______” so students could add their own ideas. The instructor and I talked about it ahead of time, and agreed that if students chose E), we’d invite them to share their ideas with the class.

Fast forward to class. We posed the question, not as a full think-pair-share sequence, but simply by inviting the students to discuss it with their neighbours. Then they voted.

Students’ votes for A, B, C, D, E.

Excellent – 4 others. Wonder what they are?

“What other characteristics should a life form have?”

Then the shocker. From the back of the room comes

“God!”

In hindsight, we should have expected that! But we weren’t prepared for it. Kudos to the instructor, though: without even a pause, she replied, “Well, we’re not going to add religion and philosophy to this science class. Okay, let’s see how these 4 characteristics apply…”

The student’s answer was a great one. It told us he’d thought about the question we posed and compared it to his own knowledge, experience and beliefs. Who could ask for anything more? Be warned, though: if you want to invite your students to bring their religion into your astronomy class, be prepared – you can’t just wing it. (I did that once. Big mistake. Made me look pretty – no, make that very – ignorant.) And if you’re not familiar with the spectrum of religious beliefs in your classroom, you might want to reconsider the conversation before you start it. Why not be up front about it with your students:

Whenever people talk about the origin of life, some will undoubtedly want to include their religious beliefs. In this class, though, we’re going to stick to the scientific aspects of the discussion, the aspects that can be predicted, observed, proved or disproved by the scientific method. Now, about those scientific characteristics of life…

Clicker questions should be integrated, not jammed in later

The CWSEI group at UBC gets together every week to discuss a journal article. This week, it was a new article by Melissa Dancy and Charles Henderson, “Pedagogical practices and instructional change of physics faculty,” Am. J. Phys. 78 (2010).

One of the questions explored in the paper is, why don’t physics faculty members adopt the research-based instructional strategies that so many have already heard of? Mazur-style peer instruction (PI) using clickers, for example.

Dancy & Henderson discovered that nearly two-thirds (64%) of the 722 faculty who completed their survey were familiar with PI and 29% actually used it in their classes. But on further probing, it turned out only 27% of that 29% (we’re down to about 8% now) had students discussing ideas and solving problems multiple times per class. It appears that a lot of physics faculty members equate “peer instruction” with “yeah, I’ve got clickers in my class.” The technology is there but it’s not being implemented in a way that promotes learning.
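
Spelling out that arithmetic: 29% of the 722 respondents report using PI, and only 27% of that group have students discussing and solving problems multiple times per class, so the fraction of all respondents doing so is roughly

\[
0.29 \times 0.27 \approx 0.08 .
\]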
