
Constructing your own knowledge is not "edu-babble"

First, a disclosure: I’d love to pepper this posting with links to journal articles here, there and everywhere. But the truth is, if I try to do that, I’ll never get it written. If only I had a massive library of refs in my head like some of my colleagues. So here goes the “I’ll add refs later” version.

On February 8, the Vancouver Sun published a column by Michael Zwaagstra entitled “Purdue University study confronts edu-babble” (Hat-tip to @chrkennedy.)

<raised>hackles</raised>

The lead paragraph concludes

Instead of telling students what they need to learn, teachers should encourage them to construct their own understanding of the world around them. The progressive approach to education is far more useful to students than the mindless regurgitation of mere facts.”

A reasonable philosophy. One I agree with, in fact. And no, I didn’t forget to copy the opening quotation mark. It was omitted. Maybe that’s Vancouver Sun style. Or maybe it’s to hide the fact that this paragraph is a strawman about to be knocked down by the author, who begins his actual column with

“Anyone involved in education knows these types of edu-babble statements are often heard in teacher-training institutions. Education professors continually push teachers to move away from traditional methods of instruction.”

The author goes on to cite a new study by Jeffrey D. Karpicke and Janell R. Blunt, “Retrieval Practice Produces More Learning than Elaborative Studying with Concept Mapping,” published online in Sciencexpress (20 Jan 2011) and in Science (11 Feb 2011). Let me describe that research first and then come back to how Zwaagstra presented it.

Karpicke and Blunt did quite a nice study comparing, among other things, the final test scores of four groups of students:

  1. study-once: students studied the text in 1 study session
  2. repeated study: students studied the text in 4 consecutive study sessions
  3. elaborative studying with concept mapping: after instruction on how to create a concept map, students created concept maps of the concepts in the text. This activity plays the role of “constructing their own knowledge” in the journal article and Zwaagstra’s newspaper column.
  4. retrieval practice: students studied the text in one study session, then practiced retrieval by trying to recall as much as they could. Then they restudied and recalled a second time. The authors made sure the students in this group and the concept mapping group had the same time-on-task.

When these learning activities were complete, all students wrote the same short-answer test, which contained both “verbatim” questions testing knowledge stated in the text and “inference” questions that required students to assemble various facts. On both types of questions, the retrieval practice group scored the highest, followed by the repeated study, concept mapping and study-once groups, and the retrieval practice scores were statistically significantly higher than the others. The article goes on to describe how they replicated the study, with similar results.

Hmm, interesting result. I wonder… no, sorry, back to the Vancouver Sun article.

Fine. Studying helps students succeed on tests. No one would argue against that. And concept mapping certainly has its strengths, but it is just one approach to “constructing your own understanding.”

Zwaagstra uses the Purdue result to support the practice of testing students regularly on content knowledge. No problem with that. And that provinces abandoning standardized testing are falling prey to an “anti-testing mantra”. Hmm, not sure about that. And that learner-centered instruction is “edu-babble”. Okay, that pissed me off.

I’m relieved to say I wasn’t the only one, based on the handful of RTs and replies I received from @cpm5280, @mcshanahan, @ScientificChick, @chrkennedy, @sparkandco and @derekbruff, all tweeps whose opinions I value.

Right – everyone is entitled to their opinion. Zwaagstra is sharing his, just like I’m sharing mine. But wait, this isn’t an opinion piece – it’s a newspaper report.

Well, in fact, a friend tells me the online Vancouver Sun just tacks “Vancouver Sun” credentials onto the author. At the bottom of the article, we discover Mr. Zwaagstra is a research fellow with the Frontier Centre for Public Policy, a “think-tank” [their quotes] supporting Canada’s prairie provinces. So this is not an objective piece of journalism about a new result in education research. It’s an opinion piece written on behalf of the Frontier Centre to support their philosophy. The Vancouver Sun should have made that a lot clearer. And did they really have to use the most sensational word in the entire story, “edu-babble”, in the headline? How about a little less tabloid next time, huh? In hindsight, maybe that pissed me off just as much as Zwaagstra’s lampooning of decades of education research and practice.

So, I’ll stay vigilant about stories that misrepresent science. But in the end, I’ll also follow Derek Bruff’s advice.

Evidence of Learning in Astronomy

Throughout the Sep-Dec 2010 term, I worked with an astronomy instructor to create a more learner-centered classroom. As I described elsewhere, we spent just over one third of the instructional time on interactive activities: think-pair-share using clickers, lecture-tutorial worksheets, ranking tasks and a couple of predict-observe demonstrations. And it resulted in a learning gain of 0.42 on a particular assessment tool, the LSCI (Light and Spectroscopy Concept Inventory). That means the students learned 42% of the concepts they didn’t already know at the beginning of the term. That’s not bad — we’re pretty happy with it.
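For anyone who wants the arithmetic behind that 0.42: it’s the usual normalized gain, (post minus pre) divided by (100 minus pre). Here’s a quick sketch of the calculation; the function name and the example scores are made up for illustration, not our actual LSCI numbers.

```python
# Quick sketch of the normalized gain behind the 0.42 figure.
# The function and the example class averages are illustrative,
# not our actual LSCI data.

def normalized_gain(pre_percent, post_percent):
    """Fraction of initially-unknown concepts learned: (post - pre) / (100 - pre)."""
    return (post_percent - pre_percent) / (100.0 - pre_percent)

# Hypothetical class averages: 30% on the pre-test, 59.4% on the post-test.
print(round(normalized_gain(30.0, 59.4), 2))  # 0.42
```

In other words, a gain of 0.42 says the class picked up 42% of the room they had to improve.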

So, students can learn in a learner-centered classroom. But maybe they can learn in a more traditional classroom, too.

We don’t have LSCI data from previous years (note to self: think ahead! Collect standardized assessment data on classes before attempting any transformations!). To investigate if transforming the instruction made any difference, we re-used, word-for-word, a handful of questions from the same instructor’s 2008 Final Exam (pre-transformation) on this term’s Final Exam: 10 multiple-choice questions and 4 longer-answer questions. We made sure the questions assessed the concepts we covered in 2010, in sync with the learning goals.

I extracted students’ marks on these 14 questions from the 2010 exams (N=144). And from the old, 2008 exams (N=107), being sure to re-mark the longer-answer questions using the 2010 rubric. (Note to self #2: buy aspirin on the way home.)

What were we hoping for? A significant increase in student success in the transformed, learner-centered course.

How I wish I could report that’s what we found. But I can’t. Because we didn’t. Here are the results:

Students’ scores on questions used on both the 2008 and 2010 Final Exams in the introductory astronomy course, ASTR 311. Error bars are standard error of the mean.

There is no significant difference in student success on the 10 multiple-choice questions. Their scores on the entire exams are also the same, though the exams are not identical; only about 1/4 of the 2008 exam was re-used in 2010. Nevertheless, these nearly identical exam scores suggest the populations of students in 2008 and 2010 are about the same. There are differences in the 4 long-answer questions: the 2008 students did better than their 2010 counterparts.
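For the curious: the error bars in the figure are standard errors of the mean, and a “significant difference” call like this one can be checked with something as simple as a two-sample t-test (I’m not claiming that’s exactly the analysis we ran). A rough sketch, with placeholder score arrays standing in for the real 2008 and 2010 marks:

```python
# Rough sketch of the 2008-vs-2010 comparison: means, standard error of the
# mean (the error bars in the figure), and a two-sample t-test.
# The score arrays below are placeholders, not the actual exam data
# (the real samples were N=107 in 2008 and N=144 in 2010).
import numpy as np
from scipy import stats

scores_2008 = np.array([6.5, 7.0, 8.0, 5.5, 7.5, 6.0])  # placeholder marks out of 10
scores_2010 = np.array([6.0, 7.0, 6.5, 5.0, 7.5, 6.5])  # placeholder marks out of 10

for label, scores in (("2008", scores_2008), ("2010", scores_2010)):
    sem = scores.std(ddof=1) / np.sqrt(len(scores))  # standard error of the mean
    print(f"{label}: mean = {scores.mean():.2f}, SEM = {sem:.2f}")

# Welch's two-sample t-test: is the difference between years significant?
t_stat, p_value = stats.ttest_ind(scores_2008, scores_2010, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```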

Two things jumped out at me:

  1. Why did they do so much better on the long-answer questions? I said we used the same marking rubric but we didn’t use the same markers. A team of teaching assistants marked the 2010 exams; I (re)marked the 2008 exams. The long-answer questions are worth 10 marks each, so a little more (or less) generosity in marking – half a mark here, half a mark there – could make a difference (see the quick sketch after this list). I really need to get the same TAs to remark the 2008 exams. Yeah, like that’s gonna happen voluntarily. Hmm, unless there’s potential for a publication in the AER…
  2. Why, oh why, didn’t they do better this year? Even if we omit the suspicious long-answer marks and look only at the multiple-choice questions, there is no difference. Did we fail?
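About the “half a mark here, half a mark there” worry in point 1, the back-of-the-envelope arithmetic looks something like this; the half-mark of extra generosity is purely hypothetical.

```python
# Back-of-the-envelope check for the marking-generosity worry in point 1.
# The question count and weight come from the exam; the extra half-mark
# per question is a hypothetical amount of marker generosity.

n_questions = 4           # long-answer questions re-used on both exams
marks_per_question = 10   # each long-answer question is worth 10 marks
extra_per_question = 0.5  # hypothetical extra half-mark from a more generous marker

shift = (n_questions * extra_per_question) / (n_questions * marks_per_question)
print(f"Shift in the long-answer score: {shift:.1%}")  # 5.0%
```

A consistent half-mark of generosity on each 10-mark question is enough to move the long-answer total by a few percent, which is why the different markers make me nervous.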

No, it’s not a failure. The instructor reduced her lecturing time by 35%. We asked the students to spend 35% of their time teaching themselves. And it did no harm. The instructor enjoyed this class much more than in 2008. We had consistent 75% attendance (it was much lower by the end of the term in 2008) and students were engaged in the material each and every class. I think that’s a success.

The next step in this experiment is to look for retention. There is evidence in physics (see Pollock & Chasteen, “Longer term impacts…” here) that students who engage with the material and generate their own knowledge retain it longer. With that in mind, I hope to re-test these 2010 students with the LSCI in about 3 months, after they’ve had a term to forget everything. Or maybe not…
