
The Power of Misconception

“Misconception” is one of those words that makes you slump your shoulders and sigh. It’s not inspiring like “creativity” or “glee.” In fact, in education circles we often resort to “alternate conception” so we’re not starting the conversation at a bad place.

In this post, I want to share with you some beautiful new research on how misconception affects teaching and learning.

In the 6 March 2013 issue of the American Educational Research Journal, Philip M. Sadler, Gerhard Sonnert, Harold P. Coyle, Nancy Cook-Smith and Jaimie L. Miller describe “The Influence of Teachers’ Knowledge on Student Learning in Middle School Physical Science Classrooms”. Those of us in astronomy education immediately recognize Phil Sadler. His “A Private Universe” video is a must-see for every astronomy instructor, K-12 and beyond.

Here’s what Sadler et al. did in the present study.

They created a 20-question, multiple-choice quiz based on concepts taught in middle school science: properties and changes in properties of matter, motion and forces, and transfer of energy. They chose concepts where kids have a common misconception, for example,

Electrical circuits provide a means of transferring electrical energy when heat, light, sound and chemical changes are produced (with common misconception that electricity behaves in the same way as a fluid.) (p. 12)

With the test in hand, they recruited hundreds of seventh and eighth grade science teachers in 589 schools across the U.S. They asked them to give the test at the beginning, middle and end of the year, to look for signs of learning. By the end of testing, there were matching sets of tests from 9556 students and 181 teachers. In other words, a big enough N that the data could mean something.

By looking at the students’ responses, the authors were able to classify the 20 questions into 2 types:

  • for 8 questions, some students got them right, some got them wrong, with no pattern in the wrong answers. They call these “no misconception” questions.
  • for 12 questions, when students got them wrong, 50% or more chose the same incorrect answer, a carefully chosen distractor. These questions are called “strong misconception” questions. (See the code sketch just after this list.)
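If you like to think in code, the classification rule might look something like this minimal Python sketch. It’s my own back-of-the-envelope version, not the authors’ analysis, and all the names in it are made up:

```python
# Classify one question from the pattern of student answers.
from collections import Counter

def classify_question(answers, right_answer, threshold=0.5):
    """Label a question by how its wrong answers cluster."""
    wrong = [a for a in answers if a != right_answer]
    if not wrong:
        return "no misconception"   # nobody got it wrong
    top_count = Counter(wrong).most_common(1)[0][1]
    # If half or more of the wrong answers pile onto a single distractor,
    # that distractor likely encodes a shared misconception.
    if top_count / len(wrong) >= threshold:
        return "strong misconception"
    return "no misconception"

# Example: six students answer a question whose correct choice is "A".
print(classify_question(["A", "B", "B", "B", "A", "C"], "A"))
# -> strong misconception (3 of the 4 wrong answers chose "B")
```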

Sadler et al. also had the students write math and reading tests. From their scores, the students were classified as “high math and reading” or “low math and reading”.

They did something else, too, and this is what makes this study interesting. They asked the teachers to write the test. Twice. The first time, the teachers answered as best they could. Their scores are a measure of their subject matter knowledge (SMK). The second time, the teachers were asked to identify the most common wrong answer for each question. How often a teacher could identify the common wrong answer on the strong misconception questions gives that teacher’s knowledge of student misconceptions (KoSM) score.
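Here is how I picture the two teacher scores being computed, again as a rough Python sketch reconstructed from the paper’s description; the data structures are invented:

```python
# Hypothetical data structures: correct[q] is the right answer for
# question q; common_wrong[q] is the most common student error, defined
# only for the strong misconception questions.

def score_teacher(first_pass, second_pass, correct, common_wrong):
    """Return (SMK, KoSM) as fractions between 0 and 1."""
    # SMK: how often the teacher chose the right answer on the first pass.
    smk = sum(first_pass[q] == correct[q] for q in correct) / len(correct)
    # KoSM: how often, on the second pass, the teacher identified the
    # most common wrong answer on the strong misconception questions.
    kosm = sum(second_pass[q] == common_wrong[q]
               for q in common_wrong) / len(common_wrong)
    return smk, kosm

# Example with two questions, one of which (Q2) has a strong misconception:
correct      = {"Q1": "A", "Q2": "C"}
common_wrong = {"Q2": "B"}
print(score_teacher({"Q1": "A", "Q2": "C"},   # pass 1: both right
                    {"Q1": "D", "Q2": "B"},   # pass 2: spotted Q2's distractor
                    correct, common_wrong))   # -> (1.0, 1.0)
```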

With me so far? Students with high or low math and reading skills have pre- and post-scores to measure their science learning gain. Teachers have SMK and KoSM scores.

Do you see where this is going? Good.

There’s a single graph in the article that encapsulates all the relationships between student learning and teachers’ SMK and KoSM. And it’s a doozy of a graph. Teaching students how to read graphs, or more precisely, teaching instructors how to present graphs so students learn how to interpret them, is something I often think about. So, if you’ll permit me, I’m going to present Sadler’s graph like I’d present it to students.

First, let’s look at the “architecture” of the axes before we grapple with the data.

Let’s look at the axes of the graph first, before the data overwhelm us. SMK = teachers’ subject matter knowledge; KoSM = teachers’ knowledge of student misconceptions. (Adapted from Sadler et al. (2013))
The x-axis gives the characteristics of the science teachers (no SMK,…, SMK & KoSM) who taught the concepts for which students show no misconception or strong misconception. Why are there 3 categories for Strong Misconception but only 2 for No Misconception? Because there is no misconception, and hence no KoSM, on the No Misconception questions. What about the missing “KoSM only” condition? There were no teachers who had knowledge of the misconceptions but no subject matter knowledge. Good questions, thanks.

The y-axis measures how much the students learned compared to their knowledge on the pre-test given at the beginning of the school year. This study does not use the more common normalized learning gain, popularized by Hake in his “Six-thousand student” study. Instead, student learning is measured by effect size, in units of the standard deviation of the pre-test. An effect size of 1, for example, means the average of the post-test is 1 standard deviation higher than the average of the pre-test, as illustrated in the d=1 panel of the Cohen’s d figure on Wikipedia (CC). Regardless of the units, the bigger the number on the y-axis, the more the students learned from their science teachers.
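To make the distinction between the two measures concrete, here’s a tiny Python calculation of both. The scores are invented for illustration; they’re not from the study:

```python
# Effect size vs. Hake's normalized gain, on made-up scores (percent).
from statistics import mean, stdev

pre  = [35, 40, 45, 50, 55]   # hypothetical pre-test scores
post = [55, 60, 62, 70, 73]   # hypothetical post-test scores

# Effect size: the gain in units of the pre-test standard deviation.
d = (mean(post) - mean(pre)) / stdev(pre)

# Hake's normalized gain: the fraction of possible improvement achieved.
g = (mean(post) - mean(pre)) / (100 - mean(pre))

print(f"effect size d = {d:.2f}, normalized gain g = {g:.2f}")
# -> effect size d = 2.40, normalized gain g = 0.35
```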

And now, the results

This is my post so I get to choose in which order I describe the results, in a mixture of the dramatic and the logical. Here’s the first of 4 cases:

Students who scored low on the reading and math tests didn't do great on the science test, though the ones who had knowledgeable teachers did better. (Graph adapted from Sadler et al. (2013))

The students who scored low on the reading and math tests didn’t do great on the science test either, though the ones who had knowledgeable teachers (SMK) did better. Oh, don’t be misled into thinking the dashed line between the circles represents a time series, showing students’ scores before and after. No, the dashed line is there to help us match the corresponding data points when the graph gets busy. The size of the circles, by the way, encodes the number of teachers with students in the condition. In this case, there were not very many teachers with no SMK (small white circle).

Next, here are the learning gains for the students with low math and reading scores on the test questions with strong misconceptions:

Students with low math and reading scores did poorly on the strong misconception questions, regardless of the knowledge of their teachers. (Adapted from Sadler et al. (2013))

Uh-oh, low gains across the board, regardless of the knowledge of their teachers. Sadler et al. call this “particularly troubling” and offer these explanations:

These [strong misconception questions] may simply have been misread, or they may be cognitively too sophisticated for these students at this point in their education, or they may not have tried their hardest on a low-stakes test. (p. 22)

Fortunately, the small size of the circles indicates there were not many of these.

What about the students who scored high on the math and reading tests? First, let’s look at their learning gains on the no-misconception questions. [Insert dramatic drum-roll here because the results are pretty spectacular.]

Students with knowledgeable teachers exhibited huge learning gains. (Adapted from Sadler et al. (2013))

Both black circles are higher than all the white circles: Even the students with less-knowledgeable teachers (“no SMK”) did better than all the students with low math and reading scores. The important result is how much higher students with knowledgeable teachers scored, represented by the big, black circle just north of effect size 0.9. Science teachers with high subject matter knowledge helped their students improve by almost a full standard deviation. Rainbow cool! The large size of that black circle says this happened a lot. Double rainbow cool!

Finally we get to the juicy part of the study: how does a teacher’s knowledge of the students’ misconceptions (KoSM) affect their students’ learning?

Subject matter knowledge alone isn’t enough. To get significant learning gains in their students, teachers also need knowledge of the misconceptions. (Adapted from Sadler et al. (2013))

Here, students with knowledgeable teachers (I guesstimate the effect size is about 0.52) do only slightly better than students with less knowledgeable teachers (effect size around 0.44). In other words, on concepts with strong misconceptions, subject matter knowledge alone isn’t enough. To get significant learning on these strong misconception concepts, way up around 0.70, teachers must also have knowledge of those misconceptions.

Turning theory into practice

Some important results from this ingenious study:

  • Students with low math and reading skills did poorly on all the science questions, regardless of the knowledge of their teachers, once again demonstrating that math and reading skills are predictors of success in other fields.
  • Teachers with subject matter knowledge can do a terrific job teaching the concepts without misconceptions, dare we say, the straightforward concepts. On the trickier concepts, though, SMK is not enough.
  • Students bring preconceptions to the classroom. To be effective, teachers must have knowledge of their students’ misconceptions so they can integrate that (mis)knowledge into the lesson. It’s not good enough to know how to get a question right — you also have to know how to get it wrong.

Others, like Ed Prather and Gina Brissenden (2008), have studied the importance of teachers’ pedagogical content knowledge (PCK). This research by Sadler et al. shows that knowledge of students’ misconceptions most definitely contributes to a teacher’s PCK.

If you use peer instruction in your classroom and you follow what Eric Mazur, Carl Wieman, Derek Bruff and others suggest, the results of this study reinforce the importance of using common misconceptions as distractors in your clicker questions. I’ll save that discussion for another time, though; this post is long enough already.

Epilogue

Interestingly, knowledge of misconceptions is just what Derek Muller has been promoting over at Veritasium. The first minute of this video is about Khan Academy but after that, Derek describes his Ph.D. research and how teachers need to confront students’ misconceptions in order to get them to sit up and listen.


If you’ve got 8 more minutes, I highly recommend you watch. Then, if you want to see how Derek puts it into practice, check out his amazing “Where Do Trees Get Their Mass From?” video:

Update 6/6/2013 – I’ve been thinking about this paper and post for 3 months and only today finally had time to finish writing it. An hour after I clicked Publish, Neil Brown (@twistedsq on Twitter) tweeted me to say he also, today, posted a summary of Sadler’s paper. You should read his post, too, “The Importance of Teachers’ Knowledge.” He’s got a great visual for the results.

Another Update 6/6/2013 – Neil pointed me to another summary of Sadler et al. by Mark Guzdial (@guzdial on Twitter), “The critical part of PCK: What students get wrong”, with links to computer science education.

Gearing up for #etmooc

Let’s use technology in class for learning. (Image adapted from picture by Ed Yourdon on flickr CC)

You know what makes me cringe? When a professor complains about his students not paying attention in class “because they’re on their computers [dramatic pause] facebooking!”

My instinctive response is to ask

  1. Do you know they’re on facebook and not working on an essay or checking their email or watching sports? Don’t presume to know what your students are doing when they’re not entranced by your presentation.
  2. And just why do you think that is, anyway? Why don’t they feel the need to engage with the concepts you’re lecturing about? Hint: it probably has something to do with “you’re *lecturing* about”.
  3. Why do you believe laptops and smartphones in class are evil?

I don’t actually say these things, though. That would be bad for recruiting faculty into committing their time and energy to transforming their instructor-centered lectures into student-centered instruction.

Instead, I just grimace, shake my head a bit, and say, “—”. Honestly, I don’t really know what to say to spark the conversation that is the first step of changing their misconceptions about computers and smartphones in the classroom.

I have a vision of what I’d like to see in university classes when it comes to technology:

I want every student so engaged with the material and actively constructing their own understanding that they have neither the time nor the desire to disengage to check their smartphones, or

I want to see everyone using their smartphones and laptops for learning: googling things, running simulations, writing a googledoc with the rest of the class, tweeting the expert in the field, finding a Pinterest collection,…

That’s a long way from a grimace and a head shake. What I need are the words, concepts and tools that can bring technology into education in an effective and efficient way.

(etmooc badge from etmooc.org)

Which is why I’m so excited about #etmooc. It’s a massive, open, online course (mooc) about educational technology and media, starting in January 2013. I’m interested in the content and tools we’ll be exploring. (Psst — and secretly, I’m interested in watching how the thing runs. If there’s anyone who can figure out how to make a mooc effective, it’s Alec Couros @courosa and the team he’s assembled.)

Each participant (there are over 1200 of us now) will be using their own blog to post reflections, opinions, and whatever else the course has in store for us. I’ll be tagging all my posts with etmooc so they’re easier to find.

A Tale of Two Comets: Evidence-Based Teaching in Action

Comet McNaught wowed observers in the Southern Hemisphere in 2007. (Image by chrs_snll on flickr CC)

We often hear about “evidence-based teaching and learning.” In fact, it’s a pillar of the approach to course development and transformation that we follow in the Carl Wieman Science Education Initiative.

It’s a daunting phrase, though, “evidence-based teaching and learning.” It sounds like I have to find original research in a peer-reviewed article, read and assimilate the academic prose, and find a way to apply that in my classroom. Does a typical university instructor have the time or motivation? Not likely.

It doesn’t have to be like that, though. There are quicker, easier analyses and subsequent modifications of materials that, in my opinion, qualify as evidence-based teaching. Let me share with you an example from an introductory, general-ed “Astro 101” astronomy course. First, a bit of astronomy.

Comets and their tails

Comets are dusty snowballs of water ice and other material left over from the formation of the Solar System. The comets we celebrate, like Comet Halley, travel along highly elongated, elliptical orbits that extend from the hot, intense region near the Sun to the cold outer regions of the Solar System.

A comet's tails point away from the Sun. The comet is orbiting clockwise in this diagram so the yellow dust tail trails slightly behind the blue ion tail.

As comets approach the Sun, like Comet Halley does every 76 years, the comet’s nucleus warms up. The ice turns to gas which creates a sometimes-spectacular tail. The tail grows larger and larger, streaming out behind the comet until it rounds the Sun and begins to head back out into the Solar System. That’s when something interesting happens. Well, another interesting thing, that is. You may think the comet’s tail streams out behind like the exhaust trail (the contrail) of an airplane but once the comet rounds the Sun, the tail swings around ahead of the comet. Yes, the nucleus follows the tail. That’s because the tail is blown outward by the solar wind so that the tail of a comet always points away from the Sun. (Well, there are actually 2 tails. The ion tail is strongly influenced by the solar wind – it’s the one blown directly away from the Sun. The dust tail also interacts gravitationally with the Sun, causing it to curl out behind the ion tail.)

Teaching and learning

It’s not what you’d expect, the tail wagging the dog. And that makes it a great opportunity for peer instruction and follow-up summative assessment.

Last December, the course’s instructor and I sat down to write the final exam. We could have used a multiple-choice question:

The ion tail of a comet always…
A) points away from the Sun
B) trails behind the comet
C) D) E) [other distractors]

Or perhaps a more graphical version, like this one from the ClassAction collection of concept questions:

A concept question about the shape of a comet's tail from the ClassAction collection at the University of Nebraska - Lincoln. The correct answer is C, by the way.

Both of these questions are highly susceptible to success-by-recognition, where the student doesn’t really know the answer until s/he recognizes it in the options. “What do comets’ tails do again? Oh right, they point away from the Sun.”

Instead, we decided on a question that better assessed their grasp of how comet tails behave. The cost is that this question is more difficult to mark:

Assessment

Oh, the question was marked out of 2, 1 pt for each tail pointing away from the Sun. That’s not the kind of assessment I mean, though. I’m talking about the assessment that goes into evidence-based teaching and learning. How did the students respond to this question? Was it a good test of their understanding?

I went through the stack of N=63 exams and sorted them into categories. It wasn’t hard to come up with those categories; they were pretty obvious after the first 10 papers.

  • 46 students: tails of equal lengths pointing away from the Sun. Yep, 2 out of 2.
  • 5 students: tails of equal lengths pointing away from the Sun with guidelines. Nice touch, reinforcing why you drew the tails the way you did. 2 out of 2. And some good karma in case you need the benefit of the doubt later on the exam.
  • 3 students: drew ion tail correctly and dust tail mostly correct. Good karma for adding extra detail, though the dust tail is too much traily-behindy. Be careful, kids, when you write more than is asked for – you could lose marks.
  • 1 student: tails with (correctly) unequal lengths pointing away from the Sun. Oh, very good! Maybe 3 out of 2 for this answer!
  • 8 students: various incorrect answers. I like this first one (“Oh, geez, there’s something about pointing and the Sun, isn’t there? Ummm…”)

Evidence-based teaching

It’s clear that the vast majority of students grasp the concept that a comet’s tail points away from the Sun. Terrific!

So why are we wasting this question on such an obvious bit of information, then? Let me put that another way: These students are evidently, and I mean evidently, capable of learning more about comets. We thought this <ghost> “Oooooo, watch oouuttt! Comet tails point awaaaaayyyy from the Suuuun…” </ghost> concept would be difficult enough. Nope, they surprised us. So let’s crank it up next year. Let’s explore the difference between the ion and dust tails. And how the length of the tail changes as the comet approaches and recedes from the Sun. Next year, the answer that gets full marks will be the one with

  • 2 tails at each position,
  • the ion tail pointing away from the Sun,
  • the dust tail lagging slightly behind the ion tail,
  • short tails at the far location, large tails at the close location.

That’s evidence-based teaching and learning. Find out what they know and then react by building on it and leveraging it to explore the concept deeper (or shallower, depending on the evidence). It’s not difficult. It doesn’t require poring over Tables of Contents, even in the excellent Astronomy Education Review. All it requires is a small amount of data collection, analysis and the ability to use the information. Hey, those are all qualities of a good scientist, aren’t they?
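By the way, the whole tally-and-percentages step for this exam question fits in a few lines of code. Here’s a quick Python version using the counts from the list above (the category labels are my shorthand):

```python
# Tally the answer categories from the stack of 63 exams.
categories = {
    "equal tails, away from the Sun": 46,
    "equal tails, away from the Sun, with guidelines": 5,
    "ion tail correct, dust tail mostly correct": 3,
    "unequal tails, away from the Sun": 1,
    "various incorrect answers": 8,
}
n = sum(categories.values())   # 63 exams in total

for label, count in categories.items():
    print(f"{label}: {count}/{n} = {count/n:.0%}")

grasped = n - categories["various incorrect answers"]
print(f"grasped the basic concept: {grasped}/{n} = {grasped/n:.0%}")  # ~87%
```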
