Tag: assessment

Engage EVERY student with a jigsaw

(This is a long, detailed post about creating and running a “jigsaw” activity. Mostly, I wrote it for myself before I forget all the details. Reinventing the wheel is bad enough – reinventing your own wheel is even worse!)

The other day, I ran a jigsaw activity in my teaching and learning course. Jigsaws are a great activity when you have a lot of content to cover across a number of contexts. My colleague, David J. Gross at UMass Amherst, explained it to me this way: Suppose your lesson is about 5 National Parks. A traditional lecture about those 5 Parks, with N PowerPoint slides giving the details about each Park, means 5N slidezzzzzzz.

Here’s how a jigsaw activity works. In Step 1, you group students together, with each group exploring one National Park. They become the local experts on that Park, working together to bring themselves up to a shared, higher level of knowledge:

In Step 1 of the jigsaw, these 20 students work in 5 groups to become experts on 5 different National Parks. (Figure by Peter Newbury CC-BY.)

In Step 2, you take it all apart and put it back together, like a jigsaw puzzle, so that each group has an expert about each of the 5 National Parks. In each group, they teach each other about each Park. In the end, every student has learned about each Park.

In Step 2 of the jigsaw, the students re-arrange themselves so each group has an expert about each National Park. (Figure by Peter Newbury CC-BY.)

Did you notice how much lecturing about National Parks the instructor did? Zero. Zippo. Zilch. Instead of a single long exposition by the instructor, there are 4 student-centered conversations happening in parallel. It might even take less class time, or, if the time is already allocated, it gives more time for each National Park.

Cool, huh? Instructor gets to do nothing!

Well, nothing except a whole lot of planning and choreographing so students stay engaged with the concepts instead of wondering what to do or wandering around looking for a group.

My jigsaw: Formative assessment that supports learning

In my teaching and learning class, we were discussing practice and formative feedback that supports learning. Following Chapter 5 of How Learning Works, instructors should ensure

  • practice is goal-directed
  • practice is productive
  • feedback is timely
  • feedback is at the appropriate level

To help explore these characteristics, I decided to use two tools:

analogy: How People Learn advises us that “students come to the classroom with preconceptions about how the world works” (p.14) and therefore, “[t]eachers must draw out and work with the preexisting understandings that their students bring with them.” (p.19) I wanted my students to think about those 4 characteristics first through their experiences of a sport or hobby and then in the context of teaching and learning.

contrasting cases: Again from How People Learn, “[t]eachers must teach some subject matter in depth, providing many examples in which the same concept is at work and providing a firm foundation of factual knowledge.” (p. 20) Contrasting cases are a way to present the same concept twice. And sometimes, a good way to figure out what something IS is to figure out what it’s NOT.

For each characteristic, like timely feedback, I wanted students to come up with scenarios of

  • untimely feedback in a sport/hobby experience (“bad, sport/hobby”)
  • timely feedback in a sport/hobby experience (“good, sport/hobby”)
  • untimely feedback in teaching and learning (“bad, teaching and learning”)
  • timely feedback in teaching and learning (“good, teaching and learning”)

That’s 4 characteristics x 4 scenarios each = 16 different scenarios in total. There’s NO WAY I’m going to make 16N slides and flick through them.

“Let’s jigsaw,” I said to myself. But how? How do I choreograph Step 1 (prepare expertise) and Step 2 (share expertise)? I started from the end and worked backwards.

Here’s what I wanted the Step 2 conversations to look like:

Each group has an expert about each characteristic, and they teach and learn from each other. (Photo by Peter Newbury CC-BY)

Each group would have one student sharing expertise about one of the characteristics:

  • practice is goal-directed (green)
  • practice is productive (blue)
  • feedback is timely (purple)
  • feedback is at the appropriate level (orange)

and each student would be prepared to share 4 scenarios:

I. “bad” in sport/hobby
II. “good” in sport/hobby
III. “bad” in teaching and learning
IV. “good” in teaching and learning

I have about 20 students in each session of the class, so that means I’ll have 5 groups at the end. If there are additional students #21, #22, and #23, they can double-up in some groups. As soon as I have 24 students, #21 thru #24 can form their own discussion group.

Look back at the picture of the final discussion groups showing Step 2 of the jigsaw activity. To create that (5 times), in Step 1 I’ll need 5 people teaching each other about green, 5 blue, 5 purple, and 5 orange.

Choreographing with Colored Paper

There’s a lot of “structure” that needs to be built into this activity:

  • each student is assigned to a characteristic / color
  • each student needs to know what their Step 1 discussion is about
  • students need to sit in one-color groups for Step 1
  • students need to move to every-color groups for Step 2
  • probably more…

I can’t waste a lot of time making this happen during class. What tools do I have at my disposal for structuring this activity? COLORED PAPER. (As simple as it sounds, colored paper is one of my favorite pieces of education technology.)

I created 4 worksheets, one for each characteristic, and copied them onto colored paper. I interlaced the worksheets and put the stack at the classroom door. I arranged the tables and chairs into 4 stations with 5-6 chairs each, and placed a colored sheet of paper on each station [Oh yeah, I forgot about that! That’s why I’m writing this.] When the students entered, they took the top worksheet and sat at that color’s station.

I copied 4 worksheets onto 4 colors of paper and interlaced the copies. As students grabbed the top sheet, they were perfectly divided into groups. (Photo by Peter Newbury CC-BY)

The ultimate goal is for us to have a class-wide discussion of good teaching practices to support learning. The jigsaw activity should prepare every student to contribute to that conversation but I didn’t want students to spend too much time in Step 2 sharing their experiences and ideas about sports/hobbies and about “bad” teaching practices. I also wanted students to discover how intertwined those 4 characteristics are: to provide productive practice, you need it to be goal-directed, and so on.

I needed a way to slice and re-mix the scenarios so the students discussed them by scenario (“bad” in sport/hobby,…,”good” in teaching and learning) rather than by characteristic (practice is goal-directed,…, feedback is at the appropriate level). So that’s exactly what I did: I sliced. Well, they sliced.

If you look at the picture of the worksheets above, you’ll notice some dashed lines. At the end of Step 1, I instructed the students to tear their colored worksheets into quarters along the dashed lines. (Notice, also, each quarter has a I, II, III, IV label.) Then I invited them to re-organize themselves into groups so that each group had a representative of each color. That was easy for them to do because they could easily see what colors were already at each table. Since there were equal numbers of each color (because the worksheets were interlaced in the stack at the classroom door) there was a place for everyone and everyone had a place.
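The colored-paper choreography is really just a round-robin deal followed by a re-deal, and you can sanity-check it before class. Here’s a minimal sketch in Python (the function names and the 20-student class size are my own illustrative assumptions, not part of the original activity):

```python
from itertools import cycle

COLORS = ["green", "blue", "purple", "orange"]  # one color per characteristic

def interlace_stack(colors, n_students):
    """Build the interlaced worksheet stack: green, blue, purple, orange, green, ..."""
    deck = cycle(colors)
    return [next(deck) for _ in range(n_students)]

def step1_groups(stack):
    """Step 1: each student takes the top sheet and sits with everyone holding that color."""
    groups = {}
    for student, color in enumerate(stack):
        groups.setdefault(color, []).append(student)
    return groups

def step2_groups(stack):
    """Step 2: re-deal the experts so every new group has one student of each color."""
    experts = step1_groups(stack)
    n_groups = min(len(members) for members in experts.values())
    return [[experts[color][g] for color in COLORS] for g in range(n_groups)]
```

With 20 students, `interlace_stack(COLORS, 20)` deals 4 one-color groups of 5 for Step 1, and `step2_groups` re-mixes them into 5 groups of 4, each holding all four colors: exactly the “there was a place for everyone” property the interlaced stack guarantees.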

Students sliced their worksheets into quarters so they could share by scenario (I, II, III, IV) rather than by characteristics of assessment. This emphasized how good formative assessment combines all the characteristics. Note: I scribbled over the students’ names on their name badges. (Photo by Peter Newbury CC-BY)

Settled into every-color groups, they worked their way through the 4 scenarios I, II, III, IV of practice and assessment that supports learning. I could easily see which scenario they were discussing and could nudge them towards the important scenario IV discussion if they were lagging behind.

Darn, I forgot to keep track of the time while I ran this jigsaw but I seem to remember it taking about 20 minutes for Step 1 and Step 2, and then another 10 minutes or so for the class-wide discussion about the characteristics of formative assessment that support learning (scenario IV).

The classroom was loud with expert-like discussions about teaching and learning. Twenty brains were engaged. Twenty students left knowing a lot about practice and assessment that supports learning. And knowing that their own experiences and knowledge played a critical role in the learning of their classmates. They can ask themselves, “Did I contribute to class today? Was the class better because I was there?” Yes and yes.

Big question: why bother?

If it took me this long to write down these steps, you know it took even longer to design (and re-design) the materials, plan and rehearse the choreography, prepare the materials, re-arrange the classroom furniture, and more. It would have been a helluvalot easier for me to present 4 slides, one on each of the characteristics of formative assessment (or easier still, one slide with 4 bullet points).

But that’s not what we do.

Of course there are practical considerations, but how easy it is for ME is not what drives how I design my lessons. Rather, I challenge myself to create opportunities for EVERY student to practice thinking about and discussing the issues and concepts. One thing I love about these jigsaw activities is that every student has a well-defined job (share their expertise in Step 2) that gives them the opportunity to make critical contributions to the discussion. The steps of the jigsaw and all the colored-paper-driven activities prepare them for that discussion.

I’m happy to share the resources shown here, talk through any points that are unclear, chat about how to adapt it to your learning outcomes – leave a comment, email me at peternewbury42 at gmail dot com, or hit me on Twitter @polarisdotca.

Target your feedback

The other day, I was talking about assessment that supports learning in my teaching and learning class. Like I often do, I started the class with a “What do you notice? What do you wonder?” picture:

What do you notice? What do you wonder?

The more I heard what my students noticed and wondered, and the more I thought about it, the more I liked the analogy between learning archery and learning calculus or history or engineering or any other class at university.

Let’s Learn Archery!

(I’ve never shot an arrow, other than the usual bendy sticks and string thing that kids do during their summer holidays, so I could be totally *ahem* off-target here. If you know about archery, please, leave me a comment!)

Let’s suppose you want to learn archery. At first, the archery instructor will give you some direct instruction to get you to a level where you’re able to safely shoot an arrow in the general direction of the target. Now it’s your turn to practice and build your skills.

But imagine this: Imagine that the archery target is just the bull’s-eye. A little red circle, what, a couple of inches across, at the other end of the archery range. What kind of assessment and feedback would you get when you practice? You’d know when you did things 100% correct and hit the bull’s-eye. Otherwise, nothing. That would be frustrating and I suspect you’d give up. (Did you try Flappy Bird? And get angry and delete it? Yeah, like that.)

What’s so cool about a real archery target, then, is the instantaneous and formative feedback it gives you. When your arrow hits the target, you know immediately how you’re performing (how close to the bull’s-eye are you?) and, more importantly for learning, what you need to do to improve your aim. Hit up and to the left? Next time, aim more down and to the right.

You know what else is cool? It’s obvious and “well, d’uh, what else could it be?” that it’s you shooting the arrows, not the instructor. Sure, it would be extremely valuable to watch an expert, especially as you learn what to look for, but in the end, you have to do it yourself.

Let’s Learn Calculus!

After all that fun at the archery range, it’s time to head home. That calculus homework’s not going to do itself, you know. Imagine the instructor gives you a list of questions to do each week (“all the even-numbered questions at the end of Chapter 7”). You work through them and hand them in.


The teaching assistants don’t have time to mark your homework thoroughly. The most they can do is look at your answers to Questions 4, 6 and 12 and give you a check mark or an X. What kind of assessment and feedback would you get from this? That you’re 100% correct on some questions, wrong on a few others, and nothing at all on the rest.

I’m not trying to pick on math. I’ve heard students say the only feedback they get on an essay is a letter grade on the front page.

How is anyone supposed to learn from that?

How feedback helps students learn calculus and history and whatever else they’re studying depends critically on the discipline. Each field, each course, has its own set of skills and/or attitudes. The instructor’s job is to help the students become more expert-like. There are some underlying patterns to the practice and formative assessment that support learning, though. These are drawn from Chapter 5 of a great book, How Learning Works, by Susan Ambrose et al. (2010):

  • practice needs to be goal-directed: everything the instructor asks students to do should support one or more of the course’s learning outcomes. If the assignment doesn’t, why are the students wasting their time on it?
  • practice needs to be productive: the students need to get something out of everything they do. Do they really need to answer twenty questions at the back of Chapter 7? What about 5 representative questions from Chapter 7, plus 4 questions from Chapter 6 and 3 questions from Chapter 5 so they also get some practice at retrieving previous concepts (like they’ll have to do on, say, the final exam!)
  • feedback needs to be timely: when do I need feedback on the aim of my arrow? Right now, before I shoot another one. Not in 2 weeks when the TAs have finally been able to finish marking all the papers and entered the grades.
  • feedback needs to be at an appropriate level: a checkmark, a letter grade, or only circling the spelling mistakes is not sufficient. Neither is referring the student to the proof of Fermat’s Last Theorem. A good rubric, for example, lets each student know what they’re achieving and also what success looks like at this level.

Frequent productive, goal-directed practice with timely, formative feedback at an appropriate level. That’s what an archery target gives you. We need to find the target in each course we teach.

What does the target look like in your course?

Learning Outcomes, Instruction, Assessment: Check, check, check

I’ve spent time in that circle of Hell called “marking” (or “grading” as they call it here in the U.S.). My past is filled with stacks of math exams full of multi-step problems and astronomy exams with essays about the nature of science. The only respite from the drudgery of marking is the answers so absurdly incorrect they make you laugh or the answers that are exactly what you’re looking for – check, check, check-check, check, check, perfect! 10/10.

Happily, I don’t have to mark exams anymore, but I still have a chance to get the tiny squirt of adrenaline that comes from assessing those “exactly what you’re looking for” answers.

I teach a teaching and learning course in the Center for Teaching Development at UCSD called The College Classroom to graduate students and postdocs. Their last assignment is a “microteaching experience.” Traditionally, this involves developing a lesson for a class they might teach someday, delivering that lesson to their fellow students, and then getting feedback from their peers and instructors. That’s a good way to assess the ability to lecture, maybe even the ability to orchestrate some active learning into the lecture, but that’s still only one part of “teaching.” What about all the things that happen before class and after class?

Instead, we ask the students to create a lesson plan for a 50- or 80-minute class. It should contain

  • learning outcomes
  • pre-class tasks like readings, watching videos, exploring websites with clear guidance about what to focus on
  • pre-reading quiz to assess the pre-class tasks
  • a skeleton of the lesson, including 3-5 peer instruction (“clicker”) questions but excluding the PowerPoint slides with all content – I don’t want them wasting their time making pretty slides they may never use
  • several assessment questions that could appear in homework or on the exam

For their presentation, we meet in small groups — me, the TA, and 3 of them — and I ask them to pretend they’re sitting in the coffee room with a few of their colleagues, describing this awesome lesson they’ve planned. They’ve got less than 10 minutes and they should assume everyone present knows the content and can concentrate on the pedagogy. (“In other words, don’t teach us the chemistry. Assume we know it. Tell us how you’ll teach it and why that’s a good approach.”)

When I assign the microteaching task a few weeks before the end of the course, I give them a lesson plan rubric (PDF). It’s using this rubric to assess their presentations that I get those “check, check, check-check, check” moments of satisfaction.

As an alumnus of the Carl Wieman Science Education Initiative at the University of British Columbia, I adhere to Carl’s 3-pillared model of course design:

The CWSEI’s 3-pillared approach to course design. (Image adapted from CWSEI by Peter Newbury CC-BY-NC)

Step 1. Set the learning outcomes. What should students learn? What should they be able to do to demonstrate their understanding and mastery of the concepts and skills? These outcomes are statements that complete the sentence, “By the end of this lesson/unit, you’ll be able to…” and start with a nice, juicy verb selected from Bloom’s taxonomy of the cognitive domain.

Step 2. Decide how you’re going to teach. What instructional approaches help students learn? What does the literature tell you about how people learn those skills and concepts? I’m a strong supporter of lecture…in 10-15 minute snippets, when the students are prepared to learn because you’ve primed them through student-centered activities like peer instruction, in-class worksheets and demonstrations, or pre-reading.

Step 3. Assess the learning. What are students learning? Create formative and summative assessments that evaluate students’ mastery of the learning outcomes.

Two important things to notice about this approach:

  1. When it works, it works great. Here’s what I wanted students to know, here’s how I taught it, here’s what they did on the exam. Check, check, check.
  2. When it doesn’t work, it still works great. If students don’t perform like you’d hoped, the pillars help you diagnose the “failure mode,” as my engineering friends would say. Maybe it was a bad exam question that didn’t assess what you wanted to teach. Maybe you didn’t teach it in a way that helped them learn. Maybe you set an unrealistic learning outcome. In other words, you can re-trace through the course design cycle to find out what went wrong.

The College Classroom Microteaching Presentations

When the participants in The College Classroom present their lesson plans, it’s great when I can identify this nice, tight package of learning outcomes, instruction, and assessment – check, check, check! I make sure I tell them, hoping that positive feedback will motivate them to do it again. As with marking exams, it’s the incomplete lessons that are difficult to assess. Fortunately, the 3-pillared approach together with the rubric makes it easier for me to give targeted, goal-directed formative feedback.