Thursday, May 11, 2006

Ratings Drop as Consequence of Shift in Course Design Toward Active Learning? - Steve Ehrmann 4-28-2006

"... I've heard this story many times before: instructor shifts to a design that involves more active learning, collaboration, and student responsibility. Instructor ratings drop, sometimes to a career-threatening degree."
From: Steve Ehrmann

Date: Apr 28 2006 - 9:27am


I was the person (or at least one of them) who asked the question Steve cites at the beginning of his email. Here are some additional facts:
* The two physics courses have very different designs. The earlier, traditional design features a single lecturer and about 800 students (two lecture sections; many discussion sections taught by others). Using only one lecturer (in a department of 80+ faculty) makes it more feasible for the department to select superior lecturers, and student ratings are often high.
* One faculty member (who had taught the course some years ago and had gotten high ratings) didn't like the learning outcomes and redesigned the course. In the new design, students are taught in groups of 100 (each with a different instructor, plus TAs), using discussion, experiments, clickers, etc. I THINK his ratings went down. I'm sure that the ratings of the average instructor (8 of them) were lower than the ratings of the single lecturer in the old design.
* I know the learning outcomes (conceptual understanding) improved dramatically, relative to the old design.
* A highly rated instructor in the same department claims that (a) the department doesn't have enough good teachers to staff so many sections of the redesigned course, so the average rating of the eight course leaders will be significantly lower than the rating of the single lecturer in the old design, and (b) lower instructor ratings mean that students in this introductory course are less likely to want to learn physics in the future. Better, he said, to foster affection for physics than better learning outcomes. Whether you agree or not, this is consistent with what Mike said in the workshop: whatever SCE [Student Course Evaluation] measures, it's only a partial measure of how good the course was.
* My guess is that many faculty in this research-oriented department of 80 faculty would agree that most of them could not do either model well (lecture freshmen; guide on the side for freshmen), that some could teach comfortably in both ways, and that some could do only one well.

I've heard this story many times before: instructor shifts to a design that involves more active learning, collaboration, and student responsibility. Instructor ratings drop, sometimes to a career-threatening degree. I don't know how often this drop in SCE scores is a measure of decreased satisfaction (the instructor isn't working as hard for me? I'm not comfortable learning this way?) and to what extent it is an artifact -- the change in pedagogy created a mismatch with the questions on the form (which perhaps were biased toward questions about lecturing: the instructor is lecturing less, so scores go down, even for students who are satisfied with the course and learning well).

Steve

**********
Steve Ehrmann (ehrmann@tltgroup.org)
The TLT Group
301-270-8311
Blog: http://jade.mcli.dist.maricopa.edu/2steves/

56 comments:

Anonymous said...

In a post of 2 May 2004 titled "Re: Is success dependent on technique - Hawthorne Effect" [Hake (2004)], I wrote [bracketed by lines "HHHHH. . ."; see that post for references other than Hake (1998a,b)]:

HHHHHHHHHHHHHHHHHHHHHHHHHH
When I first started teaching "P201," the large-enrollment non-calculus-based introductory course for science (but not physics) majors at Indiana University, I was just in off the industrial-research-lab street. Following the example of the several faculty in our department who had won university teaching awards, I taught P201 in a fairly traditional manner - passive student lectures (but with lots of exciting demos), algorithmic problem exams, recipe labs, and a relatively easy final exam - a 77% average in my first P201 course.

In that course I was gratified to receive a Student Evaluation of Teaching (SET) evaluation point average EPA = 3.38 [on a scale of 1 - 4 (B plus)] for "overall evaluation of professor," with many glowing student comments about my WONDERFUL teaching. I have little doubt that had I continued using traditional methods and giving easy exams I would have risen to become the U.S. Secretary of Education, or at least president of Indiana University.

Unfortunately for my academic career, I gradually caught on to the fact that students' conceptual understanding of physics was not substantively increased by traditional pedagogy. As described in Hake (1987, 1991, 1992, 2002c) and Tobias & Hake (1988), I converted to the "Arons Advocated Method" [Hake (2004a)] of "interactive engagement." This resulted in average normalized gains g on the "Mechanics Diagnostic" test or "Force Concept Inventory" that ranged from 0.54 to 0.65 [Hake (1998b), Table 1c] as compared to the g of about 0.2 typically obtained in traditional introductory mechanics courses [Hake (1998a)].

But my EPA's for "overall evaluation of professor," sometimes dipped to as low as 1.67 (C-), and never returned to the 3.38 high that I had garnered by using traditional ineffective methods of introductory physics instruction. My department chair and his executive committee, convinced by the likes of Peter Cohen (1981, 1990) that SET's are valid measures of the cognitive impact of introductory courses, took a very dim view of both my teaching and my educational activities.
HHHHHHHHHHHHHHHHHHHHHHHHHH
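For readers unfamiliar with the average normalized gain g mentioned above, Hake (1998a) defines it as the actual class-average gain divided by the maximum possible gain, with pre- and post-test scores expressed as percentages of the test maximum. A minimal sketch in Python (the function name is illustrative, not from Hake's papers):

```python
def normalized_gain(pre_pct, post_pct):
    """Hake's average normalized gain <g>: the actual gain
    (post - pre) divided by the maximum possible gain
    (100 - pre), with scores as percentages of the test maximum."""
    if pre_pct >= 100:
        raise ValueError("pre-test score already at maximum")
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# A class averaging 30% on the pre-test and 65% on the post-test:
g = normalized_gain(30.0, 65.0)  # (65 - 30) / (100 - 30) = 0.5
```

For example, a class moving from 30% to 45% would score g of roughly 0.21, in the neighborhood of the ~0.2 Hake (1998a) reports as typical for traditional introductory mechanics courses.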

I realize that Mike Theall will regard the above anecdote as just one more mole-hill example of a SET miscarriage, completely insignificant in comparison to the MOUNTAIN of evidence that SET's ARE valid measures of the cognitive impact of introductory courses.

But regarding Peter Cohen's oft-quoted mountain of evidence, in Hake (2005) I wrote [see that article for the references other than Hake (2002)]:

HHHHHHHHHHHHHHHHHHHHHHHHHH
Investigation of the extent to which a paradigm shift from teaching to learning . . . [Barr & Tagg (1995)]. . . is taking place requires measurement of students' learning in college classrooms. But Wilbert McKeachie (1987) has pointed out that the time-honored gauge of student learning - COURSE EXAMS AND FINAL GRADES - TYPICALLY MEASURES LOWER-LEVEL EDUCATIONAL OBJECTIVES such as memory of facts and definitions rather than higher-level outcomes such as critical thinking and problem solving. The same criticism (Hake 2002) as to assessing only lower-level learning applies to Student Evaluations of Teaching (SET's), since their primary justification as measures of student learning appears to lie in the modest correlations of overall course (+0.47) and instructor (+0.43) ratings with "achievement" AS MEASURED BY COURSE EXAMS OR FINAL GRADES (Cohen 1981).
HHHHHHHHHHHHHHHHHHHHHHHHHH

But the above argument that:

SET'S *DO NOT* AFFORD VALID EVIDENCE
THAT TEACHING LEADS TO STUDENT HIGHER-LEVEL LEARNING .......... (CLAIM #1)

has been generally ignored by the psychology/education/psychometric community, many of whom, afflicted by irrational pre/post paranoia, dismiss the claim that

PRE/POST TESTING *DOES*
AFFORD VALID EVIDENCE ON
THE EXTENT OF STUDENT
HIGHER-LEVEL LEARNING ............(CLAIM #2)

despite the solid evidence in its favor [see e.g., Hake (1998a,b; 2005; 2006a,b)].

In his contribution to the present thread, Tom Angelo wrote on 28 April 2006:

"Despite the assessment ‘movement’ and the ‘learning paradigm shift’, student evaluations alone are all that most colleges and universities use (in any real way) and all, I have come to believe, they ever will use in the foreseeable future to evaluate the quality and effectiveness of teaching and learning. My experience is that, despite all the good research, writing and consulting of Mike [Theall] and his colleagues, there is as much mis-, mal- and non-feasance around evaluation of teaching and courses as ever -- and that THE SYSTEMS IN MANY INSTITUTIONS PROBABLY INHIBIT CHANGE AND IMPROVEMENT MORE THAN THEY SUPPORT THEM. (my CAPS). . . . I say we forget the flat Earthers and move on. They have never been convinced by research or reasoned argument. More of the same will be a similar waste of time."

I agree. For a recent example of a waste of time in futile tilting against the misuse of SET’s, see my response to Mike Theall in "SET's Are Not Valid Gauges of Teaching Performance #4" [Hake (2006c)].

Richard Hake, Emeritus Professor of Physics, Indiana University
24245 Hatteras Street, Woodland Hills, CA 91367
rrhake@earthlink.net
http://www.physics.indiana.edu/~hake
http://www.physics.indiana.edu/~sdi

REFERENCES
Hake, R.R. 1998a. "Interactive-engagement vs traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses," Am. J. Phys. 66: 64-74; online at
http://www.physics.indiana.edu/~sdi/ajpv3i.pdf (84 kB).

Hake, R.R. 1998b. "Interactive-engagement methods in introductory mechanics courses," online at http://www.physics.indiana.edu/~sdi/IEM-2b.pdf (108 kB) - a crucial companion paper to Hake (1998a).

Hake, R.R. 2002. "Re: Problems with Student Evaluations: Is Assessment the Remedy?" online at
http://www.physics.indiana.edu/~hake/AssessTheRem1.pdf (72 kB).

Hake, R.R. 2004. "Re: Is success dependent on technique - Hawthorne Effect," online at http://listserv.nd.edu/cgi-bin/wa?A2=ind0405&L=pod&P=R442&I=-3. Post of 2 May 2004 22:08:12-0700 to AERA-J, ASSESS, Biopi-L, Chemed-L, EvalTalk, Physhare, Phys-L, POD, & STLHE-L.

Hake, R.R. 2005. "The Physics Education Reform Effort: A Possible Model for Higher Education," online at
http://www.physics.indiana.edu/~hake/NTLF42.pdf (100 kB). This is a slightly edited version of an article that was (a) published in the National Teaching and Learning Forum 15(1), December 2005, online to subscribers at
http://www.ntlf.com/FTPSite/issues/v15n1/physics.htm, and (b) disseminated by the Tomorrow's Professor list
http://ctl.stanford.edu/Tomprof/postings.html as Msg. 698 on 14 Feb 2006.

Hake, R.R. 2006a. "Should We Measure Change? YES!" online at
http://listserv.nd.edu/cgi-bin/wa?A2=ind0603&L=pod&P=R17226&I=-3 . Post of 24 Mar 2006 10:49:00-0800 to AERA-C, AERA-D, AERA-J, AERA-L, ASSESS, ARN-L, EDDRA, EvalTalk, EdStat, MULTILEVEL, PsychTeacher (rejected), PhysLrnR, POD, SEMNET, STLHE-L, TeachingEdPsych, & TIPS.

Hake, R.R. 2006b. "Possible Palliatives for Paralyzing Pre/Post Paranoia," online at http://listserv.nd.edu/cgi-bin/wa?A2=ind0606&L=pod&F=&S=&P=3851, Post of 6 Jun 2006 to AERA-D, ASSESS, EvalTalk, PhysLrnR, and POD. Abstract only sent to AERA-A, AERA-B, AERA-C, AERA-J, AERA-L, Biolab, Biopi-L, Chemed-L, EdStat, IFETS, ITFORUM, RUME, Phys-L, Physhare, PsychTeacher (rejected), TeachingEdPsych, & TIPS.

Hake, R.R. 2006c. "SET's Are Not Valid Gauges of Teaching Performance #4," online at http://listserv.nd.edu/cgi-bin/wa?A2=ind0606&L=pod&O=D&P=17968. Post of 27 Jun 2006 20:58:34 -0700 to AERA-D, AERA-L, ASSESS, EdStat, EvalTalk, PhysLrnR, and POD. ABSTRACT: I respond in order to 6 points made by Michael Theall in his response to my posts "SET's Are Not Valid Gauges of Teaching Performance."
