From: Mike Theall
Date: May 1 2006 - 9:08am
Tom said, “…there is as much mis-, mal- and non-feasance around evaluation of teaching and courses as ever…. I say we forget the flat Earthers and move on. They have never been convinced by research or reasoned argument. More of the same will be a similar waste of time.” I agree.
From the start, Jen Franklin & I stressed improving the practice of evaluation. In fact, our first major paper was about the knowledge and attitudes of ratings users. We did a number of more typical papers on ratings validity & related issues, but those were primarily a function of working on and needing to validate the eval & dev system we developed in our FIPSE grant in the late 80s.
In the more recent past, especially since about 2000-2001 when I started collaborating with Raoul Arreola, I have thought even more about the state of practice; about the closed-minded attitudes of so many; and about the effectiveness of the established research in terms of affecting practice. Not to say the research is weak … just the opposite … but the reality is that the researchers, as good as they were, most often spoke to each other and did not reach the wide audience of users. That’s what led to our development of the “meta-profession” model as a tool to help institutions, faculty, and administrators deal with the issues “on the ground”. Campus discussions and attempts to reach consensus about faculty work and performance expectations have to be the basis for evaluation and development policy & practice.
That’s why I put the emphasis on evaluation being “local”, on coupling eval & dev, and on examination of faculty work. Raoul, Jen, & I all come from “systems” backgrounds and view evaluation from a macro perspective as well as a micro one. The larger view is critical because it demonstrates the need for well-articulated evaluation & development systems rather than haphazard processes and ad hoc questionnaires. Our primary target audience is institutional administrators because they have the ability to put effective systems in place. As an example, think of the differences between AERA and AAHE. At AERA, researchers talk to each other. AAHE succeeded because it talked to top-level administrators and got “buy-in” on its initiatives and support for campus activities.
So, would a “7 Principles” approach make a difference? I can’t predict that it would, but it couldn’t hurt. There have been several attempts to disseminate guidelines before. Ken Doyle implied as much in his books in ’76 & ’83, as did John Centra in 1979. Braskamp, Brandenburg & Ory list 12 “considerations” for evaluation and 5 for development in their 1984 book. The second books from both Centra (’93) and Braskamp & Ory (’94) reinforced their broader views of good practice. McKnight had a 14-point list back in 1984. Dick Miller’s second book had a 10-point list in 1987. Pete Seldin has a chapter on building a system in his 1999 book, and it has guidelines as well. The most specific applied process is Arreola’s “8-Step” approach (in his 2 editions of ‘Developing a comprehensive eval system’, ’95 & ’00) that results in a “source-impact matrix” that specifies and prioritizes what will be evaluated, by whom, using what methods. That will be reinforced in the third edition, coming out this summer, and that book will have an extended description of the ‘meta-profession’ and its application to evaluation & development.
The guidelines I went over in the first webcast incorporate this work and add my own twists. A shorter list could be taken pretty much directly out of those guidelines. So it’s not that there haven’t been attempts to disseminate something like a ‘7 Principles’. The differences have been in the marketing and politics of these ideas. Three factors seem important: 1) there has been no major organizational backing for published evaluation principles; 2) there has not been coverage of evaluation issues equivalent to the press surrounding Chickering & Gamson’s publication of their principles; and 3) principles for good undergraduate education do not threaten anyone, whereas principles for good evaluation will always be threatening.
Witness what we see most in the press (e.g., the Chronicle as prime offender): ill-informed stories about ratings controversies that paint the occasional negative study (e.g., Williams & Ceci) as equal in weight to well-established research. There is no equivalent history of casting doubt on the research that underlies the 7 Principles. The reality is that few will take issue with mom-and-apple-pie statements about effective education. Not to dismiss the Principles, but check out any institutional mission statement to find obeisances in all the appropriate directions. It’s easy and non-threatening to pay lip service to these statements, but to accept evaluation (especially by those who haven’t reached our pinnacle of intellectual authority and disciplinary expertise & stature) is another story. There are valid & reliable ways to do evaluation well. Agreeing to the principles might mean actually having to do something, and it seems to me that a lot of the objection to ratings comes from objection to the very notion of being evaluated. I suspect such critics would take issue with almost any form of evaluation of teaching.
Well, enough of my cynicism. The bottom line is that it would be a matter of minutes to develop a set of principles. The wide publication of these principles, particularly if offered collaboratively by one or more professional organizations, could have an impact. I could get POD and the AERA SIG on Faculty Teaching, Evaluation & Development to sign on, and we have contacts in the other organizations that would be relevant. Any attempt to improve practice is a step in the right direction. We should probably do something like this, & I am very willing to take part.