
Monday, April 17, 2017

Student Evaluations Ratings of Teaching: What Every Instructor Should Know | AMS Blog - On Teaching and Learning Mathematics

Photo: Jacqueline Dewar
Jacqueline Dewar, Professor Emerita of Mathematics, asks: "What happens to the data from your teaching evaluations? Who sees the data? Are your numbers compared with other data? What interpretations or conclusions result? How well informed is everyone, including you, about the limitations of this data and the conditions that should be satisfied before it is used in evaluating teaching?

Despite the many shortcomings of student ratings of teaching (SRT), some of which I mention below, their use is likely to continue indefinitely because the data are easy to collect, and gathering them requires little time on the part of students or faculty. I refer to them as student ratings, not evaluations, because "evaluation" indicates that a judgment of value or worth has been made (by the students), while "ratings" denote data that need interpretation (by the faculty member, colleagues, or administrators) (Benton & Cashin, 2011).

Readers may be asked to interpret the data from their SRT in their annual reviews or in their applications for tenure or promotion. They may even find themselves on committees charged with reviewing the overall teaching evaluation process, or the particular form students use at their institutions, as I did. For these reasons, I thought it might be helpful to discuss some general issues concerning SRT and then present a few practical guidelines for using and interpreting SRT data.

My career as a mathematics professor spanned four decades (1973–2013) at Loyola Marymount University, a comprehensive private institution in Los Angeles. During that time, my teaching was assessed each semester by student "evaluations." For nearly all of those 40 years this was the only method used on a regular basis. If there were student complaints, a classroom observation by a senior faculty member might take place, which happened to me once as an untenured faculty member. Later on, as a senior faculty member, I myself was called upon to perform a few classroom observations.

During 2006–2011, I also directed a number of faculty development programs on campus, including the Center for Teaching Excellence. In that role, I served as a resource person to a Faculty Senate committee appointed in 2010 to develop a comprehensive system for evaluating teaching. Prior to that, I had participated in a successful faculty-led effort to revise the form students used to rate our teaching, and I worked to develop and disseminate guidelines about how that data should be interpreted. During that two-year process (2007–2009), I discovered that my colleagues and I, and even faculty developers on other campuses, had a lot to learn about the limitations of this data (Dewar, 2011).

Because teaching is such a complex and multi-faceted task, its evaluation requires the use of multiple measures. Classroom observations, peer review of teaching materials (syllabus, exams, assignments, etc.), course portfolios, student interviews (group or individual), and alumni surveys are other measures that could be employed (Arreola, 2007; Chism, 2007; Seldin, 2004). In practice, SRT are the most commonly used measure (Seldin, 1999) and, frequently, the primary measure (Ellis, Deshler, & Speer, 2016; Loeher, 2006). Even worse, "many institutions reduce their assessment of the complex task of teaching to data from one or two questions" (Fink, 2008, p. 4)."

Source: AMS Blog - On Teaching and Learning Mathematics