It’s student evaluation time again—and I should be the last professor in the world to complain. With slight exceptions for “caring too much” and courses that meet “too early” (9:10 a.m.), my evaluations are quite good. And yet the student evaluations of teaching (SETs) I’ve received during my decade-long teaching career have meant absolutely nothing. This is because student evaluations are useless.
Ostensibly, SETs give us valuable feedback on our teaching effectiveness, factor importantly into our career trajectories, and provide accountability to the institution that employs us. None of this, however, is true.
First, evaluations promote sucking up to customers—I’m sorry, students—often at the expense of teaching effectiveness. A recent comprehensive study, for example, showed that professors get good evaluations by teaching to the test and being entertaining. Student learning hardly factors in, because (surprise) students are often poor judges of what will help them learn. (They are, instead, excellent judges of how to get an easy A.)
Asking students to evaluate their professors anonymously is like Trader Joe’s soliciting Yelp reviews from a shoplifter.
Indeed, some of the worst evaluations I ever got were for hands-down the best teaching I’ve ever done—which I measured by the revolutionary metric of “the students were way better at German walking out than they were walking in.” Alas, this took work, and some of the Kinder attempted to stage a mutiny on evaluation day. Little did they know that a “too much work” dig is the #humblebrag of the academy—and, indeed, anything less on evals is seen as pandering at best, and out-and-out grade-bribery at worst.
Speaking of grade bribery: Evaluations impact career trajectories, all right, but only of the most vulnerable faculty in the university—yes, adjuncts, whose semester-long contracts are often renewed (or not) on the basis of student feedback alone. Meanwhile, only in the rarest and most politicized cases do even scathing evaluations harm tenured big shots—who, unsurprisingly, often care about undergraduate teaching the least. In short, asking students to evaluate their professors anonymously is basically like Trader Joe’s soliciting Yelp reviews from a shoplifter.
I’m sorry—a bigoted shoplifter. Because student evaluations aren’t just useless: They’re biased. The other day I put a call out for notable evaluation stories, and the responses were both overwhelmingly depressing and depressingly unsurprising.
Indeed, many evaluations, no matter who the professors were, focused on hair (and beards!), clothes, general disdain for the subject matter (“Philosophy sucks!”)—anything but constructive assessment of teaching. Seriously, though, anecdotal data notwithstanding: Bias in evaluations is widely accepted, so much so that some who use evals as assessment tools already control for it.
Just what I wanted: to be judged “pretty smart, for a girl.”
Because of all this—off-topic vitriol, irrelevance, bias—most tenure-track professors I know (who aren’t hanging onto their evals for dear life) don’t read their evaluations at all.
This isn’t to say that professors should be left solely to their own devices, a million poorly-dressed sovereign nations, left to declare “constructive” naptime during a Freud seminar, or emulate Wittgenstein’s turn as a schoolteacher and employ corporal punishment. Egad. Assessment of teaching is vitally important—but how can we actually do it so that it works?
Peer evaluations are a common suggestion (and, indeed, often common practice). But those only work if your peer actually cares about teaching in the first place—or doesn’t want to sabotage you. Outside reviewers (from other departments) could solve for this, but only if you underestimate the academic’s propensity toward petty vindictiveness: One bad review from English of a history professor, and we’ve got a permanent schism between two departments that should be clinging onto each other for survival.
All right, so what about “effectiveness measures” from the administration? Yes, let’s create even more administrators—what today’s universities need are more people who’ve never taught a day, highly invested in running departments on the cheap. All right, fine, how about we just test the students, and base professor effectiveness on the results? Sure, because that’s worked out so fantastically for K–12.
Or, OK, we could measure performance in subsequent classes—but many of us teach general ed, and our departments will never see those kids again. Measuring “good teaching” is a touchy, complicated subject, and all solutions involve both massive compromises in pedagogical autonomy and substantial amounts of “service work”—two of professors’ very favorite things.
I see two actual resolutions to the evaluation calamity. One of them is massively important and will never happen; the other is fairly trivial and could happen tomorrow.
The first: A complete cultural shift at doctoral-granting institutions about the importance and value of teaching. Damn near everyone with a doctorate learned to teach (or didn’t!) in an environment where undergraduate teaching is, to paraphrase Nietzsche, an affair of the rabble: Graduate TAs and adjuncts.
In grad school, I was actively told “not to care too much” about teaching—advice that remains standard practice.
So again, the first, best and most important way to measure teaching effectiveness would be to create a culture at elite research institutions where the instruction of undergraduates actually matters. Fat chance.
So here’s another solution, almost breathtaking in its simplicity. Combine peer evaluative measures (of lesson plans and assignments, not just classroom charisma or test scores) with student evaluations—but make the students leave their names on the evals.
The day the first yahoo on Yahoo wrote a comment was the day we should have stopped anonymous student evaluations dead. The “online disinhibition effect” both enables and encourages unethical, rash behavior, and today’s digital-native students see no difference between evaluations and the abusive nonsense they read (and perhaps create) every day.
Actual constructive criticism can be delivered as it ought to be: to our faces. Any legitimate, substantive complaints can go to the chair or dean. There is no reason for anonymity—after all, we have no way to retaliate against a student for a nasty evaluation, because we can’t even see our evals until students’ grades have been handed in to the registrar (and if you hated us that much, you won’t take our class again). And besides, I hate to tell you this, but professors know handwriting; we recognize patterns of speech; we can glean the sources of grudges. We know who it was anyway.
Sure, this won’t change the culture of academia, where getting a position at a so-called “teaching college”—and thus spending all of your time with undergrads, as I now do—is considered abject failure. But it will certainly de-Yelp-ify the evaluation process, cut down on some of the bigotry, and it might even (gasp) offer us some constructive feedback. That’s a solution I evaluate at 4 out of 5 (“Agree”!)—which isn’t bad at all, for a girl.
Student Course Evaluations
This article was originally published in the Fall 2003 issue of the CFT’s newsletter, Teaching Forum.
by Anupama Balasubramanian
This column highlights concrete innovations and insights in teaching and learning across the Vanderbilt campus. In this issue, a Vanderbilt faculty member and teaching assistant discuss their perceptions of student course evaluations, and their strategies for reflecting on them and using them to improve their courses.
Kathleen Hoover-Dempsey is Chair and Associate Professor in the Department of Psychology and Human Development, as well as recipient of the university’s highest teaching honor, a Chair of Teaching Excellence. She is one of the pioneers of the Family-School Partnership Lab at Vanderbilt University, which is dedicated to the scientific investigation of the reciprocal relationships among families, schools and children. She teaches undergraduate courses in the child development major and is currently teaching a graduate level course in Educational Psychology.
How do you respond to your end-semester student evaluations?
With care and caring. I wait until the semester is well over, until I can sit down by myself and digest the information, particularly the student comments on the back of each form. I often move from that into trying to identify themes in the comments. I look particularly for themes that I really need to do something about, especially things that might not have gone as well as anticipated in the eyes of the students. I also look for more generic advice that might help me, perhaps to do a better job balancing certain topics, or required projects, or midterms more effectively. After many years of teaching, I have a pretty good handle on the “rhythms” of courses, so I tend not to get much feedback on those issues. But I’m always looking for what students identify as the strengths and weaknesses of the course. I want to take steps to do something about the weaknesses, and work hard to figure out how I’m going to take them into account and address them the next time I teach that course.
So would you say you give more credence to the comments than to the numerical ranking?
I actually find the numerical ratings on the form very helpful, too, and I map them out across semesters. In my role as chair I do that for all my faculty, as well. That information gives me a good sense of trends and progress across semesters; I look especially for upward trends or “stalled” areas. Overall, I think the numerical ratings are really important, but you often need to analyze students’ comments in order to remedy some of the concerns that may underlie lower ratings. That is why I also look hard at the comments.
Have you changed or improved your teaching based on the feedback you have received, and if so, what are some concrete examples?
I definitely think I have improved my courses and my teaching over the years based on student comments and feedback. I have done this for so long (i.e. experimented with changes based on themes in comments), that it’s a little hard to identify particular examples at this point, but I’ll try.
I’ve certainly had comments about exams–for example, the balance of objective and subjective items on exams or preferred approaches to final examinations. Over the years, those comments have been so helpful that I don’t get many suggestions in those areas any more. (That’s one of the benefits of having taught for a very long time!) But certainly, there are often very useful comments about my approaches to a particular session or topic that I may well use in rethinking, for example, the balance of attention I’d give to theory, research and applications related to a particular topic. There are also comments that help me think well about effective ways of engaging students in active work with constructs during class sessions.
Because I sometimes teach large courses and have TAs, I also get very helpful feedback on more effective or efficient ways to engage my TAs in the course. When my TAs teach a session, we gather informal student evaluation comments, and my TAs find these very helpful. We also gather student evaluations for the TAs at the end of the course. These can be particularly helpful because they’re among the first formal comments that my graduate students have on their teaching and direct work with undergraduate students.
In addition, in the middle of the course (generally a week after I return the first midterm), I often ask students to give me a midterm evaluation of the course. These always include useful ideas that I can address while the course is in progress.
Is this mid-semester evaluation form something you create on your own?
Yes. Sometimes I write up an evaluation form; at other times I use copies of the regular evaluation forms. Generally, I tell my students that I want their feedback now so that I can do something about the things that aren’t working as well as they might be. I always come back to the class to tell them what I’ve learned from their evaluations and what I’m going to do about specific suggestions. On occasion, I get very mixed comments, like “You’re going way too fast,” “I love the pace,” and “You could speed it up.” When I get mixed responses, I usually summarize their feedback and talk with them about the dilemma this presents for an instructor; I then talk about how I plan to address the concerns and why.
Do you think that your students take the end-semester course evaluation forms seriously?
Yes, I do think my students take them very seriously. I think they do in part because I tell them that I take them seriously. I schedule a time when we’ll do the evaluation form; I tell students in advance that I consider it to be very important, and tell them that I really want them all to be present to evaluate the course. I tell them that I read every comment and find the comments extremely useful in thinking about and improving my own teaching. When I give the evaluation forms out I repeat all of those things, and add, “You can never write too much; I value all of the feedback I get, I do read it, and it is very important to me.” And then I follow all of the university guidelines (like leaving the classroom quickly after identifying who’s going to collect the forms and return them to the department office).
So yes, I get very substantive feedback, which I really value. In many courses, perhaps especially large ones, there is likely to be at least one student who’s not particularly happy with the course. Their feedback can be very, very helpful in thinking about what I might do differently in the course. I think emphasizing that we take student comments very seriously, and find them very helpful, simply increases the likelihood of getting very useful feedback from all students.
Scott Hicks is a graduate teaching assistant in the English Department and served as a Teaching Affiliate at the Center for Teaching in Summer 2003. He currently teaches a 100-level English composition course, which he designed and teaches on his own, under the auspices of the College Writing Program.
How do you respond to your end-semester student evaluations?
I take them really seriously because I think they play a large role in how we are viewed as TAs in our program. TAs are in a sort of an apprenticeship and are still learning the field and their profession, so I think they really do matter. But I do find that they vary in degree of helpfulness. Sometimes the students that I thought did not like the class loved the class, or a few that I thought loved the class maybe did not like it. So I sometimes find out from students that I can’t read how they really felt about it. But usually, the evaluations confirm what I already knew about the class.
The main things I look for in my student evaluations are: (1) The extent to which they feel they have been challenged and (2) Whether or not they found me to be a helpful and communicative teacher. In other words, I want to make sure my classes are always challenging, while still trying to make sure I meet their needs, including what they think they need. And I want to make sure I’m communicating what I expect, because if I don’t do that, they don’t know what to do and I don’t get from them what I need to get from them. When I get the evaluations, I read them but I really don’t stress out about them one way or the other, because I really cannot fix anything for that class. Instead, I can use them as an insight to stop and think about future classes. Since I design and teach my own courses, all the evaluations I get speak to me of things that I can control.
Have you changed or improved your teaching based on the feedback you have received, and if so, what are some concrete examples?
I think the most concrete thing I have changed is modifying the novels that I use based on their response to specific books that they liked or did not like. One semester I remember teaching a memoir that they hated and another dense novel that they did not understand. So in the future semesters, I spent more time on the dense novel, had them do more small group work, discussed more of the plot and the characters to make sure they had a firm foundation, and then got into themes. So in that sense I think it helped because I was able to get rid of something that they were not getting anything out of, and then focus on something that they could get a lot out of if I gave them more guidance and worked on it more in class.
At a less concrete level, one thing I have tried to work on is responding to students on their writing in my comments on papers and in conferences. One thing I always can do better is to communicate to them what I want them to do on paper and what I see as areas they need to improve on. Responding to students more articulately can help them see what they need to do and help me in my teaching. When I grade papers, I give them back written comments and then if we have a conference on it, I try to make sure I am always communicating as fairly and concisely as possible because they have a lot going on. So the evaluations help point things out from both ends: when they evaluate me I want them to be as clear and concise as possible, and when I evaluate them they want me to be the same.
Have you made any drastic changes to your teaching like adopting a totally different method based on the feedback you received?
I have not drastically changed anything because I feel I cannot teach from outside my personality and my core approaches to teaching. If I do anything too different from that, I feel I am not going to do well. What I have found, though, is that a lot of times the most important thing that I can do is to explain why we are doing something. For example, in the first semester I taught, we did a lot of group work, but in the second semester I did a better job of explaining why: when working in groups, better ideas are generated, shared, discussed and critiqued. When one explains to students why some things are done, they are more likely to understand the need for doing it and will not complain because it has a valid, useful purpose.
Any other suggestions for improving the evaluations teachers receive from students?
I think the biggest thing one can do is to let students know that you care about their academic performances as well as their personal lives. This doesn’t mean you have to be a therapist or get involved in their personal lives. Rather, if they see you caring about them as students and as people, their evaluations are going to be filtered through the sense that you are interested in both the class and in them.
In addition, I gather feedback in the middle of the semester. I have used the Center for Teaching’s Small Group Analysis (SGA), which was very helpful. I also have used an evaluation form in the middle of the semester, as well as the end-semester forms.
Is the evaluation form that you give mid-semester something you design?
Sometimes I use one of the forms from the Center for Teaching website. It is a question-short answer kind of a form, because when I do a ranked form, I don’t get the feedback I need to make improvements for the rest of the semester. With a short answer form I get really tangible suggestions like “I hate this novel” or “I did not understand that story,” and I can go back and answer some questions, or change my style of teaching. It is really useful to get this feedback at a point in the semester when something can be done about it.
So this form is primarily something you use on your own to improve your teaching or your course?
Yes. I try to see what students need and help them actually understand what is being done in class and get something out of it. I don’t want to find out about the gaps at the end of the semester when it is too late to fix things. Another good thing about doing the mid-semester evaluations or the small group analysis is that it shows students that you’re concerned about what they’re getting out of the class. It is not at all an insult to me personally, but instead enables me to say to my class: “I am really concerned about how things are going and we are going to change these few things,” and then let them know that I want to help them out and change things and make them work. I increase the things that work, and cut out the things that don’t work. Doing so makes it easier for me to prepare for class, as well.