Student Evaluation of Teaching (College)

I’m an undergrad student and as the term draws to a close my inbox has been getting blown up daily with emails asking me to complete my instructor evaluations. My instructors have also mentioned them in class, although they’ve never said what they’re used for. I generally make a point of earnestly filling them out, including a paragraph or two in the “narrative” section where I can type out my own thoughts. Most of the questions ask for a 1-5 rating on things like “was the instructor available for help on course content” or “was the instructor prepared for class” or “was the grading system sensible.”

It was the same at the community college I transferred from. About the only difference is that here, I can choose to have my evaluation sent to the instructor’s supervisor, although it will no longer be anonymous if I do so.

My question (especially to current and former college instructors) is whether these student evaluations are helpful. Do you use them to help shape your curriculum? Are they used in performance evaluations? Is there something you’d like to see more of (or less of) in the evaluations? Most instructors I only see in one course and then never again, although I had the fortune once of having the same instructor for my entire calculus series. Although I was just one student out of ~40 per term, it did seem like perhaps he was slowly adjusting things as if he were minding my feedback. Hard to be sure, though.

One instructor I had offered a small amount of extra credit if she received a 75% or better response rate from the class (i.e. at least 75% of students complete the evaluation). I’ve had one instructor state that he ignores the evaluations. I’m interested in your thoughts.

I teach at a university while I’m doing my PhD. I definitely look at my evaluations. We can tailor ours, so I ask questions about areas where I think I might need improvement. I read all of the comments.

I usually follow the comments to the best of my ability. I teach an advanced composition course, so feedback saying, “too much writing” isn’t really something I can do anything about. But something like, “covers things too quickly in class” I can do something about. Basically, anything I have the agency to change, I’ll adjust based on aggregate feedback to the best of my ability.

I’d like to see more thoughtful feedback about my ability to communicate concepts. It’s something I worry about, but it’s a hard thing to ask directly.

Something I sometimes get comments on that I have yet to fix is that my slides aren’t visually interesting. I’m an extremely non-visual person, and I know any attempts I make to make them more interesting will probably give my students eye cancer. More specific feedback there would be helpful.

Also? I’ve only had two people ever be mean. I still remember those comments. They hurt.

They are occasionally helpful (usually if I want to get some student feedback on a specific element of the course that is new) and often completely useless. If you want to make them more useful for your professors, focus your comments on stuff that is relevant, reasonably objective, and within the professor’s power to change. An example of a useful comment is something like “It would help if the due date for the final paper proposals was a week earlier so students have enough time to request materials through interlibrary loan and read them thoroughly.” This alerts the professor to a problem that might not be visible from their point of view.

Examples of unhelpful comments are anything about the professor’s hairstyle, voice, annoying mannerisms, fashion choices, and other essentially frivolous topics, as well as anything that is not within the professor’s power to change (like whether the classroom is too hot or too cold).

Comments that reflect the student’s personal preferences about how the course is taught MAY be helpful, but are often balanced by an equal and opposite perspective – for every “too much discussion, I want more lecture” there’s a “too much lecture, I want more discussion” from the exact same class.

I’m glad to hear the incidence of mean comments is fairly low. At this university, evaluations are closed before finals week so that might help reduce capricious remarks.

I have occasionally left remarks about the classroom itself, although I’d preface them by saying they weren’t a factor in my evaluation of the instructor. My hope was that the instructor had some means of communicating with the powers that be to try to pick a different classroom the next time they scheduled the course. For example, a discussion-based class being held in a lecture hall as opposed to a smaller room.

Are comments about TAs appropriate? Some of my courses had TAs that were below average (imo) and I made mention of this. I don’t know if it’s the right avenue for that sort of thing though.

Mostly I found them not helpful.

A lot of complaints were just generic stuff that could not be avoided. The class topic was the class topic. The profs know a lot more than the students about what was really going to help them further down the line. (In Computer Science our goal wasn’t to make sure you got a job right out of college. It was to help you get a better job 10, 20, 30 years later.)

And some stuff has no good fixes. I’m right-handed. I write on the board facing it. That means I’m standing in front of what I’m writing. So I tried to remember to stand aside when done, etc. But I’m not able to make myself invisible.

I suggested to students that instead of “Don’t do X.” in evaluations to try “Do Y instead of X.” type things. That basically didn’t happen.

(One thing to remember: The Teacher of the Year Award was called “Kiss of Death Award” among faculty. And it was eerily true. Twice the college’s TotY, who happened to come from our dept., got canned at the end of the year it was awarded. Two for freaking two. I never tracked how the TotY winners from other depts. did. I never saw anyone get fired for being a bad teacher.)

At the places I’ve taught, those instructor evaluations were always seen by both the instructors themselves (well after the semester was over and grades were assigned) and their supervisors (like department chairs or academic deans). So your questions about what they’re used for or whether they’re helpful deserve two kinds of answers.

#1: For the instructor him/herself:

The positive ratings and comments may not be “helpful” in the sense of providing specific advice, but they sure do make me feel good, and that’s helpful in its own way: both for the general encouragement that helps keep me going, and for knowing that I was doing something right—that the way I was teaching was working, at least for some students.

The negative comments, suggestions, and constructive (or otherwise) criticism, I’ve learned to take with a grain of salt. It’s not unusual for two people from the same class to say the opposite of each other (like “He’s good at responding to students’ questions” and “He doesn’t welcome students’ questions”). But any comments that seem to have been made honestly and in good faith (or any 1-5 rating items that seem to have been rated low by a substantial number of students), I’ll at least consider whether I need to change my approach, or try to do better, or at least try to explain why I do things the way I do, the next time around.

#2: For the instructor’s supervisors:

They do get looked at, or at least they’re supposed to—when I was a department chair, I was supposed to look over all my faculty’s evaluations. I assume that what’s done with them, and how seriously they’re taken or how heavily they’re weighted, can vary quite a bit from one institution, department, or individual supervisor to another. They can sometimes factor into decisions about hiring, tenure, or promotion for the faculty member. If there are complaints about a class or an instructor, those evaluations can help to show whether the complaints are isolated and possibly unfounded, or part of a pattern. Looking at an instructor’s evaluations may help his/her supervisor see if there are issues that need to be addressed with that instructor or that class, especially if there is a pattern of a significant number of students saying much the same thing. They may show the supervisor that that instructor is better at teaching some classes than others.

In my experience the way to get high ratings was to give an easy midterm (the ratings were done before the finals). I cannot prove causation but there is a strong correlation between the rise of course ratings and grade inflation. There were no evaluations when I was a student and once they became common, marks started to rise. Administrators love them, which is perhaps the worst thing I could say about them.

I’m a college instructor. I look forward to reading the evaluations every semester. I’m always looking for feedback and for how to improve my teaching methods. Am I going too fast? Too slow? Too much homework or not enough? Too much theory or not enough case studies? I especially value comments about what I’m doing that is helpful, and what I’m failing to do that would be helpful.

And when I have my annual review with my supervisor, we always discuss my evaluations. So they do make a difference. I would strongly encourage all students to provide as much feedback as they can. You probably won’t get anybody fired, or get anybody a raise, but you might help to nudge someone’s teaching style in a positive direction.

I know someone who was fired twice immediately after being Teacher of the Year (nominated strictly by students).

The reasons were obvious to me (I’m not a teacher or student, just a friend of her husband).

She was totally ignoring the teaching methods prescribed by the school and administrators do not like that!

She spent a month or so before the standardized tests in extreme test prep mode. Her students passed the test at a rate unheard of in those schools (very poor, many ELL). This was in HS and the students couldn’t graduate without passing this test. The administration was more interested in getting rid of this requirement.

She spent a lot of time 1-on-1 with students who were talented. The school regarded this as an unsafe practice both for her and the school’s liability.

After the second time she was “fired” (just not having your contract renewed), she went to a private school and, a few years later, to a public “exam” school where students were highly motivated and the administration was less rigid and more results-oriented.

Side note: Let’s say a class starts off a semester with 30 students. Ten drop out in around the first two weeks or so. Others drop out later on.

Let’s say of those original 30, only 15 finish the semester. Those students fill out evaluations. They either had no problem with the course or stuck it out. But the thing is, the administration needs to know why those other 15 dropped out, yet they don’t fill out evaluations.

Now often the reason for dropping a class could be as simple as a schedule issue but some students honestly do not understand the teacher or are having major problems understanding the material.
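The survivorship effect described above can be sketched with a toy simulation (all numbers here are hypothetical, and the assumption that the least-satisfied students are the ones who drop is purely for illustration):

```python
import random

random.seed(0)

# Hypothetical class of 30 students, each with a satisfaction score from 1 to 5.
students = [{"satisfaction": random.uniform(1, 5)} for _ in range(30)]

# Illustrative assumption: the 15 least-satisfied students drop the course
# before evaluations open, so only the happier half fills them out.
students.sort(key=lambda s: s["satisfaction"])
dropped, finished = students[:15], students[15:]

def avg(group):
    return sum(s["satisfaction"] for s in group) / len(group)

print(f"Whole class average:     {avg(students):.2f}")
print(f"Evaluations (finishers): {avg(finished):.2f}")  # skews higher
```

Under that assumption, the evaluation average is always higher than the whole-class average, which is exactly the blind spot described above: the administration never hears from the half that left.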

Final note: I had this one computer course where there were several foreign students. I noticed they never asked questions in class, and when I talked to them and asked if they understood the material, they said no. They didn’t understand much in class; they only wrote down notes. Afterwards, though, they got together with an older person who mentored them and taught them, so in reality they were learning little from the actual class and almost all from the mentor.

I wonder what they wrote on evaluations?

There’s a good chance that the procedure for dropping a class includes an opportunity to state why they’re dropping.

At my university, after the first two weeks, a reason is required. At that point, a student also can’t drop a class without their advisor and the registrar facilitating it.

Another question that I alluded to above: since many college classes have a lot of international students, has anyone looked into whether they fill these out, or whether they understand them? I know I went to college over 20 years ago, but I remember some foreign students were from a culture where one didn’t ask questions and students figured things out on their own.

So do they fill these out and do they understand them in order to give proper feedback?

Why is that? I would think they would want to hold on to the more popular teachers as long as they were not doing something wrong like making the class too easy.

I administer the student evaluations system at the college where I work. It’s ultimately a problematic way to get feedback, but no one has come up with a better idea.

One problem is, what constitutes a good teacher? Someone who gives easy grades? Someone who’s rigorous, but is expert on the subject and getting the message across? Someone whose classes are entertaining?

Also, if someone is about to get a poor grade, might they rate the teacher lower than if they know they’re doing well?* There’s also the issue of how good a sample it is. We do get a very high rate of replies, but how well do they represent the students (i.e., if someone doesn’t like the instructor, might they just not fill it out)?

Evaluations are used in tenure decisions, but I’m pretty sure they’re a minor factor. But non-tenured faculty are very concerned about their ratings.

What they are more useful for is determining general trends. If one department consistently gets lower ratings, it might mean there’s an issue. If the ratings are dropping over time for a department, why is that?

(It also doesn’t help that it was years before we had a consistent set of questions, which led to all sorts of problems comparing apples to basketballs.)
*Evaluations end before final grades are given, and are released to faculty only after they’ve submitted their grades, to reduce the chance of grading affecting the ratings.

One explanation of this phenomenon is that it’s not that they don’t want good teachers, but that they don’t care, and if you are putting time into being a great teacher, you likely aren’t putting time into the things they do care about (research and grant-writing). The students may think “wow, this lady is amazing,” because they don’t know or care how much funding she’s secured. But the school only worries about the latter.

I read them and learn a lot, but it can be a pretty emotional punch in the gut. It’s hard when you work so hard at something. The attitude of the students is probably the biggest determinant. A positive student will generally be positive about the class, and a negative student will sometimes be very negative about the class, and sometimes about you personally. I would say that most of the time, when you see an issue being repeatedly brought up by a wide selection of students, it’s usually something you are already aware of but haven’t had the time or ability to change. Seeing those things helps to focus your energy.

There’s actually a good deal of evidence that student evaluations of faculty are not only unreliable, but especially unreliable (and especially unfair) to female faculty and faculty of color.

Here’s one such study: https://www.insidehighered.com/news/2016/01/11/new-analysis-offers-more-evidence-against-student-evaluations-teaching

Even so (as a white male with over 30 years of college teaching and administration), I pay a lot of attention to them in my own classes and do make changes based on what I read. I also try to always do a kind of mid-term evaluation (although I shape it differently) when there’s still time to make changes. But based on these recent studies, I’ve stopped giving student evaluations much credibility when appointing or promoting junior faculty.

Has anyone evaluated how international students give feedback vs. American students? Do you feel there is a difference?

I don’t know of any data about that question, but in my experience (anecdata) there don’t seem to be any generalizable differences, especially because the evaluations are anonymous (although I can often tell who wrote what).