How do some people graduate from college without understanding the fundamentals of their major?

I don’t know what part of the world you’re in, but in the US, student evaluations of teaching have been standard for at least 30 years, and probably longer (my personal experience begins with my freshman year of college in 1994, but from what I understand, the opportunity to evaluate faculty was one of the things student protestors demanded back in the 1960s, and by the 1980s, administrators had figured out that looking at student evaluation forms was much, much easier than measuring teaching quality in any other way). The degree to which they factor into decisions about hiring, retention, and tenure varies by institution, but most colleges and universities give them at least some weight.

Typically, it works like this: In the old days, instructors distributed pencil-and-paper evaluation forms on the last day of class and then left the room, delegating a student volunteer to collect them and bring them to the administrative office that handles such things. Nowadays, students usually receive a link to the evaluation form through their campus e-mail or the course’s learning management system (a sort of web portal accessible only to students in the class). Either way, there’s typically an array of standardized multiple-choice items (for example, “rate, on a scale of one to five, how well the assignments in this course improved your understanding of the subject”), plus some optional space for free-form comments about what the student liked or disliked about the course (and yes, if students choose to respond in detail to the last category of questions, it’s often easy to tell who wrote what, but the only thing they HAVE to respond to are the multiple-choice items). The instructor is not given access to the evaluation forms until after final grades have been submitted.

And yes, there are all sorts of problems and biases with this system, not least of which (in addition to the ones you’ve pointed out) is that students often rate instructors up or down for factors that have nothing to do with the quality of their teaching, such as personal charisma or having a foreign accent. Nevertheless, evaluations can be a useful tool if they’re read with an awareness of these potential biases and looked at alongside other evidence, with an eye to patterns rather than to any single set of comments, which is what a good department chair will do.

Yes, they are, in the sense that the students doing the evaluations don’t put their names on them.

If many students make the same complaint, or rate an instructor low on the same criteria, that has more credibility than anything on any one individual student’s evaluation.

I’m in the US, but my education predates the times you reference. There were no such mechanisms when I was in school. And, the school was far more interested in maintaining its reputation as “one of the best” than in catering to the whims of students (who would likely lash out as a result of their own poor performances).

So, they could easily NOT be anonymous, as it’s easy to generate recipient-specific links that allow you to determine the source of the reply (assuming you ignored the email/HTTP headers).

But, simply knowing that they are being evaluated in a trivial way would still (?) influence their grading.

So, yet another subjective evaluation of the evaluations? :frowning_face: This seems well-intentioned but silly in implementation.

E.g., my high school contacted me YEARS after graduation for comments about the education provided. I think this was part of a drive for increased public funding (?) – higher property taxes…

I had nothing to gain by being anything other than honest – I had no idea if any/all of the teachers that were involved in my education were even on the payroll in those same roles.

But, I could still comment on the overall quality of the education and the issues I saw with the material presented (lack of certain types of equipment that, in hindsight, would have improved the experience, etc.)

I left it up to the folks reading my comments to determine if the issues I raised were pertinent in their current system or had been fixed, previously (but, after my departure).

It also could act to embarrass them if they had moved away from things that I had praised about my education.

E.g., American literature and American history were core parts of it – two years of each. When I went to college, I found most other students were woefully unversed in either.

Any school, especially one that is interested in maintaining a reputation as “one of the best,” should care about having instructors who do their job well. The school wants to know whom they should hire back next year, or promote. And their students are in a better position than anyone else to judge certain aspects of how well their instructors are doing their job.

Those student evaluations shouldn’t be given undue weight, and should be taken with a grain of salt, but they do provide important information.

In my case, having professors who were the acknowledged experts in their fields was a significant part of being “one of the best”.

And, most one-on-one “teaching” occurred between TAs and students; the “professor” just conducted “lectures”.

TAs, of course, are grad students, so they’ll likely be GONE in short order. What value is there in grading them on their performance in those roles?

That’s a whole lotta “quotation” marks!

As far as I know, it’s not possible for the individual faculty member teaching the course to do this, although it’s possible for the institution to do so (and if, for instance, a student made death threats against a professor in their course evaluations, they would certainly cease to be anonymous in short order!)

Yes, that is correct. That’s why, as I said in an earlier post, professors are incentivized to grade students less rigorously than they might. You’re also correct that this is a problematic and counterproductive system in a lot of ways – but, on the other hand, most of the other options are also problematic in other ways. There are some obvious issues with not evaluating teaching at all, especially if you’re an institution that claims to value it, and if you are going to evaluate teaching, it makes sense to give students the opportunity to weigh in on the many, many things that students are legitimately qualified to judge but a chair or faculty peer observing a single class might miss (whether the instructor gives timely and useful feedback on assignments, whether their explanations make sense to a non-expert, whether they show up for office hours, etc.) And if you wait until the end of a student’s degree program to ask them to assess the courses they took, you run into a whole other set of complicating factors and biases – they may not remember how they felt about a course they took years ago, or why; they may not even remember who taught it; and they’re far less likely to respond because they simply don’t care any more. Also, the primary audience for narrative comments, in particular, is the faculty member who taught the course, and it helps if they can have that feedback in time to make changes for the next semester, rather than three or four years after the fact.

There are lots of institutions and degree programs where this isn’t the case. If you don’t have graduate programs (or don’t have one in that particular department), you don’t have TAs. And, even at very large institutions with big graduate programs, big lecture classes with TAs aren’t the norm in the humanities, at least beyond the intro level. (And sometimes not even then – a skills-intensive course like freshman composition or French 101 simply doesn’t lend itself to large lectures.) Graduate students do teach in these programs, but they are usually the instructor of record, not a TA – and it does make sense to evaluate their teaching, if only because they’ll be applying for faculty jobs soon and they will be expected to submit “evidence of teaching effectiveness,” which usually means copies of course evaluations that include both numbers and student comments.

It’s an imperfect system and you’ve pointed out many of the problems with it, but a really GOOD system of evaluating teaching – one that balanced student feedback with regular classroom observation, and took students’ performance in follow-up courses into account – would be far more time- and labor-intensive, and most institutions just don’t have the resources.

You know, this reminds me of my nephew’s first year of high school, back when people were just starting to put “how to do it” type videos on YouTube.

OK, it’s the 2nd or 3rd day of the brand-new school year … my nephew comes over and asks me how to make a YouTube account and how to use it … so I showed him how and all of that, and he said he had to do it for school.

I was like, “Since when did things like YouTube become needed in school?”
He says it’s for math class, because the math teacher admitted to them she doesn’t really understand the math she’s trying to teach … and she’s not really a math teacher in the first place …

Apparently she was hired because the freshman social studies/sciences teacher had to retire due to health reasons. Well, the first teacher didn’t want to retire, so she got 2nd and 3rd opinions saying she had just a few minor health concerns that weren’t much to worry about … so she rescinded the retirement papers, which left her replacement without a job.

And since the replacement already had a year’s contract, they stuck her in a math class she admitted to knowing about as much about as the kids did, and she pretty much used YouTube in the class, having the kids look up what they needed for the homework and passing along any useful videos and sites she found …

Needless to say no one really failed the class that year …

University professors being graded by students sounds really counterproductive to me - it negates the professors’ gatekeeper function.

Four decades ago I sat in the introductory lecture for the year’s electrical engineering intake (~450 people) at the University of Hannover, Germany. The department head predicted that two thirds of us would wash out, most of these in the first four semesters. The professors really thought (and were not reluctant to say) that most of us did not have what it takes to be a competent engineer, particularly with regard to mathematics. (I for one barely made it through vector calculus.)

The professors did not regard us as their customers, but graduates as their product.

One trick my grad advisor taught me: When you’re teaching a course, always return every assignment, graded, at the very next class session. That way, when you get the evaluation results, you can look at the “Assignments were graded and returned in a timely fashion” question and use it to calibrate: If you got anything but the best mark on that one, you knew that students were marking you down unfairly.

Most of these learning management systems are created by third party companies, not the school (for instance, one of the more popular ones is Google Classroom). If I give my students a questionnaire on Google Classroom, and check the “make responses anonymous” button (which they can tell that I’ve clicked), then I have no way of telling who’s who. Well, unless a student gives me clues in whatever they responded, but the free-response questions are usually optional.

Being an expert and being a good teacher are two different things. It also depends on whether the university cares mostly about students or about professors getting grant money. I’ve heard a lot of my professor friends, who are famous in our field, discuss the pleasure of getting enough grant money to avoid teaching as much as possible.
Also, if you think that university administrations would risk the trouble they’d get into by breaking anonymity to help professors, I can tell you’ve never been a professor. I haven’t been either, but my daughter is one.

I am going to ask my kid this question tomorrow to see how he’d go about solving this. He’s 10. I’m pretty sure he can solve this at least with the “do it by section” approach. I don’t think he’s learned about averages at all. He’s clever though. I’ll report back!

Update:

I presented the problem to my son, as a sketch of a wobbly long rectangle and asked how he might go about figuring out an approximate area.

I told him the length could be considered to be understood (it’s a sidewalk, so imagine from one street corner to another) but the width changed along the way.

I sketched this on lined paper, so he instinctively said those lines were convenient and assumed he could break the widths down into something that looked like “unit squares” and count those up. So, his conceptual understanding of the problem was good; make smaller parts and sum them.

I cleaned up the sketch a little (it was originally wavy lines) and said of the first 20 units of length, the width was pretty much always 2 units. Then for the next 10 units it was more like 3, and so on. He immediately understood that, and said he’d be able to add those bigger areas instead until he’d done the whole length. Absolutely correct!

I told him that the story I read had the person add up all the widths and multiply that by the length. He laughed out loud and said “that’s way too big!”

He does not know how to calculate averages. I told him I was sure he’d understand and I could explain it. He said “mom, it’s Saturday, I’m going to play Fortnite” and walked away. Also, absolutely correct! (Nothing rude about it, he’s a fun and silly kid and I do tend to constantly try to teach him stuff).

I just nagged him about whether he knew how to find the volume of concrete, and he knew to multiply by the height.

So there you go. A 10 year old who hasn’t learned the basic math involved understood the sidewalk area and volume problem.
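For anyone who wants to see the arithmetic spelled out: here’s a quick sketch, in Python, of the kid’s section-by-section method versus the mistaken “add up all the widths and multiply by the length” method from the original story. The section lengths and widths are invented to roughly match the sketch described above (20 units of length at width 2, then 10 at width 3, and so on); they’re illustrative, not from the thread.

```python
# Each section of sidewalk is (length, width); values are hypothetical.
sections = [(20, 2), (10, 3), (15, 2.5)]

# The kid's method: area of each section, summed.
correct_area = sum(length * width for length, width in sections)

# The mistaken method from the story: sum of ALL the widths,
# multiplied by the TOTAL length -- every width gets counted
# over the whole sidewalk instead of just its own section.
total_length = sum(length for length, _ in sections)
wrong_area = sum(width for _, width in sections) * total_length

print(correct_area)   # 107.5
print(wrong_area)     # 337.5 -- "way too big!", as the 10-year-old said

# Volume just multiplies the area by the slab thickness (hypothetical value):
thickness = 0.5
print(correct_area * thickness)   # 53.75
```

The section-sum approach is exactly the "break it into smaller parts and add them" idea the kid landed on; the wrong method overshoots by roughly the number of sections, since each width gets applied to the entire length.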

I’m a proud mama!

Thanks for the update! :smiley:

I explained the problem to my son as well, but since he was a college engineering student at the time, he just laughed at the idea that anyone would do this.

For that matter, somebody using the emissivity to that many significant figures when the radius and distance are known to 1 significant figure is doing it wrong. I often see that in young engineers where I work. The computer says the answer is 1.323519254651 meters, so that’s what they report, in a system where the precision of the measurements that went into the calculations is good to millimeters at best. Argh.
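One cheap defense against this kind of false precision is to round reported results to the number of significant figures the inputs support. Here’s a minimal sketch in Python; the helper name `round_sig` and the example values are my own, not anything from the thread.

```python
from math import floor, log10

def round_sig(x, sig):
    """Round x to `sig` significant figures (hypothetical helper)."""
    if x == 0:
        return 0.0
    # Position of the leading digit determines how many decimal
    # places round() needs to keep `sig` significant figures.
    return round(x, sig - 1 - floor(log10(abs(x))))

raw = 1.323519254651          # what "the computer says"
print(round_sig(raw, 2))      # 1.3 -- all the inputs actually justify
print(round_sig(0.004567, 2)) # 0.0046 -- works below 1 as well
```

This doesn’t fix the underlying habit, but it makes the reported number honest about the measurement precision behind it.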

I totally agree with the sentiment of your statement, though in practice I would assume 2 or even 3 significant figures for the radius and distance. Even though we are told they are 3 cm each (which is indeed only 1 significant figure), they likely mean 3.0 cm or even 3.00 cm, whether they realize it or not.

When I taught chemistry, I was very precise in my problem statements. If I meant 3.0 cm or 3.00 cm, I would state it as such. But in the real world, people are often not as precise.

Fair enough. In a laboratory situation, that’s reasonable.

That is why I was half-joking about working it out on a slide rule: you cannot trivially generate false precision, plus it forces you to keep track of orders of magnitude (try it!)

Oh, I know how slide rules work – they will provide the significant digits, but the exponent is your problem.

One of the important reasons I quit teaching college was that the students stopped caring about being students.

E.g., they couldn’t remember something they were told two days ago. Astonishing lack of retention of knowledge. They would cram (never a good idea) the night before the final, and a few days later couldn’t remember a single thing on the test. Which made it impossible to teach courses that, as usual, required them to know the material from previous courses.

That a potential employer finds that they didn’t understand a thing in their field is not surprising.