damn it, preview!
kaje - being one myself, demographically speaking, I certainly have nothing against Gen X’ers and probably have a similar educational background to the one you cited.
Manda JO very efficiently pointed out why I noted the SAT curve. The citation I used was just the first one I came across, but it made essentially the same point: while colleges can and do adjust scores for the curve, and additionally adjust grades for weighting, the media and general public don’t, and neither do many aggregate ‘averages’ reported by school districts, states, etc. I like what weighting tries to accomplish. I’m not so sure it is successful when the same amount of inflation may be happening BEFORE the weight is applied.
Parental pressure, I believe, would only be a ‘problem’ (since parental involvement is great and the lack thereof is one reason for the decline of schools) in middle schools and high schools, not colleges. Conversely, the pressure to attract students with good stats, like a low dropout percentage, would be an issue only with colleges.
As I said, most of what I had was anecdotal. If I have some time tonight I’ll try to research it a little better for facts.
This is an excellent point about the increased pool of SAT takers, but there’s more to it than that. Educationally, we are in a new era where schools are being held accountable for student achievement. In elementary and middle school there are many ways to measure student achievement (CTBS, ITBS, CAT, etc.), but in high school there have been few measures of achievement. The most popular are graduation rate and SAT performance. As an indicator of improvement, schools are encouraged to increase the number of test takers and increase their performance. While this is a wonderful attitude and request, and is truly what should be happening in our schools (preparing students for college regardless of their current intent to attend), the truth is that when SAT participation jumps, performance drops. (Note the generalization.)
However, this was not the impetus for “The Great Recentering of 1993”. In 1993 the norming group for the SAT was changed. Up until that time the norm group was a small group of private school males from the mid-50s (for some reason I recall about 1200 kids from 1953.) There wasn’t much “diversity” in this cohort - pretty much a bunch of white males from UMC, conservative families. Of course, this pretty much represents those who went to college in the 50’s and early 60’s. As more minorities and females began taking the SATs (and enrolling in colleges and universities) colleges needed to compare applicants to a different cohort. It just took ETS a while to catch up and generate a norming group large enough and reliable enough to use.
The recentering was not done to bring up the scores of the current crop of test takers to some prior “standard”. Further, the renorming did not “expose” a dumbing down of test takers or a precipitous drop in scores. True, a recentered score could be as much as 100 points above what it would have been had it been normed on the original scale, but this is not a “bump” to improve the look of the scores.
Fretful, what you need to know about the British educational system is that they don’t use the same grading scale as the U.S., where 90%+=A, 80%+=B, etc.
In England, (or at least in Oxford), a “60” is considered passing. An 80 is “superb work,” and an 85 is “brilliant and original work.” A 90 would be all but unheard of, and a 100 – that commonplace mark of A+ in the States – does not exist. My quotes are from my Oxford graduate student guide, but the same goes for undergrads at most universities all over the country.
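To make the contrast between the two scales concrete, here is a minimal Python sketch. The descriptors come from the Oxford guide quoted above; the exact cutoff numbers and the function names are illustrative assumptions on my part, not official policy from either system.

```python
def oxford_descriptor(mark):
    # Descriptors quoted from the Oxford graduate student guide;
    # the cutoffs are assumptions for illustration only.
    if mark >= 85:
        return "brilliant and original work"
    elif mark >= 80:
        return "superb work"
    elif mark >= 60:
        return "passing"
    else:
        return "below passing"

def us_letter(percent):
    # Conventional U.S. scale: 90%+ = A, 80%+ = B, and so on.
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if percent >= cutoff:
            return letter
    return "F"

# The same raw mark of 85 reads very differently on the two scales:
print(oxford_descriptor(85))  # "brilliant and original work"
print(us_letter(85))          # "B"
```

The point of the sketch: an 85 that would be a mere B in the States is at the very top of the Oxford scale.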
Over here they don’t buy into the theory that not knowing half the material “demoralizes” a student. In fact, IMHO that construction is an “I’m OK, You’re OK” approach to doling out grades.
OK, maybe there’s something I’m REALLY missing here, but it seems like handing out passing grades for getting half the answers wrong is much more of an “I’m OK, you’re OK” approach.
Does this exam cover material the students have been taught and are expected to have mastered? If so, it seems reasonable to expect that anyone who has successfully learned the subject should be getting 90% of the answers right. If not, what on earth is the point of testing somebody on something they can’t reasonably be expected to know?
originally posted by Fretful Porpentine
OK, maybe there’s something I’m REALLY missing here, but it seems like handing out passing grades for getting half the answers wrong is much more of an “I’m OK, you’re OK” approach.
It all depends on the test design. If your view of a good test question is the “prove that you know this fact, have memorized this date, can recite this definition” type question, then you might have a point.
But if the instructor writes questions that require the students to work beyond memorization and demonstrate their ability to apply their knowledge to an unfamiliar problem, or to demonstrate that they have not only acquired a lot of little pieces of knowledge but have managed to integrate them into a useful structure, then the grades may quite reasonably run lower.
(In practice, I believe that there is a role for both of these kinds of questions in assessing the students’ mastery of the course material.)
We have a long tradition in the U.S. of assuming that 70% is a minimum passing grade, 85% is average, etc., but those are really pretty arbitrary numbers. And after more than 20 years of teaching, I have yet to get the hang of designing a test to such predefined numeric criteria. If I knew ahead of time how the students would score, I wouldn’t need to give the silly test! The SAT and other standardized-testing folks are able to do this because they have huge banks of potential questions that they validate by giving them to sample populations and correlating performance on those questions with other indicators (e.g., students’ grades, performance on older, previously validated question sets).
Consider this: if the purpose of a test is to distinguish the amount/quality of learning by the students, then from a statistical point of view an average score of 50% is ideal, because it allows equal room on either side for individual students to demonstrate how far above or below the average they actually fall.
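One way to see the statistical point: for a single right/wrong question that a fraction p of the class answers correctly, the score variance on that item is p(1 − p), which peaks at p = 0.5 — i.e., when the average on the item is 50%. A quick sketch (purely illustrative, not from any testing standard):

```python
# Variance of a single right/wrong item as a function of the fraction p
# of students who answer it correctly: var = p * (1 - p).
# The item spreads students out the most (maximum variance) when p = 0.5,
# i.e., when the average score on the item is 50%.
ps = [i / 100 for i in range(1, 100)]
variances = {p: p * (1 - p) for p in ps}

best_p = max(variances, key=variances.get)
print(best_p)              # 0.5
print(variances[best_p])   # 0.25
```

At p near 0 or 1 the variance collapses toward zero — a question everyone gets right (or wrong) tells you nothing about how the students differ.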
===========
An aside: I use the phrase “the purpose of a test” quite loosely, because there are many reasons to test, and a given test may serve multiple purposes at once.
And now, class, your assignment for today is to consider the possibility of designing a test to not only demonstrate to the professor what the student has learned, but to also demonstrate to the student the things they have not yet learned so that they can prepare better for the final or perhaps study up a bit before taking the next course that has this one as a prerequisite. How does that second goal affect the likely class average?
===========
Back to the main point:
Numeric scores are useful, but mainly as a way of ranking the students within a given class. After that, the instructor must make a judgement about where in that score range to give the A’s, B’s, etc. That’s a subjective judgement and always will be. Even if an instructor goes with the old 70%-is-a-pass rule, it was a subjective judgement to say that rule was reasonable for this test (as well as a hundred smaller subjective judgements performed during grading to avoid being too hard or too soft to meet the 70% goal).
A good instructor will have specific curricular goals in mind in designing and grading a test: “Anyone passing this course should know… and should be able to apply that knowledge to do….” The instructor then needs to look at what the test results say about the students’ achievement of those goals. As an instructor, once I find a couple of test papers that are clearly “A”s, “B”s, … and “F”s, I can rely on the numeric scores to interpolate the rest of the students into that range.
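The anchor-then-interpolate approach described above could be sketched like this. Every number here is made up — the anchors stand in for the handful of papers an instructor has actually read and judged to be clear examples of each letter grade:

```python
# Hypothetical anchor scores: papers the instructor has read and judged
# to be clear examples of each letter grade. All numbers are invented.
anchors = {"A": 82, "B": 68, "C": 55, "D": 44}

def letter_grade(score, anchors):
    """A score earns the highest letter whose anchor it meets or exceeds;
    a score below every anchor is an F."""
    for letter, cutoff in sorted(anchors.items(), key=lambda kv: -kv[1]):
        if score >= cutoff:
            return letter
    return "F"

print(letter_grade(75, anchors))  # "B"
print(letter_grade(50, anchors))  # "D"
print(letter_grade(30, anchors))  # "F"
```

The subjective work is all in choosing the anchors; the numeric scores only fill in the gaps between them.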
Much of what gets written about grade inflation is based upon a false worship of absolute numbers. The few credible studies of the practice have to be based on some form of external validation of the scores.
(And every now and then I get an entire class in which nobody gets 50%.)
I’m curious as to why, after kids are killed or go missing, suddenly, according to the media, they were A+ students all along. This kind of grade inflation is ridiculous.
It all depends on the test design. If your view of a good test question is the “prove that you know this fact, have memorized this date, can recite this definition” type question, then you might have a point.
But if the instructor writes questions that require the students to work beyond memorization and demonstrate their ability to apply their knowledge to an unfamiliar problem, or to demonstrate that they have not only acquired a lot of little pieces of knowledge but have managed to integrate them into a useful structure, then the grades may quite reasonably run lower.
OK, that makes perfect sense in the humanities, or even the sciences at the university or graduate level, but the percentages in sirjamesp’s link referred to a mathematics test, and I got the impression that the test-takers were still in secondary school and applying to universities. At that level, mathematics is mostly a matter of either having learned, or not having learned, how to solve X type of problem. So I’m curious about what, exactly, this test covers.
Oh well, it’s off topic and not terribly important. (I’m not going to give my thoughts on the OP because they don’t belong here, but the minute this thing gets moved into GD it’s open season…)
Quoth sirjamesp:
To obtain a grade C at GCSE in 1988, a score of 65% was needed. Now, the necessary score is 45%.
I’ll use this as an example, but this can also apply to other tests. Just saying that the passing grade decreased does not necessarily imply grade inflation. Maybe the questions just got that much harder, so they had to change the grade scale to compensate. Unfortunately, that’s something very difficult to measure.
originally posted by Fretful Porpentine
OK, that makes perfect sense in the humanities, or even the sciences at the university or graduate level, but the percentages in sirjamesp’s link referred to a mathematics test, and I got the impression that the test-takers were still in secondary school and applying to universities.
True enough, and despite my skepticism over people who argue grade inflation based upon absolute scores, I have to admit to frequently being appalled at the level of mathematical knowledge of many of our incoming freshmen.
Now, we make more than half of our incoming students take remedial mathematics courses, so I could argue that [finger-pointing]we’re not the ones exhibiting grade inflation - it must be the high schools[/finger-pointing]. But that would be a tad dishonest. You see, when over half your freshman class isn’t ready to take calculus, and you’re a science department, are you really going to tell all those students up front that they can’t take any courses in your major until they’ve taken one or two semesters of precalculus math? That would be departmental suicide - everyone would enter a different major. So we don’t so much inflate the grades as we dumb down the beginning courses so that the typical freshman stands a chance of surviving them. Then we ramp up the math requirements in each course in the major sequence so that eventually, we hope, they catch up.
Another not-quite-grade inflation problem: we require all students to get at least a “C” in all major courses and they must do so before moving on to the next course in the degree. In the absence of grade inflation, we get stuck in two ways:
- The senior who fulfills all their graduation requirements except that one last course where they got a D or C-. Technically, they passed the course, but they have not met the degree requirement. Try explaining that distinction to an angry parent who wants to know why we’re denying a diploma to their child!
- Forget whether or not an average grade is “B” or “C”. The simple fact is that most instructors view “B” as the “you’ve learned the major things I wanted you to learn from this course” grade. When students move on from one course with a “C” into a second course that has the first one as a prerequisite, they seem to have a hard time understanding that their previous “C” was not a good grade. I have to explicitly explain in my course syllabus, “If you received a grade of less than ‘B’ in the prerequisites, that means that your instructor believed that you failed to learn something important, something that I am probably going to assume that you know. That means that you are already behind in this class, and need to put in some extra studying just to get up to the starting line for this course.” No other statement I’ve ever put into a course syllabus has gotten me so much flak on the student evaluations as that one.
(Forgive me. I do seem to be in a bit of a ranting mood today.)
I did preview that last post. I really did!
Feh. The OP asks for facts, I asked for facts, and what do we get? “I’m in rant mode.” “IMHO.” “Well, that’s anecdotal.”
This thread is closed. Sorry if you didn’t get your answer, don.