Can someone explain the A Level and GCSE scandal in the U.K.?

I understand what the tests are from a previous GQ thread, but what is the current scandal all about? It seems like every news story starts in the middle and assumes I already know the background.

Did the scoring system change? Some sort of weird Covid adjustment?

It’s a Covid adjustment. A Levels (the optional UK high school exams used to qualify for university admission) were cancelled this year due to Covid, so a system was needed to work out what a student’s grades would have been had they sat the exams. University offers are usually in the form of something like “2 As and a B”, meaning you are accepted at a particular university on condition you get at least two As and one B in your A Levels (of which you typically take 3 or 4).
So the government, in its wisdom, rather than just taking the “predicted” results that all students usually get based on their work up to that point, applied an algorithm to those predicted results to produce the “real” result (I’ve not actually been able to find a technical description of this algorithm). This was hugely controversial, as it led to a lot of results being downgraded (especially in low-income areas). And hence students in low-income areas who had never sat any exams were being told* they had failed them and couldn’t go to uni, whereas posh kids in private schools were much less likely to have that happen.

    • Actually in the last hour the government did a U-turn and decided to accept the “predicted” grades unaltered.

GCSEs haven’t been announced yet, but as for A Levels - these are the most important exams you sit at school, and the grades will influence your acceptance at university. Each university course will set its own grades you must achieve.

So A Level results are mega important.

In A Level year, a student will also sit ‘mock’ A Levels, usually around January time - a kind of half-way test to see how you might do in the final exam. This gives teachers a fair-ish idea of where you’re at, and a kick up the arse for students who need to do better. But it’s quite usual to do better in the final exams than the mocks (let’s face it, you revise harder for the real thing).

But this year = no final exams. So teachers submitted their own predictions on what the students should achieve - partly based on the mock results and partly just ‘knowing’ the students. Teachers do this every year, to help students decide which universities they might apply to.

Now, call it human nature, but teachers’ predictions are usually a bit more generous than final results, so the government decided to introduce some sort of algorithm to get more accurate results. It has resulted in a substantial amount of ‘downgrading’. Now, this wasn’t unexpected (else why bother), but what has transpired is that it has adversely affected promising pupils in less brilliant schools - because if your school didn’t do great in the last few years, then you, as an individual, get lumped in with that underachievement and downgraded. Hence an enormous backlash and numerous stories of kids from poor backgrounds getting turned down to study medicine, even when they’ve always been straight-A students. It’s a right old shit show.

It’s moot now, but this is what they tried to do.

[QUOTE] Teachers were asked to supply for each pupil:

  • The grade they were estimated to receive in an actual exam
  • A ranking compared with every other pupil in their class

These were combined in a mathematical model - or algorithm - with the school’s previous performances in each subject.

The idea was that in each school, students would be given similar grades to those awarded to their school in the previous few years.

The teachers’ rankings would decide which of those pupils received the top grades in their particular school.[/QUOTE]

The big flaw was that it took no account of an outstanding student in a mediocre school, or of schools that were improving.

fwiw, the algorithm’s name is dominic.

So one thing that I hear from my parents on the subject is that conditional offers from Universities had already gone out, based on teachers’ predictions, and that some Universities have responded to the clusterfuck with “well, fuck it, you can have the offer anyway, even if your ‘official algorithm mark’ is too low.” Not that this helps GCSE students any. But is that widespread?

Also in Things I Have Heard … that schools who cancelled the mocks are particularly disadvantaged, and that a lot of schools cancelled mocks because COVID and because they were assured that it would all be okay.

Of course, none of these problems will be a problem for the children of all the Old Etonians in government, and their acquaintances… :unamused:

One of the consequences of this was that if someone at a school had got a “U” grade (i.e. ungraded/failed) in the past 3 years, then someone this year would get a “U”, regardless of actual performance.

It was also noted that if you appealed your grade and got it raised, they would in turn lower someone else’s grade to balance it out.

Apparently smaller classes didn’t have enough data, so they just went with predicted grades. The fact that small classes tended to be in public schools like Eton is just a coincidence :thinking:

As griffin1977 has noted, they’ve now given in and decided to use the predicted grades for everyone, but of course many universities have already filled their courses based on the announced grades, so many students will have missed out on their first choice.

It’s a funny kind of thing – teachers know how important numbers like this are – but they actually know better. Which is why they so often tell us that exams are useless – typically, an exam result doesn’t tell a teacher anything they don’t already know.

And specifically, one of my professors did a research project on engineering student selection. The best predictor of student success in engineering at university was the prediction by their physics teacher at high school, and that was not improved by any of the available exam results.

I don’t have exact figures to hand, but conditional offers have been the basis of the university application system for decades. This allows time for both students and universities to sift and sort their preferences, ending up with some degree of certainty for both, as students with offers have until a fixed date to settle on a preferred and a second choice, allowing the vast majority a smooth-ish transition from the end of their school years to the new academic year at university.

But all that rests on a common acceptance of the reliability of the A-level results. These have always reflected a balance between absolute performance of the individual and the distribution of the different grades of the whole cohort (each examining board covers thousands of schools). Without exams this year, that second factor disappeared, replaced by this clumsy algorithm standardising in terms of individual schools’ performance.

Now that the Whitehall clown car has performed another screeching U-turn, universities will presumably find themselves with a whole lot more students meeting the conditions of offers than they had expected, as there will be only the schools’ assessments to go on, with no external validation, either of absolute performance or cohort-standardised.

And they can’t even just take a year out and go backpacking round Europe working menial jobs to pay their way, either…

In fact it’s worse than this. Two things to note:

First, teachers had to rank every student in the group individually. You couldn’t just divide the group into, say, deciles. If you thought a bunch of students had similar abilities and aptitudes, you couldn’t give them equal ranking; you had to rank all your students sequentially.

Secondly, if the group had more than 15 members, the algorithm completely ignored the teachers’ predicted grades. It simply looked at the distribution of grades achieved by pupils at that school over the preceding three years, and awarded these to this year’s students based on their ranking.
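Ofqual never published working code, but the mechanics as described above can be sketched roughly as follows. To be clear, this is my reading of the news reports, not the official model - the function name, the tidy rank-to-slot mapping, and the exact treatment of the 15-student cutoff are all illustrative assumptions:

```python
# Rough sketch of the reported standardisation logic.
# NOT Ofqual's actual model; names and details are illustrative.

def assign_grades(ranked_students, teacher_predictions, historical_grades,
                  small_class_cutoff=15):
    """ranked_students: student IDs, best first (strict ranking, no ties).
    teacher_predictions: dict of student -> teacher-predicted grade.
    historical_grades: grades this school awarded in this subject over
    the previous three years, e.g. ['A', 'B', 'B', 'C', ...]."""
    n = len(ranked_students)
    if n <= small_class_cutoff:
        # Small classes: too little historical data, so teacher
        # predictions were (largely) used unaltered.
        return {s: teacher_predictions[s] for s in ranked_students}

    # Large classes: predictions ignored entirely. Sort the school's
    # historical grades from best to worst...
    order = ['A*', 'A', 'B', 'C', 'D', 'E', 'U']
    hist_sorted = sorted(historical_grades, key=order.index)

    # ...and hand them out to this year's cohort by rank position,
    # scaling if the cohort sizes differ.
    grades = {}
    for i, student in enumerate(ranked_students):
        slot = round(i * (len(hist_sorted) - 1) / max(n - 1, 1))
        grades[student] = hist_sorted[slot]
    return grades
```

On this sketch, a class of 20 students all predicted a B, at a school whose last three years of results included a U, still hands someone a U - which is exactly the complaint upthread.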

One commentator has described this as “attempting to calculate the size of each egg that went into an omelette, by looking at a different omelette”.

Letters in this morning’s Guardian make similar points:

https://www.theguardian.com/education/2020/aug/17/inbuilt-biases-and-the-problem-of-algorithms

And from the universities’ point of view, the system is based on their getting the grades from the exam boards a few days before publication, so that they can sort out whose offer is automatically confirmed and which/how many of the “near misses” can be confirmed, timed so as to forestall the phone calls from anxious pupils and parents. Now they’ve got to do a second run-through on all the people they didn’t confirm offers on, and work out what that does for their numbers and all the planning of teaching and everything else. And the phone calls will be coming just the same (I’ve worked in this field: it is not fun at the best of times).

Which then apparently had a knock-on effect, as the algorithm was intended to prevent any overall grade inflation - so when the tiny classes were given their predicted grades, which were higher, everyone else’s were slightly lowered, because there were only so many top grades to go around.

Another issue is that quite a few classes are smaller than normal due to restrictions on numbers of students in a lab, for example, and some kids who were aiming for competitive subjects with a practical component, like medicine, aren’t getting in on any course for the subject, even if they do now officially have the grades. Realistically, as these courses fill every year, a lot of the applicants who got downgraded will now never get onto the course they would have got onto without this mess.

The GCSEs are not so likely to cause major issues, incidentally. The grades may well be as questionable, but many vocational courses are decided more on interview than grades, and flexibility is likely, especially for admission into their school’s 6th form, which is a common route. There’s still likely to be problems, but not on the same scale and with less impact on future employment.

Story told before…

My first year of uni, I passed four subjects with flying colors and had to retake two exams in September (Algebra and Calculus). This matched my own predictions from back in October. As I left the train, I expected my father to receive me with a stern “We Need To Talk”.

Instead he gave me a funny look and asked how my trip was. “Woah woah, where’s that ‘we need to talk’? What happened?”

Turns out he’d run into my 12th grade Physics and Draftsmanship teacher (two different subjects), who’d predicted my university grades almost exactly. The only blip was that he’d expected me to have 95% or better in Draftsmanship (which for ChemE was not a 1st year course, so I hadn’t); my 100% had been in Crystallography, to which the teacher said “ah yes, it’s similar!”

So Dad had accepted that hey, if what I’d done was exactly what the Physics teacher expected, I might not have done as badly as he himself thought :stuck_out_tongue: (Dad reckoned anything below 100% was an F).

And I’m another datapoint in favor of “Physics teachers know who will do how well in Engineering school”. They better, 90% of our work is based on their lessons!

Is this happening in other countries? I assume every country is having to deal with similar issues for school leavers; I’d be interested to know how they’ve managed it (better, presumably).

https://www.theguardian.com/education/2020/aug/12/school-exams-covid-what-could-uk-have-learned-from-eu

Hey, we’ve had re-adjustment for years – (vic.au)

Not for me, my assessment was external exam results, but one of my younger friends was adjusted down because nobody else at his school had done as well on external exams – so his in-school assessment obviously wasn’t as good as if he had gone to a school where everybody did well on external exams.

I think that perhaps it would be more contentious if it weren’t an adjustment done at schools where nobody else cares, because nobody else is going to university anyway.

I’m southern hemisphere – our COVID school exams won’t happen until October/November.

That’s maybe true of a specific area like engineering, but in Canada at least, there has been huge grade inflation at the high school level. It’s a big issue.

There’s a lot of pressure on teachers to give good grades. Rather than have a conflict with the parents and student, it’s easier to just boost grades and let the Uni system sort it out. This is especially bad in small schools.

We have close friends who live in a rural area. Their son was a “genius”, getting a 96% average in his last year of high school and even 100% in some courses. They literally expected him to attend Harvard, and he wrote the SATs. They were stunned: their “genius” scored in the low 70th percentile. So much for the validity of a 95% grade at a small rural school.

FWIW, Canada does not offer any standardized admission tests like the UK or USA, but there is some pressure to do so because of grade inflation. But that’s a subject for a different post. Another reason for this (in Canada at least) is that as schools seek to increase revenue, there has been a massive increase in foreign students, resulting in more competition for the spots. (But that too is a subject for another post)

In the US, the Big Important Tests (the SAT and/or ACT, which everyone going to college takes, and the AP tests, which advanced high school students can take to get college credit in some courses) were done online instead of in-person. There’s believed to have been widespread cheating, which may have caused some colleges to deflate or de-weight those tests, but the scores are what they are.

Actually I’m not sure about the SATs-- those are usually taken in the fall, and so most students wouldn’t have been affected (yet). But it was definitely an issue for the APs.

According to a social media post I saw recently from a teacher, it’s not that teachers tend to be generous with predictions. Rather, in a class of (say) 5 students who have all been working at (say) a ‘B’ grade level for the last 18 months, when it comes to final exams, one will probably ace it and get an A or even A*, three will most likely get a B, and one will probably flunk the exam and get a C or even a D. Hence in a typical year the grades will be ABBBC (or similar).

However, the key point is that the teacher has no way of knowing in advance WHICH student will be the one to flunk (or indeed which will ace it), because there are too many external factors (topics in the exam paper, family circumstances, illness, etc.). So quite rightly they predict BBBBB as the grades. Then the algorithm comes along (with its prior expectation of ABBBC) and downgrades someone, essentially at random, to a C. And really this was entirely predictable from the outset. I’m not involved myself so no doubt I’ve overlooked/simplified some things, but I think this illustrates one of the main problems nicely.
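That scenario is easy to simulate. A hypothetical sketch of the teacher's example (the five students, the forced ranking, and the ABBBC historical distribution are all just the illustration from the post, not real data):

```python
import random

def algorithm_grades(teacher_ranking, historical=('A', 'B', 'B', 'B', 'C')):
    # The teacher is forced to rank all five students even though they
    # are genuinely indistinguishable, so the ranking is effectively an
    # arbitrary tie-break -- and the school's historical distribution
    # then decides who gets which grade.
    return dict(zip(teacher_ranking, historical))

students = ['v', 'w', 'x', 'y', 'z']      # five solid-B students
ranking = random.sample(students, k=5)    # essentially a coin flip
result = algorithm_grades(ranking)
# Whoever landed last in the ranking is downgraded to a C, and whoever
# landed first is bumped to an A -- at random, as far as ability goes.
```

Run it a few times and a different student draws the C each time, which is the post's point: the downgrade is real for that student, but the choice of victim is noise.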