Why have polls been so wrong lately, and why do they keep underestimating the far right?

Brexit in the UK wasn’t predicted by the polls.

Trump in the US wasn’t predicted by the polls.

Fillon wasn’t predicted by the polls in France.

So what gives? I remember when 538 used to be amazingly accurate. In 2008 not only did they call pretty much every race correctly, they even flagged the Minnesota Senate race as too close to call. That race ended up being decided by a recount. They were also highly accurate in 2012.

However, 538 put the chances of Trump winning the three states of MI, WI & PA at about 1 in 100. But he won them.

Why are polls failing so badly, and why are they all failing in the same direction by underestimating the far right?

538 got many Senate races wrong too, and the same thing happened: they underestimated the right wing. This is a far cry from 2008, when they got every Senate race (or all but one) correct. So I do not think it is because people were afraid to admit they supported Trump or Brexit, because the polls also showed the Dems winning several Senate seats they didn’t win.

So what is going on? How did polling get so much worse than it was 4-8 years ago at predicting election results?

How are you defining “far” right, and why do you think the polling errors arise there?

It’s worth noting that Trump and Brexit are different situations. With Trump, the polls were genuinely off; with Brexit, they’d been showing the vote as too close to call for a month, and people simply didn’t believe that the polls were right. (There was a tiny swing toward Remain in the last week or so, but it was still well within the margin of error, and it seems to have been in response to a specific news event – Jo Cox’s assassination – so one might reasonably expect the impact of that event to fade once it was no longer in the headlines.) Basically, it’s a REALLY big mistake to assume your side will win on the basis of a 0.5% lead in the polls; what those polls should be telling you is that the race is close enough to swing either way.

I’m not familiar enough with the situation in France to know whether it more closely resembles Trump or Brexit.

Last election the polls were even more wrong, but in the other direction. We did not pay attention because, when pollsters predict the right winner, it doesn’t matter much if they said 1% and it ended up being 4%. I believe the pollsters did pay attention and overcompensated.

538 never said that. The lowest that 538 ever put Trump’s chances was at about 10%. You might be thinking of Wang’s model.

And the polls on Brexit, at least the ones shortly before the election, were correct.

It seems to me they’re worse at predicting the mushy middle than the extremes. If a hard-core Trump or Clinton supporter is asked who they’re voting for, they proudly answer. It’s the folks in the center who are embarrassed by who they’re voting for and are therefore more likely to lie.

Right. This is what Nate Silver himself has said. The polls were off by a normal amount, around 3%, just like last time. And, yes, he did conjecture that it was because they were compensating for last time, assuming a higher Democratic turnout as happened in 2012.

The problem is, they were off in the states that mattered this time, and in the direction that mattered.

538 gave Trump a 21.1% chance of winning MI, 16.5% for WI, and 23.0% for PA. Multiply those three numbers together and you get a 0.8% chance of Trump winning all three of WI, MI, and PA. But Trump won them.

That is not at all how election predictions work.

To expand on DigitalC’s comment: 538 didn’t treat their state-by-state forecasts as independent events. (The algorithm was secret, but Silver was quite clear in interviews that poll movements in one state would affect 538’s forecasts for other states, and hence affect the national forecast, sometimes quite dramatically.)
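To see why treating the states as independent understates the joint upset probability, here is a toy Monte Carlo sketch. This is purely illustrative and is not 538's actual model; the margins and error sizes are invented. The key assumption is that every state's polling error shares a common national component:

```python
import random

# Illustrative sketch: three states whose polling errors share a common
# national component. We compare the simulated probability of an upset
# sweep against the naive product of the per-state upset probabilities.
# All margins and standard deviations below are invented numbers.

random.seed(42)

TRIALS = 100_000
MARGINS = {"MI": 2.2, "WI": 2.5, "PA": 2.0}  # assumed poll leads (pts) for the favourite
NATIONAL_SD = 2.0   # shared error that moves every state together
STATE_SD = 1.5      # independent per-state error

upsets = {s: 0 for s in MARGINS}
sweeps = 0

for _ in range(TRIALS):
    shared = random.gauss(0.0, NATIONAL_SD)   # one draw shifts all three states
    wins = 0
    for state, margin in MARGINS.items():
        outcome = margin + shared + random.gauss(0.0, STATE_SD)
        if outcome < 0:                       # underdog overtakes the polled lead
            upsets[state] += 1
            wins += 1
    if wins == len(MARGINS):
        sweeps += 1

naive_product = 1.0
for state in MARGINS:
    naive_product *= upsets[state] / TRIALS

print("per-state upset probabilities:",
      {s: round(n / TRIALS, 3) for s, n in upsets.items()})
print("naive product of marginals: %.4f" % naive_product)
print("simulated joint sweep probability: %.4f" % (sweeps / TRIALS))
```

Because the shared error makes the three upsets happen together, the simulated sweep probability comes out many times larger than the naive product of the marginals, which is exactly the mistake in multiplying 21.1% by 16.5% by 23.0%.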

I don’t know if there’s an answer that connects the USA with Britain and France. That’s going to be a hard set of dots to connect.

538 has done some stuff on what went wrong with the polls in the US, though. Here’s one example: Pollsters Probably Didn’t Talk To Enough White Voters Without College Degrees | FiveThirtyEight

Some of the polling trouble is part of a long-term decline in response rate/ease of polling (as people move away from answering landline phones); I also heard the complaint this year, can’t remember where, that the quantity of high-quality polling at the state level went down significantly.

Also, I believe on average the polls predicted the popular vote pretty much correctly.

Is there some technology that the pollsters are not taking into consideration?

They use verbal, telephone, and email, right?

For what it’s worth, something I noticed: I’d get calls that, if I picked up after the first ring, the line was dead. And so I’d push the button to phone whoever it was that called me, and it’d turn out to be some don’t-call-us-we’ll-call-you polling outfit.

And so I started picking up during the first ring, and so started relaying that I was a nice Jewish boy with a graduate degree and a hankering to vote for Hillary Clinton.

It’s possible that Trump voters were less likely to declare for their candidate of choice when polled; I’ve seen that mentioned plenty of times. But I’m also left wondering: what if folks like me got overrepresented, by dint of getting polled?

(People who think fast and have excellent hand-eye coordination, is what I’m saying. That’s what I’m trying to convey when I refer to “folks like me”.)

Kind of a crappy polling outfit if they require quick reaction time and excellent hand-eye coordination to make it through their likely voter screen.

The poll predictions made in July, August, September, and October couldn’t be verified at the time. The results of the November general election, in which Hillary lost her bid to be POTUS, were what finally showed who had guessed right and who had guessed wrong.

My guess is that many of the polling organizations polled the wrong potential voters. Or ignored them. Oops. 538 says it uses the results of other polling organizations as the basis for their predictions. If the other polling organizations are right, 538 will be right. If the other polling organizations are wrong, 538 will be wrong. Garbage in, garbage out.

As far as I can tell, it was more than one outfit: it was different phone numbers, from different area codes. But I agree entirely with you: if I’m right, then that seems like a ridiculous way to run an operation (or, uh, operations).

Brexit was within the margin of error: “… on the eve of the vote, the poll tracker was predicting a Remain victory by the narrowest of margins - 51 per cent to 49.” Not that everyone believed the polling.

The thing about Brexit is many thought the people wouldn’t actually do it so, in the minds of many, it was a shock. It really wasn’t.
The US was different: people who voted regularly stayed at home, and people who had stopped voting a few elections ago re-registered. There’s nothing pollsters can do about erstwhile off-the-grid (working-class) voters. Perhaps they also made matters worse by interpreting ‘Democrat but not Clinton’ as ‘undecided’.

538 doesn’t use the results of other polling organizations. That seems to imply that 538 themselves are a polling organization. They’re not, and they’ve never pretended to be.

But it’s worth noting that 538’s methodology took into account that the pollsters could be wrong. They account for known systematic errors by assigning a house effect to each pollster. They account for random errors by aggregating all of the polls, and weighting them by quality. And they even, to the extent possible, account for unknown systematic errors, by assigning appropriately-large error bars to their forecasts. That last reflects what happened: 538 assigned a small but nonzero probability to Trump winning, based on a small but nonzero probability that there was an unknown systematic error. As it turns out, that small probability came through (as they sometimes do), and Trump won. Which was an anticipated possibility.
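A toy sketch of the aggregation idea described above, not 538's actual algorithm, with invented pollsters, weights, and house effects: subtract each pollster's estimated house effect, then take a quality-weighted average of the adjusted leads.

```python
# Hypothetical illustration of house-effect-adjusted, quality-weighted
# poll averaging. Every number here is made up for demonstration.

polls = [
    # (pollster, candidate_lead_pts, quality_weight, house_effect_pts)
    ("Pollster A", +3.0, 1.0, +1.0),  # tends to lean +1 toward this candidate
    ("Pollster B", +1.0, 0.8, -0.5),  # tends to lean -0.5 against
    ("Pollster C", +4.0, 0.5, +2.0),  # low quality, strong lean
]

def aggregate(polls):
    """Weighted average of house-effect-adjusted poll leads."""
    total_w = sum(w for _, _, w, _ in polls)
    return sum((lead - house) * w for _, lead, w, house in polls) / total_w

lead = aggregate(polls)
print("adjusted average lead: %.2f pts" % lead)
```

The random-error part is handled by the averaging itself; the unknown systematic error the post mentions is what 538 layered on top as wide error bars around a number like this one.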

There’s always a possibility the non-favourite in a two-horse race will win - odds of 5/1 against mean there’s about a 17% chance of the event happening.

538 just presented the market in a different way.