First off, I’ll hold off on talking about the current election until the verdict is in. But last time, 538 gave Trump roughly a 30% chance of winning. Not great odds, but not horrible either. The only way to know how accurate those odds really were would be to wind the clock backward, run the election nine more times, and see whether Trump won about two more of them. But 538 in no way ruled out a Trump win. They aren’t the Oracle of Delphi; they just have a statistical model.
A more accurate way to say more or less the same thing, as I understand it, is that you’d have to find 9 alternative universes in which 9 candidates had precisely the same polling numbers in precisely the same states as Trump, and see whether around 3 of those candidates won.
I’d say that works also. In fact, I like it better.
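That frequency interpretation can be made concrete with a quick simulation. The sketch below treats each replay as an independent coin flip with a fixed 30% win probability — a deliberate simplification, since 538’s actual model simulates correlated state outcomes, not coin flips — and shows how much the count of wins varies across ten replays:

```python
import random

random.seed(42)  # for reproducibility

def replay_elections(p_win=0.30, replays=10):
    """Count wins across `replays` independent re-runs of the election."""
    return sum(random.random() < p_win for _ in range(replays))

# Repeat the ten-replay experiment many times to see the spread.
counts = [replay_elections() for _ in range(100_000)]
avg = sum(counts) / len(counts)
zero_share = counts.count(0) / len(counts)

print(f"average wins in 10 replays: {avg:.2f}")        # close to 3
print(f"share of experiments with 0 wins: {zero_share:.3f}")  # ~0.028, i.e. 0.7**10
```

The point is that even “run it nine more times” only pins down the average: a 30% candidate winning zero of ten replays would still happen almost 3% of the time, so a handful of replays can’t sharply confirm or refute the stated odds.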
538 did not say that 400 electoral votes was the most likely outcome; the forecast gave Biden an 89% chance and a median of 348 electoral votes.
Overall, I think 538 did pretty well with the data they were given.
Plus, assuming Biden wins: on their “splatter chart,” if Biden squeaks by with, say, 273, there is a “point” on the chart predicting it, or something close. One thing that I will do from now on is NOT get overly confident. Yep, yesterday morning I was pretty sure Biden would coast and I’d be in bed by 10pm. Never again.
Those are all weasel arguments. Saying “I gave Trump an 11% chance!” is not an argument that you were right. When you put that together with them making the same bad argument in 2016, you are essentially full of shit. Fact is, EVERY poll was WRONG by close to the exact same amount. That is not simply the law of probabilities popping up; that’s a systematic failure. Yes, the Electoral College turns those percentage leads into weird final results, opening the door for outside chances like these, but they have no answer for why the polls were uniformly wrong.
May I ask why some people don’t, or can’t, believe that the shy-Trump-voter is a real thing?
It should be pretty indisputable that voting for Trump is viewed as considerably more reprehensible in mainstream U.S. society than voting for Biden. Supporting Trump gets you called racist, fascist, sexist, homophobic, bigoted, Islamophobic, etc. Supporting Biden doesn’t. Given that, why would it be surprising that a considerable chunk of Trump voters choose not to reveal their intentions?
What I do find odd is that these shy Trumpers think a pollster would doxx or harass them (pollsters are busy people and have no time or interest in doxxing anyone). Still, (some) Trump voters certainly have more reason to stay undercover than Biden voters.
My uneducated opinion is that an election with a 90% chance for one candidate should produce a very definitive result. Although the election may be an either/or situation, the votes cast are not: a 90% chance of success should show up as a solid tilt toward the 90% candidate in the vote totals. But when the outcome comes down to tiny percentages of votes having to break a certain way, it doesn’t seem like the situation is really described by a 90% chance to win. Any minor tweak to the votes could have the other candidate winning. That seems more like a 55/45 situation than a 90/10.
I think the relevant discussion to have is what is the purpose of the 538 models? Why do they do it? Whom do they do it for? Why do they publish it to the public? What use is the public supposed to make of them? What are their proper uses? Nate Silver needs to answer these questions.
All the polls tightened up in the last few days before the election. Where were they wrong?
Ipsos/Reuters poll. Conducted from October 23, 2020 to October 27, 2020.
Sample = 825 LV - Likely voters. Margin of error = +/-3.9 percentage points.
Quinnipiac University poll. Conducted from October 28, 2020 to November 1, 2020.
Sample = 1516 LV - Likely voters. Margin of error = +/-2.5 percentage points.
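As a sanity check on those figures, the textbook worst-case margin of error for a simple random sample is about 1.96 · sqrt(0.25/n). That reproduces Quinnipiac’s stated number; Ipsos reports Bayesian “credibility intervals” rather than classical margins of error, which is presumably why their 3.9 is higher than the naive formula gives:

```python
import math

def margin_of_error(n, z=1.96):
    """Worst-case (p = 0.5) margin of error for a simple random sample, in points."""
    return z * math.sqrt(0.25 / n) * 100

print(f"Quinnipiac (n=1516): +/-{margin_of_error(1516):.1f} pts")  # +/-2.5, matches
print(f"Ipsos (n=825):       +/-{margin_of_error(825):.1f} pts")   # +/-3.4 vs. stated 3.9
```

Note this formula covers sampling error only; the systematic misses people are arguing about here (nonresponse, weighting, shy voters) are not captured by it at all.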
Their stated goal is to stop people from cherry-picking polls that suit their narrative. 538 aggregates polls and then applies adjustments based on historic biases and sampling issues. It’s basically a more complicated take on the wisdom of the crowd. It operates under the assumption that any one pollster can be wildly inaccurate or one poll result can be highly influenced by current events that will not be present on election day, but the sum of all polls will average out that variability.
Nate Silver has stated this pretty much constantly.
Now, of course the overriding goal is to make money… but I don’t think there’s a good argument that being wrong would help in that regard.
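A stripped-down sketch of that aggregation idea (not 538’s actual model, which also weights by pollster rating and recency) is just a sample-size-weighted average with a per-pollster “house effect” subtracted out — all numbers below are hypothetical:

```python
# Hypothetical polls: (candidate share, sample size, assumed house effect in points)
polls = [
    (52.0, 1000, +1.0),  # pollster that historically leans 1 pt toward this candidate
    (49.0,  600, -0.5),
    (51.0, 1500,  0.0),
]

# Subtract each pollster's house effect, then weight by sample size.
weighted_sum = sum((share - house) * n for share, n, house in polls)
total_n = sum(n for _, n, _ in polls)
aggregate = weighted_sum / total_n
print(f"aggregate share: {aggregate:.2f}")
```

The wisdom-of-the-crowd assumption is visible in the arithmetic: any single poll can be a couple of points off, but its pull on the aggregate is limited by its weight — which is also why the averaging fails when every pollster is wrong in the same direction.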
The NYT Upshot polls did just that. They had three charts with the resulting ECVs: polls as-is, polls adjusted by 2016 error rate, and 2012 error rate. In all three cases Biden was ahead comfortably.
What do you consider his grades of pollsters to be but his opinion of their reliability?
The NYTIMES disagrees with you: https://www.nytimes.com/2020/11/04/us/politics/poll-results.html
I believe it was stated that his grades of pollsters are also based on hard data, not opinions. I don’t know the details of the model whereby he assigns grades based on the numbers, and one might suppose that in that model he creates bias against certain types of pollsters, but it would be at the model level, not at the individual pollster level.
His grades are based largely on how the polls are conducted: online or phone; likely voters, questions, etc. I understand why he rates them like this and before yesterday I thought they were good. And yet the polls appear to have been off by more than last time. Remember last week when Wisconsin (?) had that Biden +13 (or something) poll? In hindsight it was laughably wrong.
Maybe these rating assumptions are wrong.
Then apply my questions to the polls. Why should we not ignore them?
Deeg continues to ignore the fact that I have provided very specific answers to this question, including a link directly to the full methodology and reasoning behind it straight from the horse’s mouth. Either he can’t be bothered to actually read the responses or it doesn’t fit with his narrative. He continues to prove that he has no idea what the methodology is. To wit:

His grades are based largely on how the polls are conducted: online or phone; likely voters, questions, etc.
When in fact, 538’s grades are largely based on how much better they performed than other polls and, based on that, how they are expected to perform in the future. Hell, he hasn’t dinged a pollster for online surveying since long before the first Trump election. Of course, all of that is already available in previous responses, but he wants to keep on keepin’ on.
Perhaps he’ll find a link to their scoring methodology in this post if he looks really hard.
Having a strong understanding of where the electorate is on a candidate or an issue is a critical input. It’s sort of the bedrock of democracy: figure out what the voters want, and the politicians should reflect that. Polls have been one of the most important ways that politicians and the media adjust their message and their policies. It’s not a good thing for polls to go away or be unreliable; that means politicians are basing their votes and policy proposals on gut instinct, which is not great. We need good polls. Bad polls are one of the many things that will send us into an incredibly dark authoritarian hellscape.
Now, maybe a more important question is why we, the voters, should care about poll results. I think the answer to that is emphatic: we should not. In fact, we should not just ignore them; we should be protected from them. Poll data has a circular, reinforcing effect on the electorate: every poll you see in favor of something pushes those consuming the data to become MORE in favor of that trend. It’s supercharged peer pressure. But in a free society, that’s simply not a viable fix.
Polls aren’t inherently bad. But inaccurate polls are extraordinarily damaging. Media reporting on poorly vetted or sampled polls is essentially propaganda. A lot of people should probably lose their jobs over this.

Given that, why would it be surprising that a considerable chunk of Trump voters choose not to reveal their intentions?
It wouldn’t be all that surprising, especially because it isn’t 20 or 30 percent we are talking about. Maybe 4 percent of Trump voters being shy would account for the error.
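A quick back-of-the-envelope check on that 4 percent figure (all numbers illustrative, not measured): if a candidate’s true share is 48% and 4% of his supporters tell pollsters they’re undecided, his polled share drops by about 1.9 points; if they instead claim to back the opponent, the polled margin moves by roughly twice that, about 3.8 points — in the neighborhood of the 2020 polling miss being discussed.

```python
true_share = 48.0  # assumed actual vote share, in points (illustrative)
shy_frac = 0.04    # assumed fraction of this candidate's voters hiding their intention

hidden = true_share * shy_frac  # points of support that go unreported
print(f"polled share if shy voters say 'undecided': {true_share - hidden:.2f}")
print(f"margin error if they instead claim the opponent: {2 * hidden:.2f} points")
```

So a small shy-voter fraction really can produce a multi-point margin error without requiring anything like 20 or 30 percent of voters to hide.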
I’m not convinced that the polls this cycle were unusually far off on average. Maybe they were, but it is too soon to say this.
How about this: in a close election, you can’t trust the polls. So in a state known for close elections, you should treat the polls more as entertainment and less as science. Despite that, I hope https://fivethirtyeight.com/ remains a free web site, and I expect to look at it many times again.
P.S. The gold-standard Iowa Poll (“Trump takes lead from Biden days before Nov. 3 election”) was reasonably close. Maybe Iowa is the kind of state where people are too polite to hang up on pollsters. Or maybe they train their telephone callers more thoroughly. Or maybe Ann Selzer is lucky (but I doubt it is just that).