FiveThirtyEight.com decimated by Disney layoffs

To be honest, neither had I. But it didn’t take very long to find that stuff. While some mistakes were made in trying to protect people from COVID, I agree that he’s not an authority and some of what he was spreading was just a repeat of MAGA conspiracy theories.

…Nate Silver was a proponent of “return to normalcy”, which was effectively a rebrand of the Great Barrington ideals. Masks aren’t needed. Herd immunity. Vaccine-only mitigations. Isolation doesn’t help. People just need to go out and live their lives. Thanks for providing the cites. It needs to be said, though, that those articles are really only scratching the surface of how deranged his Twitter feed is regarding anything covid related. His latest is “just asking questions” about the lab-leak theory.

I thought his election analysis and his political punditry was pretty good a few years ago. But my gosh…I don’t trust him any more.

I think all of the Covid-related outrage and lack of polling was fucking up his model. People become blinded when their core prerogatives are threatened. Or maybe he was just angry that he couldn’t go to poker tournaments.

Stranger

If you look at the Atlantic link I posted above, you’ll see that Silver has been an ass for several years now.

I appreciate the debate about Silver. I must say, however, that I am surprised at how strongly people have wanted to defend him.

The thing is, ultimately, Silver was wrong in 2016, when the election was both close and truly important. He ended on election night giving Hillary just over a 70% chance to win. So it’s not even as if he made it 51% Hillary. He was solidly wrong and palpably wrong.

You can say the polling data was trash, but, had Silver seen through that and chosen Trump as the winner, we would rightly praise Silver for his perceptiveness. You can say that Silver was less wrong than most others (though not everyone else was wrong), but to me that’s damning with faint praise at best.

He was wrong when it counted the most. He didn’t come through. I think that’s enough of a reason not to pay attention to him any more.

To be fair, it’s not so much that people are defending Nate Silver as that they are defending statistics.

You are thinking arithmetically, not statistically. The logic of your argument is that Nate Silver is “wrong” because he predicted Trump had a 25% chance to win… and Trump won, which somehow proves that Nate was wrong for giving him a 25% chance of winning. And that logic is not true for reasons already exhaustively detailed.

The other thing is blaming “Nate” for what other polls were telling him. Go off on his model, that’s fine, but his model wasn’t giving Trump a 25% chance if the underlying data (the aggregated polls) weren’t giving Trump a 25% chance.

Someone could look at all of the times that he had a 75/25 or so prediction. If the 25 side won around one in four times, would it be fair to say he had a good model?
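That kind of calibration check is easy to sketch. The forecast records below are invented for illustration, not Silver’s actual numbers: each entry pairs a stated probability for the favorite with whether the favorite actually won.

```python
# Calibration check: among all forecasts of roughly 75/25,
# did the favorite actually win about three times in four?
# The (probability, favorite_won) pairs are hypothetical data.
forecasts = [
    (0.75, True), (0.74, True), (0.76, False), (0.75, True),
    (0.73, True), (0.77, True), (0.75, False), (0.74, True),
]

# Group together everything in the neighborhood of 75%.
bucket = [(p, won) for p, won in forecasts if 0.70 <= p <= 0.80]
favorite_win_rate = sum(won for _, won in bucket) / len(bucket)

print(f"favorite won {favorite_win_rate:.0%} of {len(bucket)} forecasts")
# A well-calibrated model shows a win rate near the stated 75%,
# meaning the 25% side won about one time in four.
```

With enough forecasts in each probability bucket, this is exactly the test being proposed: the model is well calibrated if events it calls “25% likely” happen about a quarter of the time.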

So, presumably, Silver was on TV and asked, “Who do you think is going to win?” And either he said, “I don’t know, it’s essentially a tossup,” or something equivalent, which would have been great, or he said, “I think Hillary is going to win,” which would have been the incorrect answer, right?

According to your logic, the final percentage could have been 99% Hillary, and Silver still wouldn’t have been wrong. There was no way for Silver to be wrong so long as he expressed Hillary’s chances as a percentage.

If that’s true, then that’s true for everyone who had a model. No one was wrong about the election.

Further, Silver had been praised before 2016 for being right and being right in detail. He had a good reputation. So, if we grant that he did good things for which he was praised in 2008 and 2012, what would be the opposite of those things for which he could potentially deserve criticism? Your reasoning would seem to imply that he could only do right and could do no wrong.

Here, I can say I know enough about statistics to identify that as a very tricky question to answer (and I don’t know anywhere near enough to make an attempt).

In the links I was looking at, there was some debate as to whether Silver’s percentages could be used as betting odds, and the consensus opinion seemed to be that they could not.

Then there would be the question of whether his models outperformed those of others. If he was clearly underperforming, then I don’t think a claim could be made that he was good. But even if he was outperforming others, the question would still remain whether he was good enough.
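For what it’s worth, the standard way to compare probabilistic forecasters against each other is a proper scoring rule such as the Brier score (lower is better). A minimal sketch, with two hypothetical forecasters and made-up numbers:

```python
# Brier score: mean squared difference between the forecast
# probability and the outcome (1 if the event happened, 0 if not).
# Lower is better. All numbers below are invented for illustration.
def brier_score(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

outcomes     = [1, 0, 1, 1, 0]            # what actually happened
forecaster_a = [0.7, 0.2, 0.9, 0.6, 0.3]  # hypothetical forecaster A
forecaster_b = [0.9, 0.4, 0.6, 0.5, 0.5]  # hypothetical forecaster B

score_a = brier_score(forecaster_a, outcomes)
score_b = brier_score(forecaster_b, outcomes)
print(f"A: {score_a:.3f}  B: {score_b:.3f}")
```

A scoring rule like this rewards both calibration and confidence, so it can say one forecaster “outperformed” another; what it can’t settle, over a handful of elections, is whether either was good enough.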

I didn’t follow the polls closely in 2020 because Biden’s lead, after a certain point, always seemed formidable. In such a case, complex models don’t really accomplish a whole lot. They would need to be put to the test in another close election like 2016, but that might not happen for a while.

This might be a good time to talk about the difference between a ‘forecast’ and a ‘prediction’. Saying that you believe person X will win is a prediction, which would be falsified as soon as the person loses. Saying, ‘Person X has a 25% chance to win’ is a forecast, and predicts nothing. It just gives you a range of outcomes.

Pollsters are forecasters, not predictors. Same with weathermen.

The only way to validate a statistical model is to run enough trials to get statistically valid results. We can’t do that with election forecasting, because underlying conditions change too much between elections.

All election models are questionable anyway, as they are often based on previous elections and methodologies and can’t capture the true complexity of voting preference. There are lots of debates among pollsters regarding how to capture ‘true’ information, and it’s not easy. Survey design alone can introduce major error, and pollsters argue over survey design all the time.

I’d like to see an experiment in ‘blind’ polling and analysis, where the pollsters have to complete their analysis and publish it before they can see results from any other pollsters. I think you’d find that their results diverge a lot more and we’d get a better sense of how much of what they do is just useless.

Silver’s methodology appears to be more rigorous than most, but he’ll be the first to tell you that there is significant possibility for error in any election model. No one can predict the future. Even pro gamblers only beat the spread by a couple of percentage points. Same with stock traders. I’m not sure why we’d expect pollsters to somehow have a lock on what’s going to happen in the future.

And no one was necessarily right, either. You could get the right result for the wrong reasons. If I had built a model saying that Biden was going to win the last election because my personal observation of pigeon behaviour suggested it, I’d still be ‘wrong’ even if the answer was correct, because my methodology was hot garbage.

Such is the difficulty of evaluating statistical models.

It’s possible that Silver just got lucky. When you have a hundred pollsters throwing out random numbers, someone has to be closest. I’m not saying his results were random, but that in any collection of forecasts someone has to be closest. It doesn’t mean they were the best.

As an analogy, consider mutual fund performance. Every year mutuals get rated, and some of them have amazing performance, and people run to buy them. But if you buy the top-performing mutuals, you will probably find that their performance regresses to the mean because they were just lucky in the past and everyone is guessing. See also the ‘sophomore slump’ in sports, where the best rookie players often have a worse year next year, because the best rookie players are often just statistically lucky. As they say, past performance does not guarantee future performance.
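The “someone has to be closest” point is easy to simulate. Give a field of forecasters pure noise, crown the winner one year, and see where that same forecaster lands the next year (a toy sketch, not real polling data):

```python
import random

random.seed(0)

# 100 forecasters whose "skill" is pure noise: each year their
# forecast error is an independent random draw. Find the best
# performer in year one, then see how they rank in year two.
n = 100
year1 = [abs(random.gauss(0, 1)) for _ in range(n)]  # year-one errors
year2 = [abs(random.gauss(0, 1)) for _ in range(n)]  # year-two errors

best_year1 = min(range(n), key=lambda i: year1[i])
rank_year2 = sorted(range(n), key=lambda i: year2[i]).index(best_year1) + 1

print(f"year-one winner ranks {rank_year2} of {n} in year two")
# With no real skill, last year's winner typically regresses
# toward the middle of the pack the following year.
```

Run it with different seeds and the year-one winner lands all over the year-two rankings, which is the mutual-fund and sophomore-slump effect in miniature.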

Being a former sports bettor, Silver understands this stuff. He qualifies his forecasts more than most pollsters.

You are making the very common and incorrect assertion that the point of these models is to predict the winner. In fact, these types of models are intended not to predict the winner, but to quantify likely trends; their purpose isn’t to give an objective observer foreknowledge of who will win, but rather to provide knowledge about whether a candidate’s message and election strategy is appealing to voters. If Nate Silver or anyone else could literally predict who the winner of an election will be a priori, there would be no reason to actually hold an election except for the bloody-minded proceduralism of doing so. The reality is that all anyone can do is poll the voting population, develop a likelihood estimate based upon the statistical confidence of the sample weighted by any historical or other biases, and use that to make an informed guess about a candidate’s performance in a forthcoming election.

Now, there is the ontological argument about whether a prediction model for the probability of a singular real-world event can ever be validated, since by definition you can’t rerun it repeatedly to develop a sufficient distribution of events to perform any post hoc ‘goodness of fit’ or posterior estimate; but that is just the nature of trying to make any prediction of probability or likelihood about unique events. Given that the FiveThirtyEight model gave a much higher estimate than any well-regarded poll or other estimate, it would seem that it was a better estimate of the probability, but at the end of the day an event either occurs or it doesn’t, and just because it happened despite a low probability doesn’t actually tell you that the estimate was right or wrong, just that low-likelihood events sometimes occur.

Stranger

Sam_Stone, I don’t disagree with anything in your post. Thanks!

I don’t disagree with most of what you wrote either, but:

I think it all comes down to what that 70+% Hillary number actually means and what one is supposed to do with it.

I don’t deny that Silver had hedgey statements here and there on his website, but ultimately he had an online product for which he was getting a megaton of daily site visits as well as lots of media attention. It’s different than an obscure college professor with a model that no one but academics is looking at. My criticism is based on the context in which Silver was achieving fame and fortune.

I think it’s fair to say that 99% of people going to the site were there to get an expectation of who would win, and of that 99%, 99% never ended up thinking the site was for anything else.

And how are you validating your “99%” estimate of what people were expecting from Silver’s election model?

Stranger

It’s just a truism that people don’t understand statistics. It’s probably the case that a higher-than-average percentage of people who go to Silver’s site have some knowledge, so maybe it’s 90% or 87%. Dunno.

The defenses of Silver in this thread would be well over the heads of most people. Further, the defenses, a.k.a. “the correct way to think about Silver’s model,” are not how the media was talking about it.

Other than that, yeah, I don’t have polling data on it.

The general media gets most things wrong—often badly wrong—when they talk about any technical topic. That doesn’t mean that the people working in that area are wrong or failed to ‘dumb down’ their work sufficiently to make it explicable; it means that many people in the media just don’t care to spend the time to understand things or get their explanations correct, and most consumers of news don’t really care about accuracy.

And I find it highly amusing that your criticism of Silver’s supposedly bad estimate has you presenting a completely fabricated and nonsensical statistic as an argument. What school did you study probability and statistics at again?

Stranger

It’s not just that. Silver was himself in the media, often appearing as a guest on shows to render his opinion, etc. He was himself also responsible for how people were processing that information.

Any person who was not trying to be antagonistic would understand the point I was trying to make. Thanks for playing!

Same one yo momma went to.

Every single time I’ve heard Silver talk about the predictions of his models, he is careful, to a degree that is painful even for other statisticians, to hedge what the ‘meaning’ of them is, and to note that just because they indicate a low percentage for some outcome doesn’t mean that it won’t happen. Your objections all seem to be based upon assumptions that you brought into interpreting the models rather than anything that Silver actually said or promoted.

And the irony was that in making your point about the criticism of the inaccuracy of statistical predictions you literally made up a statistic out of whole cloth. It’s like lecturing someone about the immorality of eating meat while dining on a Steak Diablo with a side of Bolognese sauce.

Stranger

I’ve acknowledged he says hedgey things. Was that, is that, good enough? It’s a matter of opinion. I don’t think so.

I guess you don’t know how humans talk. “99%” is used as a stand-in for an arbitrarily high percent. Glad to help!

Also, I have not been criticizing any “inaccuracy.” I have been criticizing 538 as a communications tool.

What should he do, then? If he builds a good model that is nevertheless an estimation, and he gives constant caveats and accurately states that his model is an estimate or forecast, not a prediction, is it his fault if the media don’t listen?

Forecaster: “Look, this is an election model. There is a lot we don’t know that may not be captured in it. Any percentages just represent where the model is today with respect to forecasting the result, but it could be wrong.”

Reporter: “Got it. So who does your model say will win?”

Forecaster: “Again, we aren’t predicting anything. We have a forecast that says of all potential outcomes, Hillary is favored to win about 30% more of them than her opponent, but it could be wrong.”

Headline: “FORECASTER SAYS HILLARY PREDICTED TO WIN.”