538 is No Longer a Credible Source

Right. Not “right,” not “wrong,” but the overall performance can be measured.

I did not claim that; there certainly could be. But do we have enough with Silver on his own, without the whole 538 team to back him up? 538 during the Silver years probably does. That doesn’t mean any one prediction is right or wrong, of course.

:thinking: Hmmmmm

Simon Rosenberg (Hopium Chronicles Substack) gets in a dig at Nate Silver – and in doing so, peels back the curtain a sliver regarding Silver ‘losing’ FiveThirtyEight (my emphases):

As for Nate, I adopted the Hopium brand because in the final week of the 2022 election he went on a wild tirade against me on his podcast for claiming there would be no red wave. He said I was smoking Hopium and misleading people. Of course he completely blew the 2022 election. It is because of his arrogance and poor management of 538 that “Hopium” became a thing, and he lost control of the site he built. So no, Nate is not really a friend of ours here at Hopium, and I prefer other sources of data and analysis.

Silver “lost control of the site he built”? He left unwillingly?

What was he wrong about? He said she had a significant chance of losing, and she did.

Where did he predict that? He didn’t put that prediction up on his website.

So he hasn’t predicted a dozen races, because each individual race was a single event? What do you call a whole bunch of single events?

He sold 538 to Disney a while ago but stayed on as editor in chief. Last year he announced that his contract would not be extended at the same time that Disney laid off half his staff.

Yes, my understanding was he left in protest after Disney gutted the 538 staff in a money-saving move and took his models with him. If that was truly what happened, then I can’t blame him.

FWIW I found this compare-and-contrast of the FiveThirtyEight, Silver, and current Economist models from while Biden was still in it, and Silver’s critique. It seems to make sense to me.

Bottom line to me is that each starts off with best-guess assumptions and decisions about which history is your precedent.

I suspect that their probabilities for Harris v Trump will converge fairly quickly.

I skimmed the first paragraphs and Silver just repeats the obvious point that others have tried to explain to you: A predictor should get 30% of its “70% predictions” wrong.
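That calibration claim is easy to sanity-check with a simulation (a quick sketch, not anyone’s actual model; the numbers are made up):

```python
import random

random.seed(42)

# Simulate a perfectly calibrated forecaster: every event it calls
# at 70% actually happens with probability 0.70.
n = 10_000
hits = [random.random() < 0.70 for _ in range(n)]
miss_rate = hits.count(False) / n  # fraction of 70% calls that failed

# A well-calibrated forecaster should miss about 30% of these.
print(f"Miss rate on 70% calls: {miss_rate:.3f}")
```

If the miss rate came out near zero instead, the forecaster wouldn’t be better; it would be underconfident.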

I wonder if prediction markets might be the most “reliable” predictor we have. At Polymarket Trump peaked at 70% about when he put a bandage on his ear, but the gamblers now show Trump over Harris by only 53 to 44.

I’d put little faith today in models. Models are trained with past data, but “Toto, we’re not in Kansas anymore.” No matter how detailed your records of the animals in a particular zoo may be, that data loses relevance when the animals have escaped and are on the rampage.

Never mind. I misread and this post has been deleted.

He said when Silver said such and such has a good chance of happening - and it did - that means Silver was right. But when Silver said Hillary had a good chance of winning - but she didn’t - he claims that if you just give a %, you can’t be wrong.

That last bit does have logic. If someone predicts a batter will get a hit 25% of the time - but the batter does get that hit - the predictor is neither wrong nor right, since he was predicting ODDS.

So yes, if you want to take it that they were all % odds - then they could not be wrong. But nor could they be right.

As was said-

So, yeah, in Silver’s Hillary prediction he just predicted the odds - so he could not be wrong. But neither could he be right.
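FWIW there is a standard way to score a single probabilistic call without declaring it “right” or “wrong”: a proper scoring rule like the Brier score. A minimal sketch with hypothetical probabilities (just to show the mechanics, not anyone’s real numbers):

```python
def brier(prob: float, outcome: int) -> float:
    """Squared error between the stated probability and the 0/1 outcome."""
    return (prob - outcome) ** 2

# One event can't make a forecaster right or wrong, but it can be scored.
# Hypothetical: forecaster A gave Clinton 60%, forecaster B gave her 90%;
# outcome = 0 (she lost). Lower score is better.
print(brier(0.60, 0))  # ≈ 0.36
print(brier(0.90, 0))  # ≈ 0.81 - the more confident miss scores worse
```

Averaged over many events, the score separates good forecasters from bad ones, which is exactly the “whole bunch of single events” point from upthread.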

But then he said I was innumerate when I said Silver was “wrong” in calling Hillary Clinton vs. Trump.

Sure, in giving odds you can’t be “wrong,” but then you can’t be “right” either, and Thing.Fish stated Silver was “right.” You can have it one way or the other, not both.

He said Hillary had a 60% chance of winning.

You have not been reading these last few posts, have you? The whole debate about Right and Wrong has been going on for a while now.

No, prediction markets are terrible predictors. There are too many irrational bettors out there, and it only takes one or two to skew things.

We can’t say that Silver was right or wrong about Clinton, because we can’t say that he was right or wrong about any single individual race. We can say that a forecaster is right or wrong when we look at a great many predictions they make. Silver has made a great many predictions, and he’s been right on those about as often as his percentages say he should be.

In other words, there is no way to say that Silver is wrong, but there is a way to say that he’s right. Or to distill it down to two words, “he’s right”.

The thing is, if he were to be always correct in his predictions in terms of picking a winner (whoever has the highest probability), his model would likely be wrong, and underestimating win probabilities. If there are fifty races running 70-30, and you end up with almost all fifty wins in those races, you need to check your model. People just don’t seem to understand probabilities very well.
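You can put a number on how suspicious “almost all fifty” would be. A sketch using the binomial distribution (assuming, generously, that the fifty 70-30 races are independent):

```python
from math import comb

p, n = 0.70, 50  # fifty races, each favorite at 70%

def prob_at_least(k: int) -> float:
    """P(at least k of the n favorites win) under the binomial model."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"Expected favorite wins: {n * p:.0f}")  # 35
print(f"P(48 or more favorites win): {prob_at_least(48):.1e}")
```

If 48+ of the 50 favorites actually won, those 70% figures were almost certainly too low.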

To evaluate that you need a meaningful n, a meaningful comparison with how other forecasters have also matched their percentages, and the analysis done by an independent neutral third party.

I know Silver claims to have done that and shared his work, but I have great faith in his ability to choose the exact methods that confirm how good he is.

Plus the forecast made a week before and the forecast made three months before are very different beasts and should be evaluated separately. By the week of the election the forecasters all rely more on polling and less on “fundamentals.” Assumptions about future poll volatility become less important as the data is basically all in. The difference between the models at that point comes down to how each model ranks the possibilities of different sorts of polling errors, and how it uses some polls to better deduce signal from noise in others. And how much faith they have in those methods. But the difference between 538’s approach and Silver’s, big at the point of Biden’s debate, will be little by then. 538 stood out from the crowd because they assumed the longer run of history’s poll volatility and therefore went with a fundamentals-as-priors-heavy approach, while Silver and The Economist went the other way. There was no way to really know which set of assumptions was the better one, even if we all had our own conclusions about it.

Exactly my point.

One must also consider the possibility of a faithless elector throwing RFKJr a meaningless vote.

Yeah, bordelond mentioned that possibility. But frankly, if we’re counting that, I’d put the odds at more like 50%. This is going to be a weird election.

I would ask you to back this up if I didn’t already know that you can’t.

To my mind the larger question about the prediction market is: “Are there more or fewer irrational bettors vs. irrational voters?”

ISTM the real wild card in every election since about 2012 is the ever increasing fraction of just plain insane voters driven by gosh-knows-what particular long laundry list of batshit irrationalities.

The crazies, like the poor, will always be with us. But we seem to have gotten a lot better at manufacturing them of late. Or at least somebody has gotten a lot better at manufacturing them.

The thing about prediction markets is that it only takes a small percentage of rational players to override the rest (by taking their money). Voting is just an averaging mechanism. But if a prediction market is way off from what it “should be”, then it only takes one rational player to correct that by making a large bet.

Which isn’t to say they’re perfect. There’s the known favorite-longshot bias, which amplifies unlikely scenarios. The reason is probably that it’s hard to make money betting against these, since doing so means making a 99:100 odds bet or the like. So what should be a 0.01% scenario becomes a 1% scenario.
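The arithmetic behind that is simple. A sketch with made-up numbers: suppose the market prices a “yes” at 1 cent but the true probability of the event is 0.01%.

```python
# Betting "no" against the mispriced longshot: a "no" share costs
# 99 cents and pays $1 if the event does not happen.
true_p = 0.0001   # assumed true probability of the longshot event
price_no = 0.99   # cost of a "no" share when "yes" trades at 1 cent

expected_payout = (1 - true_p) * 1.00
ev_per_dollar = (expected_payout - price_no) / price_no
print(f"Expected return per $1 staked: {ev_per_dollar:.2%}")  # ≈ 1%
```

Roughly a 1% expected return for tying up capital until resolution (while risking a near-total loss), so rational money mostly leaves the mispricing alone.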

But aside from certain biases like these, they tend to do well. Not necessarily as well as one good analyst. But probably better than any given random analyst with an unknown success rate.

No, the model doesn’t attempt to predict the possibility of faithless electors, only the popular vote results.