Fair enough. Still, the question is whether these experienced politicians were themselves data-driven or not. If it’s just their intuition at work, then we should probably look at the model. If their experience was telling them that someone 4% down in the polls is probably not going to win a race where they need a 4% advantage, then maybe it counts for something vs. a model that explicitly ignores the polls.

What Politico is reporting is that it was their most recent internal polling data that convinced even Biden’s inner circle there was no route to victory.

I like to think they are.

And that’s a distinction with a truly meaningful difference - trusting the experts over an outlier model because you know or believe they are data driven is fine.

Trusting the experts over an outlier model because of their experience at “winning” defeats the purpose of any of these models in the first place. The models then just become a form of confirmation bias.

DrDeth objected in another thread to my describing Nate Silver’s model as having a “consistent track record of success”, given that Hillary didn’t win.

Consistency is not the same as perfection. Reviewing off the top of my head:

2008: Predicted Obama’s victory, and called 50/51 States correctly (yes, I know DC isn’t a State, deal with it nitpickers).

2010: Predicted big Republican gains in the midterms, while many other pundits were unrealistically optimistic about the Democrats’ chances.

2012: Predicted decisive Obama victory, while much of the media were hyping the race as being a “dead heat” in order to draw eyeballs.

2014: Honestly I forget, I’m not *that* much of a nerd.

2016: Said that Clinton was only a 2-1 favorite, while much of the media was claiming she had it in the bag. Said that Clinton was far more likely than Trump to win the popular vote while losing the election.

2018: Predicted big Democratic win in midterms; many others were much more pessimistic.

2020: Predicted a close race with only a slight edge for Biden.

2022: Predicted that Democrats had about a 50% chance of keeping control of the House, and that the majority would likely be a narrow one.

So I’m counting 2008, 2010, 2012 and 2018 as cases where he called one party’s chance of winning as being *much* greater than the other’s, and he got them all right. The others he called as being close elections, and they all were.

I stand by my original statement.

Which means- he was wrong.

I only heard it was gonna be a walkover by Obama; predicting a sure thing that everyone predicts is not that big of a deal.

The other big three also predicted that same thing.

https://www.politico.com/election-results/2018/house-senate-race-ratings-and-predictions/

Most everyone else did also.

Mind you- he is not bad, but he makes mistakes, and usually he goes along with the other pollsters- who also predicted Hillary to win.

And he no longer has a staff- in those cases it wasn’t just him, it was 538, not just Nate Silver.

So now it is just him, and his score is back to zero. And so far, his predictions are meaningless.

Let’s say that someone is going to flip a coin twice. I am asked what the odds are of getting heads twice in a row and I say there is only a 25% chance. You flip twice and get heads each time.

Was I wrong?

No he wasn’t. This claim is innumerate, I’m sorry.

If I say a baseball player has a 25% chance of getting a hit his next time up I am not wrong if he gets a hit.
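The coin-flip example is easy to check by simulation; a single pair of heads tells you nothing, but the long-run frequency does (a minimal sketch in Python, with illustrative trial counts):

```python
import random

random.seed(0)  # reproducible runs

TRIALS = 100_000
double_heads = 0
for _ in range(TRIALS):
    # Two fair coin flips; True means heads.
    flips = [random.random() < 0.5 for _ in range(2)]
    if all(flips):
        double_heads += 1

print(f"fraction of double-heads: {double_heads / TRIALS:.3f}")
```

Any single trial comes up heads-heads a quarter of the time; only the aggregate frequency can confirm or refute the 25% claim.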

The other thing is Nate Silver gave Trump more of a chance in 2016 because he gave more weight than other prognosticators to the idea that a bunch of similar states could all have similar polling errors. The night before the 2016 election the states that flipped Trump were mostly polling 1 or 2% in favor of Clinton, which really only Nate recognized as something that could be close to a 1/3 die roll in the aggregate. Other prognosticators were treating it as 5 or 6 independent 1/3 die rolls that Trump needed to win all of.

The actual reasoning behind the numbers was an important part of him showing he had the chops in 2016.
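The arithmetic behind that distinction is stark. Using an illustrative 1-in-3 chance per state (not Silver’s actual numbers), five independent rolls versus one perfectly correlated polling error give very different totals:

```python
p = 1 / 3   # illustrative chance that one close state flips to Trump
n = 5       # number of close states he needed

independent = p ** n   # five separate die rolls, all must come up Trump
correlated = p         # one shared polling error flips them all together

print(f"independent states: {independent:.4f}")
print(f"fully correlated:   {correlated:.4f}")
```

Real models sit somewhere between those two extremes; weighting the correlated case more heavily is what separated Silver’s roughly 1-in-3 Trump number from other forecasters’ much longer odds.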

Do the chickens speak longingly of the moonlight? That has nothing to do with it.

He predicted Hillary would win- she didn’t. I mean, based upon that, every odds maker is right every time-

“I predicted Dewey had a 60% chance of beating Truman”- well, if you are only predicting chances, you are never wrong.

Even if you say “Trump only has a 10% chance of winning this next election”- and he does win- well, you are still right. Note that in all the rest of those cases, Thing.Fish didn’t give a %, just said that Silver predicted X, and it was X- but no, he didn’t- he never does. He would say something like “Obama has a 60% chance of winning”, but you see then- if he is right- Genius! But if wrong- well, “I predicted the odds, not the outcome.”

So if you say a coin has a 50% chance of coming up heads- and someone gets 8 heads in a row, or no heads at all- you are not wrong. You didn’t predict the outcome, you gave the odds.

See, you are not predicting whether he will get a hit or not- just a %, so you are never wrong. So then your prediction score is neither 0 nor 100%- because all you predicted was the %.

So based on that- 538 or Nate Silver would never be “wrong”. But I am saying- if you predict something that is quite likely- 60%- and it doesn’t happen- you are wrong.

Yes, well, that is in fact innumerate.

It’s true that a probabilistic prediction can never be “wrong”, but there has never been a case where the results have differed dramatically from what he predicted. If, for instance, Trump or his opponent had gotten 350 electoral votes in either of the last two elections, he would have had some egg on his face. It may well happen someday that he gets an important call grossly wrong, but it hasn’t happened yet, and the sample size is getting pretty large.

Nope, that’s not how it works.

If I assert a player has a 25% chance of getting a hit every time up, and at the end of a season of 500 at bats he bats .252, I was clearly correct in my assessment. Furthermore, if in one stretch he got a little hot and went 6 for 15, I’m still right.
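A quick simulation makes the same point. Assuming a true 25% hit probability and a 500 at-bat season (illustrative numbers, not a claim about any real player), an average right around .250 and the occasional 6-for-15 hot stretch are exactly what the model expects:

```python
import random

random.seed(1)  # reproducible runs

P_HIT = 0.25
AT_BATS = 500
SEASONS = 2_000

total_hits = 0
seasons_with_hot_streak = 0
for _ in range(SEASONS):
    hits = [random.random() < P_HIT for _ in range(AT_BATS)]
    total_hits += sum(hits)
    # Did any 15 consecutive at-bats contain 6 or more hits?
    if any(sum(hits[i:i + 15]) >= 6 for i in range(AT_BATS - 14)):
        seasons_with_hot_streak += 1

print(f"overall average: {total_hits / (SEASONS * AT_BATS):.3f}")
print(f"seasons with a 6-for-15 stretch: {seasons_with_hot_streak / SEASONS:.0%}")
```

Short hot streaks are not evidence against the 25% figure; nearly every simulated season contains at least one.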

Silver in 2016 didn’t just say “Clinton has a 70% chance of winning.” He had predictions for every state in terms of the percentage of vote and likelihood of winning. Looking across all the states, almost all his predictions were within the margin of error. His model was quite solid, and his model has been proven over many elections to be quite solid. Silver had by FAR the most accurate election model. If you wanna see what a wrong prediction looks like, go back and look at Sam Wang, who said it was sure Clinton would win, using a model that… well, I still cannot fathom what he was doing.

I’d further point out that if you think sixty percent is “quite likely,” you and I have different definitions of that term. Sixty percent is rather closer to a coin flip than a sure bet.

Put it this way: if you’d bet money on every election since 2008 based on Nate’s predictions, I’m sure you’d be ahead, despite the fact that in many cases you’d be betting the favorite and not getting great odds, and despite that bad beat in 2016.

Or if you had a chance to bet your house on a 2:1 favorite, even if you were getting, say, 3:1 odds, would you think that was a good idea?
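For the house bet, the numbers are worth writing out. At a true 2/3 win probability and hypothetical 3:1 odds, the expected value is strongly positive, and yet the bet can still be a terrible idea at that stake:

```python
p_win = 2 / 3   # true probability the favorite wins
payout = 3      # hypothetical 3:1 odds: profit per dollar staked

expected_profit = p_win * payout - (1 - p_win) * 1
print(f"expected profit per dollar staked: {expected_profit:.2f}")
print(f"probability of losing the stake:   {1 - p_win:.0%}")
```

Positive expected value is the case for betting at all; the 1-in-3 chance of ruin is the case against staking the house. Bankroll rules like the Kelly criterion exist precisely to split that difference.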

Hillary Clinton, and in the rest of the cases he either predicted close or went along with everyone else. 60% chance of winning vs 40% is a 20 point difference. Huge.

Yes, over the long term. But Nate doesn’t predict things *over the long term.* Each thing is a one-time-only election. So, if you predicted *a 25% chance of getting a hit every time up*- and he got a hit *that time*- would you be wrong or right?

No.

There’s a 1 in 6 chance that you’ll die playing Russian roulette. That’s basically safe, right?

If you say someone has a 2 in 3 chance of winning, that means it’s a knife edge that just barely leans one direction, but the slightest jostle or miscalculation could send it the other way.

It’s not saying that they will have 2/3rds of the vote. It’s saying that if you roll a die, they lose if it’s a 1 or a 2. Go roll a die and see if you reliably avoid a 1 and a 2. You’re going to get hit.
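You can literally run that experiment (a minimal sketch, treating 1 and 2 as the losing faces):

```python
import random

random.seed(2)  # reproducible runs

ROLLS = 60_000
# Count rolls of a fair six-sided die that land on 1 or 2.
losses = sum(random.randint(1, 6) <= 2 for _ in range(ROLLS))

print(f"fraction of rolls hitting a 1 or a 2: {losses / ROLLS:.3f}")
```

A “mere” 1-in-3 event shows up constantly, which is exactly why a 2-in-3 favorite losing should surprise nobody.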

Based upon the way you count it, Silver was *never wrong* since he only predicted chances, not outcomes. Or, I could say- he was never right.

You have to compare his predictions over large numbers of contests. No one outcome gives you any sense for his accuracy.

A guy pulls a handgun, shoots a bullseye from 2000 feet away. He might be good. You don’t actually know that. If you saw him shoot 999 more shots and they all missed by a mile then he just got lucky on that first shot. If he makes another 900 bullseyes, yeah he’s real good. The one bullseye doesn’t tell you anything.

And if Silver said the guy’s going to hit 1 in 1000 shots, that aligns with the first case. That one perfect hit at the beginning doesn’t change how accurate Silver was. Sure, he looks like an idiot for the first shot, but he comes out smelling like a rose after the next 999.
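Comparing forecasters over many contests is usually done with a proper scoring rule such as the Brier score. A minimal sketch (the forecast numbers here are made up for illustration):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes.

    Lower is better: 0.0 is omniscience, 0.25 is what always
    saying 50% earns, 1.0 is confident and always wrong.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A hypothetical forecaster who said 70% four times; the event
# happened three times out of four.
print(f"{brier_score([0.7, 0.7, 0.7, 0.7], [1, 1, 1, 0]):.2f}")  # 0.19
```

One contest can’t separate skill from luck, but a score like this accumulated over dozens of races can.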

Here are things Silver thinks are less than 1% likely to happen in this election:

Nobody gets 270 EV (either a 269-269 tie or RFKJ playing spoiler)

Trump wins the popular vote, but loses the Electoral College

The electoral map is exactly the same as it was in 2020

He thinks there is only a 3% chance that Trump will win the popular vote by double digits.

If any of those things happen, I will concede he was wrong, despite that being an innumerate statement. Although to be fair we should really look at his predictions as of election eve, not now.

The *highest* probability he gives any event in this election is a 79% chance that Trump wins at least one state that Biden won in 2020. 4:1 isn’t that overwhelming, but we’ll keep an eye on that, too.

The one I think is bizarre is that he still has RFKJ with slightly more than a 5% chance of winning some electoral votes. I would take that bet at 1:20 odds right now.
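For reference, the conversion between those probabilities and betting odds (fair odds only, ignoring any bookmaker margin):

```python
def odds_against(p):
    """Fair 'X:1 against' odds for an event with probability p."""
    return (1 - p) / p

# 79% favorite: odds *on* are the reciprocal, roughly the post's "4:1".
print(f"{0.79 / (1 - 0.79):.1f}:1 on")
# 5% long shot: fair odds are about 19:1 against.
print(f"{odds_against(0.05):.0f}:1 against")
```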

Neither. You can only assess that accuracy with a data sample. Which we have for Silver. His model works really well.

I don’t think he runs 538 anymore at all.