Nearest 10% might still be false precision, but yes, the nearest 0.1% obviously is. Although I doubt Silver thinks 0.1% moves are in any way significant. I think the finer gradations are shown partly as a means of transparency, to give some feel for what affects the answer without getting into all the calculations.
It’s also irritatingly arrogant; I don’t know why Wang feels the need to sniff at Silver as a “hobbyist” who lacks the necessary “expertise” to use polynomials. I am pretty sure Nate Silver knows the math. Wang isn’t just being insulting, he’s being stupidly insulting.
538’s “win probability” is itself merely an approximation. There’s no point, from an explanatory perspective, to say “Clinton is 87.4% likely to win, plus or minus this MOE.” The very nature of stating it’s just a percentage chance is itself an admission of the uncertainty of the outcome.
The Princeton Election Consortium is basically saying there is no uncertainty; they’re pretty much calling the election now. 538 is saying “well, probably, but we’re not all that sure.”
The fact is in 2012, 538 nailed it. In 2008 they damn near nailed it. Whatever the criticisms one can throw at Nate Silver, he has done incredibly well so far.
Well Wang and Silver have had a bit of a nerd fight going and Silver is by no means blameless. Wang had a bit of a miss in 2014, I believe, but I’ve seen analysis that takes certitude into account where he’s done handily better than Silver. I’ve still never gotten a good answer on why Silver was quite so right in 2012 when his probabilities should have had a few “wrongs”.
Because he got a bit lucky, as I’m sure he’d freely admit.
So far he is 101/102 predicting states/DC in Presidential elections. (He missed Indiana in 2008.) That’s at least a little bit lucky.
His final forecast had the following probabilities:
Virginia: 79.4% Obama
Colorado: 79.7% Obama
North Carolina: 74.4% Romney
Florida: 50.3% Obama
Iowa: 84.3% Obama
Clearly he was a bit lucky Florida didn’t go the other way. But, see, if Romney had squeaked out a win in Florida, to my mind Silver would have been right. He predicted a toss up; a tiny Romney win is consistent with a toss up. If Obama had won Florida by nine points, I would argue he clearly got Florida WRONG, even though he had the state colored blue - he would have predicted a very close election in a state where the election was not at all close. But indeed it was quite close, less than 1% separated Obama and Romney; it was the closest-fought state, in fact.
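As a sanity check on the “a bit lucky” point, you can multiply the probabilities Silver’s own model assigned to the eventual winner in each of those five states. Treating the states as independent, which his real model explicitly does not, so this is only a rough sketch:

```python
# Probabilities Silver's final 2012 forecast assigned to the eventual
# winner in each state, per the post above.
winner_probs = {
    "Virginia": 0.794,        # Obama won
    "Colorado": 0.797,        # Obama won
    "North Carolina": 0.744,  # Romney won
    "Florida": 0.503,         # Obama won
    "Iowa": 0.843,            # Obama won
}

# Naive joint probability of calling all five correctly, assuming
# independence between states (the real model correlates them).
p_all_correct = 1.0
for state, p in winner_probs.items():
    p_all_correct *= p

print(f"P(all five called correctly) = {p_all_correct:.1%}")
```

That works out to roughly a one-in-five shot, so going five for five was indeed somewhat fortunate, even by the model’s own lights.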
Well, I’m certainly not the first to bring it up. Has he actually freely admitted he was lucky?
-
I agree there but partly because I don’t fully agree on some of the later points. It’s extremely difficult, probably impossible, to really verify election models IMO. Therefore which numerical tools the model writers use is a matter of art, and it’s mainly noise if they criticize one another, even with good grace.
-
I don’t necessarily agree here. If we know the statistical process underlying the event, I agree. For example, if we flip a ‘fair’ coin some number of times: it’s not certain how many outcomes will be heads, but there is a fixed probability of any given count, so there’s no reason to apply an MOE to that probability. If instead the coin is bent to an unknown degree within some known limit (and we have data on how a given bend affects the distribution), then it is meaningful to put an MOE on the probability of a given number of heads, assuming the experiment will be completed before we can refine the model by estimating the bend from the results.
Silver is estimating from a distribution we don’t really know at all, which probably changes over time, maybe a lot. So even lots of history is just taking results from different distributions (different bends of the coin) we don’t know.
I agree it would be cumbersome and confusing to give an MOE on the probability, and it’s not clear how you’d calculate one. So I’m not saying he should, but it’s not a categorically invalid concept just because it’s a probability. Again, I’d say round to the nearest 10%, no MOE, and you might not imply false precision.
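To make the bent-coin point concrete, here’s a minimal sketch. The numbers are illustrative assumptions on my part (10 flips, asking about exactly 5 heads, bias known only to lie in [0.4, 0.6]), not anything from the thread:

```python
import math

def p_k_heads(bias, n=10, k=5):
    """P(exactly k heads in n flips of a coin with the given bias)."""
    return math.comb(n, k) * bias**k * (1 - bias)**(n - k)

# Fair coin: the probability of 5 heads is a single fixed number.
p_fair = p_k_heads(0.5)

# Bent coin: all we know is the bias lies somewhere in [0.4, 0.6],
# so "the probability of 5 heads" is itself a range.
biases = [0.4 + 0.002 * i for i in range(101)]
probs = [p_k_heads(b) for b in biases]

print(f"fair coin: P(5 of 10 heads) = {p_fair:.3f}")
print(f"bent coin: P ranges over [{min(probs):.3f}, {max(probs):.3f}]")
```

The fair coin gives a point value; the bent coin gives an interval, which is exactly the sense in which an MOE on a probability can be meaningful.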
-
Which would seem clearly wrong in practical terms (PEC, that is), since again that could only be a reasonable supposition based on a known statistical process. But maybe PEC says ‘this of course doesn’t include the polls being collectively out to lunch, as in the Colombian peace referendum (including Ipsos, not just local pollsters), because we don’t think that’s happening in the US.’ Neither do I, or at least it’s not likely, but it’s impossible to quantify. OTOH, again, the potential problem with interpretation of a model like Silver’s is that some people think it can somehow gauge the probabilities of unprecedented but not necessarily extremely unlikely events, which is impossible.
-
Silver’s previous track record is enough to verify his apparent general seriousness and competence. It does not verify the model’s % likelihood outputs, for which you’d need loads more trials, to see whether ‘15% likely to win’ candidates win 15% of the time (or, say, within 5 to 20% of the time). You can’t get that without including smaller elections, which might behave differently, and distant-past big elections, which also might, and even then it’s not enough.
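Back-of-envelope for why the sample is too small: even a perfectly calibrated forecaster’s observed win rate for ‘15% likely’ candidates would bounce around a lot over a handful of elections. The n = 12 below is my stand-in for the number of comparable modern-polling Presidential cycles, not a figure from the thread:

```python
import math

# Standard error of an observed win rate for events with true
# probability p, over n independent trials.
p, n = 0.15, 12
se = math.sqrt(p * (1 - p) / n)

print(f"true rate {p:.0%}, n = {n}: observed rate varies by about ±{se:.0%}")
```

A standard error of ten-ish points on a 15% figure means a well-calibrated model can’t be told apart from a badly calibrated one at this sample size.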
Monte Carlo methods certainly do inherently introduce noise, and that’s annoying. But the tradeoff is that they’re also often quite robust: A deterministic model is easier to sink by failing to take into account some obvious-only-in-retrospect factor.
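For a sense of scale of that noise: the standard error of a Monte Carlo estimate of a win probability p from n simulated elections is sqrt(p(1-p)/n). Using the 87%-ish headline figure mentioned upthread and some assumed simulation counts (538’s actual count isn’t stated in this thread):

```python
import math

p = 0.87  # roughly the headline win probability discussed above
noise = {n: math.sqrt(p * (1 - p) / n) for n in (1_000, 10_000, 100_000)}

for n, se in noise.items():
    print(f"n = {n:>7,} simulations: pure Monte Carlo noise ≈ ±{se:.2%}")
```

So at tens of thousands of simulations the run-to-run jitter is a few tenths of a point: small, but visible if you report the forecast to 0.1%.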
@Corry El:
While there haven’t been enough Presidential elections with modern polling to be certain of the various models, I think it’s still improper to compare them to referendums like the Colombian and Brexit ones. The stacks of studies, exit polls, and such accumulated over a long-running two-party contest are nothing like a single-issue vote.
From the Preface of the updated version of his book, The Signal and the Noise:
I don’t have a problem with it in general; it’s a problem now that he’s added the amount of movement to each Update line, so that we ask ourselves in the thread above, “why did a single poll result in Kentucky move the model 0.5 points against Clinton?”
Could it also be that some older polls dropped off at the same time?
Thanks!
I realize the 538 model is designed in such a manner that it probably can’t get higher than 91-92 percent, but, really, stick a fork in Trump, folks, after tonight’s debate. He lost, and he had no more ground to lose.
I assume it’s over by now, but Silver was on Colbert. The clips will be up on YouTube soon. Might be interesting to watch the man behind the site.
The spread in confidence across the various models remains wide.
NowCast has gotten up to 86.2%, but in comparison UpShot is at 92%, the betting markets (per them) at 91%, and PEC’s Bayesian estimate at 99%!
Nate certainly thinks so. Today’s column: Clinton Probably Finished Off Trump Last Night.
N/M
Sorry, Wang, but with more undecideds than Clinton’s lead margin, 99% is crazy.
On HuffPollster she’s up by 7 with 6.4 undecided. Real Clear has her up by 6 with about 7 undecided. All that breaking against her sounds like 1/100 to me, if not lower. Cheer up. Now you can safely vote Johnson.
RealClearPolitics shows a lot more undecided in their averages.