Nate Silver / 538 Was Right

I think you might be mistaken about what Nate means by “model”. What he set up in the spring and didn’t later touch was an algorithm for processing polls and other data points as they became available in subsequent months, not a fixed prediction made at the start.

The fact that some of the early polls, for some races (including the presidency), turned out to resemble some of the late polls (and the election itself), is neither here nor there – although it’s an interesting topic of its own.

Put it this way: the guy had nothing to lose. He was telling a certain segment of the population what they wanted to hear and reveling in the attention. As a calculated risk, this makes sense. After weeks of blustering idiocy and predictions of 350+ electoral votes for Romney, his final prediction settled at, what? Oh: 275 to 263, Romney. How convenient.

So, now, it’s a plausible scenario. Low odds, but within the realm of possibility. If he’s right through sheer dumb luck (and 275-263 was a possible scenario in Nate Silver’s model), all of a sudden Dean’s the hero, the master prognosticator, and that nerdy Nate Silver with his “new castrati voice” gets exposed for the lying liberal scumbag that he is. Right? I mean, why not roll the dice on that? Nobody’s going to remember the crazy shit you were predicting a week before the election (359-179 Romney, 51-48 Michigan). You could always just blame the new, more realistic numbers on Sandy or something else. They’ll just remember that you were right, Nate was wrong, and you’re well on your way to a nice paycheck somewhere. Hell, in terms of risk-to-reward, why not? You’ve got anywhere from a 3-10% chance of being right.

Hell, Rick. Nate Silver is the Nate Silver of political predictions. The stuff he did with Baseball Prospectus (about which he and I corresponded twice) was totally groundbreaking.

Bill James gets first-mover credit, true. But he got nuked when real math geeks moved in. Silver brought true wonkiness to popular sabermetrics.

While we’re all talking about Silver, don’t forget Sam Wang. His more straightforward meta-analytic approach works pretty darned well, maybe even better. The next step is to become a meta-analyst of the meta-analysts.
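For anyone curious what that kind of meta-analysis actually involves, here is a minimal sketch in Python. The state names, poll margins, safe-EV count, and the margin-to-probability conversion are all made-up illustrations (not Wang’s actual inputs or code); the point is just that you take the median of each state’s recent polls and convolve the per-state win probabilities into an exact electoral-vote distribution.

```python
from statistics import median, NormalDist

# Hypothetical recent poll margins (candidate A minus candidate B, in points)
# and electoral votes for a few swing states -- all of these numbers are made up.
state_polls = {
    "Ohio":     ([2.0, 3.0, 1.0, 4.0], 18),
    "Florida":  ([1.0, 0.0, -1.0, 2.0], 29),
    "Virginia": ([1.0, 2.0, 0.0, 3.0], 13),
}
SAFE_EV_A = 237   # EVs assumed already safe for candidate A (illustrative)
POLL_SIGMA = 3.0  # assumed uncertainty, in points, around each median margin

# Exact distribution of A's electoral-vote total: start from the safe EVs,
# then convolve in each swing state (win -> add its EVs, lose -> add nothing).
dist = {SAFE_EV_A: 1.0}
for polls, ev in state_polls.values():
    p_win = 1.0 - NormalDist(mu=median(polls), sigma=POLL_SIGMA).cdf(0.0)
    new_dist = {}
    for total, prob in dist.items():
        new_dist[total + ev] = new_dist.get(total + ev, 0.0) + prob * p_win
        new_dist[total] = new_dist.get(total, 0.0) + prob * (1.0 - p_win)
    dist = new_dist

# The median of that distribution is the headline snapshot number, and the
# tail at 270+ is the win probability.
cumulative, median_ev = 0.0, None
for total in sorted(dist):
    cumulative += dist[total]
    if median_ev is None and cumulative >= 0.5:
        median_ev = total

print(f"Median EV for candidate A: {median_ev}")
print(f"P(candidate A reaches 270): {sum(p for t, p in dist.items() if t >= 270):.3f}")
```

With only a handful of swing states the brute-force convolution is tiny, which is really the appeal of the approach: the whole thing fits in a short script, no simulation required.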

I completely agree with your post, other than the fact that I think you might be mistaken about what I meant in my post. I was not criticizing Nate or his model in that post.

To review: DigitalC suggested that while Nate got the results right, perhaps his model didn’t add much to a simple average of the final 10 polls. SlackerInc responded (post #61) that this may be so, but that the significance of Nate’s model was that he predicted the final results months in advance. And he illustrated this by pointing to Nate’s graph of his popular vote predictions.

My response to this was essentially the assertion that you make: “The fact that some of the early polls, for some races (including the presidency), turned out to resemble some of the late polls (and the election itself), is neither here nor there”. That is all.

Wang appears to have gotten the popular vote margins exactly correct, and may have predicted the Senate races better than 538 in aggregate as well. That is very impressive. I have heard that he predicted the electoral college exactly in 2004 and was off by one vote in 2008. This year he predicted 313, which is exactly what 538 forecast, and essentially the midpoint of all swing states minus Florida and all swing states plus Florida. Both models were extremely accurate.

I’m not even sure a meta-analysis is necessary! It’s not like this was a very easy-to-predict election.

Not quite. According to his FAQ:

So, no. He predicted a narrow Kerry win. In hindsight, he should not have compensated for this “incumbent rule.” But that’s in hindsight. Sorry, that’s a miss for me.

It’s possible his model forecast that, after weighing the likelihood of events and the severity of their impact, everything occurring between his early iterations and the final election would essentially average out to zero. That is, the odds of something significant hurting one campaign were offset by the odds of something significant hurting the other, meaning no difference in aggregate.

Whether this is true, or even a reasonable assumption, is left as an exercise for the reader, but it is certainly a possibility.
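To make that “averages out to zero” idea concrete, here is a toy simulation (my own illustration, not anything from 538’s actual model): if late-campaign shocks are equally likely to hit either side and are similar in size, the expected final margin stays right at the early forecast, even though individual outcomes swing around quite a bit.

```python
import random

random.seed(0)

EARLY_MARGIN = 1.5   # hypothetical early-summer forecast margin, in points
SHOCK_PROB = 0.3     # chance that each campaign suffers a significant event
SHOCK_SIZE = 2.0     # average cost of such an event, in points
TRIALS = 100_000

def simulated_final_margin():
    """One simulated campaign: each side may take a hit of random size."""
    margin = EARLY_MARGIN
    if random.random() < SHOCK_PROB:                  # shock hurts candidate A
        margin -= random.expovariate(1 / SHOCK_SIZE)
    if random.random() < SHOCK_PROB:                  # shock hurts candidate B
        margin += random.expovariate(1 / SHOCK_SIZE)
    return margin

results = [simulated_final_margin() for _ in range(TRIALS)]
mean_margin = sum(results) / TRIALS
spread = (sum((m - mean_margin) ** 2 for m in results) / TRIALS) ** 0.5

print(f"Early forecast margin:        {EARLY_MARGIN:+.2f}")
print(f"Mean simulated final margin:  {mean_margin:+.2f}")  # stays near the early number
print(f"Spread of final margins:      {spread:.2f}")        # but individual runs vary a lot
```

Whether real campaign shocks are anywhere near that symmetric is, again, left as an exercise for the reader.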

Ah, gotcha. I think Deadspin’s article on Sabermetrics and polling analysis misled me in this regard.

It’s still impressive that he recognized the incumbent rule might not have been a reasonable assumption, and that the exact same model without that rule did predict the electoral college dead on. I do agree he doesn’t get full credit, though.

I suppose his one-vote miss last election was the Nebraska congressional district, right?

It’s theoretically possible (though highly dubious, IMO).

But it makes no difference, because it’s an assumption. You can’t prove the value of his model based on that assumption; that would be completely circular (besides being speculative).

That’s the U of Colorado model!

No problem - I’m sure they can re-jig it for the next election and have another model that totally, absolutely, no fooling this time will predict a Republican landslide.

I’m not sure, but here’s Sam’s scorecard on 2008. I’m drawn to the thesis: “Overall, the results show that a high degree of accuracy is possible without complex model-building.”

But…but…but it successfully predicted, after the fact, all presidential elections since 1980! How could it go wrong–its retroactive predictions were flawless!

My “most charismatic” theory has predicted all presidential elections from 1976 on.

I claim dibs on that theory. :slight_smile:

Although I only know of it going back to 1980.

Okay, my mistake.

Being off by one EV means you did poorly, rather than well. To be off by exactly one, you have to have gotten two states wrong that happen to be one EV apart, with the misses in opposite directions (unless you somehow failed to predict how Nebraska would go).

Maybe his prediction for the popular vote was off by one. Not too shabby. :slight_smile:

Yes, that would be rather excellent.