Yep. I was nervous towards the end, especially when I saw Nate’s numbers, and, well, it looked like I had good reason to be. A 70% chance of winning isn’t that comforting given what we ended up with.
I sure wouldn’t say that. Here’s the thing: let’s pretend for a moment there are 100 state-level races, and, for simplicity, let’s say every single one has a front runner the model gives a 70% chance of winning. If the model is accurate, the front runner should lose somewhere around 30 of those races. If instead all 100 front runners win, then the model is severely underestimating their chances of winning (or it got extremely, extremely lucky; that would be like rolling a 10-sided die a hundred times and never having a 1, 2, or 3 show up). I wouldn’t call that an accurate description of the state of the election.
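To put rough numbers on that, here’s a quick back-of-the-envelope sketch in Python. The 100-race, 70%-each setup is just the hypothetical from above, not anything from 538’s actual model:

```python
# Back-of-the-envelope check for 100 hypothetical, independent races,
# each with a front runner the model gives a 70% chance of winning.
p_win = 0.7
n_races = 100

# If the model is accurate, the front runner should lose about 30 races.
expected_losses = n_races * (1 - p_win)

# Probability that every single front runner wins anyway, which is the same
# as rolling a 10-sided die 100 times and never seeing a 1, 2, or 3.
p_all_win = p_win ** n_races

print(f"expected front-runner losses: {expected_losses:.0f}")  # 30
print(f"P(all 100 front runners win): {p_all_win:.1e}")        # ~3.2e-16
```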
I’m sure he’s done it somewhere, but I would judge the accuracy of Nate’s model, or any model, not by whether it picked the winner in a single race, but by whether the model’s overall “winning percentage” matches the win probabilities it predicted for the candidates.
Yep, I for one was freaking out that 538 was giving Trump a 1 in 3 or 1 in 4 chance of winning. I’ve done enough work with random number generators to have a good sense of how bad those odds actually were for the sanity of the world.
I was, too, but I was trying to tell myself it couldn’t happen, and hoping that Nate’s numbers were off, since there were other forecasts out there.
What really bugs me is that I don’t know if he was off. We can’t go back and rerun the election multiple times to see whether his percentages were right. Even the 95% chance I was hoping for meant Trump would still win about 1 time in 20.
No, but you can look at all of the model’s results versus reality. If, over a large enough sample, 538’s predictions track the actual outcomes, you can probably trust the model. If the candidates his model puts at 70% go on to win about 70% of the time, you’re probably relatively safe trusting the results, as long as you take the actual meaning of probability into account.
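For what it’s worth, that kind of calibration check is easy to sketch. Here’s a rough Python example with made-up data; none of the numbers come from 538, it just shows the idea of comparing predicted probabilities against observed win rates:

```python
from collections import defaultdict

def calibration_table(predictions, n_bins=10):
    """Bucket forecasts by predicted probability and compare each bucket's
    average forecast with the observed win rate for that bucket."""
    buckets = defaultdict(list)
    for prob, won in predictions:
        # A 0.72 forecast lands in the 70%-80% bucket, and so on.
        buckets[min(int(prob * n_bins), n_bins - 1)].append((prob, won))
    for b in sorted(buckets):
        rows = buckets[b]
        avg_forecast = sum(p for p, _ in rows) / len(rows)
        win_rate = sum(w for _, w in rows) / len(rows)
        print(f"{b / n_bins:.0%}-{(b + 1) / n_bins:.0%}: "
              f"forecast {avg_forecast:.0%}, actual {win_rate:.0%}, n={len(rows)}")

# Toy data: (predicted chance for the front runner, did they win?).
# With a well-calibrated model, the ~70% bucket should show roughly a 70%
# actual win rate once the sample is large enough.
toy = [(0.72, True), (0.68, True), (0.71, False), (0.74, True),
       (0.69, True), (0.66, False), (0.95, True), (0.55, False)]
calibration_table(toy)
```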