Well, Wang thinks he has it all figured out and has now made a rather firm prediction that Trump won’t get 240 EVs.
Yeah I think Wang’s already looking pretty silly on that one. I don’t mind the fact that he’s a 538 skeptic but at least hedge a little, lol. I think Wang’s argument all along has been that the polls are fundamentally stable and that nothing can change them because we’ve become tribalist automatons. I think Silver’s going to be proven right. I just hope that he doesn’t predict a Trump win on Monday night. But I’ve already mentally prepared myself for almost any scenario at this point, including a contested election. At some point, white conservatives and “moderates” are just going to have to wake up and stop buying the snake oil that republicans keep selling them in various packages.
No chance of him predicting a Trump win, although there’s a small chance Trump could have a slight edge on election night. But I think we’ll stay pretty stable, Clinton between 60-70%.
However, Silver does give Trump a much better than even chance to get more than 240 EVs, so we’ve got a very nice thing to compare Silver and Wang on. The current forecast is 246 EVs for Trump. My own prediction was 253.
Wang can be proven wrong if he says Trump won’t get X EVs.
How can Silver be proven wrong? He doesn’t make any predictions, does he?
Wang is also being proven wrong on his claim that the polls don’t change, or that they’re stable. They’re not stable, and they’ve clearly changed. Probably not enough to give Trump the win, but certainly enough to make the election a lot less predictable than it was even a month ago.
Being right or wrong on an absolute prediction doesn’t tell you much about the predictor’s abilities. Given that there is always uncertainty about the future, the only sensible way of predicting it is probabilistically, as Silver does. The more pertinent question is whether his modeling is accurate, and we can only assess that if we have a large enough sample of independent predictions and results. If we take all of his predictions surrounding this election, that’s a decent-sized sample, but the events are obviously far from independent. It’s still somewhat informative - that’s how he made a name for himself, with 49 of 50 states called right in 2008.
So what state did he get wrong, and what was the basis for saying he was wrong?
If he says a candidate’s chance is 49% (or 1% for that matter) and that candidate wins, was he wrong? Doesn’t he have infinite wiggle room?
- He points to the polls and says GIGO.
- He says his numbers are only probabilistic
- He says his model was off
Probably in that order.
Granted, the further his stated probability is from 49% toward 1%, the worse it looks.
Apologies if this has been discussed before, which I’m “predicting” there’s a 50.1% chance of.
I agree that if you want to be a real analyst of uncertain future events you don’t say what ‘will’ happen but try to estimate probability. Some people feel the opposite, IMO because many people just don’t or can’t think probabilistically: ‘There will be an outcome in the end, right? If you can’t tell what it will be, what use are you?’
The same goes for financial markets, or sports, though in the latter case, assuming people aren’t gambling serious money, it’s supposed to be a diversion from real life, so it’s also more reasonable to try to escape the reality that nobody knows the future.
I don’t agree as much on the testing of Silver’s or other election models. His track record, and his writing as well, are sufficient IMO to establish that he’s a serious researcher into election outcome probabilities, but that’s it. There are way (orders of magnitude) too few independent trials to say whether his current 35.5% reading is very close to or far from the real probability that Trump will win the election.
And obviously the one election outcome will not clarify this much. If Wang is willing to say X will happen and it does/doesn’t, then he’s right/wrong. But it’s not really strong evidence he would be right/wrong next time. It’s true that Silver’s output is a better hedge against reputation risk. If Clinton wins he doesn’t look particularly bad at all: he’s saying the odds are in favor of it. And if Trump wins he doesn’t look nearly as bad as Wang. However I don’t believe that’s his intention, just how his numbers come out.
I think his point was not that the polls don’t change, but that a change in the polls driven by the scandal of the day does not mean a change in actual votes - just that each candidate’s supporters are less likely to bother with pollsters when things aren’t looking good for their candidate. For example, was Trump really down 12% recently, or were his supporters, who were being fed a barrage of “the polls are rigged against me” BS, simply not bothering with the polls?
Of course he does; he just couches it in probability. Sam Wang does too, he just claims the probability of a Clinton victory is close to 100 percent.
Silver’s model can be proven pretty inaccurate if the Election Day results are far from his predictions. If Clinton wins by an enormous margin, and Silver’s state-by-state vote estimates are WAY off, his model will clearly have failed. Silver presently estimates that Florida is a coin flip and that Trump should get 0.3% more votes. If Clinton wins it by 0.2% of the vote, Silver’s estimate is still basically correct. (This could have happened in 2012; Silver had Obama ahead by a whisker, and that’s what happened. Had Romney won Florida by a whisker, Silver’s model would not have been wrong; a tiny Romney victory would still have been consistent with his prediction.) If Trump wins Florida by seven percent, though, Silver’s model failed there, even though he has the state going to Trump: it should not go for Trump by that margin if his model is right. And if that sort of thing happens in many states, his model failed entirely.
The basis of Silver’s uncertainty is simply that he’s saying his estimates could all be pretty close and Trump could win. If Trump outperforms by a few percent from Silver’s estimates in the right states, Silver’s overall model is still pretty accurate, and Trump wins. But if the results are REALLY far from Silver’s estimates, his model is wrong, no matter who wins.
The Wang methodology can be proven wrong the same way; if the results are wildly different from his prediction - which isn’t as easily viewed as Silver’s but basically comes down to Clinton winning by about 2.6 percent.
To be honest, I think Sam Wang is wrong. He’s a smarter person than I am, and Clinton is a favourite, but his insistence that there is no uncertainty here is intellectually arrogant and stubborn to the point of being stupid; he’s invested in his model and refuses to see that it has obvious flaws:
- Wang continually makes the assumption that undecided and third-party voters don’t matter. This is the opposite of his 2004 error, when he assumed that undecided voters would go 2-to-1 against the incumbent, because that had happened in elections before, and it ruined his prediction. Having been burned by that, he now assumes that what has happened since then - undecideds splitting evenly - must always hold.
He MAY be right. But he may also be wrong; it has not always been the case that undecideds/people who jump from third parties split evenly. Silver’s position on undecided voters is that he doesn’t know, that there’s more of them than usual, and therefore that is an unpredictable thing that introduces uncertainty. In my opinion Silver is obviously correct, and anyone who says they know for sure how undecided/third party voters will go - in an election where that is clearly a greater and different dynamic than in past elections - is crazy.
- Wang’s model relies on polls being correct and on states being independent events. The former is probably true, but not an absolute certainty, in part because of Point 1 above and in part because in a tight election things like turnout in different demographic groups do matter. (Frighteningly, it appears black voters aren’t turning out like they did in 2008 and 2012. That could matter.)
The latter point, that states are all independent events, **is a fundamental part of Wang’s model and is the reason he is sure Clinton will win**, and if it were true it would make sense. Heck, if 538 assumed that, they would be calling a Clinton win already. It would be totally over. You can do the math yourself; going by 538’s state-by-state probabilities, if Trump has to win Ohio AND Florida AND Arizona AND Nevada AND North Carolina AND Iowa, and then he’s still a bit short so he has to win Colorado, or New Hampshire and ME-2… well, his odds of doing all of those things are much, much lower than one percent. There are other ways he could win (he loses NC but wins Pennsylvania or something) but, if states are all independent events, those are also wildly unlikely. If that were true, and his model assumes it is, Clinton should be preparing her acceptance speech and Trump is done like dinner.
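To make the “do the math yourself” part concrete, here’s a rough Python sketch. The per-state numbers are made up for illustration, not 538’s actual figures; the point is just how fast independent probabilities compound:

```python
from math import prod

# Hypothetical chances of Trump winning each state on one of his paths
# (illustrative numbers only, NOT 538's actual forecasts)
p_state = {"OH": 0.65, "FL": 0.55, "AZ": 0.65, "NV": 0.40,
           "NC": 0.55, "IA": 0.70, "CO": 0.25}

# If the states were truly independent, his chance of sweeping the whole path
# is just the product of the individual probabilities.
p_sweep = prod(p_state.values())
print(f"chance of winning every state on the path, if independent: {p_sweep:.2%}")
```

Even with several of those states treated as near coin flips for Trump, multiplying seven of them together drags the combined chance down toward one percent.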
But states are not independent events. That assumption is utterly, totally wrong, because of the uncertainties that Silver admits exist and Wang does not. If there is something about the polls that understates Trump’s turnout by 2 percent, that does not just happen in one state. North Carolina is not going to be a weird outlier. That effect will probably be true everywhere. If the polls are erroneous because of turnout/polling assumption error/enthusiasm/GOTV efforts/whatever, they will be generally erroneous everywhere.
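And here’s a toy simulation of why that matters, assuming each state’s margin is a polling lead plus one shared national error plus state-level noise. The leads and error sizes are invented for illustration:

```python
import random

random.seed(1)

# Hypothetical Clinton polling leads (in points) in a handful of swing states
leads = {"FL": 0.5, "NC": 1.0, "PA": 4.0, "NH": 3.0, "NV": 1.5, "CO": 3.0, "OH": -2.0}

def p_trump_sweeps(shared_sd, state_sd, trials=100_000):
    """Fraction of simulations in which Trump carries every listed state."""
    wins = 0
    for _ in range(trials):
        national_err = random.gauss(0, shared_sd)   # the same polling miss everywhere
        if all(lead + national_err + random.gauss(0, state_sd) < 0
               for lead in leads.values()):
            wins += 1
    return wins / trials

print("independent errors only:   ", p_trump_sweeps(0.0, 3.0))
print("with shared national error:", p_trump_sweeps(3.0, 3.0))
```

With purely independent errors the sweep almost never happens; let a single shared polling miss shift every state at once and it stops being a freak event.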
Silver admits that that is hard to predict and he doesn’t know for sure what will happen. Wang refuses to admit that. If you’ve got two people, and one says he can predict the future and the other says “I can give you an educated guess, but I’m not sure,” believe the second guy.
Trust me; I wish Wang was right. Trump scares me. If he wins the election, we’re going to be setting up survival stocks of food, water filters, and medical supplies in our crawlspace, because he could start a depression or a war. I desperately wish I could believe Wang, but I can’t.
Of course the independence assumption is wrong. Voters in all the states are watching the same news, and hear the same facts and fictions coming from the mouths of pundits and FBI traitors.
I’ve paid no attention to Silver or Wang, but Wang’s model must be huuugely flawed if he treats states’ votes as independent.
I was thoughtless in describing Silver’s reputation as being based on “calling 49/50 states correctly” - that’s a terrible example, because it would be based on making 50 absolute calls that “X will happen”, rather than stating probabilities - exactly the thing I was saying he generally does not (and should not) do.
The only way to judge if somebody’s probabilistic forecasting methods are sound is to have a large enough sample to know if things that they say have (say) a 30% chance of occurring do occur around 30% of the time.
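As a minimal sketch of what that check looks like (entirely made-up forecasts and outcomes):

```python
from collections import defaultdict

# (forecast probability, did it happen?) pairs -- invented data for illustration
forecasts = [(0.30, 0), (0.35, 1), (0.28, 0), (0.70, 1), (0.65, 1),
             (0.72, 0), (0.31, 0), (0.68, 1), (0.33, 1), (0.66, 1)]

buckets = defaultdict(list)
for p, happened in forecasts:
    buckets[round(p, 1)].append(happened)   # bin forecasts to the nearest 10%

for p in sorted(buckets):
    outcomes = buckets[p]
    print(f"forecast ~{p:.0%}: happened {sum(outcomes)}/{len(outcomes)} times "
          f"({sum(outcomes) / len(outcomes):.0%} observed)")
```

With enough independent forecasts, the observed frequency in each bucket should land near the stated probability; the trouble with elections is that you never get enough of them.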
A good example is sports betting, where you’re essentially saying “I can predict probabilities better than the marketplace.” Right now, I think Trump is around 4.0 on Betfair (odds 3/1), implying a 25% chance. If I think he has a greater chance, I will back him at 4.0; if I think he has a lesser chance, I will lay him at 4.0. In the long run, if my forecasting of probabilities is better than the market’s, I will make money by making many such bets at market odds, but in any one instance I may either win or lose.
ETA: and clearly, the key to making money in the long run in betting is not calling the result correctly, it is calling the probability correctly.
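For what that arithmetic looks like in practice, here’s a quick sketch (the 35% “true” probability is just a stand-in for my own estimate, and Betfair commission is ignored):

```python
decimal_odds = 4.0                 # quoted price, i.e. 3/1
implied_prob = 1 / decimal_odds    # the market's implied chance: 25%

my_prob = 0.35                     # hypothetical: what I think the real chance is
stake = 100

# Expected value of backing at 4.0: win (odds - 1) * stake, or lose the stake.
ev_back = my_prob * (decimal_odds - 1) * stake - (1 - my_prob) * stake
print(f"market implied probability: {implied_prob:.0%}")
print(f"EV of a {stake} back bet at 4.0: {ev_back:+.2f}")
```

The bet only has positive expectation because the assumed 35% beats the market’s implied 25%; calling the winner of any one event never enters into it.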
No.
He does not.
The subject here is about mathematical calibration over a larger sample size.
If I make a thousand different 50.1-49.9 guesses, and it turns out that the side I chose to be slightly leaning ends up winning 80% of the time, then that means that I am badly miscalibrated on my toss-up guesses. Now, it’s absolutely true that this requires a higher sample size than a single election. A person who states 99% confidence for a single guess can be seen to be miscalibrated from a single bad prediction, whereas it takes many more elections to judge whether a bad 60-40 call was miscalibrated. But notice, again, that evaluating correct calibration can cut you in either direction. If I make fifty different 60-40 calls (for instance, for fifty different state elections), and it ends up that the side I choose as 60 ends up winning all fifty calls… that is a strong indication that I’m miscalibrated to understate what should be the “true” confidence level.
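A quick back-of-the-envelope check of that 60-40 example (just the binomial arithmetic, nothing election-specific):

```python
from math import comb

# If fifty calls really were 60% shots, going 50-for-50 should almost never happen.
p_all_fifty = 0.60 ** 50
print(f"chance of sweeping fifty true 60% calls: {p_all_fifty:.1e}")

# Likewise, winning 800+ of 1000 genuine toss-ups is absurdly unlikely.
p_800_plus = sum(comb(1000, k) * 0.5 ** 1000 for k in range(800, 1001))
print(f"chance of 800+ wins on 1000 true 50-50 calls: {p_800_plus:.1e}")
```

So a perfect record on supposedly 60-40 calls isn’t a triumph of forecasting; it’s evidence the stated probabilities were too timid.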
On preview, I see Riemann has already gotten into this. But the idea is worth repeating.
We shouldn’t be looking at one national election to see how well these two are calibrated. We should be looking at individual states to see larger sample sizes. But there is NOT infinite “wiggle room” here. There is finite wiggle room. And it is very possible that Wang will end up looking far too overconfident, and Silver will end up looking far too underconfident based on final predictions.
Forget Nate Silver. Here’s the Lendervedder Forecast, based on latest polls and forecasts in each of the 9 current toss-ups (NV, CO, WI, MI, ME, NH, PA, NC, FL), and the aggregate of the six major forecasters.
The most-likely EV count:
Hillary 323
Trump 215
Clinton 89% chance of winning.
If the polls are 100% accurate, there’s a 50% chance my predictions are spot-on 10% of the time. Bank it.
Well, sure, I get that, I go to Vegas. The house makes money because it has an edge on every decision, i.e. the payout to the player on a win is less than the true odds would imply.
And BTW, the key to making money (or losing less) is giving the house the smallest edge possible, and quitting after a streak of variance in your favor. And having enough money so you don’t bust out until that streak happens.
Well, the house charges what they charge - you can’t “minimize” it (other than by shopping around if there’s more than one way to bet). I think the key is more to be highly selective in your betting. I tend always to look at net implied odds (after subtracting house edge, commission, state tax or whatever from the payout) and bet only if I think the net implied probability is significantly wrong. That usually happens only when the weight of money from other bettors is pushing the market to the wrong level. And since the house edge or commission is quite large, it’s rare that prices get far enough out of whack that there’s a worthwhile opportunity.
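Here’s roughly what I mean by net implied odds, as a sketch with an invented 5% commission:

```python
decimal_odds = 4.0      # quoted price
commission = 0.05       # hypothetical cut taken out of net winnings

net_winnings = (decimal_odds - 1) * (1 - commission)  # what you actually keep per unit staked
net_implied_prob = 1 / (1 + net_winnings)

print(f"quoted implied probability: {1 / decimal_odds:.1%}")        # 25.0%
print(f"net implied probability after commission: {net_implied_prob:.1%}")
```

So the market price has to be that much further out of line before the bet is worth taking.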
I assume you’re joking here, right? “…quitting after a streak of variance in your favor” is a well known statistical fallacy.
Take craps, for example. The house edge on a pass line bet is 1.41%. But once the point (the winning target number) is established, they let you place a supplemental “true odds” bet of up to 5x the original bet, which pays at exactly the true probability. That can effectively reduce the edge on the total amount bet to 0.37%.
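One common way of working out that combined figure, assuming full 5x odds are taken whenever a point is established (the exact number shifts a bit depending on the convention used):

```python
# Pass-line bet with "free odds" behind it: the odds bet pays at true odds
# (zero house edge), so it dilutes the edge on the flat bet.
P_POINT = 24 / 36          # chance the come-out roll establishes a point
PASS_EDGE = 0.0141         # house edge on the flat pass-line bet (~1.41%)
ODDS_MULT = 5              # taking 5x odds behind the line

flat_bet = 1.0
avg_total_wagered = flat_bet + P_POINT * ODDS_MULT * flat_bet  # odds bet only exists when a point is set
expected_loss = PASS_EDGE * flat_bet                           # all of the loss comes from the flat bet

print(f"combined edge on money wagered: {expected_loss / avg_total_wagered:.2%}")
```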
If I sit down at a game with a 50% win probability and win the first 10 in a row, I’m outta there.
It’s elementary statistics that a strategy “if I make X I will leave” does not have a positive expected return in a fair game, for any X. It’s similar to Martingale, and equally wrong.
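A minimal simulation of the “quit while you’re ahead” rule on a fair even-money game, with made-up stopping parameters:

```python
import random

random.seed(0)

def play_session(target=10, max_rounds=500):
    """Bet 1 unit per round on a fair coin; walk away once up by `target` units."""
    bankroll = 0
    for _ in range(max_rounds):
        bankroll += 1 if random.random() < 0.5 else -1
        if bankroll >= target:        # the "quit after a hot streak" rule
            break
    return bankroll

results = [play_session() for _ in range(20_000)]
print(f"average result per session: {sum(results) / len(results):+.3f}")  # ~0, up to noise
```

Most sessions do end with a small win, but the occasional long losing session balances them out exactly; the stopping rule changes the shape of the outcomes, not the expected value.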
We’d better move this to a different thread if you’d like to discuss gambling further; we shouldn’t take this very active thread further off topic.
Oh sure, I get that. Martingale is like telling a kid not to touch a hot stove. People have to get badly burned first, then they never do it again. You can show them your own burn marks, doesn’t matter.
But I agree we should end the hijack; it only makes me want to get on a plane.