Trump could win the election in a nowcast by FiveThirtyEight

538 is estimating a 6-10 point bounce, so in line with Wang.

Of course, we will really only begin to understand how the race is settling post-conventions after we have a good round of state polling. That said, the last Missouri poll is very encouraging. That is a state Obama lost by 9 in 2012 and pretty consistently polled behind in. It hasn’t gone D since WJ Clinton. It’s just one poll, but Clinton +1 there is pretty big stuff if it’s real.

Bush Sr. trailed Dukakis, in August, by 7%. Bush Jr. trailed Gore, in September, by 10%. Obama trailed McCain, in September, by 5%.

Either Trump is massively over-performing or Hillary just can’t get it together.

Not only that, but IIRC Mondale briefly surged ahead of Reagan by a few points after the 1984 DNC.

I’m not sure where you’re getting these numbers. Silver shows the leads 30 days after the conventions for all elections since 1972 on this page. Your numbers are much larger than his and sometimes have the lead wrong.

Those are based solely on Gallup polls, not averages of all polls. That’s why 538 gets much more accurate results.

I wouldn’t use those results unless you’re looking for an example of something that is specific but not accurate.

New from 538

Polls-Only: Hillary 53.3, Trump 46.7
Now-Cast: Hillary 63.6, Trump 36.4
Polls-Plus: Hillary 61.8, Trump 38.1

Silver says that they’re still getting post-DNC polls, so there could be more movement in the near term, but it appears Hillary got a better bounce than Trump.

Looks good but hardly time to relax.

Love his description of the three forecasts’ personalities in there… watching the Nowcast’s ADHD is definitely making me ADHD…

Case in point: nowcast’s just jumped to 78% Hillary…

And now it’s 82.2%, just about a half hour later. The big moves are Florida and Ohio going light blue, when they were previously light red.

“Taste the sweetness of Destiny, Fascist pig!”

  • Trashman

Out of curiosity, and I assume he would have done this, but has Nate ever analyzed how accurate his probabilities have been? It almost seems to me like he’s perhaps been a bit conservative. He’s had some notoriety for only getting one or two states/DC wrong in the last election, but that’s not the correct metric for prediction, as he also assigns numerical probabilities. Say you have 10 states with 60-40 probabilities and Nate predicts the winner in all 10 states correctly. That sounds good, but if his probabilities are accurate you’d expect him, on average, to get only 6 of those states correct. Getting all 10 correct with those probabilities would happen something like 0.6% of the time by chance, if I’m doing my math right (0.6^10 ≈ 0.6%; 0.4^10 ≈ 0.01% would be the chance of getting all ten wrong).

I’m sorry if I’m not phrasing this correctly, but does someone understand what I’m getting at? The model shouldn’t be judged only by how many correct state winners it predicted, but also against the probabilities it gave. I suspect he’s done something like this, as this isn’t exactly a novel or clever point, but if someone can lead me to it, I’d appreciate it.
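
(To make the arithmetic concrete, here’s a quick sanity check in Python. The ten 60/40 states are hypothetical, purely to illustrate the calibration idea.)

```python
# Sanity check: if ten states are each called at 60%, how surprising is a
# 10-for-10 sweep under independence? (Hypothetical numbers, not 538's.)
from math import comb

p, n = 0.6, 10

print(f"Expected correct: {n * p:.1f}")       # 6.0
print(f"P(all {n} correct) = {p**n:.4f}")     # ~0.0060, i.e. ~0.6%
print(f"P(all {n} wrong) = {(1-p)**n:.6f}")   # ~0.000105, i.e. ~0.01%

# Full binomial distribution of the number of states called correctly:
for k in range(n + 1):
    print(f"P(exactly {k} correct) = {comb(n, k) * p**k * (1-p)**(n-k):.4f}")
```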

He did this with his final forecast last time (in late Oct/Nov) and found he was wrong about as often as expected, I think. But he hasn’t, to my knowledge, tested it this precisely with predictions this far out, except to say that in general things at this point aren’t too predictive, and stability comes over the next 30 days.

Of course, none of the swing states except Pennsylvania has seen any post-convention polling at all, so this is all driven by national polls applied to state demographics… direct state polls should lock it down a fair amount.

I don’t recall any full analysis of this question on 538, but Nate has addressed it a couple times. In his mea culpa article after the primaries “How I Acted Like A Pundit And Screwed Up On Donald Trump” he says, “The FiveThirtyEight “polls-only” model has correctly predicted the winner in 52 of 57 (91 percent) primaries and caucuses so far in 2016, and our related “polls-plus” model has gone 51-for-57 (89 percent). Furthermore, the forecasts have been well-calibrated, meaning that upsets have occurred about as often as they’re supposed to but not more often.”

Judging the calibration of his probabilities in the general election is much more difficult. While in the primaries the probabilities of winning various states might be at least somewhat independent, in the general election the state-level probabilities are very interdependent. If the 538 model is significantly off it will most likely be off in the same direction throughout, or at least throughout states with similar demographics. Given that, we shouldn’t expect the model to miss 4 out of 10 states predicted at 60%. If it gets one right, it will quite possibly get them all right, or all but the very closest. To accurately assess the calibration of the win probabilities would require a much larger sample size than we have available at the federal level.
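
(A toy simulation makes that all-or-nothing behavior visible. Everything in it is an assumption for illustration: ten states forecast at 60% for the same candidate, with polling error split evenly between a shared national component and independent state components.)

```python
# Simulate elections where state polling errors share a national component.
# All parameters here are illustrative assumptions, not 538's actual values.
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
n_sims, n_states = 200_000, 10
sigma_shared = 3.0  # sd of the shared national polling error, in points
sigma_state = 3.0   # sd of each state's independent error, in points

sigma_total = np.hypot(sigma_shared, sigma_state)
mu = sigma_total * NormalDist().inv_cdf(0.6)  # margin giving a 60% state win prob

shared = rng.normal(0, sigma_shared, size=(n_sims, 1))       # one draw per election
state = rng.normal(0, sigma_state, size=(n_sims, n_states))  # independent per state
correct = ((mu + shared + state) > 0).sum(axis=1)

print("P(all 10 correct):", (correct == 10).mean())     # far above 0.6**10 ≈ 0.006
print("P(3 or fewer correct):", (correct <= 3).mean())  # misses clump together too
```

With half the error variance shared, clean sweeps (and clusters of misses) are far more common than independence would predict, which is exactly why going 10-for-10 doesn’t show the 60% figures were too conservative.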

For what it’s worth, I agree with the general preference for simpler models, and for ones that don’t overreact to short term noise. Among Silver’s models, it seems that “Polls Plus” overreacts the least, but it’s definitely far from simple, and even his “Polls Only” model is more complex than Wang’s.

That said, can you clarify your objection to Silver’s criticism of Wang? I think you’re referring to the fact that Wang gives the distribution of outcomes if the election were held today, and then separately projects the probability of a win in November based on assuming random drift, as well as by taking a Bayesian approach. So if I read you right, you’re saying “Silver is falsely characterizing Wang’s prediction for ‘What would happen if the election were today’ as a prediction for ‘what would happen in November’.”

That may be, but even so, when we reach election day, Wang’s prediction for “what would happen today” will be his prediction for the election - so I think it’s fair to ask “Does his methodology give too high a level of confidence in what would happen if the election were today?” As he says on the page I linked above: “At nearly all times, this snapshot shows a very likely win for one candidate or the other […] usually greater than 99%”. The way I read Silver’s criticism is: “If, historically, polling that (in aggregate) suggested one candidate had a lead of this amount on election day ended up getting it wrong substantially more than 1% of the time, then your model is flawed if it claims 99% certainty.” That seems a fair point to me, and if you think otherwise I’d love to know why.
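
(Here’s one way to operationalize that test, sketched in Python. The race data below is entirely made up as a placeholder; a real check would use the historical poll-vs-result record that Silver compiles.)

```python
# Bucket historical races by the leader's final polling margin, then compare
# the leader's empirical win rate to the model's claimed certainty.
# The data below is a fabricated placeholder, not real polling history.
history = [
    (1.2, True), (0.4, False), (3.1, True), (2.2, True),
    (0.8, True), (4.0, True), (1.5, False), (2.8, True),
]  # (leader's final polling margin in points, did the leader win?)

def leader_win_rate(history, lo, hi):
    """Empirical win rate of the polling leader for margins in [lo, hi)."""
    bucket = [won for margin, won in history if lo <= margin < hi]
    return sum(bucket) / len(bucket) if bucket else None

# If a model claims 99%+ certainty for leads in some range, but leaders with
# such leads have historically won, say, only ~85% of the time, the snapshot
# is overconfident. That's the shape of Silver's criticism as I read it.
print(leader_win_rate(history, 1.0, 3.0))
```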

Beyond that, looking at the details of Wang’s methodology, it seems that he takes the median of recent polls in a state to determine the probability of a candidate winning the state, and then he calculates the probability of a candidate winning the election using a formula that treats each state-level probability as independent. In reality, as Silver has noted, polls in multiple states tend to err in the same direction. Wang has a bit near the bottom of the page that claims this wouldn’t matter (or would matter very little), and in particular he goes into more detail here.
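
(For what it’s worth, the independence calculation he describes can be written in a few lines. This sketch builds the full electoral-vote distribution by convolving one state at a time; the states, probabilities, and “safe” EV count are invented placeholders, not Wang’s actual inputs.)

```python
# Build the exact EV distribution under independence by convolving one
# state at a time. States, probabilities, and safe-EV totals are made up.
import numpy as np

states = [("FL", 29, 0.55), ("OH", 18, 0.60), ("PA", 20, 0.75),
          ("NC", 15, 0.45), ("NV", 6, 0.65)]  # (name, EVs, P(candidate wins))
base_ev = 200                                  # EVs treated as safe (assumed)

total_ev = base_ev + sum(ev for _, ev, _ in states)
dist = np.zeros(total_ev + 1)
dist[base_ev] = 1.0                            # start: the safe EVs, with certainty

for _, ev, p in states:
    new = dist * (1 - p)                       # branch: candidate loses the state
    new[ev:] += dist[:-ev] * p                 # branch: candidate wins, shift up
    dist = new

print("P(270+ EV) under independence:", dist[270:].sum())
```

The snapshot win probability falls out of one pass over the states, but the independence assumption is doing all the work: it leaves almost no probability for the whole map shifting together, which is the crux of Silver’s objection.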

Basically, he states (without justification that I can see) that for the election-is-held-today snapshot, an upper bound for the effect of the polls being off in one direction is given by assuming every poll is off by 1% in that direction. But then he doesn’t actually compute that supposed upper bound; instead he computes the average of how much the snapshot probabilities change when the polls are uniformly biased by an amount between +1% and -1%. And from the fact that this is small, he concludes “for a snapshot, covariation doesn’t matter.” This seems to me to be clearly wrong. He seems to be arguing “the maximum realistic effect of correlated errors is small”, but he never computes the maximum effect of all errors pointing in the same direction, only the average effect of all errors pointing in some random direction.
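
(To see the difference numerically, here’s a toy version of the two calculations, using an independent-state snapshot model like the sketch above. All margins, EVs, and error sizes are invented.)

```python
# Compare (a) the average effect of a uniform bias spread over [-1, +1] with
# (b) the directional effect of shifting every poll a full point one way.
# All numbers are illustrative placeholders, not Wang's inputs.
import numpy as np

margins = np.array([1.0, 2.5, -0.5, 3.0, 1.5])  # hypothetical state poll medians
evs = np.array([29, 18, 20, 15, 6])
base_ev, sd = 200, 3.0                          # assumed safe EVs, per-state error sd

def snapshot_win_prob(bias, n=200_000, seed=1):
    """Election-today win prob if every poll is off by `bias` points, with the
    remaining state errors treated as independent (the snapshot model)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0, sd, size=(n, len(margins)))
    ev = base_ev + ((margins + bias + noise) > 0).astype(int) @ evs
    return (ev >= 270).mean()

p0 = snapshot_win_prob(0.0)
averaged = np.mean([snapshot_win_prob(b) for b in np.linspace(-1, 1, 21)])
print(f"no bias: {p0:.3f}   averaged over [-1, +1]: {averaged:.3f}")  # ~cancels
print(f"+1 bias: {snapshot_win_prob(1.0):.3f}   -1 bias: {snapshot_win_prob(-1.0):.3f}")
```

Averaging over symmetric biases roughly reproduces the unbiased number, while the one-directional shifts move the probability substantially; a small average change says nothing about the size of a correlated miss.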

In fact, if he’s trying to say “correlated errors won’t matter”, I think it would make more sense to say “I’m satisfied if my probability estimate is within X percentage points of what it should be”, and then to show that “the probability of correlated errors in polling among the states being large enough to change my probability estimate by at least X is [some small value]”. But I can’t see anywhere where Wang actually tries to calculate the probability of a given amount of correlated error in the polling (something I believe Silver does, by looking at the historical amount of error between polls and actual election-day results).
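
(Continuing the toy model above, that calculation might look like the following. The 2-point sd for the shared error is purely an assumed stand-in for the historical poll-vs-result error distribution, and `snapshot_win_prob` is the function from the previous sketch.)

```python
# Estimate P(correlated polling error large enough to move the snapshot win
# probability by at least X), reusing snapshot_win_prob() from the sketch
# above. The 2-point sd for the shared national error is an assumption.
import numpy as np

bias_sd = 2.0  # assumed sd of the uniform national polling error, in points
X = 0.10       # tolerance on the win-probability estimate

p0 = snapshot_win_prob(0.0)
biases = np.random.default_rng(2).normal(0, bias_sd, size=200)
moved = np.mean([abs(snapshot_win_prob(b, n=20_000) - p0) >= X for b in biases])
print(f"P(|change in win prob| >= {X:.0%}) is roughly {moved:.2f}")
```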

Of course, some of this is not convention bounce. It’s Trump choking on Captain Khan’s Purple Heart. He may have finally found a way to convince people of who he really is.

Holy crap, look at those curves on the Now-Cast.

Propriety prevents me from describing what Trump’s curve resembles.