So will anybody ever trust the polls again?

They still do updates.

“Joe Biden is the apparent winner of Pennsylvania.”

This…

Is not the same thing as this…

You’re being ridiculous.

Their forecast model, which I am comparing to a forecast model derived from PredictIt data, was frozen on 11/3.

You’re right, the actual probability is 0.5%. 538 goes by ABC projections, which I believe give a 99.5% chance for a projected win.

The issue some posters here have is that you’re examining a single event that happened in the past. We have an opportunity to compare 538’s current projection with that of PredictIt against future events, the actual winners of each state. That could help us determine if there has been some type of bias in PredictIt’s markets.

No we don’t. 538 froze their forecast model on 11/3.

I did not say “forecast model”, I said “current projection”. They are not the same.

Now you’re moving goalposts. Give it up.

There is a heavy bias toward Trump at PredictIt. We have a chance to examine more evidence by applying your square root thingie to the current projections. You refuse. I give up.

Peace.

The problem with polls is that they are a snapshot of opinion at the time of polling and trying to project forward from them is a bit less reliable.

But I do think polling companies should adopt an econometrics approach to their numbers (maybe they already do), i.e. adjusting the polling results by age, gender, educational level, and even which news networks people watch, to extrapolate out for imbalances in the types of people who might have refused to answer or been missed. However, there are only so many questions one can ask somebody.
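For what it’s worth, the adjustment I’m describing (pollsters call it weighting or post-stratification) is just a re-weighted average: each respondent counts in proportion to how common their group is in the electorate rather than in the sample. A minimal sketch, with entirely made-up numbers for a single education split:

```python
# Minimal re-weighting sketch; every number here is made up for illustration.
# Each group's responses count by its share of the electorate instead of its
# share of the poll sample, so under-sampled groups are weighted up.

population_share = {"no_degree": 0.60, "degree": 0.40}  # assumed electorate mix
sample_share     = {"no_degree": 0.45, "degree": 0.55}  # assumed poll sample mix
support_for_A    = {"no_degree": 0.42, "degree": 0.58}  # candidate A support by group

raw_estimate = sum(sample_share[g] * support_for_A[g] for g in sample_share)
weighted     = sum(population_share[g] * support_for_A[g] for g in population_share)

print(f"raw poll estimate:        {raw_estimate:.1%}")  # 50.8%, over-weights graduates
print(f"post-stratified estimate: {weighted:.1%}")      # 48.4%, re-weighted to electorate
```

Real weighting schemes cross several variables at once (age by gender by education, and so on), which is where the “only so many questions” problem starts to bite.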

The most reliable polls are exit polls, and they are usually great at predicting what is going to happen. I couldn’t believe the exit polls in the US asked various lifestyle questions but seemed to studiously avoid asking “How did you vote?”. Did the networks not want to ruin the results?

The question about whether betting markets predict political outcomes seems like it should be something that has been well researched. Betting markets have been around a long time and political outcomes are very important, so I would expect that it would have a clear answer. There should be a ton of historical data that could be considered. It’s not like this election was the first one to have betting. Even if PredictIt was better than 538 in this election, that is just one data point and may or may not be statistically relevant. It seems like the data for all betting markets in all political races could be examined to determine if there is a correlation there or not.

I linked to one such paper in post #226. That paper showed that Intrade outperformed 538 in 2008. The references in that paper contain a lot of research that shows prediction markets are as good as or even slightly better than polls.

2020 was not an exception in this regard.

Evaluating a predictor on the basis of two predictions is ridiculous. You need a long history of someone’s predictions before you can tell how good they are at forecasting. I suggest that everyone read Superforecasting by Philip E. Tetlock and Dan Gardner. You need to look at many predictions before you say anything. First, you should use the probability the predictor gave for an event, not the percentage of the vote by which one candidate would win or lose. 538 predicted that there was a 75% chance that Clinton would beat Trump and a 90% chance that Biden would beat Trump (in both cases in the electoral vote, not the popular vote). Each prediction has to be about a specific event and a specific time. So you can’t say that someone will win and then, when they merely come close, claim partial credit for the prediction. Nor can you say that they didn’t win this time but will at some future time, and claim partial credit for that.

Then you use the Brier score to tell how good a specific prediction is. For this, you plug into a formula the probability the predictor gave and what actually happened, which gives you a score for that prediction. Then you need to look at a lot of predictions by that predictor. I would never try to evaluate a predictor before they had made at least 100 predictions and those predictions were checked against what actually happened. You then average the Brier scores for each predictor to tell how good their predictions are on average.
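For a single yes/no event the Brier score is just (p − o)², where p is the probability the predictor gave and o is 1 if the event happened and 0 if it didn’t; lower is better, and always saying 50% scores 0.25. A quick sketch using only the two 538 forecasts mentioned above, which, as I said, is far too small a sample to judge anything:

```python
def brier(forecast_prob: float, outcome: int) -> float:
    """Brier score for one yes/no prediction: (p - o)^2, lower is better."""
    return (forecast_prob - outcome) ** 2

# The two 538 calls mentioned above: (forecast probability, did it happen?)
predictions = [
    (0.75, 0),  # 2016: ~75% chance Clinton beats Trump -- did not happen
    (0.90, 1),  # 2020: ~90% chance Biden beats Trump -- happened
]

scores = [brier(p, o) for p, o in predictions]
print(scores)                     # [0.5625, 0.0100...]
print(sum(scores) / len(scores))  # ~0.29; always guessing 50/50 would score 0.25
```

Averaging hundreds of such scores per predictor is what you would need before comparing, say, 538 with PredictIt.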

The methods in the book Superforecasting were created because Tetlock spent many years asking many experts in various fields to make predictions. He got at least 100 predictions from each of at least 100 experts. He used the Brier score to tell how good each expert was at prediction. The answer was that they were no better than random at their predictions.

So if 538 predicted that one candidate had a 75% chance of winning in one election (and was wrong), and in another election predicted that another candidate had a 90% chance of winning (and was right), there’s no way to use just those two predictions to tell how good the predictor is on average. You have to give us hundreds of predictions, and how they turned out, by that one predictor before you can evaluate their skill at prediction. So give us a list of hundreds of predictions and outcomes by a single predictor (each one specific about what is predicted and by when, with no combining the lists of several predictors). Only then can we say whether that specific predictor is skilled at prediction.

One thing I’ve noticed many times over the years is that very frequently the actual results vary from the final polls in the direction of recent poll movement. Meaning, if the polls show the race tightening in the final days, then the lagging candidate is likely to exceed their poll numbers, and vice versa.

I’ve assumed this is because late momentum is not being captured by the final polls, which are lagged. IOW, whatever is happening to change the most recent polls as compared to the prior polls continues to happen after the final polls are taken.

RCP has an up/down arrow next to their aggregate poll averages, which is useful in this regard. But in this case, the final polls showed the battleground state averages closing, but the overall popular vote numbers more stable, and it looks like Trump was above the averages for both.
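One crude way to quantify that late-momentum hunch: fit a trend line to the last few days of a polling average and project it forward past the final poll. A sketch with invented numbers (the daily margins below are made up; a real version would pull an actual average like RCP’s):

```python
# Crude illustration of projecting late poll movement past the final poll.
# The daily margins are invented; a real version would use an actual polling
# average (RCP, 538, etc.).

margins = [8.0, 7.4, 7.1, 6.5, 6.2]   # candidate's lead over the last five days
days    = list(range(len(margins)))

# Least-squares slope, in points per day
n      = len(days)
mean_x = sum(days) / n
mean_y = sum(margins) / n
slope  = (sum((x - mean_x) * (y - mean_y) for x, y in zip(days, margins))
          / sum((x - mean_x) ** 2 for x in days))

days_past_final_poll = 3
projected = margins[-1] + slope * days_past_final_poll
print(f"trend: {slope:+.2f} pts/day, "
      f"projected margin {days_past_final_poll} days later: {projected:.1f}")
```

It’s only a heuristic, but it matches the pattern described above: when the final polls are moving, the movement tends to continue past them.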

Yes, it’s certainly the case with Brexit. The polls showed Remain firmly in the lead from the very start (going back years), but in the final weeks they got tighter and tighter, with one shock poll showing Leave with a slight lead about a week before the actual vote. Everybody was shocked at the eventual result, but the polls had clearly signposted it, even though only about one in a hundred actually matched the eventual result.

So all this discussion of prediction markets prompts me to ask: do bettors in these markets base their bets primarily on something other than polls, or do they just aggregate the existing polls? Because it strikes me that:

  1. Just like betting on your hometown or favorite sports team, betting on your favorite candidate, polls be d*mned, is fairly harmless fun that makes the election even more interesting ( :face_with_raised_eyebrow: as if deciding who runs the country isn’t already high-stakes enough to be interesting) as long as you don’t bet too much. I can see myself betting $25 on a candidate, but it would signify no more than that he or she is my favored candidate.
  2. If there are people willing to give false answers to pollsters, those same people should also be willing to make a modest bet to make their favorite candidate look even bigger than he/she is. If Trumpsters are willing to buy flags and drive monster trucks around just to show their support, why wouldn’t they put $25, or $100, or whatever, on Trump in the prediction market? In other words, “putting your money where your mouth is” doesn’t make your mouth right, just earnest.

  3. Bettors who have no favorite candidates but are merely trying to make money presumably use the polls in some way to decide whom to bet on, like horse-racing gamblers using the stats in the Racing Form. But if the polls are unreliable, as some argue, aren’t the decisions of such gamblers “garbage in, garbage out”? And if they “unskew” or “adjust” the polls because they deem the polls biased (the equivalent of gamblers with their own “system” they think superior), aren’t they just introducing another form of bias?

“illiquid” may be the wrong term, since obviously anyone currently in the market is able to buy and sell efficiently. But not allowing any new entrants into a market is going to severely restrict the usefulness of the market as a prediction. Because people with new information that might move the market can’t do so.

PredictIt limits each market to a maximum of 5,000 traders. That limit is part of a legal arrangement in which they remain a toy market that is exempted from some laws against online gambling/money moving. So in this case their interest in not being regulated into the ground is greater than their interest in being a good predictor or maximizing their income.

But, seriously, which do you think is more likely to be right: my claim that PredictIt is a questionable predictor due to artificial market limits, or the current PredictIt market, which suggests that Trump has a 13% chance of winning the Presidency days after every major news organization called it for Biden?

I would do so, except that I already got a refund for my deposit and I’m not that interested in keeping money in PredictIt that I can’t bet, nor am I totally sure that they’d give me another refund for exactly the same issue. And there’s no reason to think that they are letting new market participants in, because if they were, Trump shares would have dropped like a stone.

No no no. During the election it was overpricing Trump’s chances compared to other predictions because it had a better mix of Americans (including some die-hard Trump fans) than polls did. That made it a better predictor.

It’s currently overpricing Trump’s chances because they’re not letting new market entrants fix their pricing. If they were, I’d go buy up the maximum allowed number of shares of Biden at 87% and make a nice 13% profit in a few months. And so would everyone I tell about it.
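For concreteness, here’s the back-of-envelope math on that trade. The $850-per-contract cap and the 10%-of-profits / 5%-of-withdrawal fees are my assumptions about PredictIt’s standard rules, so treat the exact figures accordingly:

```python
# Back-of-envelope math for buying 'Biden yes' at 87 cents and holding to a
# $1 settlement. The $850 cap and the 10%/5% fees are assumptions, not facts
# confirmed in this thread.

buy_price = 0.87      # cost per share, dollars
payout    = 1.00      # settlement value of a winning share
stake     = 850.00    # assumed per-contract buying limit

shares       = stake / buy_price
gross_profit = (payout - buy_price) * shares      # the "13 cents" per share
net_profit   = gross_profit * (1 - 0.10)          # assumed 10% fee on profits
cash_out     = (stake + net_profit) * (1 - 0.05)  # assumed 5% withdrawal fee

print(f"{shares:.0f} shares, ${gross_profit:.2f} gross profit, "
      f"${cash_out - stake:.2f} net after assumed fees on ${stake:.0f} risked")
```

Even with fees shaving the edge down, it’s still close to free money if the shares really settle at $1, which is exactly why the “no new entrants” restriction matters.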

There’s an old joke about two economists who are walking along and see a $20 bill on the sidewalk and don’t bother to pick it up. And they say “there’s no way there’s really a $20 bill on the ground. If there were, someone would have picked it up!”

Your response reminds me of that. There’s 13% on the ground at PredictIt, but no one is allowed in to pick it up.

Your language in the second option is hard to parse. What’s more likely is that the PredictIt market is skewed pro-Trump, as I’ve been saying. (No doubt many of these traders are skeptical of “every major news organization”.)

You’ve repeated some version of this several times in your post, but it makes no sense at all, as best I can tell.

Regardless of whether PredictIt is letting new people in, there are certainly some people there, who are actively betting on all these races, and the price action is a consensus that Trump is about 15% or so. What’s the big difference between “new entrants” and old entrants? Your language seems to imply that the “new entrants” are the only ones who are aware of the current state of affairs, and it’s only they who would take the money off the fools who think Trump is at 15%, while anyone who got in before PredictIt closed the market is somehow impervious to any of this new information. This is obviously ridiculous, but it’s hard to figure out what else you might be saying.

Nate Silver’s polls post mortem:

It’s long, and here’s a good summary quote:

Assuming current results hold, the only states where presidential polling averages got the winners wrong will be Florida, North Carolina and the 2nd Congressional District in Maine. And because Biden’s leads in those states were narrow to begin with, they weren’t huge upsets.

Which is totally bonkers.

A significant grasp of empirical reality?

I expect that a significant number of current market participants are not active in the market. They put $10 on Trump two years ago and forgot about it, or they’ve already maxed out how much they can bet (so they can’t make new bets to move the market). I’m not sure exactly why they’re inactive. I am sure that days after it has become abundantly clear that Trump did not win the election, a 15% chance of him winning the election is absurd.

The reasonable explanation for this is that there’s no one who is allowed to take the suckers’ money and put the odds down to 1% (which I think is the lowest a market can go in PredictIt for rounding reasons). This theory is strongly supported by the fact that when I tried to take the suckers’ money, I was not allowed to do so because PredictIt has rules that make the market illiquid.

There’s almost no reason to pay attention to PredictIt after the polls close on election day, and certainly no reason to do so after the race was called November 7.

After that point it’s just shit stirrers and pump-and-dumpers. People joining after that point were almost all Trumpers getting in when they could buy Trump cheap, until the market filled up.

I am lucky enough to be able to take their money since I was already in one of these markets, but now I’m maxed out there.

But seriously, there is nothing to learn from looking at PredictIt right now, and the state of things now tells us nothing about the predictive value of PredictIt.