Polling as November 2020 nears: Will it be more accurate than before November 2016?

IMHO, one reason many sources picked Hillary so overwhelmingly was that the idea of Trump winning *sounded* ludicrous to their ears. If it had been any other candidate - say, Romney vs. Hillary, but with the exact same data - they wouldn't have given Clinton 99%; they would have lowered her odds. But the idea of Trump winning sounded like the idea of President Whoopi Goldberg or President Kanye West: unfathomable.
But anyway, I didn't mean this thread to be another postmortem analysis of 2016; we've had many of those. I wanted to ask what the pollsters and analysts will do differently this time around.

Again, it wasn't the pollsters. It was the predictors. The ones with sense will rely on the models with proven records, like Nate Silver's.

Yep, this is why Nate was giving Trump 30% when everyone else was giving him 1%. They were all saying “He would have to win PA, MI, AND WI! What are the odds of that?” He was saying “If he wins PA, he’s probably going to win MI and WI too.”
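The difference between those two views is just independent vs. correlated errors. Here's a minimal simulation sketch of the point (all the numbers - the 2-point polling deficit, the error sizes - are made up for illustration, not taken from any real 2016 data): the same per-state polls imply a much higher chance of sweeping PA, MI, and WI once you assume a shared polling error that shifts all three states together.

```python
import math
import random

random.seed(42)

TRIALS = 100_000
POLL_MARGIN = -2.0  # hypothetical: polls show the candidate 2 points down in each state
SHARED_SD = 3.0     # assumed scale of a common, nationwide polling error
STATE_SD = 2.0      # assumed scale of state-specific noise
TOTAL_SD = math.hypot(SHARED_SD, STATE_SD)  # same per-state error size in both models

indep_sweeps = corr_sweeps = 0
for _ in range(TRIALS):
    # "What are the odds of that?" model: each state's error is drawn independently.
    if all(POLL_MARGIN + random.gauss(0, TOTAL_SD) > 0 for _ in range(3)):
        indep_sweeps += 1
    # Correlated model: one shared error moves PA, MI, and WI together,
    # plus a little state-specific noise on top.
    shared = random.gauss(0, SHARED_SD)
    if all(POLL_MARGIN + shared + random.gauss(0, STATE_SD) > 0 for _ in range(3)):
        corr_sweeps += 1

print(f"independent sweep rate: {indep_sweeps / TRIALS:.3f}")
print(f"correlated  sweep rate: {corr_sweeps / TRIALS:.3f}")
```

Each state has the same individual win probability in both models; only the joint probability of winning all three changes, and the correlated version comes out several times higher. That's the "if he wins PA, he's probably going to win MI and WI too" logic in miniature.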

I’m actually very curious to see how models will change in reaction.

Well, a lot of people are dumb.

Your first link is to a story about a study done by Sam Wang, who was getting a lot of media attention in the last election but is an idiot. The only real problem with this story is the clickbait headline; the story itself explains that other predictors were less optimistic.

No idea what the “HuffPo Model” is, but clearly it sucks.

The third link discusses prediction markets, not polls.

And the fourth is to a student project which commits the basic error EDUB explains in post 15. At least, unlike HuffPo, they explain their methodology clearly enough that it’s easy to see where they went wrong.

The problem is that a lot of the media folks who are reporting the poll results aren’t good at math. I don’t see this significantly improving in the near future, but we can hope.

I think it’s absolutely true that a lot of media coverage was biased due to reporters being unable to imagine that Trump could actually win; Nate Silver had a series of articles about this last year. The New York Times was particularly egregious. But that shouldn’t be an excuse if you’re claiming to be using mathematical models to interpret polling data; the subjective opinion of the person running the models shouldn’t affect those results at all.

None of those are polls. Those are people interpreting polls and trying to guess odds. The idea that the polls were way off is a myth.

So, established: national polling was actually quite good, with a few critical states being a little off. Many pundits were not good at estimating odds based on that fairly good polling.

Will pundits and talking heads in aggregate do a better job of understanding how to interpret the numbers this time?

FWIW, I think what Nate Silver found in his post-election analysis was that the systematic bias in the polls was that they tended to underestimate Trump's strength in states with a larger percentage of white, non-college-educated voters and to overestimate his strength in states with a smaller percentage of those voters. So, overall, nationwide, it was pretty much a wash. But it turned out that in the states that mattered, i.e., the battleground states, the proportion of white, non-college-educated voters was higher than for the nation as a whole.
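The "wash" is easy to see with toy numbers (these are made up to illustrate the mechanism, not Silver's actual figures): opposite-signed errors in the two groups of states cancel in the national average, but if the battlegrounds all sit in one group, the error that decides the election doesn't cancel at all.

```python
# Hypothetical polling error (actual minus polled Trump margin, in points)
# for two groups of states, weighted by their share of the national vote.
groups = {
    "more white, non-college voters": {"error": +3.0, "vote_share": 0.5},
    "fewer white, non-college voters": {"error": -3.0, "vote_share": 0.5},
}

national_error = sum(g["error"] * g["vote_share"] for g in groups.values())
print(national_error)  # 0.0 -- no net national bias: a "wash"

# ...but if the decisive battleground states all fall in the first group,
# the error that actually matters there is +3 points, not 0.
battleground_error = groups["more white, non-college voters"]["error"]
print(battleground_error)  # 3.0
```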

Excellent summary.

Let’s just say my expectations aren’t high. :smiley:

Nate Cohn talks about shy Trump voters (there seem to be few of them now) in this piece:

People are biased. People are blind to their biases. No, I don’t expect that to change any time soon.

Tangential question about polls: Are they ever subject to being audited by an outside entity? What prevents a survey group from simply fabricating BS polling data? (claiming, “we surveyed 2,765 likely voters in an effective sample size by telephone” when in fact they did no such thing and just spun up BS stuff on the spot). As long as their “data” isn’t too far off from all of the other Gallup polls, who would question it?

What would be the point in making up a bunch of data if they’re just going to match what everyone else is saying?

Some anecdotes (both with 538 as the source):

Research 2000.

More broadly about the problem of fake polls.

To save trouble, energy, money, and resources - and also to have the opportunity to further skew things in favor of or against someone.

Nate Silver rates polling outfits based on history, methodology, and transparency. If a brand-new poll appeared with no history, methodology, or transparency, he'd likely pay it very little attention and give it an F rating.