But how would presenting inaccurate data serve such an agenda? Aren’t polls showing “your guy” ahead just as likely to induce complacency on your side as they are to generate a “bandwagon effect”?
Are you able to point me to where they apply anything subjective, other than banning those that they suspect of outright faking results? I’m not sure I’d count their bannings as particularly subjective, as they don’t just ban everyone with a bias. As an example, one of their banned pollsters was Research 2000, which quite frankly deserved to be banned, as it definitely appears that they just made shit up.
But it didn’t really answer my question: how and why are internal polls different than the public polls?
They are carried out by the same companies (Gallup, etc), using presumably the same techniques.
No to the last part of that. Any poll involves all sorts of assumptions, guesses, and predictions about how various groups will vote, as well as whether they will vote at all. An internal poll will often have different guesses and predictions.
For example: it is often underappreciated that white people do not all vote the same. Poles, Swedes, Italians, etc have identifiable voting differences among them that do not show up in polls that treat “white people” as all one group. (e.g. Norwegians are antiwar) If you aren’t adjusting your “white” share of the vote to account for ethnicity, your poll will be off.
There are a hundred of those kinds of nuance, and every campaign has a specific “theory of the race” that includes them, and which they incorporate into their models. e.g. “we have a really great ground game with black churches. Build us a model that assumes black churchgoers will turn out at 1.5x the usual rate”
And if you’re starting to think “wait, that all makes it sound like polling is more art than science” … yes.
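To make that concrete, here’s a rough sketch of how a campaign’s “theory of the race” could be baked into a turnout model. Every group, share, and multiplier below is made up for illustration; real internal models are far more granular.

```python
# Toy illustration: reweighting a poll by a campaign's turnout assumptions.
# All groups, shares, and support numbers below are invented for the example.

# Baseline assumptions: share of the electorate and candidate support per group.
baseline = {
    # group: (share_of_electorate, support_for_our_candidate)
    "black_churchgoers":  (0.08, 0.90),
    "other_black_voters": (0.04, 0.85),
    "white_college":      (0.30, 0.55),
    "white_non_college":  (0.38, 0.35),
    "everyone_else":      (0.20, 0.60),
}

# Campaign's "theory of the race": black churchgoers turn out at 1.5x the usual rate.
turnout_multiplier = {"black_churchgoers": 1.5}

def projected_support(groups, multipliers=None):
    """Topline support after scaling each group's share by its turnout multiplier."""
    multipliers = multipliers or {}
    weights = {g: share * multipliers.get(g, 1.0) for g, (share, _) in groups.items()}
    total = sum(weights.values())  # renormalize so the shares sum to 1 again
    return sum(weights[g] / total * support for g, (_, support) in groups.items())

print(f"Public-poll-style estimate:  {projected_support(baseline):.1%}")
print(f"Internal estimate w/ model:  {projected_support(baseline, turnout_multiplier):.1%}")
```

Same raw numbers, different turnout assumptions, different topline. That’s where internal and public polls diverge.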
It’s not totally fair, but 538 is a convenient target of my wrath because they represent all of the pollsters. One thing we can hit Silver on, though, is his knock on Trafalgar. It sure seems like Trafalgar needs to have more weight in Silver’s model.
At least for now I’m done with 538. They’ll need to show they’re getting better instead of worse.
I don’t have a link ATM, but he’s said before that he values some methodologies over others.
Compare the accuracy figures of Public Policy Polling or YouGov (grade B) to his A+ pollsters.
Sure, but his opinion on what makes a good methodology appears to be wrong. I’d be interested to hear who’s been more accurate in 2016 and 2020: Trafalgar or 538.
Maybe we should wait for the actual vote totals to decide that the polls were terrible? Right now, it appears that Biden is more likely than not to win, so 538’s 90% prediction would actually be correct. Yes, the state-level predictions were off a lot in Florida and some other places, but we won’t know if the national polls were actually way off until the results are all in. State predictions are extrapolated from pretty thin data.
The fact is that it’s really hard to get good data. The vast majority of people don’t answer their phone when you try to poll them.
Do you have a cite for this? My impression is that Nate Silver is measurably more accurate than the (largely bullshit) pundits who interpreted polls prior to his ascension. But we don’t have to trade assumptions. He has a publicly verifiable track record and it covers hundreds of elections and outcomes. That doesn’t make his predictions certainties, but it makes them much better than most previous alternatives. These days, there are many other analysis groups that do something similar and plausibly better, but does anyone else have a publicly verifiable history of accuracy like Silver and his group do?
Yes, it is true that things could change, but my understanding was that if you adjusted 538’s polling average by the error rate from 2016, Biden still won more than 270 ECVs. That seems to indicate that 538 is off by even more than they were in 2016, even if Biden wins. That’s a bad trend.
How do polls compensate for outright deception?
Maybe historically it was a wash. Now one side is less honest than the other. Who knew?
I don’t think they do. There was a lot of discussion that I read looking at the evidence for and against the ‘shy-Trumper’ hypothesis, and most of the evidence on the anti side was convincing. But I think it’s probably worth digging into a different hypothesis: the ‘actively-hostile-to-pollsters-Trumper’ who lies not out of shame, but out of a desire to throw a monkey-wrench in the process for some reason.
Anyone who was awake over the last few years.
Exactly. It’s like someone playing the lottery for the first time and winning the bajillion-to-one shot, so therefore the odds were completely and absolutely horseshit! They won, after all! You can’t know how accurate something is from such a small sample size.
“We” can do no such thing. Trafalgar is mediocre historically (75% of the races were called correctly) and has zero transparency. The average pollster hits 79% correctness, and even Rasmussen hit 78%.
Here is their polling methodology (and it’s so short I am including it in its entirety):
The Trafalgar Group delivers its polling questionnaires utilizing a mix of six different methods:
- Live callers
- Integrated voice response
- Text messages
- Emails
- Two other proprietary digital methods they don’t share publicly.
The company utilizes short questionnaires of nine questions or less based on their perceptions about attenuated attention spans and the need to “accommodate modern busy lifestyles.” According to Cahaly, the firm’s polls last one to two minutes and are designed to quickly get opinions from those who would not typically participate in political polls.
The firm has also pioneered methods to deal with what they describe as “Social Desirability Bias” in order to get at what a poll participant’s true feelings are in situations where they believe some individuals in a poll are not likely to reveal their actual preferences. In their view, this included the 2016 Presidential election and the 2018 Florida gubernatorial election.
No clue why they refer to themselves in the third person, nor did the poster named DMC learn anything about how they work by reading that.
This is how you lay out the methodology of transparent polling (I’m linking this as it’s far more substantial than the ad copy above):
One’s a pollster and one’s an ensemble of pollsters, but still, 538 is the answer.
If you think Nate Silver messed up, then you didn’t understand or process what he does.
He just forecasted a result assuming correct polls (blowout) and forecasted another result if the polls were wrong (not a blowout).
He didn’t perform the polls. He didn’t tell you they were infallible. He provided the possible forecasts based on available data. If you ignored the forecasts you didn’t like, that’s on you.
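As a toy illustration of that framing (made-up numbers, and not 538’s actual model), a topline like that is basically a weighted blend of the scenarios:

```python
# Toy illustration of a scenario-weighted forecast
# (invented numbers, not 538's actual methodology).

p_polls_roughly_right = 0.7    # assumed chance the polls are broadly accurate
p_win_if_polls_right  = 0.97   # blowout scenario
p_win_if_polls_off    = 0.60   # close-race scenario with a 2016-sized miss

p_win = (p_polls_roughly_right * p_win_if_polls_right
         + (1 - p_polls_roughly_right) * p_win_if_polls_off)

print(f"Overall win probability: {p_win:.0%}")  # ~86% under these assumptions
```

The headline number only looks like a sure thing if you ignore how much of it is riding on the “polls are roughly right” branch.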
Polls used to be more trustworthy, but the US electorate, particularly those on the right, tend to lie like Trump. He’s normalized sleazeball behaviour and there are lots following suit. Hard to tell if polling will ever get back to some semblance of accuracy or if lying is just so normalized now that it will be useless information.
Yes, but if that one chance in 20 happens twice in a row that is one chance in 400 and that is a bit much. What it most likely means is that the events were not independent, i.e. that there is some systematic problem in polling. One possibility, rather likely IMHO, is that too many people don’t answer their phones unless they know the caller. This can skew telephone polling in unpredictable ways. Internet polling is easy but the sample won’t be random.
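Here’s the back-of-the-envelope version of that point, with assumed numbers chosen so the single-cycle miss rate is still about 1 in 20:

```python
# Two straight "1 in 20" polling misses:
# if the misses were independent, (1/20) * (1/20) = 1/400.
# If a shared systematic problem (say, non-response skew) persists across
# elections, the misses are correlated and back-to-back misses get much likelier.
# All parameters below are invented for illustration.

p_systematic_problem = 0.05   # chance the industry has a persistent flaw
p_miss_if_problem    = 0.70   # chance of a big miss when that flaw is present
p_miss_if_fine       = 0.016  # chance of a big miss otherwise

marginal_miss = (p_systematic_problem * p_miss_if_problem
                 + (1 - p_systematic_problem) * p_miss_if_fine)        # ~0.05
two_misses    = (p_systematic_problem * p_miss_if_problem ** 2
                 + (1 - p_systematic_problem) * p_miss_if_fine ** 2)   # ~0.025

print(f"Single-cycle miss rate:  {marginal_miss:.3f}  (about 1 in 20)")
print(f"Two misses, independent: {1/400:.4f}  (1 in 400)")
print(f"Two misses, correlated:  {two_misses:.4f}  (roughly 1 in 40)")
```

Under those assumptions the repeat miss goes from a 1-in-400 fluke to a roughly 1-in-40 event, which is the whole argument for a systematic problem rather than bad luck twice.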
I understand Silver’s methodology. It’s based in part on his opinion on the worthiness of each pollster. Maybe he needs to give Trafalgar more weight. If he had, his predictions would have been more accurate.
If someone keeps rolling boxcars at some point you need to question your assumptions about the dice.
Let me know when you find that link and I’ll retract anything he says that contradicts what I’ve stated about his methods.
I did. 3 of his 6 A+ pollsters trail behind Public Policy Polling in “Races Called Correctly”, but all 3 of them beat PPP in the other categories: “Simple Average Error”, “Advanced +/-” and “Predictive +/-”.
Those are pretty important methods of measurement. For example, if I put out nothing but surveys on California results for national elections and always said it would be 99-1 democrats win, I’d have 100% of my races called correctly. I’d suck hard at the other categories, which is why they exist. Conversely, if my survey was always within a single percentage point, but I focused on very tight races, I might have a horrid “Races Called Correctly” percentage, but would have exceptional scores in the other categories. This is why he uses all of those factors to determine a grade.
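To illustrate why the extra categories exist, here’s a toy comparison of two hypothetical pollsters on two of those measures (invented data, not 538’s actual grading formula):

```python
# Toy illustration of why multiple accuracy metrics matter
# (invented data; not 538's actual grading formula).

# Each poll: (predicted Dem margin, actual Dem margin), in percentage points.
safe_state_pollster = [(40, 29), (35, 28), (38, 30)]   # only polls blowouts
tight_race_pollster = [(1, -1), (2, 3), (-1, 1)]       # only polls toss-ups

def races_called_correctly(polls):
    # Right "winner" whenever predicted and actual margins share a sign.
    return sum((pred > 0) == (actual > 0) for pred, actual in polls) / len(polls)

def simple_average_error(polls):
    # Mean absolute miss on the margin, regardless of who won.
    return sum(abs(pred - actual) for pred, actual in polls) / len(polls)

for name, polls in [("safe-state pollster", safe_state_pollster),
                    ("tight-race pollster", tight_race_pollster)]:
    print(f"{name}: {races_called_correctly(polls):.0%} races called correctly, "
          f"{simple_average_error(polls):.1f} pt average error")
```

The blowout-only pollster calls 100% of races correctly while missing the margin by 8 or 9 points; the toss-up pollster calls a third of them “wrong” while being within a point or two. Look at either metric alone and you grade the wrong one as good.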