I don’t, and here are my reasons for thinking this.
I’m from Canada, and both our last provincial election and the last federal election had results that defied what the polls said “should” happen. Traditionally polls have been fairly accurate, but I think that is a thing of the past.
With my examples I wondered: how could they get both election predictions so wrong? Then I remembered how they do the majority of their polls: they call people and talk to them. And who doesn’t answer phone calls from people they don’t know (or barely talks on the phone at all)? Pretty much anyone under the age of 35!
So the majority of the people they polled were Gen-X or older, which would totally skew the results. Probably only a small fraction of respondents were of millennial age.
My guess is in the upcoming US election Hillary will win by a landslide even if the polls in the last few days only have her winning by a few points. My gut feel is the millennial vote won’t want to elect Trump and will turn the tide.
This was an issue as well in 2012, but the polls still managed to do pretty well. Apparently, the measures they’re taking to counter this effect are working.
Your observation has not escaped the professional pollsters. I imagine that they have done what they can to compensate for the decline of landlines. They kind of learned their lesson back in the Dewey/Truman days. Still, I admit I’m a bit fuzzy on the exact details of how they do this.
I think Obama slightly out-performed the polls on the last election. Certainly, the polls were wrong with Sanders in Michigan and part of that involved a heavy turnout in the younger demographic. However that same demographic may not turn out heavily in this election as they seem to be disillusioned with both candidates. So…I dunno.
Over the last week of the 2013 British Columbia campaign, 8 polls were done. Each of them sampled fewer than 900 people, except in one case (1147 sample).
Six of these polls were online, while the other two were Interactive voice response (IVR). No polls were done by personal calls.
In addition, in BC, we vote for our individual MLAs; the party with the most MLAs forms the government. No polls were done in individual ridings; polls simply sampled across the entire province.
In summary: in the case of the BC 2013 election, the polling techniques were not good enough, and they got the end result wrong.
Some speculated that the very polls that showed the NDP winning handily actually caused the NDP voters to stay home because “Why bother, we’ve won already”.
I don’t think the polls accurately reflect the very unbalanced ground game of the current election, so they might be underestimating Hillary by 1 or 2 points in battleground states.
Slightly agreeing with the OP. Any individual poll has its flaws. A handful of individual polls have slightly fewer flaws overall.
Polling aggregators try to cast as wide a net as possible, and adjust for historical inaccuracies (e.g. Poll X is always off by 0.4% in that direction, etc.). Then they run heavy Bayesian analysis against the aggregated polls to come up with an overall likelihood. For election polls, the closer you get to the election date, the better the likelihood of the results. So, yes, I think properly aggregated election polls are accurate.
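The house-effect adjustment described above can be sketched in a few lines. This is a toy example, not any aggregator's actual model; the poll names, numbers, and bias figures are all invented for illustration.

```python
# Toy sketch of poll aggregation with house-effect correction.
# All poll names, margins, and house effects below are made up.
polls = [
    {"name": "Poll X", "clinton": 47.0, "trump": 44.0, "house_effect": -0.4},
    {"name": "Poll Y", "clinton": 49.0, "trump": 43.0, "house_effect": +0.8},
    {"name": "Poll Z", "clinton": 46.0, "trump": 45.0, "house_effect": 0.0},
]

def aggregate(polls):
    """Subtract each pollster's historical lean from its margin, then average."""
    adjusted = [p["clinton"] - p["trump"] - p["house_effect"] for p in polls]
    return sum(adjusted) / len(adjusted)

print(round(aggregate(polls), 2))  # → 3.2 (adjusted Clinton margin, in points)
```

Real aggregators add recency and sample-size weighting on top of this, but the core idea is the same: correct each pollster's known lean before averaging.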
Plus, the federal polls in the 2015 Canadian election weren’t wrong. Yes, at the beginning of the election, they showed the NDP in majority territory and the Grits still in third party status, with the Tories in the middle.
And on election day, the results were quite different: Grits with a majority, Tories second, and the NDP back to third-party status.
But the polls weren’t wrong. The polls taken during the election showed that the NDP steadily lost support, and the Grits gained. The final polls taken a few days before the election accurately predicted the outcome, as the wiki article on the 2015 election polls shows:
Clearly, the Canadian electorate was not heavily polarised and what happened during the election influenced the outcome: the NDP lost support, the Grits gained, and the Tories held steady. The election campaign was one of the longest in Canada’s history and people’s views shifted during it.
Pollsters will caution that a single poll, taken alone, is simply a snapshot of where the electorate is at on that day. A single poll is not predictive, nor is a handful of polls all taken at the same time, two months before the election.
What is far more significant is the trend in the polls. The trend in the Canadian polls in 2015 showed a volatile electorate.
That doesn’t mean that the polls were inaccurate, nor that polls generally are inaccurate. And in particular, the Canadian example of a volatile electorate in a parliamentary system is not an indicator that polls taken in the US are inaccurate.
The US presidential election is winner-take-all, and the polls have shown that the US electorate has, over the past two decades, become increasingly polarised and stable in its views.
The Canadian example has little relevance to assessing the validity of the polls and the trends in the US system.
One interesting bit about Brexit (another vote that many thought showed polls are inaccurate) was that the live phone polls got it a bit wrong, while the online polls came closer to the end result, with Leave winning.
Currently most online polls and live phone polls show Clinton ahead, with the online ones AFAIK showing Clinton just a bit higher than the live-poll ones, although one has to point to the latest live poll from Fairleigh Dickinson/SSRS (9/28-10/2) that shows Clinton 50%, Trump 40%.
The polls might have been wrong in Michigan, but that doesn’t mean that the polls were wrong. If the polls say that Candidate A has a 90% chance of winning, but then B wins, does that mean that the polls were wrong? Not necessarily: A 10% chance is not impossible. And now, poll ten different states, and if all of them had a 90% result, well, you’d expect about one of them to go the other way.
In some states, the candidate who the polls said had the lower chance ended up winning. Overall, this happened about as often as you would expect. In other words, the polls overall were right.
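The ten-states argument above is easy to check with a quick simulation. This is a toy sketch of the arithmetic, not any forecaster's actual model: if the favorite wins each state independently with 90% probability, you should expect about one upset out of ten.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def expected_upsets(n_states=10, p_favorite=0.9, trials=100_000):
    """Average number of states where the polling favorite loses."""
    total = 0
    for _ in range(trials):
        total += sum(random.random() > p_favorite for _ in range(n_states))
    return total / trials

print(round(expected_upsets(), 2))  # close to n_states * (1 - p) = 1.0
```

So one "wrong" state out of ten is not evidence the polls failed; it is exactly what well-calibrated 90% forecasts predict.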
After the 2012 election, Nate Silver actually said as much in one of the interviews: he got more correct hits than his model predicted, and that that was just as bad (from a statistician standpoint) as being wrong too often.
I think people routinely forget about the margins of error when they look at poll results and therefore much of the “wrongness” is actually created by end users who are misinterpreting what they see.
If Clinton is at 48% and Trump is at 44%, with +/-4 error margins, that means Clinton 52% and Trump 40% is one of the predicted results of the poll, as is Clinton 44% and Trump 48%. But most people would point to those examples (especially the second one) as evidence that the poll was wrong. That’s not a fair criticism, because the poll told you these results were within the range of its measurements.
Of course some polling gets it wrong at a more fundamental level, but I think they’re closer to being right than the public usually gives them credit for.
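The margin-of-error arithmetic above follows from the textbook formula for a simple random sample, ±z·√(p(1−p)/n). A quick sketch (idealized: real polls' effective margins are somewhat larger after weighting) shows why polls of roughly 900 people, like those in the BC example, come with margins around ±3 points:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error (in points) for a simple random sample
    of size n; p=0.5 maximizes p*(1-p), hence 'worst case'."""
    return z * math.sqrt(p * (1 - p) / n) * 100

print(round(margin_of_error(900), 1))  # → 3.3
```

A 4-point gap between candidates can therefore flip sign entirely within a single poll's stated uncertainty, which is the point being made above.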
The Democratic margin and predicted turnout among under-35s this year are smaller and lower, respectively, than in 2008 and 2012, two elections in which the polls were quite accurate. That means any polling error that undersamples this group is likely to matter even less this year.
That said, there’s good reason to think that voter response problems aren’t skewing the polls; they’re just making good polling more expensive. That’s because a good pollster doesn’t give up after calling a number and not reaching someone. They will call back 80 times. And if they still don’t reach you, they’ll make sure the next person they call in that slot is demographically similar to you. That’s expensive, but it somewhat ameliorates the response-rate problem.
The other big ameliorating factor is that different polls use different mechanisms to weight the sample. So if they don’t reach enough young people, they give more weight to the information they got from the young people they reached. And they determine how much weight to give it in many different ways that are unlikely to all be correlated. Some use census data. Some use voter registration data. Some use more obscure datasets. And they all weight different demographic variables. Some use race. Some use geographic location in the state. Etc. Etc. (Not to mention polling methods that don’t call phones at all.)
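The weighting idea described above can be sketched very simply. This is a toy post-stratification example on a single variable (age); all the numbers are invented, and real pollsters weight on several variables at once.

```python
# Hypothetical sketch: reweight an undersampled age group so the sample
# matches known population shares. All figures below are illustrative.
population_share = {"18-34": 0.30, "35+": 0.70}   # e.g. from census data
sample_counts    = {"18-34": 100,  "35+": 900}    # young voters undersampled

def weights(sample_counts, population_share):
    """Weight = population share / sample share, per group."""
    total = sum(sample_counts.values())
    return {g: population_share[g] / (sample_counts[g] / total)
            for g in sample_counts}

w = weights(sample_counts, population_share)
print(round(w["18-34"], 2))  # → 3.0  (each young respondent counts triple)
```

Because different pollsters weight on different variables from different datasets, their weighting errors are unlikely to all point the same way, which is the ameliorating factor described above.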
It also doesn’t help, for purposes of public comprehension, that there are two very different numbers that are both usually expressed as percentages: the share of the vote a particular candidate is expected to receive, and the probability that that candidate will win. They (usually) coincide at 50%, but not at any other number: a candidate who’s expected to get 55% of the vote, for instance, probably has around a 70% or 80% chance to win (depending on exactly how large the margin of error is).
But now, a lot of places are reporting those probabilities directly. If Nate Silver or his ilk says something like “Clinton has a 70% chance to win North Tacoma”, there’s a tendency for people to incorrectly read that as “Clinton will win North Tacoma 70-30”. Which would be a massively huge landslide, and makes her victory seem like a sure thing. If it happens that she doesn’t win, well, that’s not at all remarkable: There was a 1 in 3 chance of it happening. But it doesn’t mesh with the people who thought she was a sure thing based on the 70% number.
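The share-versus-probability distinction above can be illustrated with a normal approximation. This is a sketch under assumptions: the 7-point forecast standard error is an invented figure, not any forecaster's actual number, and real models are more elaborate than a single normal curve.

```python
from math import erf, sqrt

def win_probability(expected_share, sigma):
    """P(final vote share exceeds 50%), assuming the forecast error on the
    share is normally distributed with standard deviation sigma (illustrative)."""
    z = (expected_share - 50.0) / sigma
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF at z

# A candidate expected to get 55% of the vote, with a hypothetical
# 7-point forecast standard error:
print(round(win_probability(55, 7), 2))  # → 0.76
```

So a forecast of "a 76% chance to win" and a forecast of "55% of the vote" describe the same situation, which is exactly the confusion the 70–30 misreading creates.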
First, the polls are not in the business of predicting the final margin of the election. Their purpose is to gauge the range of opinion as of the day they are taken. For example, everybody in the business is well aware that third-party support will fall off, because more people are willing to say they will vote for a third-party candidate than will actually do so. That doesn’t mean today’s polls aren’t accurate; it does mean that advance polls will not predict the exact final numbers.
Second, there is still a significant percentage of people who have not made up their minds. You may wonder how this can be, but decades of research have shown that these people are real. Many will not finally decide until they walk into their polling place. Obviously, this percentage of voters cannot be predicted by advance polls.
Third, the younger the voter, the less likely they are to vote. Clinton cannot expect younger voters to bring her victory, nor pin her hopes on them. But it’s also absurd to think that pollsters aren’t aware that some groups are less likely to answer polls, or that they don’t make extra efforts to seek those groups out. Nonresponse is the biggest issue in polling today. Every pollster is working hard to get proper samples. How successful they are we can’t tell from the outside, but they’re way ahead of you on this.
It’s possible that Clinton’s margin will widen from today. In fact, I’d expect it will, for a variety of reasons. But that says next to nothing about the accuracy of polls today. Just the opposite. The polls are doing what they’re supposed to, not what you want them to.
Mostly, they’re fairly accurate, for better or for worse.
People who are trailing in polls, obviously, WANT to believe the pollsters are biased, or that they’re undercounting their hidden supporters, or missing how ENTHUSIASTIC their supporters are…
But most of the time, the polls are essentially correct.
Lou Harris was the only major pollster whose surveys were unforgivably biased, and filled with leading questions designed to get a liberal result.