In any survey on sensitive or controversial topics, there are bound to be more people who support something unpopular in private than are willing to admit to it in public, or to a pollster. The “shy Trump voter” effect no doubt skewed many polls last year in the run-up to the November election, although Hillary did still win the popular vote by 2%.
Seems like there are two potential ways for pollsters to address this skewing issue:
Make respondents feel more “comfortable” about admitting their support for unpopular or controversial things (this can be difficult or impossible);
or
Artificially add a certain amount of weight (e.g., 3%) to the “distasteful” side/candidate in order to compensate for the Bradley effect. So if 40% of respondents are willing to admit to a pollster that they plan to vote for Trump, then the pollster needs to boost that to 43% to get the real Trump vote figure.
What other solutions are there?
While this may have been a factor in some elections, I don’t believe it’s been consistently shown to be involved – national polling for 2016, IIRC, was about as good as it was for past elections, for example. Do you have any cites that this needs to be corrected for on a consistent basis?
The statistical method for handling this would be to have the respondent flip a coin (and not show the outcome to the interviewer). Then if the coin came up heads the respondent would give the socially undesirable answer regardless of their true opinion. If it came up tails they’d give their true response.
This way you could never distinguish whether an individual respondent truly held the socially undesirable opinion, but since coin flips are 50/50 you could remove the “heads” results on a statistical basis.
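To make the adjustment concrete, here’s a rough sketch in Python (assuming a fair coin and respondents who actually follow the rule): half the sample is forced to give the undesirable answer, so you back the true share out of the observed share.

```python
# Forced-response design sketch: with probability 0.5 (heads) the respondent gives
# the "undesirable" answer no matter what; otherwise they answer truthfully.
# observed = 0.5 * 1 + 0.5 * true_share, so true_share = (observed - 0.5) / 0.5
def estimated_true_share(observed_share, p_forced=0.5):
    return (observed_share - p_forced) / (1 - p_forced)

# Example: if 70% of respondents gave the "undesirable" answer,
# the estimated true share is (0.70 - 0.50) / 0.50 = 0.40
print(estimated_true_share(0.70))  # 0.4
```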
All that said, the problem with voting polls is understanding who will vote, rather than which way they’ll vote, and coin flips probably won’t help you there.
When I was in college, I was told that pollsters used to handle questions like “Do you cheat on your taxes?” by doing something like having the polled person roll a six-sided die and tell the truth on a 1-4 but lie on a 5-6. You couldn’t just toss a coin, because with a 50% chance of lying the expected fraction giving a particular answer would be 1/2 no matter what the true proportion was; with an event of some other known probability, you can recover the true figure.
Whether you can use a 50% probability event depends on your specific instructions to the respondent.
There are (at least) two ways of doing this. One is to say “if the coin comes up heads then say you plan to vote for Trump, regardless of what you actually plan to do.” In that case, so long as you know the probability of the uncertain event (the coin flip), you can adjust for it, and a 50/50 event is fine.
I think you’re describing a slightly different scenario. In your scenario the instructions would be “if the die lands 1-4 tell the interviewer the truth, and if it comes up 5 or 6 tell them the opposite of what you actually plan to do”. In that case I agree you’d need a non-50% probability event.
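For what it’s worth, the back-of-the-envelope math for the “say the opposite” version looks like this (a sketch, assuming respondents follow the die rule; q is the chance of being told to lie):

```python
# "Answer the opposite with probability q" design (the d6 rule above: lie on 5-6, so q = 1/3).
# observed = (1 - q) * true_share + q * (1 - true_share) = true_share * (1 - 2q) + q
# => true_share = (observed - q) / (1 - 2q), which blows up at q = 0.5; that is
# exactly why a plain coin flip gives you no information in this design.
def estimated_true_share_opposite(observed_share, q_lie=1/3):
    return (observed_share - q_lie) / (1 - 2 * q_lie)

# Example: with the d6 rule, an observed 45% "yes" implies a true share of about 35%.
print(estimated_true_share_opposite(0.45))  # ~0.35
```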
At least in US politics there are a hefty fraction of the electorate / polling targets who firmly believe that supporting the other side is abhorrent while their side is just fine.
So first off you’d need to know which side the respondent thinks is abhorrent. Then you can make your adjustments. But you have to ask them a question about their views to know which is the abhorrent choice. Catch-22.
With That Don Guy’s method (they ask you who you support, you roll a die, and if it comes up 1-4 you tell them the truth, otherwise you tell them the opposite) you don’t need to know in advance which choice they find abhorrent.
The problem is the complexity of the procedure, especially for Trump supporters (an unwarranted dig, I know).
I don’t believe the Bradley effect is real anymore.
When Obama was running for President in 2008, a lot of pollsters thought his numbers might be artificially inflated by a Bradley effect. Instead he outperformed the polls, and did so again in 2012. I think the real problem is that we can’t tell who is actually going to turn out and vote, plus pollsters are always “behind” and can’t catch last-minute deciders.
I don’t think “shy Trump voters” surprised Clinton. Her edge in the popular vote was pretty close to the polls. She lost the electoral college because she underperformed in a few key states. Shy Trump voters in only a few states? Unlikely, IMO.
A redneck in a pickup truck with a Confederate flag and a rifle in the backwoods of Tennessee will probably have zero qualms about admitting his support for Trump.
A female programmer working for Google in the San Francisco Bay Area will probably be far more reluctant to admit to the same.
I think that a lot of Trump voters lied about being Trump voters not out of any “shyness”, but to “F” with the pollsters. (Then, as a bonus, they can complain about the polls being wrong.)
There are statistical methodologies that account for social desirability bias (e.g. the “List Experiment” method). They’ve been used to, e.g., assess the “real” level of politically incorrect racial views in the American south, and to assess the “real” level of support for Putin in Russia.
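For anyone curious how the list experiment works, the basic estimator is just a difference in means between two randomly assigned groups. A minimal sketch (numbers and names here are purely illustrative):

```python
from statistics import mean

# List experiment (item count) sketch: a control group reports how many of J innocuous
# items apply to them; a treatment group gets the same list plus the sensitive item.
# Because nobody reports which items apply, only the count, individual answers stay
# deniable, but the difference in mean counts estimates the sensitive item's prevalence.
def list_experiment_estimate(treatment_counts, control_counts):
    return mean(treatment_counts) - mean(control_counts)

# Illustrative data only: treatment mean 2.25 vs control mean 2.00 gives ~0.25
print(list_experiment_estimate([2, 3, 2, 2], [2, 2, 2, 2]))  # 0.25
```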
Nate Silver thinks the “shy Trump voter” / social desirability bias effect doesn’t exist, for what it’s worth. Trump underperformed his expected vote share in areas with a lot of Clinton voters (where you’d expect social desirability bias to be strong), and he overperformed in areas where most people already said they favoured Trump.
Good point. As with Hillary’s pop vote margin, they each mostly overperformed where they were already strong.
And if anything, the mistake may have been more along the lines of believing that some of those who DID say they were for Trump would back off in the end and not go through with it. Or maybe what we had in the three “flip” states was not Shy Trump Voters but Poseur Hillary Voters, who said they were going to vote for her but in the end did not really vote at all.
The problem I have with the shy voter theory is that it seems unpredictable and unreliable. Some times they show up, some times they don’t.
What do you base this on? I know a lot of people have floated it as a theory for why the polls showed a result they didn’t like, but I haven’t seen anything backed by substantive research that says that it happened enough to skew polls. The “Shy Tory Voter” effect from 1992 was confirmed after significant studies, though it only accounted for something like 2% of the 8% error in those polls. People attempted to blame the same effect for a more recent election, but the formal inquiry didn’t bear out the contention.
I don’t think it’s reasonable to say that it ‘no doubt’ skewed many polls, I think that’s an unproven assertion that is appealing to certain people but not grounded in fact. There are numerous practical and mathematical methods to deal with such an effect if it is shown to be there, but I don’t see that there’s a proven effect for pollsters to account for.