Question About Survey Taking/Analysis

I hope there are one or two survey experts on the board…

Has anyone ever tried to quantify the effect of survey respondents who answer a survey in a particular way simply because that is the way they are “supposed” to answer? In other words, even though the survey is completely anonymous, is there a percentage of respondents who will answer morally/ethically difficult questions in a particular way because answering differently would mark them as racist/misogynist/some other “ist”?

The details of the survey I am looking at aren’t particularly important, but I can fill them in if necessary. In this survey, 30% of the respondents already answer the way I want them to (i.e., in support of my argument), but I wonder whether that percentage might actually be higher if people weren’t self-censoring.

Yes, that is taken into account. That I can tell you for sure. For the life of me, I can’t think of the specific name for that tendency. Ugh, it’s going to drive me crazy.

ETA: Wikipedia just lists what you describe in your post as a “disadvantage” of survey-based research. I would say it’s similar in spirit to people who don’t take the survey seriously, or who just answer completely at random. For the latter, there are validation questions that try to ensure people are at least reading the questions, e.g., a question that says “Answer D to this question.” There is also the practice of paying people to complete surveys, in the hope that respondents who receive a reward will take it seriously.
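The screening step for that kind of check is mechanical. Here’s a minimal sketch in Python, assuming responses are stored as dicts of question id to answer; the ids and answers (“q7”, “D”, etc.) are made up for illustration:

```python
# Sketch: drop respondents who fail an instructed attention-check item.
# Assumes each response is a dict of question id -> answer, and that
# "q7" is the instructed item ("Answer D to this question").

responses = [
    {"q1": "A", "q7": "D", "q12": "B"},   # passed the check
    {"q1": "C", "q7": "A", "q12": "B"},   # failed the check
]

ATTENTION_CHECKS = {"q7": "D"}  # question id -> required answer

def passes_checks(resp):
    """True only if every attention-check item has the instructed answer."""
    return all(resp.get(q) == ans for q, ans in ATTENTION_CHECKS.items())

valid = [r for r in responses if passes_checks(r)]
print(f"kept {len(valid)} of {len(responses)} responses")  # kept 1 of 2
```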

Cool. I’m glad I’m not completely off base. I’m pretty sure the tendency is not factored into these results, or at least the study makes no mention of it. I have a sneaking suspicion that the people who ran the survey advocate the position opposite to mine, so they were probably happy with their 70% agreement and weren’t looking for factors that might have lowered that number.

Here are two examples of why people might answer surveys the way they think they should:

The main one is social desirability bias:

Social-desirability bias - Wikipedia

The Bradley effect is a specific example involving black candidates running for office:

Bradley effect - Wikipedia

Social Desirability Bias it is! Thank you! I am certain that there was no social desirability scale applied to this study. Since I’m sure you are curious, the study I’m looking at examines the attitudes of maternity patients and labor and delivery nurses toward nurses who happen to be male. My thought is that, especially with the nurses, there is social pressure for a nurse to report that she would have no problem working with a male labor and delivery nurse, or that she sees no problem with a male nurse working in L&D.
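For what it’s worth, “applying a social desirability scale” usually means scoring a block of items like the Marlowe-Crowne scale and treating high scorers with suspicion. Here’s a minimal Python sketch; the items and keying are invented for illustration (the real Marlowe-Crowne scale has 33 published true/false items):

```python
# Sketch: score a short social-desirability (SD) block. Items, keyed
# directions, and any cutoff are illustrative, not the published scale.

SD_ITEMS = {          # item id -> the socially desirable answer
    "sd1": True,      # e.g. "I never resent being asked to return a favor."
    "sd2": False,     # e.g. "I sometimes resent not getting my way."
    "sd3": True,
}

def sd_score(resp):
    """Count items answered in the socially desirable direction."""
    return sum(1 for item, keyed in SD_ITEMS.items() if resp.get(item) == keyed)

resp = {"sd1": True, "sd2": False, "sd3": True}
print(sd_score(resp))  # 3 of 3: near-max scorers may be "faking good";
                       # their answers on sensitive items can be flagged
```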

I know of a survey where they asked high school kids what math classes they had taken. In some cases the kids would claim higher-level math classes without having taken the lower-level prerequisites. One way around that was to have the survey software ask about the classes in random order.
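If you’re curious what that looks like in code, here’s a minimal Python sketch of per-respondent randomization; the class list and seeding scheme are made up for illustration:

```python
# Sketch: shuffle the class-history questions per respondent, so nobody
# can just tick their way down a tidy Algebra -> Calculus ladder.
import random

MATH_CLASSES = ["Pre-Algebra", "Algebra I", "Geometry",
                "Algebra II", "Pre-Calculus", "Calculus"]

def question_order(respondent_id):
    """Return a shuffled copy of the question list, stable per respondent."""
    rng = random.Random(respondent_id)   # seeding makes the order reproducible
    order = MATH_CLASSES[:]              # copy; the master list stays intact
    rng.shuffle(order)
    return order

print(question_order(42))  # e.g. ['Calculus', 'Geometry', 'Algebra I', ...]
```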

You could also ask whether the participants think that:
1.) Other women in the same role (maternity patient or labor and delivery nurse, whichever the participant is) would have a problem with males in that role;
or, especially in the case of nurses:
2.) The female labor and delivery nurse or maternity patient whom the participant knows best would have a problem with males in that role.

Let the respondent shift the blame, then. Interesting. I’m not going to replicate the study or anything; I just want to understand it better. There are other interesting things I need to look at. For example, there have been court cases finding that femaleness is a “bona fide occupational qualification” for being an obstetrical nurse (in other words, it is acceptable to discriminate against male nurses on the basis of their sex in order to protect the privacy of female patients).

I used to administer surveys, and one way to do this is to use control questions. We’d have about 50 questions: roughly half were filler, 20 were control questions, and the remaining five were the questions we actually cared about.

The most famous example of this: when asked whether they are in favour of welfare, most people will respond “no.” But rephrase it as “Do you think the government should help people who are unable to help themselves?” and the answer usually changes dramatically.

So when you build the survey, you put in control questions to weed out the lies. It’s not perfect, but it can give you a fairly good idea of who’s at least trying to bluff their way through it.
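The scoring side is easy to mechanize. Here’s a minimal Python sketch, assuming the control questions come in reverse-keyed pairs; the pairings and the mismatch threshold are made up for illustration:

```python
# Sketch: flag respondents whose answers to paired control questions don't
# hang together. Each pair asks the same thing two ways, with the second
# item reverse-keyed, so a consistent respondent answers them oppositely.

CONTROL_PAIRS = [("c1", "c9"), ("c3", "c14"), ("c5", "c18")]
MAX_MISMATCHES = 1   # tolerate one slip; more suggests bluffing or randomness

def mismatches(resp):
    """Count pairs where the reverse-keyed item was NOT answered opposite."""
    return sum(1 for a, b in CONTROL_PAIRS if resp.get(a) == resp.get(b))

def looks_honest(resp):
    return mismatches(resp) <= MAX_MISMATCHES

resp = {"c1": "yes", "c9": "no", "c3": "yes", "c14": "yes",
        "c5": "no", "c18": "yes"}
print(looks_honest(resp))  # True: only the c3/c14 pair fails the check
```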

In voting, there is the “Tom Bradley effect,” where white voters, afraid of being considered racist, tell a pollster that they plan to vote for the black candidate, but in the privacy of the voting booth actually vote for the white candidate.

It is named for the 1982 California gubernatorial election, in which Bradley (then Mayor of Los Angeles) had a strong lead in the polls, even in election-day exit polls, but lost the election. Post-election analysis showed that a smaller percentage of white voters had actually voted for him than had said they would.

Over the following 10-15 years there were several elections pitting a white candidate against a black candidate, with similar results: the black candidate got significantly fewer white votes than the polls indicated. Since about the mid-1990s, some evidence suggests the effect is fading, though that is still unclear; as racial tensions ease, it may become less of a factor. A similar effect has been seen in votes on same-sex marriage: some people who tell pollsters they will vote for marriage equality actually vote against it in the voting booth.

OP - are you familiar with “derived” analyses (rather than “stated”)?