What do you call it when survey respondents skew their responses due to (dis)liking the surveyor?

Let’s say that Jon visits a museum/national park/zoo/attraction and is led through the experience by a guide, Sarah. Sarah is very friendly and funny and personable, but doesn’t know her subject matter very well and actually shares a lot of incorrect information with the visitors.

At the end of the tour, the administration asks Jon and the other visitors to fill out a survey asking how much they learned and how their values changed (“After the tour, are you more interested in art history/nature conservation/cave preservation?”). They all answer quite positively, even though subsequent questions would reveal that they learned almost nothing from their tour. They just answered that way because they really liked Sarah and wanted to please her.

In other words, the likability of the surveying party influenced the accuracy of the actual survey responses.

Is there a name for this type of bias? And bonus question: How might one design a survey to counteract it?

I call it a good time to look up SurveyMonkey

I don’t understand how that helps. SurveyMonkey doesn’t design good surveys for you, it just gives you a software platform to host your questions and collect responses.

(This isn’t a real survey anyway; I was just curious about the effect.)

I get what you are asking. The opposite would be a surly and rather pedantic guide who really knew his stuff but failed to engage the interest of the group.

If the survey is giving false data then the survey is badly compiled. I see many such.

I can’t think of a good term, but the general phenomenon is discussed among social-science geeks as part of design issues.

A perfect (therefore impossible) survey design would be one in which the survey itself did not have any effect upon the respondents, and therefore where the respondents’ reaction to the survey itself had no bearing on who did and who did not complete the survey or how they answered the questions.

A bad research design is usually described as one in which the tiny handful of imbeciles willing to keep answering your overly long and badly worded array of questions is not representative of the population you were trying to measure.

But the things you’re describing — where respondents DO finish the survey but skew their answers toward what they think the surveyor “hopes” for because they like the surveyor (or the survey itself), or where they do the opposite, hating the surveyor (or the survey itself) and therefore taking delight in providing answers they think the surveyor will be upset by — are research design flaws also.

There are a lot of real-world versions of it, but they usually have more to do with how the question is worded than with the behavior of the surveyor. “Do you think the government should confiscate the firearms of law-abiding American citizens in order to implement their policy of imposing gun control?” may generate a very different result than “In your opinion should individual residents of our communities be prohibited from possessing deadly ballistic weapons of the sort that can be used in violent crimes?” But I recall sidebar paragraphs in my textbooks about surveys and respondent behavior varying based on how the surveyors dressed and various aspects of their behavior, which is more what you’re asking about.

Right, exactly. Let’s call the surly guide Bob.

Let’s say the mission of the attraction was to educate visitors on a Very Important Topic. Then maybe Bob would be the better choice even if his survey responses indicated popular dissatisfaction with him? (i.e. people didn’t like Bob very much and rated their experience poorly across the board, but objective questions of knowledge indicated they actually learned far more from Bob than from Sarah).

Conversely, if simply entertaining visitors was their mission, Sarah might be the better choice.

Yeah, exactly.

If you had enough respondents and enough different surveyors (Bob, Sarah, Jane, Ron, Maya), I guess you could statistically filter those differences out… but aside from that, is there anything you can do during question design or sample selection to help with this phenomenon?
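One crude way to do that filtering, sketched in Python with invented guides and scores: center each respondent’s answer on their own guide’s average, so every guide sits on a common baseline. (This also wipes out any real between-guide differences along with the likability bias, so it’s only a first pass; a proper analysis would model the guide effect instead of just subtracting it.)

```python
# Crude sketch of "filtering out" surveyor differences.
# Guide names and scores below are invented for illustration.
from statistics import mean

scores = {
    "Sarah": [9, 8, 9, 10],  # well liked
    "Bob":   [4, 5, 3, 4],   # not so much
    "Jane":  [7, 6, 7, 8],
}

# Subtract each guide's own average from their respondents' answers,
# leaving only the within-guide variation.
centered = {}
for guide, xs in scores.items():
    m = mean(xs)
    centered[guide] = [x - m for x in xs]
```

After centering, each guide’s answers average to zero, so you can compare respondents across guides without Sarah’s popularity dominating the numbers.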

Maybe something like “How much did you personally like your tour guide?” followed by “Regardless of how you felt about your guide, how much would you say you learned from your tour?”. At least that would get you a correlation between likability and other answers?

I’m guessing high school and college teachers could give you an earful on the subject.

I actually studied this before I drank my way out of college. So although I know something on the subject, I am not an expert. Large, varied sample sizes, a variety of surveyors, and careful wording all help.

Another thing that I once used in college when we were doing a training survey was dummy questions. We had a question that no reasonable person was liable to answer “no” to. And yet some people did. In fact everyone who answered “no” to that question answered “no” to every question.
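That dummy-question trick is easy to apply mechanically once the responses are collected; a hypothetical sketch (the field names are invented):

```python
# Drop respondents who fail an attention-check ("dummy") question,
# on the theory that their other answers are likely noise too.
surveys = [
    {"learned_a_lot": "yes", "attention_check": "yes"},
    {"learned_a_lot": "no",  "attention_check": "no"},   # failed the check
    {"learned_a_lot": "yes", "attention_check": "yes"},
]

valid = [s for s in surveys if s["attention_check"] == "yes"]
```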

In a well designed survey you can learn things even if the numbers are slightly skewed.

Since no one seems to have directly answered the question yet:

It’s a subset of the “demand effect” (https://en.wikipedia.org/wiki/Demand_characteristics).

The demand effect is a bit more general, covering all the categories where the subject of an experiment (in this case survey) tries to guess what the experimenter wants, and acts accordingly. It doesn’t have to be to please the experimenter, it can also be to look good, to stuff up the experiment, or to avoid looking stupid.

I don’t know how the bias could be revealed. The museum may find that Sarah gets consistently better reviews than Bob, but unless the surveys include some kind of knowledge quiz there’s no way to determine that one is better at imparting the knowledge than the other.

One would hope that periodically the supervisors take the tour to see what is actually going on. But yeah, some questions regarding actual knowledge (“Is X true?” “Could you have answered that question before you took the tour?”) should be on the survey.
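If the survey carries both satisfaction ratings and a small knowledge quiz, the per-guide comparison is straightforward. A hypothetical sketch with invented numbers, where the two measures point in opposite directions:

```python
# Invented data: satisfaction ratings and quiz scores (both out of 10),
# grouped by guide.
from statistics import mean

data = {
    "Sarah": ([9, 9, 8], [3, 4, 3]),  # loved, but visitors learned little
    "Bob":   ([4, 5, 4], [8, 9, 8]),  # disliked, but visitors learned a lot
}

# Per-guide averages: (mean satisfaction, mean quiz score).
summary = {g: (round(mean(sat), 1), round(mean(quiz), 1))
           for g, (sat, quiz) in data.items()}
```

Seeing Sarah win on satisfaction while Bob wins on the quiz is exactly the pattern the satisfaction-only survey would have hidden.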

A couple of ways to deal with the demand effect: offer some very small token in return, not rewarding enough to stimulate the desire to please and skew the results, but enough that the survey doesn’t feel like an unrewarded burden, like a 5% off coupon for the gift shop or a free soda from the cafeteria. Also, make extra sure the participants know the survey is voluntary.

When the proctors ask people to take it, word the request in such a way that it includes the word “learning” or “education”: “The survey is to gauge the educational value of the guided tours.”

Make sure the survey area has no visual access to the guides. Make sure the guides are never referred to by name. Have the proctor discourage people from having discussions and comparing their experiences while they take the survey.

OK, so that wasn’t what was asked. I had to design a survey once for the not-for-profit I worked for in the 90s, and it’s really tough to be unbiased. They picked two people in my position: me, with my English degree, and the other person in the same slot, who had a psychology degree. It was one of those jobs where you had to have a BA, but they didn’t much care what it was in. Whenever there was a writing-for-the-public task, I got it.

There’s nothing wrong with the results from the survey in the OP. A survey reflects how people respond to the survey. Reading any more meaning into a simplistic survey like that is a mistake made by the museum.

But they want some meaning in it, or why bother? Does it mean anything that people like Sarah? It may mean that people who come to the museum are pretty ignorant. Or it may mean that a pretty girl with a good personality can say that cave men hunted triceratops, and no one cares. The museum probably wants to know.

They bother because there is a widespread culture of collecting useless information, particularly among non-profits. I know this, having worked in non-profits almost my entire life.

What Boyo said. If they want to know if “it may mean that a pretty girl with a good personality can say that cave men hunted triceratops, and no one cares”, then they’re at least starting down the path to find that out.

Surveys tell you how people answer surveys. To make use of a survey you need to know and understand that.

It is a type of response bias; it sounds most like social desirability bias.