Ideology Affects Numeracy Skills (or Honesty)

Hat tip to Bryan Caplan, blogger for the Library of Economics and Liberty, for this story.

In a nutshell, even people with good number and analytical skills seem either to lose those skills, or to shade the results, when faced with a conclusion that runs counter to their ideological positions.

Study respondents were asked to analyze a set of data and report the results – they were given a reasonably difficult problem that turned on their ability to draw valid causal inferences from empirical data.

All participants were given the same set of numbers. Two groups of subjects were told that the numbers represented the results of a skin-rash study, and that the study showed the treatment sometimes cured the skin rash and sometimes made it worse. They were asked to assess whether, overall, the treatment was curing more than it was worsening.

Of the skin rash group, half had the column headings for the numbers reversed – that is, half were given numbers that supported “rash better,” and half “rash worse.”

The other half of the study participants were given the exact same numbers, but this time they were told the numbers represented a city government that was trying to decide whether to pass a law banning private citizens from carrying concealed handguns in public. Government officials, subjects were told, were unsure whether the law would be more likely to decrease crime by reducing the number of people carrying weapons or increase crime by making it harder for law-abiding citizens to defend themselves from violent criminals. Subjects were shown two sets of data purportedly from two similar cities, one with and one without a ban. They were again divided into two groups, one given data that supported the idea that the ban reduced crime and one given data that did not – again not by changing the numbers, but merely by swapping the headings on each column of data.

Basically, there were four experimental conditions, reflecting opposite results for both the skin-rash version of the problem and the gun-ban version of the problem.
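The trap in the task, as described above, is that the condition with the larger raw count in the "improved" column can still have the lower rate of improvement, so a correct answer requires comparing ratios rather than raw counts. A minimal sketch of that comparison, using made-up numbers (the paper's actual figures differ):

```python
# Sketch of the covariance-detection task with invented numbers.
# The trap: the treatment row has the larger raw count of
# "improved" subjects, but the *rate* of improvement is actually
# higher in the control row.

def improvement_rate(improved, worsened):
    """Fraction of subjects in a row who improved."""
    return improved / (improved + worsened)

# Hypothetical 2x2 table:      improved  worsened
treatment = (200, 80)    # used the skin cream
control = (100, 20)      # did not use the cream

t_rate = improvement_rate(*treatment)  # 200/280, about 0.71
c_rate = improvement_rate(*control)    # 100/120, about 0.83

# Correct inference compares rates, not raw counts:
verdict = "treatment helped" if t_rate > c_rate else "treatment hurt"
print(verdict)
```

With these invented numbers a reader who simply compares 200 to 100 gets the wrong answer; the control group improved at the higher rate.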

As expected, the study’s authors report, subjects who had previously rated highly in “numeracy,” which the study authors describe as “a measure of the ability and disposition to make use of quantitative information,” did better in the task than subjects who rated poorly in this skill. No surprises there. In other words, when the subject was skin rashes, they expected that the more skilled people in numeracy would get the right answer more often than the lesser-skilled people, and that’s exactly what happened.

As to the gun ban answers:

In other words, they predicted that when the subject was changed to the gun ban, the political outlook of the study participant would color his analysis.

And they were right.

The paper.

What’s fascinating to me is that the low-skilled people are more likely to get the mathematically correct answer than the highly-skilled people when the issue runs against their beliefs.

I dunno about the ideological bias issue, but the problems with drawing conclusions from the presented data were obvious enough.

I’ve skimmed the paper and part of the intro caught my attention. The researchers seem to think that on topics where there is little to no cost to being (objectively) wrong, people are more likely to be biased in their interpretation of data. Since an individual voting for or against a gun ban makes almost no difference to the outcome, matching the views of one’s friends is more important.

I wonder if the instances of biased interpretation would decrease if the researchers could impose a cost on (objectively) incorrect interpretations. Maybe decreased cash payouts for wrong answers? Do it right and you could even put an estimate on the dollar value people place on agreeing with their peer group.

The question itself seems like comparing apples to oranges. It’s fairly simple to determine if a medicinal treatment is effective, because you only have one or two variables, maybe three. When you’re making inferences from multi-factorial data – like crime – the comparison between medicinal creams and concealed carry breaks down to uselessness.

But they were asked to report on what those numbers said, not about any additional factors that might come into play.

Seems like a finding in keeping with most research in this area. It reminds me of the study where conservatives thought a CFL bulb was a good bargain… right up until it had ‘eco-friendly’ printed on the packaging. The only thing different here is that they’re hitting the opposite end of the ideological spectrum, thereby showing that you don’t have to belong to the anti-science brigade to be bad with numbers when your ideology gets involved.

What’s the debate here?

Skin rash wouldn’t be a control if it killed a participant’s relative! Especially if skin rash used a gun! :slight_smile:

Results are not surprising. I feel I’ve heard of a similar study before (political challenges = lower performance in some way), but this isn’t my area so I can’t tell where I saw it or what the findings were.

I’m not sure I agree with conflating a single-axis political scale with attitudes on gun control. It should be easy to administer a broad political survey, possibly ignoring the non-gun-related questions in the analysis but including questions from other categories so that you don’t bias the subjects or reveal the experiment’s intent. And one of the author’s towns is full of gun-control Republicans, so that should be controlled for, even if it is an online survey.

What is “enlightened self-government”? Especially the “self” part. I skimmed, and aside from the title, variations on the term only seem to be mentioned twice.

I can possibly think of a rationale for that, but before that: it doesn’t say that in the parts you’ve quoted.

In fact it seems to say the exact opposite: that increasing numeracy has an (apparently positive) impact even when the answer goes against subjects’ beliefs, just not to the same extent as when the answer conforms to their beliefs.

Page 23, first full paragraph:

Please note that Figure 7 is not “real” data. They essentially ran a computer simulation a bunch of times (e.g. 10,000x) using the data they collected, to see what the chances of getting the same result would be. A common technique to be sure, but not interpreted the same way.
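For readers unfamiliar with the technique mentioned above, here is a minimal Monte Carlo sketch of that kind of repeated simulation. Everything in it is invented for illustration (the per-condition probabilities, the group sizes, the number of runs); it is not the paper's actual model:

```python
import random

random.seed(1)

# Invented probabilities of a correct answer in two conditions:
# one where the right answer fits the subject's beliefs, and one
# where it does not.
P_CONGENIAL = 0.75
P_UNCONGENIAL = 0.55

def simulate_gap(n_subjects=100, n_runs=10_000):
    """Monte Carlo estimate of the accuracy gap between conditions.

    Each run draws n_subjects Bernoulli outcomes per condition and
    records the difference in accuracy; repeating many times gives
    the distribution of the gap, not just a point estimate.
    """
    gaps = []
    for _ in range(n_runs):
        congenial = sum(random.random() < P_CONGENIAL
                        for _ in range(n_subjects))
        uncongenial = sum(random.random() < P_UNCONGENIAL
                          for _ in range(n_subjects))
        gaps.append((congenial - uncongenial) / n_subjects)
    return gaps

gaps = simulate_gap()
mean_gap = sum(gaps) / len(gaps)  # should land near 0.75 - 0.55 = 0.20
```

The point of running it thousands of times is exactly what the post says: the output is the chance of seeing a result of a given size under the assumed model, which is read differently from raw experimental data.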

That’s obvious, but one subject is nuanced while the other is straightforward. The first is simply the effectiveness of the rash cream; the second is the effectiveness of gun control on crime. I’m not familiar with this type of research, but the experimental setup seems to lack a control; additional questions should’ve been asked of the study participants.


We don’t really need to be that smart, Bricker. The facts are pretty clear for the most part. What requires intelligence is twisting those facts to accommodate an absurd position. That demands a professional.

Is it your position that the facts support one proposition or the other with respect to concealed carry laws affecting the crime rate?

I think it is an area of inquiry that is easily affected by subtle choices in wording and sampling.

For myself, I very much doubt that the “criminal mind set”, if you will, is much swayed by risk analysis. Liquor stores have been armed for years and desperate morons still try to rob liquor stores. Junkies can only think about one thing at a time, and it’s always the same thing.

Second, I think that the fear of violent crime at the hands of strangers is wildly exaggerated. This wild exaggeration suits the aims of some very well-funded people. As well, our corrupted culture simply seethes with it; about the only medium available where you don’t see a gun brandished every five minutes is porno.

Could be that the more “numerate” people know more critical questions to ask about data sets, don’t ask these questions about unimportant topics, and do ask these questions about important topics.

Is it perhaps the case that highly-skilled people are more likely to balk at being asked to overlay a “hard science” template (biology/epidemiology/mathematics) onto a “soft science” situation (political science/sociology/public policy)?

Or did they control for that?

That’s a good question – but the highly skilled people ran the gamut fr

Well. That was ominous.

At least they let him hit “submit” before they dragged him away.

Did either experiment include an option for “The data given are not sufficient to support any meaningful statement”? Because that would almost certainly be correct for the gun version, and may also be correct (depending on the numbers) for the skin cream version.

Wow. Very weird.

Did anyone see the second half of a post somewhere?

What I said was something like, “The highly skilled people ran the gamut from strongly liberal to strongly conservative, so if they did balk, it was in different directions according to their favored outcome.”

Of course, you need to consider the possibility that the researchers subconsciously fudged the results of their experiment to get the outcome they had predicted.

:smiley: