I need to make it my life’s work to get people in the media to just ask a few simple questions of every social science research finding: is the causation arrow correctly applied? Do you have proof of that? Do you understand what post hoc and cum hoc fallacies are?
This seems to pop up especially often in educational research. The one I fulminated about recently was the correlation between suspending a teenager from school and their later getting in trouble with the law as an adult. The presentation on NPR did not even raise the question of whether the thing that came before (suspending the teenager from school) actually caused the thing that came later (criminal record as an adult), or whether maybe, just maybe, this is what you would expect to happen over the teenage and twentysomething years of someone whose personality and life circumstances are oriented towards criminal behaviour.
Then there was the especially silly one mentioned in a thread here recently: that when a household has more books in it, the children living in it do better in school. Never mind that bookish parents are likely to have, you know, books as well as being likely to have bookish children; there seem to be some actual serious people out there who imagine that delivering a big pile of books to every struggling student’s home will turn them around.
The latest one is presented on Slate’s parenting podcast: the “surprising” finding that parents who get more involved with their children’s homework, and set rules about it, have children who do worse rather than better in school. This was presented entirely as the parents’ fault: by interfering too much with the homework, they made their child’s grades suffer. ::headdesk::
Or gosh, could it be that certain kids, who are having more trouble in school generally, thereby spur their parents to get more involved in making sure their homework is done? Ya think?
What’s interesting, though, is that correlation (specifically, one event following another, with every other influence eliminated) is essentially the definition of causation.
If you perform two experiments, and they are identical in all respects save for one controlled key difference, and in one, the experiment succeeds while in the other it fails, that one key difference is defined as the cause of the success or failure.
(The complication, of course, in real life, is that it is remarkably difficult to say that we have eliminated all other factors. The rooster always crows just about the time the mailman arrives. Once, the rooster didn’t crow, and the mailman was late. Aha! Causation!)
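The rooster-and-mailman trap can be sketched in a few lines of Python. This is a toy simulation of my own devising (made-up probabilities, not from any study): a hidden common cause drives both events, so they end up strongly correlated even though neither causes the other.

```python
import random

random.seed(0)

# Hypothetical confounder: on "early morning" rounds, the rooster usually
# crows AND the mail usually arrives on time. Neither causes the other.
crow, mail_on_time = [], []
for _ in range(10_000):
    early_morning = random.random() < 0.5          # hidden common cause
    crow.append(early_morning and random.random() < 0.9)
    mail_on_time.append(early_morning and random.random() < 0.9)

both = sum(c and m for c, m in zip(crow, mail_on_time))
p_mail_given_crow = both / sum(crow)               # P(on-time mail | crow)
p_mail_overall = sum(mail_on_time) / len(mail_on_time)
print(p_mail_given_crow, p_mail_overall)
```

Conditioning on the crow roughly doubles the apparent chance of on-time mail here, yet removing the rooster would change nothing about the mail. That gap is pure confounding.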
This is why people take a core curriculum in college. Hopefully that curriculum includes some science courses that cover experimentation, operational definitions, and extraneous factors. After a few examples of bad science, some students may get it. Without examples, it’s a terribly difficult thing to get into someone’s head.
You know what’s interesting about those two words? The apostrophe is used for possessives like “John’s (car)”. If you are talking about something that doesn’t have gender, a car maybe, you can say “Its wheels”, but not “It’s wheels.” And what’s up with words like “won’t”? What is that supposed to be a contraction of? Will and not, of course, but then why isn’t it “willn’t”? Or maybe it was, but we changed it.
The thing is, I know there are lots of things the average person is not too clear on. But I expect more from the media. Most people don’t know where most of the world’s countries are, who their leaders are, and so forth. But when you read or hear a news story about a given country, most of the time the reporter will know that stuff or the editor or fact checker will straighten it out. Yet when it comes to reporting on these social science studies, it’s as if we were getting news reports that put Ukraine in South America or described Putin as president of France.
The academics who publish and promote these findings deserve to have their knuckles rapped, too. Where’s the peer review? Why is the scientific method failing in this area?
And in the case of this “don’t help your kids with their homework” deal, it has the potential to do a lot of harm.
(BTW, I agree with you about the illogic of “its”, but that is a serious detour so that’s all I will say about that.)
A possible link doesn’t necessarily mean a cause, but I think our human brains are wired to see it that way. It’s why we’re superstitious and believe in luck and gods. It would be nice if the news reported science stories better and tried to actually educate the viewers, but that doesn’t get ratings. If it’s not attention-grabbing, it’s not news. And that sucks. There are some blogs that get it right and take the time to break down newly published studies, but they’re few and they’re not nearly as big as the giant news outlets.
My pet peeve is when the article just says “Researchers at Harvard” or something, and doesn’t give us any way to look up the actual paper they’re talking about. I often want to read it to make my own assessment (what if it was a tiny sample size, etc), but it’s sometimes very hard to track down. That’s why I follow a lot of scientists on Twitter.
So what do we do about “experts” who are, with the aid of credulous media outlets, pressuring school administrators to change rules such that students no longer get suspended, no matter what they do (since this will magically end crime)? Or those advising parents to stop helping their kids with homework so they will succeed in school?
Is this a relatively temporary blind spot in our society that will necessarily have to sort itself out when these kinds of efforts make a mess of everything? Or are we doomed to keep blundering about like this forever?
Have you read these studies? Are “these findings” from the studies being accurately reported? Because if experience tells me anything, it’s that the media in general do a spectacularly shitty job of judging the quality of research, of accurately caveating any conclusions, and particularly of describing controls.
So did the researchers of this study control for this effect? Do you know?
I should note that I don’t know for a fact that the causation is not happening in the way that is being described here. But it does not remotely seem the most likely explanation; and I certainly don’t see any actual evidence that it should be taken this way.
My wife has a master’s in sociology, and my mother a Ph.D. in that field, so it pains me to say it, but it often seems like sociologists are particularly guilty of this offense. Part of the problem may be that most sociology departments will proudly (one might even say “defiantly”) declare that they are not merely in the business of making observations about the social world, but are always actively engaged in trying to make society better. That’s admirable, but I wonder if it gets in the way of being careful about the scientific method.
Here’s an excerpt from the school-suspension story:
Note how the NPR reporter does not challenge Losen’s narrative. Later in the story the “other side” is represented by a single teacher who defends her right to use suspensions to protect the other kids in the class (as she should), but does not offer any kind of rebuttal as far as the logical fallacies involved in citing these stats the way Losen does.
Once again, it is in the realm of possibility that suspension is causing some degree of harm to the suspended kids (though that doesn’t even address how much it helps the kids who are left behind in the classroom, in peace). But absent proof one way or the other, I can’t see how Ockham’s razor would lead us anywhere other than to “dropouts and criminals are more likely to be the type of people who get suspended” as far more likely than “suspending kids, apparently for no good reason, turns them into dropouts and criminals at an astonishingly high rate”.
Once again, in my experience, the media in general do a spectacularly shitty job of judging the quality of research, of accurately caveating any conclusions, and particularly of describing controls. The research you’re describing is published in a 300-page book. In this case, I really don’t think any 1200-word article is going to adequately describe which factors were accounted for in the study, and in what way. Concluding that it “doesn’t look like” the researchers adequately controlled for previous academic performance on the basis of seeing no mention of it in one *Atlantic* article isn’t very convincing.
Aside: lest you think I’m automatically assuming the research is correct: I’m not. I’m saying you can’t make that conclusion (and neither can I) on the basis of what you see in a popular press summary article. In my anecdotal experience, what appears to be a poorly designed study or off-kilter factoid quoted in the press turns out to have been misquoted or over-simplified, and the actual research is fairly solid. That’s not always true (glance at this thread for an off-kilter factoid that turns out to actually be poor research), but enough so that my first guess is always that the press article is poorly written.
But I think it’s clear that this research is retrospective, right? Retrospective research can provide solid information in some areas, nutrition being a good example. There’s not much of a chicken and egg issue there: it seems exceedingly unlikely that good health will somehow cause a person to eat more spinach rather than vice versa. With this homework stuff, though, it seems far more plausible that the academic struggles precipitated the greater parental involvement with homework oversight rather than vice versa.
And I just don’t see how in this area you could do any kind of regression analysis or control for confounders that would eliminate the problem I’m pointing to. I am however not myself a social scientist, even though I come from a family of them. And I know that if anyone is clever enough to see a way to ferret out causality in this kind of a retrospective meta-study, it’s likely to be someone here. So please enlighten me if you can think of a way.
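To make the worry concrete, here is a toy Python sketch (entirely invented numbers, not the actual homework study): academic struggles drive both parental involvement and lower grades, involvement itself has zero true effect, and the raw comparison still makes involvement look harmful, while comparing within strata of the confounder makes the apparent effect vanish.

```python
import random

random.seed(1)

# Invented scenario: struggling students are more likely to have involved
# parents AND get lower grades. Involvement has NO effect on grades here.
rows = []
for _ in range(20_000):
    struggling = random.random() < 0.3
    involved = random.random() < (0.8 if struggling else 0.2)
    grade = random.gauss(70 if struggling else 85, 5)   # involvement absent on purpose
    rows.append((struggling, involved, grade))

def mean_grade(pred):
    vals = [g for s, i, g in rows if pred(s, i)]
    return sum(vals) / len(vals)

# Naive comparison: involved vs. not, ignoring the confounder.
raw_gap = mean_grade(lambda s, i: i) - mean_grade(lambda s, i: not i)
# "Controlled" comparison: same contrast within each stratum of struggling.
gap_struggling = mean_grade(lambda s, i: s and i) - mean_grade(lambda s, i: s and not i)
gap_ok = mean_grade(lambda s, i: not s and i) - mean_grade(lambda s, i: not s and not i)
print(raw_gap, gap_struggling, gap_ok)
```

The catch, of course, is that this stratification trick only works when you can actually observe and measure the confounder, which is exactly what I doubt a retrospective study of “academic struggles” can fully do.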
It seems to me that a lot of this kind of reaction comes from those who don’t understand research at all, and who are sure that the pointy-headed academics don’t have enough common sense to understand that correlation is not causation, unlike the good old Americans in the media. As if this concept never came up while a professor was ramming how to do research into the heads of the PhD candidates, or was never raised by the reviewers of their papers.
BTW, during the '60s, when the health effects of smoking were still controversial, “correlation does not imply causation” was used all the time by the tobacco companies. Sometimes the mechanism might be hard to find - that doesn’t mean it isn’t there.
Now that I have a chance, let me comment on this one specifically. The other thread this was mentioned in linked to this article in Science Daily entitled “Books in home as important as parents’ education in determining children’s education level.” The article goes on to discuss how “parents who have books in the home increase the level of education their children will attain.” It even says things like, “…having 500 or more books in the home propels children 6.6 years further in their education,” explicitly crediting the books themselves for children’s achievements. From that article, one might indeed conclude that, since apparently simply being around books is so important, there are some “actual serious people out there who imagine that delivering a big pile of books to every struggling student’s home will turn them around.”
Except, of course, the research says nothing of the sort. That article is based on this paper, “Family scholarly culture and educational success: Books and schooling in 27 nations.” In it, the authors attempt to assess the importance of scholarly culture - being bookish, in other words - on educational achievement. “Scholarly culture” being ill-defined, somewhat entangled with educational history of the parents, and difficult to measure directly, the authors settled on surveying the number of books in the household as a reasonable and countable proxy for “scholarly culture.” So, far from blindly ignoring the somewhat obvious correlation between “bookishness” and “owning books,” they’re actually doing the exact opposite: they’re depending on the correlation to make the point of their paper.
So the logical string goes like: owning books is correlated with promoting a “scholarly culture” in the home, which causes increased educational achievement. One could perhaps quibble with that logic, but it’s not at all ridiculous, and the authors give a well-reasoned and well-cited background argument (at least it looks like that to me as a layperson). Moreover, it leads to a useful conclusion that might not have been apparent without doing the study: parents who are undereducated and poor (which are normally indicators of lower educational attainment by their children) can still hope to stimulate their child educationally by fostering a family scholarly culture. That takes more than just dumping a pile of books into the home, but it’s something that, with willing parents, seems attainable.