This whole idea that one must read a research paper in its entirety (with perhaps the added requirement of being an expert in the field) before having any opinion on it…it’s just unnecessarily restrictive IMO.
And in this case, we’ve got the lead author of the paper offering his “elevator pitch” as to what it is all about. There is no sense at all that he has accounted for the possibility I raise. Instead, he just speculates that “Ethnic diversity is an indication of ideas’ diversity”. Um, okay. Ironic, I’d say, given that he seems not to have himself considered the “diverse” idea I raised upthread, even though by the method he uses I wouldn’t map to a different ethnicity than his.
A lot of harrumphing in this thread about leaving it to the professional researchers, who are assumed to have accounted for various confounders and to generally make valid conclusions from their data. My critique was about social science research relying on retrospective analysis of data, but this NPR story raises even more eyebrows since it is occurring in a field where the researchers use intervention groups and controls:
There is every reason to believe this same dynamic holds sway in the social sciences (with the motivation there being to solve or ameliorate social problems rather than to improve medical outcomes). And in the social sciences you can also add in a tendency to reflexively discount “nature” in favor of “nurture”, and to be severely circumscribed in their analysis any time gender, race, sexuality, or socioeconomic status is involved.
What these academic social science journals need is to hire more people who will look over the findings they are about to publish with a jaundiced eye, free of any desire to move the field forward with hopefulness, and instead oriented toward challenging the conclusions with “what if the explanation is Y rather than X?”. In terms of the medical studies, maybe they just shouldn’t publish anything until it has been replicated. I mean, 90 percent?!? That is scandalous, or should be.
I don’t think there was much “harrumphing in this thread about leaving it to the professional researchers,” and there certainly wasn’t any at all from me. Most of the harrumphing was about your apparent conclusion that if an effect is not mentioned in a popular press summary article, then it must not have been considered in the original research. It’s not that poor research and unfounded conclusions aren’t out there, because they certainly are. However, once again, the media in general do a spectacularly shitty job of judging the quality of research, caveating any conclusions, and describing controls. If you find something lacking in the research described in a press article, the first thing that should be suspect is the article itself.
Gosh, you don’t say. Kids who experiment with smoking one thing experiment with smoking another thing? Shocker! That must mean one causes the other! Next they’ll breathlessly report that people who report having tried Funyuns are more likely a year later to regularly eat potato chips, making the Funyuns a clear gateway food. :rolleyes:
To the WSJ’s credit, there is one little caveat in the fourth paragraph: “Researchers didn’t determine if using e-cigarettes led teens to try smoking.” But there is no followup on this point, and subsequently we get this paragraph:
Not if, but why. :smack: Note that this is a statement right from the researcher, so we can’t blame it on an incorrect interpretation by the reporter.
It reminds me of the studies done years ago showing that teenagers who tried marijuana were more likely to subsequently do cocaine. Apparently a lot of these people have not heard of the post hoc fallacy, and don’t think about how odd it would be for someone to use a hard drug without ever first having tried a softer one, if such a thing exists. That doesn’t mean using the weaker one *caused* the use of the stronger one: if someone invented another plausible drug milder than cocaine, cocaine users would probably have tried that as well, and if marijuana didn’t exist, they still probably would have ended up using cocaine.
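To make that concrete, here is a minimal simulation (purely hypothetical numbers, not taken from any actual study): a single latent “propensity” drives both a milder behavior and a harder one, with no causal arrow between them, and yet the two show up strongly associated in the data.

import numpy as np

# Purely hypothetical numbers, not from any study: one latent "propensity"
# drives both a milder behavior and a harder one; there is no causal arrow
# from the milder behavior to the harder one.
rng = np.random.default_rng(42)
n = 100_000
propensity = rng.uniform(0, 1, n)                      # latent risk-taking trait
tried_soft = rng.uniform(0, 1, n) < propensity         # milder behavior
tried_hard = rng.uniform(0, 1, n) < 0.3 * propensity   # harder behavior

# The classic "gateway" pattern appears anyway:
print(f"P(hard | tried soft)       = {tried_hard[tried_soft].mean():.2f}")
print(f"P(hard | never tried soft) = {tried_hard[~tried_soft].mean():.2f}")

In a snapshot like that, soft-drug users look roughly twice as likely to go on to the hard drug, even though removing the milder behavior would change nothing about the harder one.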
The main problem with these “studies” is the huge number of variables and their combined effects upon the outcome. In engineering, this is known as “the hunt for the red X”. If you try to test every variable, your experiments become impossibly large.
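To put a rough number on “impossibly large”, here is a back-of-the-envelope sketch, assuming the simplest possible case of only two levels per variable: a full factorial experiment needs one run per combination of levels, so the run count grows exponentially.

# Back-of-the-envelope sketch: assumes just two levels per variable.
# A full factorial design needs one run per combination of levels.
for n_vars in (3, 5, 10, 20, 30):
    print(f"{n_vars:2d} variables -> {2 ** n_vars:,} experimental conditions")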
That is why sociologists can “prove” anything. In addition, the data are often suspect. Take new drugs: I was surprised to find that placebos (administered as part of effectiveness trials) actually work quite well, even though we know they should have no pharmacological effect.
First, this is a useful study because I’d bet e-cig makers might claim that the use of their products has nothing to do with later smoking.
Second, I can guess some causation - such as a kid learning the smoking protocol with e-cigs, or getting feedback that he or she looks cool with e-cigs, that might lead to real cigarettes. Not saying that these happen, just that it is possible.
I bet anything the writers of the article never read the journal article, only the press release, and probably just spoke to the author on the phone. I’ve seen press releases based on highly technical products we were selling, and they were scary.
“Product X is based on algorithms.” I’m not making that up.
Evaluating research based on WSJ or even NY Times articles is pointless.
Did they have a quote from an unaffiliated researcher about the research? The Times is doing that, and it is very helpful.
This is a great example of the “hah hah, them pointy head scientists are so dumb” meme I mentioned last year. I don’t know about sociology, but my daughter has a PhD in psychology and is very mathematical. If you think a PhD student could get through, or a paper could get published, without considering this, you don’t know anything about this kind of science.
It sometimes happens that an interesting effect shows up in the data, an unexpected one. You don’t publish it - you design a new experiment with that as the hypothesis and see if the effect is still there. Most of the time it isn’t. Bummer, but expected.
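A toy illustration of why that happens (made-up data, nothing to do with any particular paper): scan a pile of unrelated variables, pick the one that looks most interesting in one sample, and then test that same variable in fresh data. The “effect” usually evaporates.

import numpy as np

# Made-up data: 200 candidate variables, all pure noise, and an outcome
# unrelated to any of them.
n, k = 100, 200

def sample_correlations(seed):
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, k))   # candidate predictors (noise)
    y = rng.standard_normal(n)        # outcome (also noise)
    return np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(k)])

first = sample_correlations(1)
best = int(np.argmax(np.abs(first)))   # the "unexpected effect" you noticed
second = sample_correlations(2)        # the new, purpose-built experiment
print(f"variable {best}: r = {first[best]:+.2f} in the original data, "
      f"r = {second[best]:+.2f} when you test it again")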
Which is still entirely possible IMO. (I don’t smoke e-cigs or regular ones, BTW, so no dog in this fight.)
Sure, it’s possible, and you can “guess” anything. But to go forward with your guess as your conclusion is not responsible. These kinds of studies impact public policy (and are most likely intended to). They aren’t just grist for scientific inquiry within academia.
P.S. I can “guess” that those kids might have just started smoking regular cigarettes earlier if there weren’t e-cigs to try. In which case the e-cigs actually saved some damage to their lungs, as well as the lungs and hearts of those around them.
That’s a totally different hypothesis, and would require a different study.
What I’m saying, though, is that if I can come up with potential avenues of causation in five seconds, the authors of the study could also, and could test for these. Condemning the study because it’s “obvious” the effect is just correlation is condemning the scientists, the reviewers, and the editors of the journal as idiots, with no evidence. Condemning the author of the article is a different matter, but as I said a year ago, when journalists who have to cover papers from a wide range of science in about a day get anything right, it is a miracle. That they get it mostly right so often is pretty good.
It’s not just science. When my father worked at the UN he said that anything newsweeklies reported about the UN was bogus.
Are you saying the researchers actually did find a causation arrow, but the WSJ said otherwise and the lead researcher made statements implying otherwise? I find this highly dubious. In retrospective studies, it’s pretty hard (if not impossible) to show causation. You need a prospective trial for that, which is tough to do given the ethical implications. The only one I could imagine that would pass muster with a human subjects review board would be to do some kind of intervention designed to dissuade kids from trying e-cigs, yet without affecting their thoughts on whether they ought to try regular cigarettes. And I’m not sure that’s even possible.
ETA: You are also ignoring the fact that these kinds of researchers are not just off in their corner of academia doing pure research, followed by reporters nosing around and misreporting the facts. The researchers have a public policy agenda and are promoting their research in the media in service of that agenda. They are working almost as a team, really.
Since I haven’t read the papers, I’m not saying anything. But very often skeptics of this kind of study say “hah, the dumb researchers never considered it might be just correlation” as if this brilliant insight was way beyond the capacity of people with PhDs and expertise in the field.
We were in a team with the people who wrote press releases about our product, and the releases were still miserable. The researchers might be pushing an agenda, but the university might also just be trying to get the research in the news; that is what university press offices do (when they are not talking about sports and defending the latest tuition hike, that is). I’d have to see the press release also. The reason scientists are usually awful witnesses is that we qualify everything; press releases and press stories often do not.
It might be better to say that it is not in their interest to be skeptical in this way. I’m speaking particularly about social scientists here, but it’s analogous to the broader problem of publication bias, particularly toward not publishing negative results.
And while I don’t have a Ph.D. (or any degree) myself, I’ve spent years seeing social scientists up close. My mother is a retired sociology professor, my father was a professor of anthropology, and when my wife and I were first together she spent two years in a sociology doctoral program (she subsequently exited with a “terminal master’s” and got a second master’s in education in order to become a public school teacher).
In sociology (which, I should note, I do believe has made valuable contributions as a field, even as I point out its many weaknesses), there is a bias, or a set of biases, that goes beyond just not wanting to publish negative results. There may be a few rogue sociologists out there, but in the main they will not set out to undermine what Steven Pinker calls the “Standard Social Science Model”. Behavior can never be seen as innate or rooted in biology; individuals at the bottom of the socioeconomic ladder can never be culpable for their lack of achievement or antisocial behavior; Western values can never be “privileged” over others.
Fundamentally, sociologists (and, I believe, social scientists more broadly) are not setting out to simply map the landscape of social forces and patterns the way a biologist maps the genome. Such a disinterested approach would, I believe, reveal many fascinating findings. But all findings have to fit within that SSSM worldview, and must be directed toward a goal of improving the human condition rather than just describing it (and sociologists do not deny this whatsoever–it is something they are proud of about their field). It is a laudable impulse, but IMO it is not consistent with a strict application of the scientific method.
Thus there is a rampant tendency toward the post hoc fallacy, especially in cases where embracing that fallacious mindset offers an optimistic notion of how the world can be made a better place with the application of research findings to interventions to solve social problems. Hey, look: there’s a correlation between the number of books in a two-year-old’s home and their SAT score 15 years later. Great: let’s go around to poor neighborhoods and fill their homes with books. Teenagers who are suspended from school are more likely to be in prison five years later? Okay, we’ll just stop suspending anyone for any offense–that’ll keep those kids from ending up in prison. What’s next?
Oh, right: what’s next is putting some kind of heavy restrictions on e-cigs. Sure as shootin’, that’ll keep kids from smoking cigarettes. :rolleyes:
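For what it’s worth, the books-and-SAT example is easy to sketch as a toy simulation (all numbers invented): a confounder, call it family resources, drives both the books in the home and the later score, so the correlation is real but the book-delivery intervention accomplishes nothing.

import numpy as np

# All numbers invented: a confounder ("family resources") drives both the
# number of books in the home and the later test score. Books do not cause
# the score at all in this toy world.
rng = np.random.default_rng(7)
n = 5_000
resources = rng.normal(0, 1, n)
books = 50 + 30 * resources + rng.normal(0, 10, n)
score = 1000 + 120 * resources + rng.normal(0, 80, n)

slope = np.polyfit(books, score, 1)[0]   # naive regression of score on books
print(f"correlation(books, score) = {np.corrcoef(books, score)[0, 1]:.2f}")
print(f"naive promise of +100 books: {100 * slope:+.0f} points")

# "Intervention": hand 100 extra books to every low-book household.
books_after = np.where(books < 50, books + 100, books)
score_after = score   # the score depends on resources, not books
print(f"mean books: {books.mean():.0f} -> {books_after.mean():.0f}; "
      f"change in mean score: {score_after.mean() - score.mean():+.0f} points")

The naive regression “promises” a few hundred points from the extra books; the simulated world delivers none, because the books were never doing the work.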