This story seems to imply that a huge chunk of current research is potentially borderline fraudulent, with cherry-picked results. Why isn’t this front-page news?
In cancer science, many ‘discoveries’ don’t hold up
The same is true for many other medical studies.
I thought one of the basic tenets of science was that well-done experiments should be replicable. Are the rules different for medical studies?
That must really annoy those cold fusion guys.
Just a personal “bias” perhaps, but to be honest these results do not surprise me. There are so MANY variables when it comes to stuff like this that I have little confidence they are all statistically accounted for properly.
It is very expensive to replicate a clinical study. Independent researchers not affiliated with the original research group may not have the funds to do so, and the original group often has no incentive to do so.
I stay away from human research for a reason.
It’s not just cancer studies.
Yeah, I came back to say this. Many studies are not replicable. Most studies, even if replicable, are never replicated.
It’s very hard to get a study published that is merely replicating a previously done study, and if it is published, it’s not in a high-impact journal. Since most scientists are evaluated primarily on how many publications they have, and on the impact of the journals those publications are in, we don’t have an incentive to replicate studies. As a scientist, I like to tell myself that finding out the truth is my primary motivation, and that is true to an extent, but I’m also interested in paying my mortgage and my daughter’s day-care fees.
We do replicate studies, but usually only if they directly relate to our own research. These replications are not published, but are performed to tell us whether we can trust the tests/reagents etc. that previous groups have developed. Many times, we can’t.
Wow… I had no idea getting reliable data from peer-reviewed studies was such a dice throw in medical research.
Well, here’s one problem. From the linked article:
Without question there is a problem throughout science (and no more in medical research than in any other area) that very few labs will expend much in the way of resources on experiments that replicate the work of others. It simply is not a way to make your own name and score future funding. (Boy, a study confirming that someone else’s study worked a second time is sure to get you a highly-thought-of journal publication.) That lack of incentive to do studies that attempt to replicate the work of others, and the lack of a system that funds it well and assures there is no selection bias toward publishing more positive or more negative replications when they are done, is bad enough.

But one would at least think that a study done in such a way that replicating it is impossible, because the authors are not forthcoming about what they actually looked at and how, would be completely dismissible. And of course that is exactly what the authors of this study have done.
I would adjust that slightly to say it’s very hard to get a study funded that is merely replicating a previously done study. Getting it published shouldn’t be a problem; though it won’t necessarily be in Science or Nature, it still might be in a top field-specific journal, depending on the importance of the original study.
Also, scientists are usually pretty smart, and think about these kinds of things a lot. What that article said to me was, “hey guys, I’ve bothered to look at this thing we’ve always suspected, and our worst fears are probably true.” Publication bias and the file-drawer effect (there’s a toy simulation after this post) were things we talked about in undergrad research methods classes 20 years ago, and they were old then.
I’m glad this stuff is getting attention, because maybe that will change some of the incentive model and let better science get done. I personally hope that Internet-based journals can have a big impact on this problem. They can create a place to put things that say, “here’s a bunch of stuff we did that didn’t show any significant results, and no traditional journal wants it, but it should be public so that other researchers can learn from our results.”
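For anyone who hasn’t seen the file-drawer effect spelled out: here’s a rough, hedged sketch of it, not any published analysis. It simulates many small two-arm studies of the same modest real effect, “publishes” only the ones that reach p < 0.05, and compares the published effect sizes to the truth. The effect size, sample size, and threshold are invented for illustration.

```python
# Toy file-drawer simulation (illustrative assumptions throughout).
import random
from statistics import NormalDist, mean, stdev

random.seed(1)
TRUE_EFFECT = 0.2   # assumed true standardized treatment-vs-control difference
N_PER_ARM = 25      # assumed participants per arm in each small study
N_STUDIES = 5000

published, all_estimates = [], []
for _ in range(N_STUDIES):
    control = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_ARM)]
    diff = mean(treated) - mean(control)
    # pooled standard error of the difference (equal n per arm)
    se = ((stdev(control) ** 2 + stdev(treated) ** 2) / N_PER_ARM) ** 0.5
    p = 2 * (1 - NormalDist().cdf(abs(diff) / se))   # simple z-test approximation
    all_estimates.append(diff)
    if p < 0.05:
        published.append(diff)   # only the "significant" studies leave the file drawer

print(f"true effect:              {TRUE_EFFECT:.2f}")
print(f"mean effect, all studies: {mean(all_estimates):.2f}")
print(f"mean effect, published:   {mean(published):.2f} "
      f"({len(published)}/{N_STUDIES} studies reached p < 0.05)")
```

The point of the sketch: the studies that clear the significance bar are, on average, the ones that overshot the true effect, so the published literature looks stronger than reality even when nobody did anything wrong.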
In my field (human genetics) it’s often the case that something can’t be published the first time unless there’s a replication included.
Now, I do think there are things going on which border on (or cross the line into) fraud. For instance, studying whether a new drug has an effect on condition A. No effect is found on condition A, or even on B through E, but the drug does seem to have an effect on condition F. Writing up condition F as what was being studied all along, and ignoring the lack of effect on A through E, is not good science.
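To make the “tested A, reported F” problem concrete, here is a minimal sketch (my own illustration, with made-up conditions, sample size, and threshold): a drug with no real effect at all is checked against six different conditions, and we ask how often at least one of them comes out “significant” purely by chance.

```python
# Toy multiple-comparisons simulation: no real effect on any condition.
import random
from statistics import NormalDist, mean, stdev

random.seed(2)
CONDITIONS = list("ABCDEF")   # hypothetical conditions A through F
N_PER_ARM = 30
N_TRIALS = 2000

def null_p_value():
    """One comparison where treatment and control come from the same distribution."""
    control = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
    treated = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
    diff = mean(treated) - mean(control)
    se = ((stdev(control) ** 2 + stdev(treated) ** 2) / N_PER_ARM) ** 0.5
    return 2 * (1 - NormalDist().cdf(abs(diff) / se))   # z-test approximation

hits = sum(
    any(null_p_value() < 0.05 for _ in CONDITIONS)   # at least one "win" among A-F?
    for _ in range(N_TRIALS)
)
print(f"trials with at least one 'significant' condition out of {len(CONDITIONS)}: "
      f"{hits / N_TRIALS:.0%}")   # roughly 1 - 0.95**6, about a quarter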
All electrons, for example, are more or less identical to each other, so they will react the exact same way in two different physics experiments with nearly identical conditions. No two people are alike, so it’s extremely unlikely they’ll respond to treatment the exact same way. Therefore, you’re never going to get the exact same results in two different medical studies, no matter how you control the variables.
I get that, but I’m assuming that a methodologically competent study that the top peer-reviewed journals deem of publishable quality would be designed with enough participants that individual differences would not pull the results so far out of kilter that they are (per the articles) non-replicable in the large majority of cases (see the sketch after this post).
From an experimental-design perspective this doesn’t (on the face of it) seem to be a particularly good or useful way to investigate issues of human health if attempts to confirm the results of studies deemed to be of the highest quality fail far more often than they succeed.
From a layman’s perspective it seems like borderline crappy science. However, in this thread and in the tone of the article there seems to be a lot of shoulder shrugging by highly intelligent people saying that this is simply the lay of the land.
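On the “enough participants” assumption above, here is a hedged back-of-the-envelope sketch (my numbers, picked only for illustration): suppose the treatment effect is real but modest and the study uses a typical small sample. Even a clean study then detects the effect only some of the time, so an exact repeat of a positive study can easily come up empty without any misconduct at all.

```python
# Toy power/replication simulation (assumed effect size and sample size).
import random
from statistics import NormalDist, mean, stdev

random.seed(3)
TRUE_EFFECT = 0.3   # assumed real standardized benefit of the treatment
N_PER_ARM = 20      # assumed per-arm sample size of the original study and the repeat
N_SIMS = 5000

def significant():
    """Run one two-arm study and report whether it reaches p < 0.05."""
    control = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_ARM)]
    diff = mean(treated) - mean(control)
    se = ((stdev(control) ** 2 + stdev(treated) ** 2) / N_PER_ARM) ** 0.5
    return 2 * (1 - NormalDist().cdf(abs(diff) / se)) < 0.05

power = sum(significant() for _ in range(N_SIMS)) / N_SIMS
print(f"chance a single study detects the effect (its power):   {power:.0%}")
print(f"chance the original AND an exact repeat both detect it: {power**2:.0%}")
```

With numbers in this range the power of any one study is well under 50%, so a failed replication by itself doesn’t distinguish a fluke finding from an underpowered real one; it mostly says the original sample was too small to lean on.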
The thing is that we don’t know what the authors considered “landmark.” From the article it seems like they are counting any relatively small study that showed a “potentially ‘druggable’ target” as “landmark,” and treating the failure to have anyone bother to replicate it before investing in drug development based on that one small, unreplicated study as a serious problem with replicability, rather than with their understanding of science.
I don’t see it that way. I see it as a serious problem with pharma’s understanding of science that they would invest much before seeing replication, even if they have to do it themselves. (Unless the advantage of first to market is such that they risk too much by waiting.) Of course any pharma doing replication studies won’t share them with their competition either …
A real landmark study (by my standard of landmark) will get a high-profile attempt to replicate it. And a truly major item will have several studies with some conflicting findings to be parsed through. The crappy science is not the fact that initial findings often cannot be replicated; the crappy science was by those accepting small, single, unreplicated studies as being somehow so “landmark” as to invest in a drug development program based on them, and then being shocked that they could not replicate the original findings.