A few words here. If the claim that most research findings are false were literally true, it would be difficult to explain how science has made the strides that it has, and why we are not in fact all still living in caves trying to figure out how to light a fire. It would be difficult to explain how we live in a highly advanced technological world that is intimately dependent on science, and why technological advancements are growing at an exponential rate. It’s ludicrous to try to refute the premise that science works, and that the scientific method is the most successful path to knowledge that has ever been discovered.
Here’s how to reconcile the apparent contradiction between that irrefutable fact and those papers.
The Lancet is a medical journal, and the article you’re remembering was just a one-page rumination by the editor, prompted by a symposium on the reproducibility and reliability of biomedical research, an editorial that he titled “What is medicine’s 5 sigma?” [PDF]. The second article you cite was written by a professor of medicine and health research policy, who is also a professor of statistics. Do you see a commonality here?
The title of the second article is clearly ridiculous and obviously intended to be provocative. But whether either author explicitly acknowledges it or not, what they are implicitly talking about, and have certainly been highly influenced by, is biomedical and pharmaceutical research: fields that have long been fraught with unmanageable complexities and that arguably rely on statistics more heavily than almost any other area of research. A classic example from pharma research is the “decline effect”, in which drugs that show high efficacy in successful clinical trials later turn out to be much less effective, a mysterious reality for which there are probably multiple explanations but no clarity.
The New Yorker article cited actually includes a discussion with John Ioannidis, the author of that ridiculously titled paper (“Why Most Published Research Findings Are False”). He attributes things like the decline effect to “significance chasing” by researchers in the early stages of this kind of drug evaluation, which may be partly true.
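To see why “significance chasing” can generate impressive-looking results that later evaporate, here is a minimal sketch (my own toy simulation, not anything from the cited papers; the sample sizes and number of candidates are illustrative assumptions). It tests many hypothetical drug candidates that have *zero* true effect and counts how many clear the conventional p < 0.05 bar purely by chance:

```python
import random
import statistics

random.seed(0)

def two_sample_t(a, b):
    """Welch t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

n_candidates = 1000   # hypothetical drug candidates, none with any real effect
n_patients = 50       # patients per arm (treatment vs. control)

false_positives = 0
for _ in range(n_candidates):
    # Both arms drawn from the SAME distribution: the drug does nothing.
    treatment = [random.gauss(0, 1) for _ in range(n_patients)]
    control = [random.gauss(0, 1) for _ in range(n_patients)]
    if abs(two_sample_t(treatment, control)) > 1.96:  # roughly p < 0.05
        false_positives += 1

print(false_positives)  # on the order of 5% of 1000 candidates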
The point is that some of the unique problems in biomedical research, and some rather hyperbolic attention-getting articles written about them, have a narrow focus and shouldn’t be read as more than what they really are, which is exactly the mistake one would make by taking the title of that paper literally.
One example that touches on some of these issues is the famous paper by Michael Mann et al. on global temperature reconstructions for the past millennium. This was the landmark paper that first produced the “hockey stick” graph showing a huge modern-day temperature spike. Naturally, climate “skeptics” got their shorts in a knot and sought to attack it as “junk science”.
To make a long story short, there were two vectors of attack: (a) that some of the temperature proxies Mann had used were unreliable, and (b), more damning, and what made me think of that paper here, a full-on attack on his statistical methods. The cheerleader for the latter was a climate change denier who was, by happy coincidence, a statistician, something that Mann was not. The argument was that Mann had applied a technique called de-centered principal component analysis to the data which, it was claimed, could be manipulated to produce a hockey-stick shape in virtually any data set whatsoever.
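The mechanics of that criticism can be illustrated in a few lines. This is strictly my own toy sketch, not Mann’s code or his critics’ analysis; the series count, lengths, and “calibration window” are invented for illustration. The idea: if each proxy series is centered on only a recent calibration period rather than its full-length mean, the leading principal component of even pure red noise tends to pick up a hockey-stick-like deviation in that recent period:

```python
import numpy as np

rng = np.random.default_rng(42)

n_years, n_series, calib = 600, 50, 100  # hypothetical proxy network
# Red noise: each "proxy" is a random walk (cumulative sum of white noise).
X = rng.standard_normal((n_years, n_series)).cumsum(axis=0)

def leading_pc(data, center_rows):
    """Leading principal component after centering each series on center_rows."""
    centered = data - data[center_rows].mean(axis=0)
    # First left singular vector gives the leading PC time series (up to sign).
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    return u[:, 0] * s[0]

pc_full = leading_pc(X, slice(None))                 # conventional: full-period mean
pc_decentered = leading_pc(X, slice(-calib, None))   # center on last 100 "years" only

def blade_index(pc):
    """How far the recent segment sits from the rest, in units of overall spread."""
    return abs(pc[-calib:].mean() - pc[:-calib].mean()) / pc.std()

print(blade_index(pc_full), blade_index(pc_decentered))
```

Running this kind of experiment repeatedly, the de-centered variant typically shows a much larger “blade”: series whose recent mean happens to drift away from their long-run mean dominate the first component. That is the shape of the objection; whether it materially affected Mann’s actual reconstruction is a separate question, answered below.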
So think about that, especially if you’re a climate skeptic disinclined to believe the science: this key paper was reputed to have used bad data and bad statistical methods, and then we have all this other evidence that “Most Published Research Findings Are False”. It’s a trifecta of badness! Would you be inclined to believe the paper?
Probably not. In fact you’d probably laugh at it.
And you would be wrong. Very wrong.
A lot of things happened because of the importance of Mann’s results and the volume of opposition trying to discredit them. First, the National Academy of Sciences conducted a thorough review of all of his data and methods. Second, other studies tried to replicate his results: to address the criticism of bad proxies, results were compared with and without the suspect proxy data, and to address the criticism of de-centered PCA, results were also computed without that processing step.
The upshot of it all was that the NAS review vindicated Mann’s results. The panel did not fully endorse his statistical methods, but stated that they did not affect the results and conclusions. Consistent with that, the replication papers showed essentially the same results with and without the suspect proxies, and with and without the PCA step. Today there is a vast body of new data supporting Mann’s pioneering work, and he is one of the most respected climate scientists working today.
So my philosophy is, unless there is persuasive reason to do otherwise, I trust the science. And if the science is wrong, it will correct itself. In the long run, science prevails over human bias.