As tomndeb and wolfpup pointed out, you are not offering any better solution than the ones already mentioned by me and other researchers. BTW, regarding your (finally!) political point: there are a lot of weasel words in that article, and what seems clear to me is that the government should not have concentrated only on saturated fats, but also on the substitutes. What I do understand is that even those who claim that saturated fats are therefore good are wrong; in reality, the substitution of saturated fats just gave us a replacement that was not, AFAIK, recommended, nor does it mean that saturated fats are now in the clear.
In fact, what you pointed out was indeed an example of the very thing you think is wrong: namely, thinking that one recent study overturns everything. That is most likely the media getting it wrong, as usual. While it should give scientists pause, it does not mean that everything found before is automatically bad.
Food news and research also have a lot of issues, but then again even the cites I made touched on this. Science is hard, but it is also not advisable to drop everything just because a few new papers come along telling us something different from what many scientists think about issues like saturated fats. AFAICU, what the Wall Street Journal writer (who has a book to sell about how saturated fat is good for you) does not realize is that two wrongs (saturated fats and not-so-healthy substitutes) do not mean that the first wrong is an innocent thing.
This seems to indicate a misunderstanding of the purpose of peer review. Peer review doesn’t verify the result of the study or conclusions in the paper; it simply validates, insofar as possible, the methodology in the study or experiment and assesses whether sufficient data is presented to support the conclusions. No conclusions in a paper, even with peer review, should ever be taken as the final word or even as authoritative on a topic without being substantiated by independent research or experiment. And independent means being reproduced by a different research team using differing experimental setup or test population. It is neither the fault of the original experimenter nor the peer review process if policy is drafted based on a single, non-replicated result; that would be a fault of policy makers or their flappers who don’t understand research methodology.
We have a perfectly good system of research, except that replication studies are often viewed as makework rather than critical to the process of new discovery. Now, there are individual failings within peer review, such as failures to validate credentials or question the supportability of conclusions, but that falls on individual reviewers or the editorial policies of specific journals.
As noted above, conference proceedings receive little if any review, much less formal peer review, and in fact the function of conferences (other than an excuse to travel and network) is to get informal feedback from the research community on research prior to formal publication and see if anyone else is doing yet unpublished work on a similar topic.
As for the difficulty of replication in psychology, at its best the field doesn’t qualify as a real science, and at its worst it is pure voodoo. The difficulties in reproducing results are inherent in the almost inevitably subjective nature of the research, combined with the large number of variables involved in assessing any real-world behavior.
Anyone with the slightest knowledge of a field knows which conferences count and which conferences do not. Tenure committees sure as hell do. Among the things they check are citation counts and acceptance rates. Academics running a conference like really low acceptance rates, since this increases the prestige of the conference. I bet the conference with the fake papers has an acceptance rate of near 100%. There are some workshops with high acceptance rates, but these don’t count much for tenure, and are designed to let people get information out there for feedback early. I’ve started a few of these, and would be shocked if anyone took papers from them as gospel truth.
Not just IEEE. My daughter is a new management professor, and as a post-doc she knew the journal pecking order perfectly. Some journals are good for tenure, some are good only for running up a publication count.
So, papers in some conferences would be considered pretty much worthless, while those in others are at least possibly of high quality. The same goes for journals, but there aren’t that many really crap ones - though the open access ones might be. They are not a thing in my field. Actually, one publisher of such journals sent us mail asking if we wanted to submit papers from our conference to them, but after looking over their zillions of journals we passed, figuring our authors would not be interested and not wanting to advocate for them ourselves.
Which just shows you shouldn’t trust politicians with science. Climate change shows that. Remember, there are journals set up by crackpots specifically to publish their crackpottery. The papers get peer reviewed by other crackpots. That’s why journal quality counts.
You still don’t get it. The goal of reproducibility is not that all results will be reproduced, but that enough information is provided so the experiment can be repeated. If anyone cares, that is. No one can afford to repeat any but the most significant experiments enough times before publishing them.
It is just like books. On the average a book that gets accepted by a professional publisher will be of higher quality than a self-published one. But there will still be stinkers.
Anyone doing public policy based on one paper is asking for it. I know nothing about nutrition research, and so won’t comment.
But, do you have a better method?
The only thing I can think of is paying reviewers for their time, rating the reviews, and sending papers out to the better-rated reviewers. As a reviewer I think this is a great idea; as an editor and program chair I think it is an awful idea. Budgets aren’t big enough for this to happen.
I assure you, I have more examples of the deficiency of peer review by itself than you do. I did some grant reviewing in my field for NSF, where they flew us to DC and a bunch of experts reviewed a bunch of proposals. We were very serious. Then I got signed up for the small business grant program, forced on NSF by Congress, which was a farce. I only did it because I got to visit my daughter in college on the government’s dime. Then they asked me to be the lead in an area I knew nothing about, I said no, and they fired me, which was fine with me. I’ve never seen such disregard for subject matter expertise in journal reviewing.
I don’t know about other fields, but in mine books are not peer reviewed at all. I’ve never done one, but I have done chapters, and no one ever looks. I have been paid by publishers to review book proposals, and even manuscripts, but I was the only one and I would not call what I did peer review.
I believe that articles in the New Yorker get fact checked far more rigorously than books in the popular press. When my wife writes technical books they are often reviewed by a subject matter expert, but again this wouldn’t qualify as peer review, which requires at least three reviewers if not more.
True, and my point was not made clearly. Generally, books are the final presentations of vetted ideas while papers provide the working process for developing the science. Cases like The Bell Curve are the exception: Herrnstein and Murray explicitly avoided publishing first in papers where their ideas could be reviewed, going straight to book form to try to make their case while avoiding the scientists.
My thought exactly. Peer review is a tool, not a panacea. It helps the scientific method along, but does not guarantee any sort of objective truth. The process of science will weed out the incorrect or less-correct … eventually.
I was concerned that some people would put too much stock in books. A few years ago there was a batch of articles when people found that published science and history books (non-technical) had gasp errors.
The Bell Curve is a work of advocacy. Some books are surveys of an area, kind of an advanced textbook, and others are collections of papers/articles. It is not clear how well reviewed those are.
I review books for an IEEE magazine so I come across all types.
I have to add one more thing to this post where I characterized the OP as essentially a baseless attack on science that aggravates an already-problematic anti-science attitude in American society. I’m still irked by the ridiculous statement that “If you follow any political argument you will, sooner rather than later in most cases, come across someone stating that a certain thing is a fact due to some peer reviewed paper. Invariably, the person who questions the peer-reviewed science is labeled all sorts of things though the most used these days seems to be ‘denier’.”
I’ve only heard the term “denier” used in the context of the climate change so-called “controversy”, where it’s (quite properly) applied to those who deny the scientific evidence for anthropogenic global warming. Is the OP suggesting by cherry-picked isolated examples from psychology etc. that the overwhelming scientific consensus on AGW should be challenged? Because that’s the thing about science – you can always find isolated failures of process and shortcomings of fact, but to challenge an overwhelming body of evidence from many different lines of investigation over many decades supported by every major national science body of every major nation on earth takes very strong corroborating evidence indeed, and not just gratuitous whining about “science” because you don’t happen to like its conclusions.
The supreme irony in all this is that there are indeed discreditable bad papers on the subject of climate change, and as far as I’m aware virtually all of them have been pushing a denialist agenda and have been universally condemned by the mainstream scientific community. And they tend to show the red-flag warning signs of bad papers:
1. Publication in vanity journals or low-quality or low-impact journals, or authors with a known history of fraud
Example: The Soon and Baliunas controversy erupted in 2003 when this well-known pair of denialists published a paper in the small journal Climate Research purporting to show that the earth had been substantially warmer in the Medieval period and that warming was mostly due to solar cycles. The paper was so badly misleading and methodologically flawed that it was soon clear it should never have been published, and several editors including the editor-in-chief were forced to resign.
Ironically, the intrepid pair later published a revised and extended version of the paper in an even worse journal, Energy & Environment (not to be confused with the reputable Energy and Environmental Science):
When asked about the publication in the Spring of 2003 of a revised version of the paper at the center of the Soon and Baliunas controversy, Boehmer-Christiansen said, “I’m following my political agenda – a bit, anyway. But isn’t that the right of the editor?”
Several climate scientists had this to say about E&E:
According to a 2011 article in The Guardian, Gavin Schmidt and Roger A. Pielke, Jr. said that E&E has had low standards of peer review and little impact. In addition, Ralph Keeling criticized a paper in the journal which claimed that CO2 levels were above 400 ppm in 1825, 1857 and 1942, writing in a letter to the editor, “Is it really the intent of E&E to provide a forum for laundering pseudo-science?” A 2005 article in Environmental Science & Technology stated that “scientific claims made in Energy & Environment have little credibility among scientists.”
2. Papers that are off-topic from the journal’s primary subject matter
A good example of this is the journal Remote Sensing, which deals with technology used by satellites to measure different aspects of the earth from space.
One interesting sidebar to the satellites that measure thermal emission brightness, from which global temperature can be inferred, is that one of the major centers for aggregating and analyzing this data is UAH, the University of Alabama in Huntsville. And although UAH is a reputable center in itself, the data publication has been managed by John Christy and, until recently, Roy Spencer. Christy is a legitimate scientist but a borderline denialist who strongly downplays the anthropogenic component of AGW, and Spencer is just simply an outright lunatic denialist and crusading anti-government libertarian (also an evolution denier, BTW!). The extensive adjustments to which satellite readings have to be subjected to accurately infer temperature, together with the political inclinations of the principals at UAH, have, at least in the past, rendered some of the UAH data sets somewhat suspect, although today I think they’re largely conformant with RSS and surface data set trends.
In any case it’s not hard to imagine that UAH had a strong connection to Remote Sensing, and sure enough in 2011 Roy Spencer and another lunatic managed to get the journal to publish a paper alleging that climate models were flawed and severely overstating future temperature projections – a paper that had almost nothing to do with the journal’s primary focus and which the editorial board was poorly equipped to vet.
And indeed it turned out to be complete denialist crap, so obviously unfit for publication that the journal editor, Wolfgang Wagner, was forced to resign.
It’s actually gratifying how short-lived bad papers tend to be and how quickly they get exposed. The story publicly emerged soon after:
The paper, published in July, was swiftly attacked by scientists in the mainstream of climate research.
They also commented on the fact that the paper was not published in a journal that routinely deals with climate change. Remote Sensing’s core topic is methods for monitoring aspects of the Earth from space.
Publishing in “off-topic” journals is generally frowned on in scientific circles, partly because editors may lack the specialist knowledge and contacts needed to run a thorough peer review process.
So to summarize all of the above:
TL;DR: There is a small amount of bad science infesting climate research. Almost all of the bad science is trying to push the denialist agenda by dishonestly and intentionally distorting facts for political purposes.
It’s even worse than that. The vast majority of Congresspeople, who by and large come from public service, business, and law backgrounds, have essentially no grounding in science, engineering, or statistical methodology, and therefore generally pick technical support based upon how well it agrees with their preconceived notions and positions about policy, rather than letting the technical facts shape policy. In the 114th Congress there are “one physicist, one microbiologist, one chemist, and eight engineers (all in the House, with the exception of one Senator who is an engineer)” and a smattering of other technically-oriented professions represented. Given the increasingly technical nature of legislation and regulatory policy, it would seem appropriate that at least the key committees in science and technology areas such as energy policy, pollution and conservation, space and military technology, et cetera, be led by Congresspeople with a technical background, but this is often not the case, as committee memberships are more often dictated by seniority, party politics, and favoritism.
The Congressional Office of Technology Assessment (OTA) used to provide layman-level summaries on technical issues that were produced by a cadre of technical experts selected by a bipartisan committee. Operating since 1972, it was favorably regarded in the science community for providing nonbiased technical guidance about topics pertaining to legislation and policy that were under consideration. However, many of OTA’s positions on technology development, climate, and especially space development pertaining to the Strategic Defense Initiative were in conflict with the policy goals of the increasingly conservative presence in Congress, and the Newt Gingrich-championed “Contract with America” defunded the office. The Congressional Research Service, while still in existence, has suffered substantial drawdown due to reductions in funding, as Congresspeople in both parties have become more reliant upon internal research staff and privately funded ‘think tanks’ that are largely transparent lobbying and policy advocacy organizations, often funded by industry groups and “charitable” institutions like the Koch family foundations with blatantly partisan agendas. In such cases, ‘crackpot’ journals are often sought out as ostensible support for technically unsupportable positions, and are often specifically published to support policy decisions, in the same fashion that the Tobacco Institute funded and supported the publication of ‘research’ casting doubt on the harms of tobacco smoking.
No amount of peer review is going to change this, because the peer reviewers themselves are selected to facilitate, and perhaps even deliberately bias, the process to produce the desired conclusion. What is needed in this case is an independent body that evaluates science in the public interest without partisanship, e.g. an officially supported version of the Federation of American Scientists. But then, this would not sit well with politicians (on both sides) who want to advocate or promote legislation that appeals to their demographic but is at odds with fact.
The problem of politicians with vested interests is a real one and a very difficult one. But the kind of organization you’re talking about already exists as the National Academies of Sciences, Engineering, and Medicine. Abraham Lincoln established a Congressional charter in 1863 to found the National Academy of Sciences as an independent adviser for the U.S. government on science and technology matters. It’s a private institution but operates under its original Congressional charter and receives 85% of its funding from the federal government. It has frequently been called on by Congress for advice and reports on controversial matters of public policy.
Here’s a datapoint relevant to the thread. John Lott, self-appointed expert on gun violence and gun control, has received rave reviews from politicians like Gingrich, Rubio and Cruz for writings like his ‘study in peer-reviewed journal, Econ Journal Watch, “Explaining a Bias in Recent Studies on Right-to-Carry Laws.”’ Is this an example of pro-NRA lies making it through a “peer-review” process?
Post snipped. (Yeah, this is bringing back a fairly old thread but…)
Reproducibility is the heart of science. Pointing out that there is a problem with reproducibility isn’t attacking science, it is defending science.
The reason I bring this back up is that the BBC just published a story on this. Nature has apparently introduced a reproducibility checklist that will be required for all new papers. Good for Nature for admitting to the problem and taking steps to address it. (Hey wolfpup! Is Nature making a baseless attack on science for admitting that there is a replication problem and taking steps to rectify it?)
Additionally, Nature published a comment by Jeffrey S. Mogil and Malcolm R. Macleod proposing no publication without confirmation. It looks to be an interesting idea.
Nature also published a story on cancer study reproducibility. The upshot is that 2 out of 5 were mostly reproducible, with the other three either not reproduced or with results that were ‘uninterpretable’. Note, not a single study was fully reproducible.
From the BBC article:
and from the same article:
It is encouraging to see Science and other scientists taking the problem seriously.
And it is once again an issue mostly with biomedical research. The points made before were that, while it may seem wasteful, it is with more research that the truth comes out. (Which, paradoxically, is the whole point of science. In reality, one should criticize many groups in the medical field for not investigating whether a single paper, or a few, should lead everyone to conclude that something is the last word in medicine.) It is really a bit different with issues like evolution and climate science, where researchers have been at it for hundreds of years and already have repeated results and confirmations under their belts.
Of course, it should be noticed that, regarding the iffier research like medical or physiological studies, what they recommend was not refuted, nor did anyone (even wolfpup) say it would be a bad idea; the problem is that an effort is still being made to apply that criticism to all of science.
Probably too late in the thread to respond directly to the OP, but personally I’d like to see a governmental agency that simply recreates and tries to reproduce other people’s studies. It would be a great job for new scientists and would put validation in the hands of a disinterested third party. Or, better even, a third party that is dedicated expressly to disproving the things they are given.
With such an organization, I wouldn’t expect anyone to give much credence to a paper until it had gone through replication.
You either were not reading or not comprehending the arguments that were already made long ago in this thread. The BBC story and the Nature story and commentary all deal with the same subject of cancer research and drug efficacy, and it’s one that I already commented on in my post #18 back in August, when I said that your OP consisted of …
… deeply flawed cherry-picked examples of papers in psychology, which is among the least rigorous of all the soft and non-rigorous social sciences, and medical research, which is a very productive yet tremendously complex field especially in notoriously difficult areas like testing cancer drug efficacy in clinical trials.
(emphasis mine)
The area of clinical trials and confirmation of drug efficacy, especially for cancer drugs and certain types of psychotropic drugs, is an area of special challenge for a variety of reasons. It’s well known that over time the Kaplan-Meier curve that indicates survival rates occasionally shows a marked decline in drug efficacy compared to earlier confirmed results, and there’s a hypothesized syndrome generically referred to as the “decline effect”, with considerable controversy as to whether it’s real or due to selection bias or something else.
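As an aside, for anyone unfamiliar with the curve being discussed: a Kaplan-Meier curve is just a step function estimated from observed survival times, dropping at each event and accounting for patients who leave the study early (censoring). Here is a minimal sketch of the standard product-limit estimator in Python; the patient data, variable names, and numbers are entirely made up for illustration and are not from any study mentioned in this thread.

```python
# Minimal Kaplan-Meier (product-limit) estimator sketch.
# All data below are hypothetical, purely for illustration.

def kaplan_meier(durations, observed):
    """Return (event_times, survival_estimates) for right-censored data."""
    at_risk = len(durations)      # subjects still being followed
    survival = 1.0                # S(t) starts at 1 before any events
    times, estimates = [], []
    # Process subjects in time order; at tied times, events before censorings.
    for t, event in sorted(zip(durations, observed), key=lambda p: (p[0], not p[1])):
        if event:
            survival *= 1.0 - 1.0 / at_risk   # S(t) *= (1 - deaths / at-risk)
            times.append(t)
            estimates.append(survival)
        at_risk -= 1              # subject leaves the risk set either way
    return times, estimates

# Hypothetical follow-up times in months; False means the patient was censored.
months = [2, 3, 3, 5, 6, 8, 8, 12, 14, 15]
event  = [True, True, False, True, True, False, True, True, False, True]
for t, s in zip(*kaplan_meier(months, event)):
    print(f"month {t:>2}: estimated survival {s:.2f}")
```

The point of plotting that step function over time is exactly what makes the “decline effect” debate possible: two trials of the same drug can produce noticeably different curves, and the argument is over whether that reflects a real effect, selection bias, or chance.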
It is, however, utterly wrong and misguided to use this sort of special-case esoterica as a generalized attack on science and the scientific method. Arguments about whether or not some particular results have been independently replicated are well and good if the results are novel and untested, but it becomes just ignorant nonsense when it’s used as an argument against fundamental truths that have been artificially hyped-up to be supposedly “controversial” just because some find them politically inconvenient, like biological evolution or anthropogenic climate change. This is not a matter of a paper or two that needs confirmation – these are well established fields of science in their own right, among the most robust and well-tested of any. The latest set of IPCC reports on climate change are supported by literally thousands of peer-reviewed papers building on tens of thousands that came before them. The very few that contradict them are the ones that should be suspect, and indeed as I showed in #28, these outliers invariably turn out to be poor work if not outright intentional fraud.
In short, the problem with your assertions is that they unjustifiably encourage what New Scientist has aptly described as the decline of science and reason in the American public perception, a dangerous retreat from reason that is distinctly counter to the public interest, and which leads to such assertions as global warming being a “hoax” or vaccines being hazards inflicted on the public by some secret conspiracy. No one is advocating blind faith in every new paper and every new scientific claim, but we do need to know when to back off challenging well established truths.
Pointing out problems with how science is currently practiced is not “encouraging the decline in science and reason,” any more than criticizing the President means you are advocating anarchy.
Unfortunately, what it does is falsely exacerbate an already problematic public perception. As I’ve indicated here with a number of cites and quotes throughout this thread, there is indisputably a real, tangible, and dangerous problem with regard to the public perception of science in America and the unjustified lack of credibility of important scientific positions in the eyes of many uninformed cynics. To then take what is essentially internal scientific self-criticism on highly focused minutiae and drag it out on public display without proper qualification skews the public perception and is a disservice to the facts.
Whether intentional or not, it’s the old debating tactic of invalid generalization, commonly known as cherry-picking, so often used in climate change arguments: for instance, cite a valid study showing mass increases in certain bodies of ice, and neglect to mention that there are specific reasons for the anomaly and the vast preponderance of polar ice is losing mass.
An even closer analogy is when, some years ago, in response to political pressure from conservative groups against the IPCC, the Interacademy Council (IAC) conducted a review of IPCC procedures and methodologies. The IAC report was highly positive, and in its press release it lauded an organization “[whose] assessment reports have gained IPCC much respect including a share of the 2007 Nobel Peace Prize. However, amid an increasingly intense public debate about the science of climate change and costs of curbing it, IPCC has come under closer scrutiny, and controversies have erupted over its perceived impartiality toward climate policy and the accuracy of its reports.” The IAC consequently made recommendations for strengthening the management structure and improving the review process which, although remarkably thorough and successful overall, had allowed several minor errors to creep in, involving the less rigorous Working Group 2 report on adaptation and vulnerabilities. Andrew Weaver, a Canadian climatologist who has worked with the IPCC extensively, reflected the view of many scientists when he characterized the report as “solid recommendations that people would agree with”.
This did not, however, prevent the usual denialist organizations from fabricating complete fantasies about it, treating the mere fact that a review report existed at all as, by implication, a damning condemnation of the whole of the IPCC and everything it ever wrote; one such site went so far as to publish the delusional headline “IAC slams IPCC process, suggests removal of top officials”.
And this is why focused constructive criticism, taken out of context either by those who don’t understand it or by those with hidden agendas, can be damaging and harmful to the public interest and the advancement of science.
I’m not sure whether the problem is with studies themselves or with the way most of us learn about them.
There are all kinds of “facts” and “studies” we learn about through brief snippets on the evening news, or from jokes in late-night TV monologues. Do we usually do any follow-up reading on our own? No.
Almost everyone can tell you “Men reach their sexual peak at 16 and women reach it at 40.” Is that true? I dunno. Where did I learn that? I don’t remember- some “study.”
And there are many “facts” like that, which we learned from some “study” we didn’t actually read and whose veracity we don’t know.
Part of the problem is, a scientific report may be complicated and nuanced. TV news, however, relies on brief summaries and sound bites. So, we may get very simplistic summaries of reports that DON’T reach simple conclusions.