Do scientists often admit they were wrong?

Whether it’s in an article, an experiment/observation report or in public, do scientists* often admit: “yeah, I believed A was true but it turns out it’s false/there isn’t enough data to support this belief”?

*Here, we’ll limit ourselves to hard sciences like physics, chemistry, biology, geology, zoology and similar sciences.

Of course. What makes you think they wouldn’t?

People often have an emotional attachment to their ideas, especially if they’ve put a lot of effort into forming them, even if they have the best of intentions. Sometimes their reputation and income can be negatively affected if they were wrong. It can simply be humbling to admit you were wrong. Confirmation bias can also set in.

I ask the question because I was just thinking that one of the reasons science is more often right than other knowledge systems is because scientists are more willing to admit they’re wrong; being willing to admit you’re wrong makes you more likely to be right, on average. Then I realized that this premise that scientists are more likely to admit they’re wrong was just a supposition on my part.

What often happens, though, is that a scientist says something very subtle, very tentative, and very careful.
But then the press and the public turn it into a Sweeping Soundbite Statement (SSS). And when this SSS turns out to be not quite right, who should come out and say so? The scientist, the press, or everybody who perpetuated that SSS at parties and on messageboards?

Individual scientists can certainly fall prey to this, but peer review is a cornerstone of the scientific method, and guards against this.

It happens, but ultimately, it doesn’t matter much if individual scientists cling to their pet theories in the face of evidence to the contrary - because demonstrating that it’s wrong is part of the path to (amongst other rewards) reputation and income for other scientists.

Yep. That’s why all those scientists working to prove the link between smoking and cancer were so well rewarded and have such a high reputation. Whereas those scientists who were clinging to their pet theories were all paupers struggling for research grants.

Seriously, while what you say might be an *ideal* in science, anybody who has worked as an actual research scientist knows that it is a load of hooey. Scientists stand to gain the most by saying what their sponsors want. It doesn’t matter whether the sponsor is government or private industry: if you don’t toe the company line, the odds of being rewarded are slim regardless of the quality of your evidence. And conversely, if you can garner a reputation for always saying what the sponsors want to hear, your odds of being rewarded increase dramatically.

That’s not to say that all research positions are funded via a biased selection process, but nobody with any experience would be naive enough to suggest that someone working for, for example, Greenpeace, who found evidence that humanity should be harpooning as many whales as possible, would have a job next week if he adopted that as his official position.

While many organisations have various measures in place that are supposed to ensure that the positions of their scientists can be expressed in a totally impartial manner, I have yet to see or hear of a real world example of this. In contrast I personally know several scientists and have heard of many others who have seen their careers limited or even lost their positions because they changed their views, based upon evidence.

Getting back to the OP: yes, scientists sometimes admit they were wrong, but for a discipline supposedly based upon evidence it is astonishingly rare. What generally happens in science is that there needs to be a generational change to see a change in consensus, regardless of the evidence. The Old Guard will cling to their pet theories come hell or high water. Most of them do an Einstein, and die trying to falsify the more evidence-based theories, never actually accepting them.

If you pick up any issue of any major peer-reviewed journal, there’s a pretty good chance you’ll find a “retractions” section, where scientists announce that they would like to retract a paper they had published. This is usually due to some mistake, or, more damagingly, deceit, having come to light. Depending on the cause of the retraction, this can be hugely damaging to the scientist’s career.

Peer review actually makes the problem worse. It makes it easier to bottle up results that disagree with consensus views. The difference in science is that results get around even if they aren’t published in peer reviewed journals. Once other scientists perform experiments supporting a new paradigm, then the pressure builds to change.

Radically different hypotheses take years, even decades to be widely accepted in the scientific community. Some ideas, like Darwin’s, had to wait for senior biologists to die or retire to gain wide acceptance. In the real world, scientists are not Vulcans and are just as stubborn about surrendering preconceived views as other people. Some scientists will easily adjust to new paradigms, but they are the exception rather than the rule.

Peer review bears roughly the same relation to science as spell checking does to literature. The important part of science is replicating results. If you don’t publish enough data to allow other scientists to replicate your results, then you aren’t engaged in science.

Why Most Published Research Findings Are False
http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124
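For a rough sense of the paper’s central point, here’s a back-of-the-envelope sketch (my own illustration, not the paper’s exact model or numbers): if you know the significance level, the statistical power, and the prior probability that a tested hypothesis is true, you can estimate what fraction of “significant” findings are actually real.

```python
def ppv(prior, alpha=0.05, power=0.8):
    """Positive predictive value: the fraction of 'significant'
    results that reflect a true effect, given the prior probability
    that the tested hypothesis is true."""
    true_pos = power * prior          # real effects correctly detected
    false_pos = alpha * (1 - prior)   # null effects flagged by chance
    return true_pos / (true_pos + false_pos)

# If 1 in 10 tested hypotheses is actually true, most significant
# findings still hold up... barely:
print(round(ppv(0.10), 2))  # 0.64

# But for long-shot, paradigm-overturning hypotheses (say 1 in 100 true),
# most "positive" results are false positives:
print(round(ppv(0.01), 2))  # 0.14
```

The takeaway: the lower the prior odds that a hypothesis is true, the more likely a published positive result is a fluke, which is one reason skepticism about wild results is rational.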

So how did we get where we are now? Science sounds useless as described here.

Apart from self-preservation issues (i.e. you may lose your funding if you espouse a conclusion that is unflattering to your sponsors), irrationality does make it difficult in some cases to admit that someone else is correct and you are wrong.

One of the most famous cases involved Ignaz Semmelweis. His ground-breaking scientific conclusion? “Doctors should wash their hands before examining/treating patients.” It’s sad to think how many pregnant women may have died because doctors refused to admit that he might be correct.

A second famous case involved Robert Goddard, “the father of modern rocketry.” In the 1920s, several publications printed editorials proclaiming his research and claims regarding rocket propulsion to be so much folly. Nearly 50 years later, a day after the launch of Apollo 11, the New York Times published a correction, finally admitting that Newton and Goddard had both been right all along. :smiley:

I just want to point out that there’s a big difference between stubbornly and blindly clinging to pet theories even in the case of overwhelming evidence against them, and being very cautious about accepting results that are contrary to long-established and functioning theories.

Remember, as the previous poster pointed out, most published results are false. Results that purport to overturn a well-established and working paradigm are even more likely to be false (due to error, chance, overlooked particular condition, etc.). For every Darwin, there are a hundred cold fusion guys. So skepticism (to a point) about wild results is in fact rational.

Scientists are admitting they’re wrong all the time. How do you think any of these constantly emerging new hypotheses ever become established in opposition to old hypotheses, if scientists are incapable of admitting error?

And IMHO, the reason that scientists generally don’t mind admitting they were wrong is not that they’re such exceptionally unfailingly rational and truthful people (which many of them aren’t), but simply that they tend not to believe any recently-established scientific theories all that deeply in the first place.

If a hypothesis is good enough to be currently accepted by most mainstream scientists, while simultaneously being new enough or untested enough that it’s still worthwhile to design research projects to investigate it, the vast majority of mainstream scientists are going to be aware that it’s still rather tentative. And consequently, they tend not to invest a lot of energy or emotional attachment into saying “This MUST be true.”
Yes, very old and firmly-established theories (e.g., Newtonian classical mechanics) will attract a lot of loyalty—and those are the kinds of theories that in fact are very seldom overturned. And sure, it sometimes happens that a majority of scientists in a field will cling to an obsolete theory due to intellectual bias or personal motivations or some other reason. But those situations are pretty rare: in a few hundred years of rapidly proliferating scientific research, there are only about ten or twenty cases where most scientists just weren’t able to change their minds until long after they should have.

Likewise, scientists can become overly attached to their own pet hypotheses or hunches, but most of them AFAICT are just not very attached at all to the currently prevailing “best guess” hypotheses that shape leading-edge research in their field. They have seen a lot of currently prevailing “best guess” hypotheses come and go.

In short, scientists generally don’t make a big deal about giving up the belief that A is true because for most values of A, they didn’t make a big deal about embracing that belief in the first place.

Richard Dawkins tells a story in one of his books that he says stuck with him for his whole life: while he was studying at university he attended a talk by a distinguished professor on a fairly niche biological theory the professor had developed. After giving the talk he invited questions, and one of the students asked a detailed question presenting a strong critique of the theory. The professor looked at the board in complete silence for a few minutes, and eventually said, “You’re right, this theory as it stands is not correct.” Dawkins says the moment moved him to tears.

I want to remember more of the details, but I can’t remember which book it is in, and obviously trying to search for anything regarding Richard Dawkins online only ever comes up with that one topic. It’s a nice story though.

Incidentally though, my impression is that scientists are very reluctant to admit they were wrong. They might admit it over small things, but if it’s a theory they’ve been working on and promoting for 10 years, it’s going to take a hell of a lot to convince them it’s wrong. They’re only human.

The reason new hypotheses win out over old ones is that the scientific field as a whole starts to accept a different theory. My impression is that it works more like this: the guy who invented Theory One will still be arguing it’s correct even as the rest of the field moves on and accepts Theory Two. Then the champion of Theory Two will keep arguing it’s correct even as the rest of the field moves on and accepts Theory Three, and so on.

It tends to balance out in the end. Older scientists often do tend to cling to old theories (certainly not always, just more often than younger ones with less experience and less personal stake). But no one lasts forever, and sooner or later a theory which works and gets results tends to take over.

Things are harder when the issue is not one readily subject to testing. It took a long time for the theory of plate tectonics to gain acceptance, because most scientists considered it ludicrous and flat-out harassed the theory’s creator, Alfred Wegener, despite the fact that he got it pretty close. It took fifty years to demonstrate good evidence for the theory, and that was only because he did get a few good disciples.

Or for that matter, Troy. Well-established archeologists often considered the Iliad and the Odyssey little more than myth, if not a flat-out literary invention. Heinrich Schliemann was (to them) a crackpot with money who wanted to dig in the dirt. Lo and behold, the man goes down to Turkey, accurately locates the appropriate site, and digs up a pile of ancient treasure, including enough evidence to support the view that ol’ Homer really was describing a real war (with some literary improvement). While he made mistakes and wasn’t particularly professional about it, Schliemann did find what he said he would. Berlin’s archeological society was really, really angry about this and maintained it was all a lie for a very long time.*

*When they finally came around, Kaiser Wilhelm himself held a banquet naming Schliemann a Berlin citizen, a rare honor, to help persuade him to leave part of his collection to Berlin museums. Among other things, the menu had the bear symbol of Berlin tamely sitting at Schliemann’s feet.

However, we also have to consider that in these cases, a real and convincing demonstration was both possible and practical with available technology. This is always going to be easier in hard sciences than in the soft ones. Even today, people are still having fundamental arguments over Linguistics, Economics, and Psychology. Fortunately, there is more room for disagreement in those fields.

I was mostly referencing that paper as a rebuttal to the person who thought peer review was magic. In reality, passing peer review often just means that your paper doesn’t contradict any published results by the reviewers.

Science is about coming up with ideas and trying to figure ways to test them. So you might say science consists of those theories we haven’t managed to falsify yet. However science also has a huge body of knowledge about things we know aren’t true.

I think this is an important distinction. When I read the OP, I thought of scientists admitting they’re wrong about “small things”, which happens constantly. I’m thinking about a scientist doing a study to test one or more hypotheses, and when the data are examined, at least one of the hypotheses is not supported. I’ve been working in research for about 30 years now, and I would say that the scenario described above has occurred in more than half of the studies I’ve been involved with. We always indicate which hypotheses are supported and which are not.

Now, there is a publication bias such that negative findings are less likely to get published than positive ones. So a scientist’s admission that he or she was wrong may not always end up in print.

(One of the most famous experiments in history, however, was a negative result.)

Given the variety of viewpoints presented here, this is probably better suited to GD than GQ.

Colibri
General Questions Moderator

Not useless, just slower than need be.