What do mathematicians do if a proof turns out to be wrong?

Let’s say there’s some fundamental mathematical theorem which many other theorems depend on through numerous levels of indirection.

What would mathematicians do if it turned out to be wrong? How would they untangle that huge twisted web? Has it happened in the past?

By “wrong”, I mean that all the proof-checkers missed a case or some subtlety, long after the theorem was ostensibly confirmed.

Publish a paper pointing it out.

BTW, Proof Markup Language (http://inference-web.org/2007/primer/) is attempting to formalize the machine-readable representation of proofs, including tracking of dependencies in the semantic web. Eventually, something like this could be used to ‘untangle the web.’ Until that promise is realized, there would probably be a lot of manual work, tracking down the problems and fixing them…

In reality, it would be very unlikely that an incorrect theorem would persist to that extent.
Mathematicians love to find and report problems with others’ work; someone would almost certainly notice it, either in the formal review process or after reading the published paper.

Famous examples are two early proofs of the Four Color Theorem, each of which stood for eleven years before its mistake was discovered. Later the theorem was proved for real.

If I remember correctly from my days in college (specifically an Abstract Algebra class), when a proof was wrong, you’d get marked down on your assignment. Then, the professor would lose your midterm score and claim you didn’t take it, so you had to rush home, find the scored test in a stack of papers from the term, rush back, and show him the score, and then he’d say, “oh. Hm”. If you had enough of the proofs wrong, I suppose you had to take the final. I didn’t have to take the final.

Well, back in 1901 there was Russell’s paradox, which exposed a flaw in the foundations of mathematics (set theory) as it had been developed to that time. What happened was that Russell and other mathematicians redeveloped those foundations on a more logically consistent basis. So it has happened in the past, and mathematics survived.

Any proof that can go any length of time without someone noticing a mistake is almost certainly very complex. Something that complex won’t become a fundamental theorem for a long time because of the learning curve. In the end, anything that becomes a fundamental theorem is examined so long and so closely that the likelihood of it being wrong is effectively zero.

No. Things like that formal language already exist (and have done since de Bruijn’s work on AUTOMATH back in the 1960s): Isabelle and other proof assistants have massive libraries of formalised mathematics, and you can work out what proof depends on what lemma using that (and even display the web graphically).

For instance, here’s a graphical representation of the theory dependencies of Isabelle’s vector calculus development. Here’s an Isabelle theory file developing properties of the inner product.
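
To give a feel for how this works, here is a minimal sketch in Lean 4 (the theorem names are made up for illustration): a “fundamental” lemma and a theorem whose proof cites it. The checker records the dependency automatically, so if the lemma were ever retracted, re-checking the library would immediately flag everything downstream.

```lean
-- Hypothetical "fundamental" lemma (here just a restatement of a
-- standard fact from Lean's core library).
theorem my_lemma (n : Nat) : n + 0 = n := Nat.add_zero n

-- A downstream theorem whose proof uses my_lemma. The kernel stores
-- this dependency, so my_lemma cannot be removed or weakened without
-- the proof of downstream failing to re-check.
theorem downstream (n : Nat) : (n + 0) + 0 = n := by
  rw [my_lemma, my_lemma]

-- #print downstream  -- displays the proof term, which mentions my_lemma
```

Isabelle does the same bookkeeping; the point is that the dependency web the earlier poster wants is a by-product of formalisation rather than something that has to be reconstructed by hand.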

It should be noted that, whilst these tools have been around for nearly 50 years, mathematicians hardly use them: they’re far more popular with computer scientists.

If the author discovers it (that is fairly common), he will publish an erratum. It happens. More often there is a gap and you fill it. If another person discovers it, he will generally notify the author and the latter will publish a correction. Or you can just publish it yourself. I have a paper titled “On an error of X”, where X was a world-famous philosopher-logician. I didn’t even discover it, but the original claim was so badly described that it was virtually impossible to work out what was intended. On one interpretation, the claim was wrong, and the person who discovered it showed me a counter-example. On another interpretation, the claim was correct, but the argument given was so bad it was fatuous. The proof I gave was fairly complicated.

Here is a much more interesting incident, much closer to the OP. There was a “theorem” published in, I think, the forties. IIRC, it was called Hamburger’s theorem, but I could be mis-remembering. It had been used extensively, so I was told. About 25 years ago, a colleague of mine who was an expert in the area got a letter from a student in Denmark saying that he had been unable to follow the original proof and, in attempting to find a proof, had found a counter-example! My colleague read it carefully and agreed with the student. The student ended up doing a PhD with my colleague and is now a professor in Copenhagen.

I do not know what happened to the people who, over the years, used it without, evidently, verifying the proof.

The guy who proved Fermat’s Last Theorem (Andrew Wiles) made a mistake that wasn’t spotted until after he had presented the proof.
He later corrected it and got it right.

Quoth Hari Seldon:

In fact, most of the Prime Radiant is composed of corrections by latter-day psychohistorians, and you have to find and correct an error to be allowed to become a Speaker.

(sorry, I had to)

I think the OP is looking for examples of where a proof is truly wrong and other things were based on it, not proofs that were later repaired and shown to be correct.

Even if a theorem as stated is incorrect, it’s often still true once the hypotheses are strengthened (for instance, a claim asserted for all functions might still hold for all continuous ones). And those stronger hypotheses may still encompass the useful applications of the theorem.

Mathematicians distinguish between an error and a gap. An error is a step that is actually incorrect, while a gap is a failure to spell out all the details (though with enough given that a competent mathematician can fill in the holes). Wiles left a gap that took him a year, and the help of a former student, Richard Taylor, to fill. He and Taylor published a separate paper establishing the results needed to fill the gap. I sometimes wonder why it isn’t called the Taylor-Wiles theorem (or even Wiles-Taylor, although mathematicians almost always do these things alphabetically). It is true that a much greater part of the work was Wiles’s, but then a very important piece was done by Ken Ribet and he gets no credit. Generally, the person who fills in the last detail gets the credit, and here that was Taylor and Wiles together.

Right after the original argument was made public (it was never published), a number theorist I know remarked that he believed it because Wiles had never published an error. Incidentally, gaps and errors in papers are incredibly common. I have had many gaps published, but (as far as I know) no errors; maybe no one has ever read any of my papers. But I knew well one incredibly powerful mathematician, now deceased, of whom we would say that every paper began with the correction of the errors in his previous papers, an assertion he never denied (although it was certainly an exaggeration). And my thesis advisor’s first published paper contained a serious error, and he never got over it.

LOL

An interesting case is that of Louis de Branges and his proof of the Bieberbach Conjecture. De Branges announced that he had a proof of this theorem, which is quite important. A lot of mathematicians were reluctant to even start reading the paper he had written, which was quite long and which seemed to have some errors in it. De Branges is no nutcase. He’s a professor at Purdue, but he has a habit of announcing proofs of important theorems which, after long study by other mathematicians, turn out to be mistaken. Finally he persuaded a group of mathematicians to study his proof. They worked through it, fixed the mistakes in it, and decided that it was correct. Only after that point were other mathematicians willing to study it, and now it’s completely accepted.

This is reasonably common. A mathematician announces that he has a proof of a theorem. Other mathematicians work their way through the proof. Sometimes they have to fix small problems. Sometimes they find the proof is just wrong. Sometimes they find that slightly different conditions are necessary for the theorem to hold, so the theorem is different from what was originally supposed. Sometimes they find that the theorem has been correctly proved.

Yes, thank you. Why is everyone giving me examples of proofs which were incomplete and then filled in? I’m talking about disasters where huge amounts of literature are based on a result which turns out to be completely false to the point where many, many papers become worthless.

It looks like the closest answer I’ve gotten in this thread is Russell’s Paradox.

Because those sorts of disasters don’t happen. Even Russell’s Paradox didn’t affect anything outside of mathematical logic, and it didn’t affect much in there either.

Actually, if you go back far enough, you can find disasters like the OP is asking about. Back in the days of the Pythagoreans, a lot of math was based on the assumption (not theorem, but folks didn’t care about that as much back then) that all numbers were rational, and it came tumbling down when a rogue Pythagorean proved that the square root of 2 wasn’t, in fact, rational. From what I understand, it wasn’t until Euclid’s work with similar triangles that the mathematics of proportion could be rebuilt. Note that Euclid did not prove that the assumption of the Pythagoreans was correct after all (the way the Four-Color Theorem, after its flawed early proofs, was eventually given a correct one), but instead provided a completely different framework for working with the same kinds of problems.
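
For what it’s worth, the argument that did the damage fits in a few lines. A sketch of the classic proof, in modern notation: suppose

$$
\sqrt{2} = \frac{p}{q}\ \text{(in lowest terms)} \;\Longrightarrow\; p^2 = 2q^2 \;\Longrightarrow\; p\ \text{is even, say}\ p = 2k \;\Longrightarrow\; q^2 = 2k^2,
$$

so $q$ is even as well, contradicting the assumption that $p/q$ was in lowest terms. Hence $\sqrt{2}$ cannot be rational.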

But that’s not really an error in a proof (or is it?). It’s more like the discovery of a contradiction within a mathematical system.

And Gödel demonstrated that once you go beyond systems as simple as Boolean logic, you can never fully rule such surprises out: any consistent system rich enough to include arithmetic contains statements it can neither prove nor disprove, and cannot prove its own consistency.

Source: Logicomix
Disclaimer: I understand basically none of the preceding. Comments welcome.