What do mathematicians do if a proof turns out to be wrong?

Contradictions within mathematical systems are not inevitable, and in fact are to be avoided like the plague, since a single contradiction anywhere at all in a mathematical system sends the whole edifice tumbling to the ground. What is inevitable for any sufficiently advanced mathematical system (“sufficiently advanced” being well past Boolean logic, incidentally) is that either it will have contradictions or it will be incomplete. Since contradictions are so bad, mathematicians nowadays just accept that their systems will be incomplete, which is to say, there will be some statements within the system which can neither be proven nor disproven.
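(Why is a single contradiction that fatal? Because of the principle of explosion: in classical logic, and in most systems mathematicians actually use, a system that proves both P and ¬P proves everything. A two-step sketch:

1. P and ¬P (both provable, by assumption)
2. P ∨ Q (from P, by ∨-introduction, for any Q whatsoever)
3. Q (from P ∨ Q and ¬P, by disjunctive syllogism)

So one contradiction doesn’t stay local; it makes every statement, and its negation, “provable”.)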

What would be an example of math built upon this assumption which came tumbling down?

Well, Russell’s paradox was interesting because: a) it caused a major crisis in the foundations of mathematics, showing that Cantor’s naive set theory was paradox-laden; b) it almost destroyed one man’s life’s work (Frege’s): he got Russell’s letter just as his book was being sent to press; and c) it spurred an awful lot of research in mathematics, logic, and computer science. In fact, you could say many of the tools computer scientists take for granted today came directly from attempts to find ways around Russell’s paradox, most notably the lambda calculus (which was itself paradox-laden at first) and type theory!
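(For reference, the paradox itself fits on one line. Naive set theory lets you form the set of all sets that are not members of themselves,

R = {x : x ∉ x},

and asking whether R is a member of itself then gives R ∈ R ⟺ R ∉ R: a contradiction whichever way you answer.)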

I don’t think Goedel’s theorems imply you’re necessarily going to get contradictions in systems able to encode Peano arithmetic; rather, you can’t prove the consistency of such a system within the system itself (though you can in stronger systems).
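(Stated a bit more carefully: if T is a consistent, computably axiomatized theory able to encode Peano arithmetic, then T ⊬ Con(T), where Con(T) is the arithmetical sentence asserting T’s consistency. A stronger system may still prove it; for instance, ZFC ⊢ Con(PA).)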

I would describe both the discovery of the irrationality of some numbers and the discovery of Russell’s Paradox as instances where, early in the formulation of a mathematical subject (Greek mathematics in the first case, mathematical logic in the second), it was found that the assumptions necessary to correctly formulate the subject were different from what had been supposed before. Those assumptions were quickly fixed, and the subject went forward towards further development.

Yes, the Frege example is a fine one for how hidden problems in early assumptions don’t bring everything crashing down. Even despite Russell’s paradox, Frege’s life’s work didn’t actually come tottering down into dust, did it? We still remember him and his work, salvaged all the fundamental ideas from it into modern higher-order logic, and so on.

Everything Chronos said in response to this is correct, suitably interpreted. But I think it is most enlightening (for understanding what exactly Goedel demonstrated) to actually see Goedel’s argument. It’s very simple.

Suppose you have a proof system that can be modelled as follows: We’re interested in propositions that assert some property of or relation between some thing(s) [e.g., “5 is prime”, “3 is a factor of 10” (hey, they don’t have to be true), “John is gay”, “Tom is Sarah’s father”]. Thus, there will be things called “objects” [what we may posit properties of or relations between], of which certain special ones are “(n-ary) predicates” [thought of as specifying a property or relation that might hold of or between n many objects; e.g., “[Object[sub]1[/sub]] is prime”, “[Object[sub]1[/sub]] is [Object[sub]2[/sub]]'s father”]. Given an n-ary predicate and n many objects, there are two questions one can ask about the proposition that this predicate holds of these objects: is it provable? is it disprovable? (within this proof system). If these two questions always have opposite answers, we say the system is consistent and complete; otherwise, if they sometimes simultaneously come up “yes”, we call the system inconsistent, and if they sometimes simultaneously come up “no”, we call the system incomplete.
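Here’s a minimal sketch of this setup in Haskell, just to pin the definitions down. Everything in it (the Obj/Prop/System names, the toy “IsPrime” system, and its arbitrary cutoff at 10) is my own invention for illustration, not anything from Goedel:

[code]
data Obj  = O Int deriving (Eq, Show)

-- A proposition: some named 1-ary predicate applied to an object.
data Prop = Holds String Obj deriving (Eq, Show)

-- A proof system, modelled purely by which propositions it can
-- prove and which it can disprove.
data System = System
  { provable    :: Prop -> Bool
  , disprovable :: Prop -> Bool
  }

-- The two health checks from the post, over a finite list of propositions.
consistent, complete :: System -> [Prop] -> Bool
consistent sys = all (\p -> not (provable sys p && disprovable sys p))
complete   sys = all (\p -> provable sys p || disprovable sys p)

-- A toy system that can settle primality only for objects below 10.
toySystem :: System
toySystem = System { provable = pr, disprovable = dis }
  where
    pr  (Holds "IsPrime" (O n)) = n < 10 && isPrime n
    pr  _                       = False
    dis (Holds "IsPrime" (O n)) = n < 10 && not (isPrime n)
    dis _                       = False
    isPrime n = n > 1 && all (\d -> n `mod` d /= 0) [2 .. n - 1]

main :: IO ()
main = do
  let props = [Holds "IsPrime" (O n) | n <- [2 .. 15]]
  print (consistent toySystem props)  -- True: never both yes and yes
  print (complete   toySystem props)  -- False: it is silent on 10..15
[/code]

Running it prints True then False: the toy system never contradicts itself, but it has no opinion about anything past 9, so over that list it is consistent yet incomplete.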

With me so far? Good. One more bit of linguistic structure: We can make new predicates out of old ones by argument-shuffling/contraction/weakening [what I’d call “swizzling”]. That is, from any n-ary predicate, which I’ll notate P(x[sub]1[/sub], …, x[sub]n[/sub]), and any function s from {1, …, n} to {1, …, m}, one can form a new predicate, which I’ll notate \y[sub]1[/sub], …, y[sub]m[/sub] → P(y[sub]s(1)[/sub], …, y[sub]s(n)[/sub]), with the property that this new predicate is provable/disprovable of any Object[sub]1[/sub], …, Object[sub]m[/sub] just in case P is respectively provable/disprovable of Object[sub]s(1)[/sub], …, Object[sub]s(n)[/sub]. (We don’t quite need the full power here of letting s be an arbitrary function, but it seems natural to take it). I’ve perhaps expressed this very confusingly, but all it means is that, thinking of a predicate as having various labelled argument slots within it, one can go through and relabel those slots however one likes to obtain a corresponding new predicate (perhaps even one accepting a different number of argument objects, as there’s no requirement in the relabelling that each new label be used exactly once). For example, if IsBossOf(x, y) is a 2-ary predicate stating that its first argument is the boss of its second argument, then \x, y → IsBossOf(y, x) is a 2-ary predicate stating that its second argument is the boss of its first argument, \x, y, z → IsBossOf(x, z) is a 3-ary predicate stating that its first argument is the boss of its third argument, and \x → IsBossOf(x, x) is a 1-ary predicate stating that its argument is its own boss.
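And here is the swizzling operation itself as a few lines of Haskell, with predicates modelled as plain Boolean functions on argument lists just to show the slot-relabelling mechanics (the provable/disprovable version is the same shuffle). The names swizzle and isBossOf, the tiny made-up company, and the encoding of s as a list of indices are all mine:

[code]
type Obj  = String
type Pred = [Obj] -> Bool  -- an n-ary predicate, fed its n arguments as a list

-- swizzle s p: from an n-ary p and s = [s(1), ..., s(n)] (1-based indices
-- into the new argument list), build \y1, ..., ym -> p(y_s(1), ..., y_s(n)).
swizzle :: [Int] -> Pred -> Pred
swizzle s p ys = p [ys !! (i - 1) | i <- s]

-- The post's example relation, over a tiny made-up company.
isBossOf :: Pred
isBossOf [x, y] = (x, y) `elem` [("alice", "bob"), ("carol", "carol")]
isBossOf _      = False

main :: IO ()
main = do
  print (swizzle [2, 1] isBossOf ["bob", "alice"])      -- \x, y -> IsBossOf(y, x)
  print (swizzle [1, 3] isBossOf ["alice", "-", "bob"]) -- \x, y, z -> IsBossOf(x, z)
  print (swizzle [1, 1] isBossOf ["carol"])             -- \x -> IsBossOf(x, x)
[/code]

All three calls print True: each is just isBossOf consulted with its argument slots relabelled.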

Alright, we’re almost ready to prove the incompleteness theorem. We just need one last key ingredient: We want our language of propositions to be capable of expressing statements about the behavior of this very proof system itself*. To this end, let’s suppose that, for each n, there’s an (n+1)-ary predicate Disprovable[sub]n[/sub] which expresses the behavior of disprovability, in the sense that, for every n-ary predicate P, Disprovable[sub]n[/sub] is provable of P and any Object[sub]1[/sub], …, Object[sub]n[/sub] just in case P is itself disprovable of those objects. (We could also symmetrically suppose predicates expressing the behavior of provability, but it’s unnecessary for our purposes.)

Ok, now what? Well, now consider the predicate (\x → Disprovable[sub]1[/sub](x, x)) [Disprovable[sub]1[/sub] being the 2-ary one, with both of its slots here relabelled to a single slot]. It is provable of a predicate just in case that predicate is disprovable of itself. In particular, it is provable of itself just in case it is disprovable of itself. Accordingly, the system [that is, any one which can be modelled in line with this framework] cannot be both consistent and complete (for precisely the same reason that if you asked someone “Will you answer ‘No’ to this question?”, they can’t give a correct yes/no answer). Q.E.D.
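To spell that last step out (writing D for the diagonal predicate, and using only the two properties assumed above):

D = \x → Disprovable[sub]1[/sub](x, x) [1-ary, built from the 2-ary Disprovable[sub]1[/sub] by the swizzling s(1) = s(2) = 1]

1. D is provable of D ⟺ Disprovable[sub]1[/sub] is provable of (D, D) [by the swizzling property]
2. Disprovable[sub]1[/sub] is provable of (D, D) ⟺ D is disprovable of D [by the defining property of Disprovable[sub]1[/sub], with D sitting in the object slot]
3. So D is provable of itself ⟺ D is disprovable of itself. If both hold, the system is inconsistent; if neither does, it is incomplete.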

Well, I’m not sure if, in the end, this was too confusingly expressed to be helpful, but now that I’ve typed it up, I’m not going to delete it. At any rate, if you stare at it long enough, it should help you at least somewhat sharpen your sense of what Goedel’s first incompleteness theorem does, and, perhaps even more importantly, doesn’t imply. [I’ve left out discussion of Goedel’s second incompleteness theorem/Loeb’s theorem, which is perhaps the more logically/philosophically interesting one, but I’d like to clean this presentation up before delving into it.]

*: This is what people call being “sufficiently advanced” or “strong enough” or what have you, though I don’t care for this phrasing, as it’s really a two-way balancing act between the expressivity of the language and how simple/easy to express the (dis)proof system is. (Granted, the “strong enough” talk is usually used along with an assumption that the proof system be describable by a computer program (so that, for the balancing, it suffices to require the language be “expressive enough” to express the behavior of computable functions), but this is awfully limiting and somewhat of a red herring in comparison to how general the Goedelian phenomenon really is.)

I still think that the closest you will come to an answer to your question is that theorem of Hamberger’s I described above.

In general, a false result doesn’t get very far before someone uses it and finds a contradiction, then goes back and examines the original proof and finds an error. This happens reasonably often and is one of the things that gives me and other mathematicians confidence that mathematics is consistent.

De Branges has also claimed a proof of the Riemann hypothesis, but no one believes that either, including the people who went through his proof of the Bieberbach conjecture, one of whom I know quite well.

But yes, sometimes rather sketchy proofs that require a lot of work are published. There is a very important result in number theory that was published in no less a journal than the Annals of Mathematics, probably the most prestigious math journal in the world. They were not able to get it refereed; no one could follow it. They finally threw up their hands and printed it anyway, as it was too important if true. To this day, fifty years later, no one knows if that argument is correct, but the result was eventually proved elsewhere.

Could you give us the name and author of the theorem and the journal and issue in which it was published, along with the journal, issue, and author of the other proof? I’d like to know this if this question ever comes up again.

That’s an impressive bit of coding btw.

Rather than continuing the hijack, I redirected discussion of Godel’s Incompleteness theorem to Godel: are all undecidable props in consistent math systems subsets of self-referent statements?