Gödel's Incompleteness Theorem

I’ve been given this brief description of Gödel’s Incompleteness Theorem:

“But if I were to give the theorem in a nutshell I would say that if we have a list of axioms which we can enumerate with a computer, and these axioms are sufficient to develop the basic laws of arithmetics, then our list of axioms cannot be both consistent and complete.”

Can someone unpack this for me please? (in simplistic terms).

Have a watch at this: Gödel's Incompleteness Theorem - Numberphile - YouTube

Thanks, that was somewhat helpful but I’m still hazy about what is meant by ‘consistent’ and ‘complete.’

By consistent do they mean you just apply the same rules over and over?

By complete do they mean we can’t know if these rules will always work?

Consistent means that the axioms cannot be used to derive a contradiction. For example, a consistent set of axioms will never prove that both 1+1=2 and 1+1=3.

Complete means that every properly formed statement is either provably true or provably false.
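
In symbols, if that helps (my own shorthand, not anything from the posts above): write T ⊢ φ for “φ is provable from the axiom set T.” Then the two properties look like this:

```latex
% Consistency: no sentence is both provable and refutable from T.
\text{Consistent:}\quad \text{there is no sentence } \varphi
    \text{ such that } T \vdash \varphi \text{ and } T \vdash \neg\varphi .

% Completeness (the syntactic notion that matters here): every sentence is settled one way or the other.
\text{Complete:}\quad \text{for every sentence } \varphi,\quad
    T \vdash \varphi \text{ or } T \vdash \neg\varphi .
```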

Consistency means “given this set of axioms, I should not be able to make any statement which can be both proven and disproven.”

Completeness means “given this set of axioms, for any true statement, I should be able to prove that it is true.”

Gödel showed that no such set of axioms (one that can be effectively listed and is strong enough for basic arithmetic) can give rise to a system that proves its own consistency. (That is, to prove it consistent, you have to step outside it and use a stronger system. But that stronger system cannot prove its own consistency either without an even stronger one, and so on.)

He also showed that no such set of axioms can give rise to a system that is both consistent and complete. There will always be statements about that system which are true but which can not be proven within it. (Finding out what those statements are is a substantially more difficult problem. It’s rather hard to know whether “I can’t prove this” is the same as “this can’t be proven.” Ask Fermat.)

First of all, what you need is a formal system strong enough to effectively axiomatize (a certain fraction of) basic arithmetic. Here, the formal system is the ‘list of axioms’: you take a (finite) set of symbols, and some basic rules to combine them (a grammar), in order to identify well-formed statements (not every combination of symbols is well-formed; for instance, in arithmetic, x + y = z is well-formed, but +=xyz isn’t). Those symbols together with the grammar form the language of the formal system.

Then, you take some set of such grammatically correct formulas, which you take, in some sense, as premises for your further work—these are the axioms. For Gödel’s theorem to apply, this axiomatization must be effective—that is, there must exist an algorithm or computer program such that if it is run, it lists those axioms (and only those axioms). Note that that doesn’t necessarily imply that there must be a finite set of axioms: axioms may be given in the form of axiom schemata, i.e. rules according to which axioms are formed. Ordinary arithmetic contains such an axiom schema in order to formalize induction over the natural numbers.
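
‘Effective’ really just means that a computer program could churn out the axioms. Here is a toy sketch (entirely my own illustration, with a made-up stock of formulas, not any real axiomatization) of how the infinitely many instances of an induction schema can still be listed by a short program:

```python
from itertools import count

def formulas():
    """Enumerate some well-formed formulas phi(n) in a toy arithmetic language.
    (A real system enumerates *all* well-formed formulas; this is just a sketch.)"""
    for k in count(0):
        yield f"n + {k} = {k} + n"            # commutativity instances
        yield f"(n + {k}) + 1 = n + {k + 1}"  # successor/addition instances

def induction_axioms():
    """For each phi(n), emit the corresponding induction axiom:
       [phi(0) and forall n (phi(n) -> phi(n+1))] -> forall n phi(n)."""
    for phi in formulas():
        base = phi.replace("n", "0")
        step = f"forall n ({phi} -> {phi.replace('n', '(n+1)')})"
        yield f"({base} and {step}) -> forall n ({phi})"

gen = induction_axioms()
for _ in range(4):    # the listing never terminates, but any prefix can be printed
    print(next(gen))
```

The listing never ends, but that’s fine: effectiveness only asks that each axiom show up eventually.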

Finally, you need rules of inference: syntactically, those are replacement rules, such that you can generate new grammatically correct statements from old ones in such a way that if the premises used were true, then so must the conclusion be.

With this apparatus, then, you can start with your axioms, apply to them the rules of inference, and derive well-formed formulas which logically follow from the axioms. These are then theorems of the formal system, and the process of deriving such a theorem is called a proof.
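
As a toy illustration of ‘derivation as mechanical symbol-pushing’ (my own sketch, using modus ponens as the single rule of inference; real systems have more machinery than this):

```python
def close_under_modus_ponens(formulas, rounds=3):
    """Repeatedly apply modus ponens ("from P and P -> Q, conclude Q").
    Formulas are plain strings or ('->', premise, conclusion) tuples."""
    theorems = set(formulas)
    for _ in range(rounds):
        derived = set()
        for f in theorems:
            if isinstance(f, tuple) and f[0] == '->' and f[1] in theorems:
                derived.add(f[2])   # the premise is available, so the conclusion follows
        theorems |= derived
    return theorems

# Toy axioms: A, A -> B, B -> C.  Everything in the returned set is a "theorem",
# and the record of how each one was obtained would be its "proof".
axioms = {'A', ('->', 'A', 'B'), ('->', 'B', 'C')}
print(close_under_modus_ponens(axioms))
```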

Such a formal system may have several properties. The ones of interest in Gödel’s incompleteness theorem are consistency and completeness. Consistency is the property that for any given formula in the language of the formal system, not both it and its negation are provable from the axioms. Any formal system for which this fails is called inconsistent. Completeness then means that for each of the statements formulated in the system’s language, that statement or its negation can be proven from the axioms.

A complete and consistent system then is one such that for every statement that can be formulated in its language, exactly one of that statement and its negation can be proven. In the early 20th century, there was a hope, pushed most forcefully by David Hilbert, that there might exist such a system for all of mathematics: a set of axioms such that every mathematical theorem can be deduced from them.

Now on to the question of what it means to ‘develop basic laws of arithmetic’. Well, basically, we know there are axiomatizations of certain sub-fields of mathematics that are both consistent and complete: the theory of real closed fields, for example, or Euclidean geometry. But clearly, there are mathematical theorems that aren’t proven by these theories, simply because they can’t be expressed in their language.

And as it turns out, once you get your system to the point that it can express certain statements about arithmetic (either directly, as in systems intended to axiomatize arithmetic, or indirectly, in the form of an ‘encoding’), you can do a neat trick: you can encode statements about the system in statements made by the system.

The reason for that is what’s called a Gödel numbering. With this procedure, you can associate a unique number to each formula within the system; then, you can use the system itself to prove that these numbers have certain properties, which correspond to the formulas having certain properties, such as ‘being provable in the system’. Using this, you can then create formulas that talk about whether formulas have a given property, e.g. ‘formula f is provable’.
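
Here is a small sketch of one standard way to do such a numbering (the symbol table is my own illustrative choice, not Gödel’s original coding): each symbol gets a number, and a whole formula becomes a single integer via prime-power exponents, which unique factorization lets you reverse.

```python
SYMBOLS = {'0': 1, 'S': 2, '+': 3, '*': 4, '=': 5, '(': 6, ')': 7, 'x': 8}
INVERSE = {v: k for k, v in SYMBOLS.items()}

def primes():
    """Yield 2, 3, 5, 7, ... by trial division (plenty fast for short formulas)."""
    found = []
    candidate = 2
    while True:
        if all(candidate % p for p in found):
            found.append(candidate)
            yield candidate
        candidate += 1

def godel_number(formula):
    """Code a formula (a string of symbols) as one natural number:
    the symbol codes c1, c2, ... become the exponents of 2, 3, 5, ..."""
    n = 1
    for p, symbol in zip(primes(), formula):
        n *= p ** SYMBOLS[symbol]
    return n

def decode(n):
    """Recover the formula from its Gödel number by factoring it again."""
    out = []
    for p in primes():
        if n == 1:
            break
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        out.append(INVERSE[e])
    return ''.join(out)

g = godel_number('S0+S0=SS0')        # one way of writing "1 + 1 = 2"
print(g)
print(decode(g) == 'S0+S0=SS0')      # True: the coding is reversible
```

Because the coding is reversible, claims about formulas (like ‘this formula is provable’) turn into claims about ordinary numbers, which the system itself can talk about.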

I’m not going to go through the details here, but essentially, it’s then possible to create a formula within the formal system that claims about itself that it isn’t provable within the formal system. Then we can conclude that, if the system is consistent, the system can neither prove that formula nor its negation, and hence cannot be complete.

This is a bit technical if we’re limiting ourselves to consistency; but if we additionally assume that the formal system is sound—that is, that every formula that is proven by the system is true—then it’s immediate: for if it were to prove that formula, then the formula would be false, and the system wouldn’t be sound; consequently, it can’t prove that formula—but then, there’s a true formula it can’t prove and hence, it’s not complete.
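
Schematically, in standard textbook notation (not anything from this thread), with ⌜G⌝ the Gödel number of G and Prov_T the arithmetized provability predicate:

```latex
% The diagonal construction gives a sentence G with
T \vdash\; G \leftrightarrow \neg\mathrm{Prov}_T(\ulcorner G \urcorner)

% Soundness argument: if T proved G, then G would be true, i.e. G would be
% unprovable in T -- a contradiction.  So T does not prove G; but that is
% exactly what G says, so G is true, and a sound T cannot prove \neg G either.
% Hence T is incomplete:
T \nvdash G \qquad\text{and}\qquad T \nvdash \neg G
```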

As a caveat, you should be careful regarding what you read about Gödel’s theorem on the internet; there’s lots of woo out there. If you’re serious about wanting to understand its implications, I can highly recommend Torkel Franzén’s Gödel’s Theorem: An Incomplete Guide to Its Use and Abuse.

Also, it’s a bit misleading to talk about ‘Gödel’s incompleteness theorem’, since there are two; the one usually more discussed is the first one. The second theorem essentially states that systems meeting the qualifications outlined above cannot prove their own consistency.
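
In the same shorthand (again standard notation, not the poster’s), with Con(T) abbreviating ‘T does not prove 0 = 1’:

```latex
% Second incompleteness theorem, for any consistent, effectively axiomatized T
% strong enough for basic arithmetic:
\mathrm{Con}(T) \;:=\; \neg\mathrm{Prov}_T(\ulcorner 0 = 1 \urcorner),
\qquad
T \nvdash \mathrm{Con}(T)
```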

That’s semantic completeness. For Gödel’s incompleteness theorems, the relevant notion is that there should not be a statement in the system’s language such that neither it, nor its negation, is provable from the axioms.

There are plenty of systems that are both consistent and complete; they just aren’t able to formalize certain statements about arithmetic. There are also systems that are able to formalize these statements and that are complete: just add all formulas in the language of the system as axioms. However, those will trivially not be consistent. And there are also systems that are consistent but incomplete: Peano arithmetic, for instance, which can be proven to be consistent in ZFC set theory.

A bit of a side track, but Raymond Smullyan, a recreational logician well known for popularizing the logical word puzzles with knights (who always tell you the truth) and knaves (who always lie to you), wrote a book about Gödel and incompleteness, called “Forever Undecided.”

His central metaphor for the book, in knight and knave terms, is that on the island of knights and knaves a stranger comes up to you and tells you, “You will never know that I am a knight.”

This basic premise gets him off onto an in-depth discussion of what it means to reason, deduce, and know at various levels. If that sounds at all interesting to you, then I definitely recommend it.

The basic axioms of truth-table (propositional) logic are consistent and complete. But they are not sufficient to do even elementary arithmetic. I even know a statement that is formalizable in elementary arithmetic and provable in a more powerful system (epsilon_0 arithmetic, for specialists) but not provable in elementary arithmetic. Curiously, each individual instance is provable, since it asserts that a certain sequence that appears to be growing incredibly fast eventually becomes 0, and so any individual instance can be proved by completing the calculation. But the sequence reaches very large numbers in the meantime. And the proof that every instance has that property is known not to be doable in Peano arithmetic.
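
This sounds like Goodstein’s theorem (my guess at the statement being described; the Paris–Harrington theorem is another example of the same phenomenon). A rough Python sketch of the sequences involved: write the current term in ‘hereditary’ base-b notation, bump every b to b+1, subtract 1, and repeat.

```python
def bump_base(n, b):
    """Rewrite n in hereditary base-b notation, then replace every base b by b+1."""
    if n == 0:
        return 0
    total, power = 0, 0
    while n > 0:
        digit = n % b
        if digit:
            # the exponent 'power' is itself rewritten hereditarily
            total += digit * (b + 1) ** bump_base(power, b)
        n //= b
        power += 1
    return total

def goodstein(start, steps):
    """Return the first `steps` terms of the Goodstein sequence from `start`."""
    terms, n = [start], start
    for b in range(2, 2 + steps):
        if n == 0:
            break
        n = bump_base(n, b) - 1
        terms.append(n)
    return terms

print(goodstein(3, 8))   # small seed: [3, 3, 3, 3, 2, 1, 0], reaches 0 quickly
print(goodstein(4, 6))   # seed 4 reportedly takes on the order of 3 * 2**402653211 steps
```

Despite the explosive early growth, Goodstein’s theorem says every such sequence hits 0; that is the kind of statement provable with induction up to epsilon_0 but not in Peano arithmetic.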

Several posts above have pointed out that plenty of axiomatic systems are consistent and complete, but there’s more to it than “some are”.

While there are an immense number of “arithmetics” that are consistent and complete (and correspondingly limited), you can’t prove they are consistent and complete within that system. You have to use a more powerful system to do that. But there’s a cap on how powerful a system you can have that is considered “practical” in some sense. That is, all proofs can be checked in a finite (but unbounded) amount of time.

The formal systems that contain “all” of arithmetic/logic that people were developing 100+ years ago hit this limit. There is no higher system that could be used in some sort of “practical” way to prove their consistency/completeness.

There are, of course, still higher purely theoretical systems, but like all the others they have the same limits on their own *internal* ability to prove properties about their own system.

Logic systems are like boxes. To say something about the box you have to be outside it. At some point you get to the biggest box that you can actually use.

Thank you for replying everyone. I will now go away and think about what has been said. :slight_smile:

You know, Gödel never really finished either of those two theo…

(All right, I’m goin, I’m goin.)

Presburger arithmetic can be proven to be consistent, complete, and decidable. This doesn’t violate Gödel, however, since Presburger arithmetic is strictly weaker than Peano arithmetic: it has addition but no multiplication, so it can’t express the arithmetic that Gödel’s construction needs.
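
To give a feel for the difference (my example, not the poster’s): Presburger arithmetic talks about the natural numbers with addition only, so it can settle statements like the one below, yet it cannot even state the coding tricks that Gödel numbering relies on.

```latex
% Decidable in Presburger arithmetic: "every number is even or odd"
\forall x\, \exists y\, \bigl( x = y + y \;\lor\; x = y + y + 1 \bigr)
```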

But if a given set of axioms never contradicts itself, why would additional axioms be needed? If you add more axioms, it’s creating a new system, isn’t it? Basic arithmetic never seems to contradict itself, so why are more axioms needed, given that basic arithmetic is all you need to use?

IANAM and I do not have a sufficient understanding to pretend to be able to explain it, but I heartily recommend Gödel, Escher, Bach: An Eternal Golden Braid by Doug Hofstadter. It addresses this in a level of depth that is astonishing for a book written for the layman. It is quite easy to understand if you’re willing to stop and think periodically, though it’s a big book that will take a while to read.

A system that never contradicts itself is fine, but in that case there will still be statements that can neither be proven nor disproven. If you want to say anything about those statements, you need to add new axioms.

On the other hand, a system which does produce contradictions will continue to produce contradictions, no matter how many axioms you add. And even if you start with a system without contradictions, adding axioms can introduce contradictions if you’re not careful.

On the bright side, it’s at least not possible for a sufficiently-complicated system to be both inconsistent and incomplete, because once it’s inconsistent, you can prove (and disprove) anything you want.
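
That’s the “principle of explosion”; here is a two-step sketch of the classical derivation (standard logic, not something from the post above):

```latex
% From P and \neg P you can derive any Q whatsoever:
P \;\vdash\; P \lor Q                     % disjunction introduction
P \lor Q,\ \neg P \;\vdash\; Q            % disjunctive syllogism: \neg P rules out the left disjunct
```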

As I’ve mentioned before, Douglas Hofstadter’s book Gödel, Escher, Bach is basically a rumination on and presentation of Gödel’s proof (the works of Escher and Bach, being frequently highly recursive, offer illustrations of the basic idea of self-reference).

However, judging by the responses to a recent thread on the book, saying “Just go read Hofstadter” isn’t going to help. I think most people find him as impenetrable as Gödel.

In any event, there are plenty of books on Gödel’s Proof:

https://www.amazon.com/s/ref=nb_sb_noss_2?url=search-alias%3Dstripbooks&field-keywords=Godel's+Proof

An add-on to the OP’s question, but in the Numberphile video the mathematician says something along the lines that (and apologies for my non-technically-accurate summation) if one can prove a theorem to be unprovable, then that counts as a proof, since it shows that there’s no counterexample to the theorem. Has anything ever, so far, been proved that way?

Whether that works depends on the nature of the theorem. For instance, take the famous theorem that the sum of the angles in a triangle is 180°: In the system of Euclidean geometry without the Fifth Postulate, that theorem is undecidable, but that doesn’t mean that it’s actually true. It turns out that there are actually multiple different things that could be called “geometry”, and in some of them it’s true, while in others it’s false, but the first four postulates are consistent with both sorts of geometry.

On the other hand, if the Goldbach conjecture is undecidable, then it must be true, because if it’s false, then there must be some number that’s a counterexample, and you could prove it false just by pointing to that number.
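
To make the “just point to the number” part concrete, here is a brute-force sketch (my own illustration; the function names are made up) of how a single counterexample, if one existed, would amount to a finite, checkable disproof:

```python
def is_prime(n):
    """Trial-division primality test; fine for a sketch."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_witness(n):
    """Return primes (p, q) with p + q == n, or None if even n is a counterexample."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None    # exhibiting such an n would, by itself, prove Goldbach false

# Every even number from 4 to 998 has a witness (True so far, as expected).
print(all(goldbach_witness(n) for n in range(4, 1000, 2)))
```

Statements with that shape (a universal claim whose failure would be witnessed by one checkable number) are exactly the ones where “undecidable” would force “true.”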

Isn’t that another way of saying it’s decidable?