Gödel's Incompleteness Theorem

Basic arithmetic cannot be fully captured: no consistent, computably axiomatized theory proves every true statement of arithmetic.

That’s why people keep saying that completeness and consistency must both be considered: a sufficiently strong theory cannot have both.

No. It is a way of saying that if a statement is undecidable, we can never prove that fact. In this respect, it is just like the halting problem for Turing machines: if whether a particular program halts is undecidable, we can never prove that fact.

Another problem now known to be undecidable is the continuum hypothesis, which states that every infinite subset of the set of real numbers has either the cardinality of the real numbers or that of the integers. There are now known to be models of set theory in which this is true (actually Gödel proved that) and models in which it is false (proved by Paul Cohen).

If Goldbach is false, then you can prove it just by providing a counterexample, but if it’s true, then just providing an example (of which there are many) doesn’t do anything. Now, there might be some clever way of proving that it’s true, but if so, nobody’s found it yet… and there might not be any way of proving it.

Put another way, Gödel proved that any sufficiently-interesting system must contain statements that are true, but despite that, cannot be proven true. And for all anyone knows, the Goldbach conjecture might be one of those statements.

A (first-order, arithmetical) sentence like the Goldbach conjecture is Π₁, that is, of the form ∀n f(n), and if any such statement is independent, then it’s true, as explained above: if it is false then there is a counterexample. For Π₂ statements like “this computer program prints out infinitely many outputs” and “P = NP” that reasoning does not apply.

The point is that I feel like there is a difference between arithmetical statements like the Riemann hypothesis, which should be either true or false, and a statement like the Continuum Hypothesis where (given its independence from usual set theory) one should not think of it as having a definite truth value.

I think I’m having trouble understanding what “decidable” means. Should Chronos’ statement that “if the Goldbach conjecture is undecidable, then it must be true” actually be “if there is proof the Goldbach conjecture is undecidable, then the conjecture must be true”? Because it seems to me that the statement “the conjecture is undecidable (even without proof) implies it is true” itself implies a contradiction because a conjecture cannot be undecidable if there is a proof that it is true.

(As an aside: this is what happens when you try to write mathematics in natural language–a muddled mess. I wish I still had the mathematical skills to write that out formally. Alas, I do not.)

There is a confusion of terms here and if we are talking about formal proofs it might be more clear to use the word “independent” (from a given theory), which means it is impossible to prove that it is true and impossible to prove that it is false.

In computer science, “decidable” means “there is an algorithm that decides the answer to a yes/no decision problem”. The problem could be whether or not there exists a proof of a given statement in a given computably axiomatizable theory.

This is correct.

Your quoted sentence is of the form “if A, then B”, where A is false. That makes it vacuously true as a matter of logic.

Right. If it were undecidable (and therefore true), you could not prove it.

You’re confusing “it’s true” with “there is a proof that it is true”. The whole point of Gödel is that those are two different things. Maybe Goldbach is true. And maybe, despite it being true, there’s no proof of it.

Another nuance to consider is: provable in what system? One of the remarkable things about Gödel’s theorem is that it’s constructive: That is, not only does it prove that, in any given (sufficiently-interesting) system, there are statements that are true but not provable, but it shows how to construct such a statement for any such system. That is to say, given a system S, it shows how to construct a statement G in the language of S, then proves that G is true, and then proves that G cannot be proven in S. This may look like a contradiction, in that it both proves that G is true and proves that G cannot be proven, but the key is that the proof of G’s truth is not carried out in S itself, but in a different, stronger system.

Let me try to explain. The Goldbach conjecture is that every even number greater than 2 is the sum of two primes. So 4 = 2 + 2, 6 = 3 + 3, 8 = 3 + 5, 10 = 5 + 5 and also 3 + 7. This last illustrates one apparent fact about the conjecture: there seem to be more ways of doing it the larger the even number is. Most important fact: either the conjecture is true or there is an even number greater than 2 that is not the sum of two primes. But if there were such a number, that fact would be decidable: just try all pairs of primes below it. It follows that if there were such a number the conjecture would be decidably false. Thus if it is truly undecidable, it must be true; in that case, “decidable and false” is impossible. It further follows that if the conjecture is undecidable, we can never prove that fact, since that would decide it.
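The checking procedure described above (try all pairs of primes below a candidate even number) can be sketched as a short program; a minimal version using trial-division primality, with names of my own choosing:

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_witness(n):
    """For an even n > 2, return a pair of primes summing to n,
    or None if no such pair exists (i.e. n is a counterexample)."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

print(goldbach_witness(10))  # → (3, 7)
```

The point of the post is exactly this asymmetry: running `goldbach_witness` can certify any single counterexample in finitely many steps, but no amount of running it on examples proves the conjecture for all even numbers.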

There are things for which a proof of undecidability is conceivable or, at least, not contradictory. An example is the twin prime conjecture. Twin primes are primes that differ by 2, such as 3 and 5, 5 and 7, 11 and 13, … There are fewer and fewer the further out you go. There are conjectured to be infinitely many such pairs, but no one knows, and a proof of undecidability would not settle the question: the conjecture is Π₂ rather than Π₁, so the argument that worked for Goldbach does not apply.
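The pairs listed above are easy to enumerate for any finite bound; a small sieve-based sketch (function name is my own):

```python
def twin_primes_up_to(limit):
    """Return all twin-prime pairs (p, p+2) with p + 2 <= limit,
    using a simple Sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]          # 0 and 1 are not prime
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, limit + 1, i):
                sieve[j] = False
    return [(p, p + 2) for p in range(2, limit - 1)
            if sieve[p] and sieve[p + 2]]

print(twin_primes_up_to(20))  # → [(3, 5), (5, 7), (11, 13), (17, 19)]
```

Of course, no such enumeration, however far it runs, bears on whether the pairs ever stop; that is the Π₂ character of the conjecture.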

I hope this helps. Incidentally, it is based on the assumption that every statement in arithmetic is either true or false, a claim denied by a small group of so-called intuitionist mathematicians, who systematically reject proof by contradiction.

It might also help to consider a different (though actually closely related) problem, the halting problem. Given a sufficiently-interesting programming language (all of the actual languages commonly used by real-world programmers qualify), and a program written in that language, will the program eventually halt? For some programs, the answer can easily be shown to be “yes”, and for some programs, the answer can be easily shown to be “no”, but there are others for which the answer isn’t easy, and in particular, there are some programs which will genuinely never end, but for which there is no possible way to know that.

But one nuance that’s often overlooked is that real computers have a finite amount of memory, and hence a finite number of possible states. On a real, finite computer, every program will eventually do one of two things: either it will end, or it will repeat a state that it’s already been in. And if it repeats a state that it’s already been in, then at the next step it’ll repeat the state after that, and so on. Thus, if you run a program for a number of steps equal to the number of possible states of the computer and it hasn’t ended yet, it never will, and so you can always determine whether a program will end…
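The run-until-a-state-repeats argument can be made concrete. Here is a hedged sketch in which a machine is modeled as a deterministic step function over a known finite state space (the names and the modeling choice are mine, not from the thread); by the pigeonhole principle, a run longer than the number of states must have revisited a state, and determinism then forces an eternal loop:

```python
def halts_on_finite_machine(step, start, num_states):
    """Decide halting for a deterministic machine with at most
    num_states distinct states. step(s) returns the next state,
    or None to signal a halt. If the machine runs num_states steps
    without halting, some state has repeated (pigeonhole), so it
    will loop forever."""
    state = start
    for _ in range(num_states):
        state = step(state)
        if state is None:   # machine halted
            return True
    return False            # a state repeated: infinite loop

# A countdown over states 0..9: halts when it reaches 0.
countdown = lambda s: None if s == 0 else s - 1
# A mod-10 cycler over the same states: never halts.
cycler = lambda s: (s + 3) % 10

print(halts_on_finite_machine(countdown, 9, 10))  # → True
print(halts_on_finite_machine(cycler, 0, 10))     # → False
```

Note that the decider itself needs a loop counter ranging over `num_states` values, which is the memory-doubling point made in the following post.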

…except, how are you keeping count of the number of states you’ve visited? Doing so requires just as much memory as the original computer had. So in order to test whether a given finite-memory computer with a given program will halt, you need to use twice that amount of memory. In other words, given a computer and a program, you can determine whether it’ll halt, but you can’t do it using the computer that was given.

There have been some great posts up-thread that clarified a bunch of this half-remembered college era stuff for me. Thanks to all.

Several folks have used the terms “arithmetic” or “arithmetics” without further definition.

The Wikipedia article on Arithmetic is a decent treatment of the ordinary arithmetic familiar to schoolkids. But I didn’t find an article on the more generic or abstract notions carrying the same name.

I for one would appreciate it if one of the learned folks could give a capsule summary of what constitutes the idea of “an arithmetic.”

Arithmetic deals with the natural numbers, so “an arithmetic” would refer to a formal axiomatic system designed to express the properties of “natural numbers” and operations such as addition and multiplication. The famous classical example is Peano arithmetic. In Peano’s treatment, roughly speaking, he introduces a symbol “1” (or you can make it “0” if you prefer) and a function S which produces the “next” natural number, so if n is a number so is S(n). Then one assumes that 1 = S(n) is false for all n, that S(m) = S(n) implies m = n, and adopts the axiom of mathematical induction.

I should add that addition, multiplication, and ordering can be defined in terms of S, and as often as not the axioms are written in terms of those notions.
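As a toy illustration of defining addition and multiplication from the successor alone, here is a minimal sketch; the encoding of numerals as nested tuples is my own arbitrary choice, not anything canonical, and the two recursion equations in the comments are the standard ones:

```python
# Numerals built only from a zero object and a successor S.
Z = ()                       # zero
def S(n): return (n,)        # successor: one more layer of nesting

def add(m, n):
    # m + 0 = m ;  m + S(n) = S(m + n)
    return m if n == Z else S(add(m, n[0]))

def mul(m, n):
    # m * 0 = 0 ;  m * S(n) = (m * n) + m
    return Z if n == Z else add(mul(m, n[0]), m)

def to_int(n):
    """Convert a successor-numeral back to an ordinary int, for display."""
    return 0 if n == Z else 1 + to_int(n[0])

two, three = S(S(Z)), S(S(S(Z)))
print(to_int(add(two, three)))  # → 5
print(to_int(mul(two, three)))  # → 6
```

Ordering can be defined the same way (m ≤ n iff some k satisfies add(m, k) = n), which is why, as noted above, the axioms are often written directly in terms of these derived notions.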

The difference between intuitive arithmetic and formal arithmetics is, of course, the use of formal mathematical logic.