But why can’t they be solved? Is it some defect in our knowledge of Math, or is it a defect in math itself, or is there some deep hidden metaphysical meaning to this?
What do you mean they don’t have names?
Names, as in named after the mathematicians who discovered them (or a formula for them, etc.), like Avogadro’s number, Fermat’s number, Bob’s number, whatever.
Yes, yes, and yes. I’ll write more tomorrow.
There are uncountably many real numbers, but only countably many strings of finite length over a finite alphabet. So there are no names for most of them.
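If it helps to see that counting argument spelled out, here's a rough Python sketch (the two-letter alphabet is just an arbitrary choice for illustration): every finite string can be listed shortest-first and paired off with a natural number, which is exactly what "countable" means, while Cantor's diagonal argument shows no such list can cover the reals.

[code]
from itertools import count, product

def all_strings(alphabet="ab"):
    """Yield every finite string over the alphabet, shortest first."""
    for length in count(0):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

# Pair each string with a natural number -- an explicit listing,
# i.e. a demonstration that the set of strings is countable.
for index, s in zip(range(10), all_strings()):
    print(index, repr(s))
[/code]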
If you’re really interested in these sorts of questions, I would recommend Pi in the Sky by John Barrow, and Indiscrete Thoughts by Gian-Carlo Rota. Both are suitable reading for mathematicians and non-mathematicians alike.
ultrafilter, you have less than five hours before the “tomorrow” you referred to is no more. I, for one, am looking forward to reading your responses to Sacroiliac’s questions. I could go back to your posts in the GD threads “Mathematics: Invented or Discovered?” and “What does the Incompleteness Theorem imply?”, but I don’t want to overwork the hamsters doing a search.
*Originally posted by amore ac studio *
**ultrafilter, you have less than five hours before the “tomorrow” you referred to is no more. I, for one, am looking forward to reading your responses to Sacroiliac’s questions. I could go back to your posts in the GD threads “Mathematics: Invented or Discovered?” and “What does the Incompleteness Theorem imply?”, but I don’t want to overwork the hamsters doing a search. **
All right, I’ll see if I can’t go through this with some semblance of order.
Sacroiliac asked three questions about why there are problems we can’t solve: Is it due to some defect in our knowledge of mathematics? Is it due to some fundamental defect in math itself? Is there a deeper metaphysical issue?
The answers, in order, are “yes,” “yes,” and “I don’t know.” Metaphysics has never been my strong suit; I know I said “yes” last night, but I realized I don’t have anything to back that up with.
So I’ll address the first two, since those are the ones I know about. Before you go too far, take a minute to read my old essay dealing with large numbers. It’s only mildly relevant here (mainly for notation), but it’s interesting.
The fundamental flaw in our knowledge of mathematics, and in our ability to understand it, comes from the finite nature of our brains and lifespans. For any n, there are more theorems that require n bits to express than there are statements expressible in fewer than n bits. Pick n to be large, say B(B(B(6))) in the notation from that essay, and such a theorem’s length in bits probably exceeds the number of possible arrangements of particles in a discrete-space universe with a very small mesh; we couldn’t even write it down. Furthermore, since the proof of a theorem generally takes more bits to express than the theorem itself, even much shorter theorems may be impossible to prove within the lifespan of our universe.
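To make that counting claim concrete (counting raw bit strings rather than theorems, which is a simplification), here's a quick check for small n:

[code]
# Quick sanity check: there are 2**n binary strings of length exactly n,
# but only 2**n - 1 binary strings of any length less than n.  So the
# strings of length n outnumber all of the shorter strings put together.
for n in range(1, 8):
    exactly_n = 2 ** n
    fewer_than_n = sum(2 ** k for k in range(n))  # lengths 0 through n-1
    print(n, exactly_n, fewer_than_n)
[/code]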
So there’s that. But what if we had infinite time, infinite memory, and were pretty damn smart to boot?
We’d still be limited. Back in 1931, Kurt Gödel managed to show that any sufficiently strong[sup]1[/sup] system has true but unprovable statements. One example would be along the lines of “This program, labeled P, will run forever when given input Q”. For certain choices of P and Q, there is neither a proof of that statement nor a disproof of it.
Gödel managed to do this by coming up with a formal equivalent of the liar paradox (“This sentence is false”). His version reads more like “There is no proof of this sentence”. If it’s true, we have a true statement without a proof. If it’s false, we have a false statement with a proof. Either way is a pretty bad obstacle.
Details of his work are interesting, but probably not worth going into here.
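For the program-and-input example above, though, here's a rough Python sketch of the standard diagonalization argument (closely related to Gödel's result, but not his original proof). The `halts` function here is hypothetical; the whole point is that no such function can exist.

[code]
def halts(program, data):
    """Hypothetical oracle: True iff program(data) eventually halts."""
    raise NotImplementedError("no total, always-correct version can exist")

def contrary(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:   # loop forever if the oracle says "halts"
            pass
    return "halted"   # halt if the oracle says "runs forever"

# Feed contrary to itself: whichever answer halts(contrary, contrary)
# gives, contrary does the opposite.  So no correct halts() can exist,
# and statements about programs running forever can't all be settled.
[/code]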
Then in the 1970’s, Gregory Chaitin got the idea of using a slightly different paradox, known as the Berry paradox. His work is related to Gödel’s, but ended up being a bit more extensive. You’re better off reading about it here.
Chaitin’s work is nowhere near as widely known as Gödel’s (although both are well-known in the mathematical community), but he’s managed to convince a few people that he’s demonstrated that randomness does exist, and shows up in arithmetic, of all places.
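If you want a toy version of the “shortest description” idea behind the Berry paradox, here's one of my own (descriptions here are just little arithmetic expressions, a stand-in for a real formal system). Chaitin's point, roughly, is that no program can compute this kind of complexity in general; brute force only works for tiny cases.

[code]
from itertools import product

ALPHABET = "0123456789+*"

def shortest_description(n, max_len=4):
    """Shortest expression over ALPHABET evaluating to n, up to max_len symbols."""
    for length in range(1, max_len + 1):
        for chars in product(ALPHABET, repeat=length):
            expr = "".join(chars)
            try:
                if eval(expr) == n:
                    return expr
            except Exception:   # skip strings that aren't valid expressions
                continue
    return None

print(shortest_description(64))   # -> "64" (two symbols)
[/code]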
I think that’s a pretty complete answer. I’ll stop now.
[sup]1[/sup]: See chapter 3 of Mendelson’s Introduction to Mathematical Logic for a precise definition of “sufficiently strong”.
Originally posted by ultrafilter
Before you go too far, take a minute to read my old essay dealing with large numbers. It’s only mildly relevant here (mainly for notation), but it’s interesting.
I don’t want to resurrect a thread that’s been dead for 7 months, so I’ll make my comments here. You mentioned in the linked thread that most experts believe that the solution to Graham’s problem is actually 6. (Some sites refer to the huge upper bound as Graham’s number, so to avoid confusion I’ve adopted the unambiguous term “solution to Graham’s problem”.) MathWorld’s article on Graham’s number says that in 2003 (after your OP in the other thread) it was shown by Exoo that the solution to Graham’s problem must be at least 11. That discovery must have been a huge shock to all those experts who believed that the solution to Graham’s problem would be 6. How did the mathematical community respond to Exoo’s findings?
I don’t know. It’s interesting that a tighter bound has been found, but it’s still a pretty wide range.
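For a sense of just how wide: the lower end is now 11, while the upper end, Graham's number, is usually written with Knuth's up-arrow notation. Here's a minimal sketch of that notation; don't call it with anything beyond tiny arguments, since the values outgrow any computer almost immediately.

[code]
def up_arrow(a, n, b):
    """Knuth's a ↑^n b: n up-arrows, defined recursively."""
    if n == 1:
        return a ** b   # one arrow is ordinary exponentiation
    if b == 0:
        return 1        # convention: a ↑^n 0 = 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# 3 ↑↑ 3 = 3^(3^3) = 3^27 = 7625597484987, already thirteen digits.
# Graham's number is built in 64 steps: the first is 3 ↑↑↑↑ 3, and each
# later step uses the previous value as its number of arrows.
print(up_arrow(3, 2, 3))
[/code]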