By the way, having spent a lot of my professional working life as a computer systems programmer, I am pretty sure that any program longer than a couple of hundred lines of code almost certainly contains at least one bug.
So I regard those big computer ‘proofs by exhaustion’ with extreme scepticism.
There are potentially chicken-and-egg issues, but I see no reason that a computer verifying a proof should not be more reliable than a human checking a proof.
Sure, we can call it a ‘trick,’ but the trick is to construct a sentence that asserts its own unprovability; moreover, one that, at least in principle, we could compute and write down, though it would probably use up a lot of paper.
No, I am saying that it is known that there exists a computer program that will halt if and only if the Riemann hypothesis is false. Or, more to the point, I am saying that if the Riemann hypothesis is false, then there exists a proof that it is false. I am not claiming that if it is true, then there must exist a proof that it is true.
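For concreteness, here is one way such a program could be written, using Lagarias’s elementary reformulation of RH: the hypothesis is equivalent to \sigma(n) \le H_n + e^{H_n} \ln H_n holding for every n \ge 1. This is only an illustrative sketch; the floating-point arithmetic would have to be replaced by exact error bounds for a proof-grade search.

```python
import math

def sigma(n):
    # Sum of the divisors of n (naive, fine for a sketch).
    return sum(d for d in range(1, n + 1) if n % d == 0)

def harmonic(n):
    # n-th harmonic number H_n = 1 + 1/2 + ... + 1/n.
    return sum(1.0 / k for k in range(1, n + 1))

def satisfies_lagarias(n):
    # Lagarias (2002): RH is equivalent to
    #   sigma(n) <= H_n + exp(H_n) * ln(H_n)   for every n >= 1.
    h = harmonic(n)
    return sigma(n) <= h + math.exp(h) * math.log(h)

def search_for_counterexample():
    # Halts if and only if RH is false (modulo the floating-point caveat).
    n = 1
    while satisfies_lagarias(n):
        n += 1
    return n
```

Running `search_for_counterexample()` would, of course, simply never return if RH is true, which is exactly the halting-problem point made below.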
I’d agree that a computer is probably less likely to make a mistake when working on a large dataset.
But there is a danger of complacency: “the computer has done the computational heavy lifting, so at least we can trust that.”
We tend to take it for granted that a computer will do ‘exactly’ what its program specifies. But that assumes perfect hardware, which is not necessarily the case. I’ve worked on system software very ‘close to the metal’ and I can remember a few counterinstances. For example, an infuriatingly elusive TLB replacement bug in a particular SPARC processor which would, very occasionally, put 256 bytes of zeros into the middle of a data stream. Race condition, of course. Took me weeks to track that down.
And that’s without getting into the issue that the programs the computer is running are of course written by humans. I stand by the assertion, from long and painful experience, that any program longer than a few hundred lines of code almost certainly contains at least one bug.
But surely we are now back to Turing’s halting problem? You could run the program forever, and the fact that it has not yet halted would still not constitute an actual proof.
Yes, I remember ideas of ‘formal verification’. Haven’t heard much about it recently?
The issue seems to be that an exhaustive formal verification of a computer program gets exponentially longer the larger the program. And then, as you say, there’s a chicken-and-egg issue: how do you prove that the formal verification itself is correct?
That is very interesting. I will have to brush up my pure math quite a bit to be able to follow those arguments properly! Good for the brain, of course.
There are some differences between the Continuum Hypothesis and the Riemann Hypothesis. Obviously, the first one is a formula in the language of set theory, while the second is [equivalent to a formula in] the language of arithmetic.
In terms of complexity, the RH requires only a universal quantifier (i.e., it is equivalent to a \Pi_1 formula), while the CH is \Sigma_2, which requires an additional level of quantification.
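Schematically, the difference in quantifier complexity is:

\[
\Pi_1:\ \forall n\,\varphi(n)
\qquad\qquad
\Sigma_2:\ \exists m\,\forall n\,\psi(m,n),
\]

with \(\varphi\) and \(\psi\) decidable. For the CH the variables range over set-theoretic objects rather than numbers, but the extra quantifier alternation, \(\exists\forall\) versus a plain \(\forall\), is the point.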
That’s right. This is the difference between computable (recursive), and merely computably enumerable (recursively enumerable), sets of numbers.
What he did was to demonstrate how, in any given system, to construct such a statement, and then how to see, from outside the system, that it is true. Roughly speaking, Gödel’s statement can be expressed as “This statement cannot be proven”: if the statement is true, then by its own truth we know that it cannot be proven, and if it’s false, then by its own falsity we know that it can be proven, meaning that the system we’re working in can prove false statements. The tricky part, of course, was how to make logically rigorous self-referential statements, but he showed that in any system at least complicated enough to encompass arithmetic, there is a way to do it.
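The self-reference trick has a well-known programming analogue: a quine, a program whose output is its own source code, built by applying a template to a quoted copy of itself, essentially the same diagonalization Gödel used. A minimal Python sketch:

```python
# A quine: the template is applied to its own quoted form (%r inserts
# the repr of the string), so the output is exactly the source code.
src = 'src = %r\nprint(src %% src)'
print(src % src)
```

The statement “this program prints itself” looks paradoxical at first, but the template construction makes it perfectly rigorous, which is the same flavour of surprise as Gödel’s sentence.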
And yes, there are a lot of interesting problems that can be reduced to the Halting Problem. This fact is of limited usefulness, given that the Halting Problem is itself unsolvable, and it serves little purpose to reduce a problem to a harder, or impossible, problem. The primary use of that fact is pedagogical, in that it makes it easier for students to intuitively grasp that the Halting Problem is interesting.
Well, it may have some use: if you can rigorously prove that a given problem is reducible to the Halting Problem, you have simplified the issue at least a bit. One solution to solve them all (or show them to be unsolvable). With apologies to Tolkien.
On the other hand: if the Riemann Hypothesis were proved, how many other questions would fall into place?
I suspect the Goldbach conjecture? How about the Collatz conjecture?
Generalized Collatz-type problems are known to be \Pi^0_2-complete, so even harder than the halting problem; see Kurtz and Simon.
Anyway, we may or may not “suspect” things; but they require careful analysis. There was the result that the generalized Riemann hypothesis implies the “ternary” Goldbach conjecture, but the latter is now proven unconditionally.
Eh, not really. A problem that’s reducible to the Halting Problem might be hard… Or it might not be. Another example of a problem reducible to the Halting Problem:
“There are no even prime numbers greater than 3.”
I could easily write a computer program that halts if and only if that statement is false, so it’s reducible to the Halting Problem. But it’s also a Really Easy Problem, because anyone can easily come up with a proof of it through other methods.
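A sketch of such a program in Python (the unbounded loop is the whole point: the program halts if and only if a counterexample exists, which it doesn’t):

```python
def is_prime(n):
    # Trial division; fine for a sketch.
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def search_for_even_prime_above_3():
    # Halts if and only if "there are no even primes > 3" is false.
    n = 4
    while True:
        if n % 2 == 0 and is_prime(n):
            return n
        n += 1
```

A bounded run of the same search of course proves nothing, whereas the two-line proof (“any even n > 2 is divisible by 2, hence composite”) settles it completely, which is exactly the asymmetry being described.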
Now, if you can reduce some problem to a known-easy problem, that’s useful, because it shows that the problem is also easy (at least, as easy as the reduction combined with the other problem). And if you have a problem that’s known to be hard or impossible, and you reduce it to some other unknown problem, that’s also useful, because that lets you know that the other problem is also hard or impossible. But reducing an easier problem to a harder problem doesn’t mean much, and an impossible problem is always going to be a “harder problem”.
Bad wording. I should have said “perhaps the Goldbach conjecture”… etc.
As I’ve said, I am not a professional mathematician, just an interested spectator.
I’m not familiar with the notation you are using for the Collatz conjecture: do you have a link for the ‘Kurtz + Simon’ work you mentioned?
Basically, they reduce a certain kind of abstract computing machine to a Collatz problem, which shows that the Collatz problems are at least as hard as determining the behavior of those machines (which include Turing machines, for instance).
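The generalized Collatz functions in that setting have a simple shape: pick a modulus m and, for each residue class i, an affine map n \mapsto a_i n + b_i with rational coefficients. A hedged sketch (the coefficient choice below just recovers the classic map, for illustration):

```python
from fractions import Fraction

def make_generalized_collatz(rules):
    # rules[i] = (a_i, b_i): any n with n % m == i is sent to a_i*n + b_i.
    # Kurtz and Simon's generalized Collatz functions have this shape,
    # with rational coefficients chosen so each image is an integer.
    m = len(rules)
    def step(n):
        a, b = rules[n % m]
        return int(a * n + b)
    return step

# The classic Collatz map as the special case m = 2:
# even n -> n/2, odd n -> 3n + 1.
classic = make_generalized_collatz([(Fraction(1, 2), 0), (3, 1)])

def steps_to_one(n, step, limit=10_000):
    # Whether iteration always reaches 1 is the (in general undecidable)
    # behaviour the reduction is about; the limit keeps the sketch total.
    count = 0
    while n != 1 and count < limit:
        n = step(n)
        count += 1
    return count
```

The hardness result says that by choosing the rules appropriately you can encode the behaviour of an arbitrary machine into questions about these orbits.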
I will add, though, that I have spent a long career as a ‘senior computer scientist’, or at least programmer, often working on system software at the bits-in-registers level.
Plus I started as an electronic engineer, so I know the hardware side of things very well too. So I have a rather sceptical view of alleged computer proofs.
The probability of at least one hardware or software bug in any large system is significant, I would say from my experience.
Possibly, but there are a couple of things in their favor. First, the algorithms can be re-programmed and re-run independently by other researchers to verify the result. Secondly, if the computer is checking every single case but accidentally misses a few, the chance that the sole counterexample was one of the cases on which the program glitched is probably pretty small.
My main problem with computer proofs is that they don’t really advance the field. For the most part, we mathematicians aren’t interested in the solution to one particular problem; we are interested in the techniques and generalized findings that got us there. So the mathematical insight that reduced the problem to a finite set of scenarios that can be checked is very interesting. The actual checking of those scenarios? Meh, a nice bit of trivia.
Out of curiosity (on several levels), I asked ChatAI to give me some examples of true but unprovable statements other than the Continuum Hypothesis:
Certainly! Here are a few more examples of true but unprovable statements in mathematics:
Goodstein’s theorem: This theorem, formulated by Reuben Goodstein in 1944, pertains to a specific sequence of numbers known as the Goodstein sequence. Goodstein’s theorem states that every Goodstein sequence eventually terminates at 0. Although this theorem is true, it cannot be proven within the standard axioms of Peano arithmetic.
Paris–Harrington theorem: This theorem, proven by mathematicians Jeff Paris and Leo Harrington in 1977, asserts a specific statement in Ramsey theory, a branch of combinatorics. The theorem states that a certain combinatorial statement, now known as the Paris–Harrington statement, is true but cannot be proven using the axioms of Peano arithmetic.
Kirby–Paris theorem: This theorem, established by mathematicians Laurence Kirby and Jeff Paris in 1982, concerns a particular combinatorial game known as the hydra game. The theorem states that the player chopping off the hydra’s heads always wins eventually, but this fact cannot be proven within the axioms of Peano arithmetic.
These are just a few examples of true but unprovable statements in mathematics. Gödel’s incompleteness theorems, as mentioned earlier, provide a general framework that shows the existence of such statements within various mathematical systems. The presence of unprovable truths underscores the limitations of formal systems and the intrinsic complexity of mathematics itself.
I didn’t understand any of it of course, but was still pleased to learn that the CH is not the only true but unprovable proposition in math.
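The first of those examples is at least easy to play with concretely: a Goodstein sequence is computed by writing n in hereditary base-b notation, replacing every b by b + 1, subtracting 1, and repeating with the next base. A small Python sketch:

```python
def bump_base(n, b):
    # Write n in hereditary base-b notation (exponents rewritten too,
    # recursively) and replace every occurrence of b by b + 1.
    result, power = 0, 0
    while n > 0:
        digit = n % b
        if digit:
            result += digit * (b + 1) ** bump_base(power, b)
        n //= b
        power += 1
    return result

def goodstein(n, max_steps):
    # First terms of the Goodstein sequence starting at n; stops early
    # if (when! -- that is the theorem) the sequence hits 0.
    seq, b = [n], 2
    for _ in range(max_steps):
        if n == 0:
            break
        n = bump_base(n, b) - 1
        b += 1
        seq.append(n)
    return seq
```

Starting at 3 the sequence dies quickly (3, 3, 3, 2, 1, 0), but starting at 4 it climbs for an astronomically long time before the theorem guarantees it eventually reaches 0, and Peano arithmetic cannot prove that it always does.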