Unsolvable Mathematics Problems.

Well, I don’t think it’s “false”; it’s more a question of terminology. To me, brute force is a solution – it’s not an answer, and it’s not a very good solution, but if you know you have a finite set you could generate and iterate over, you have a solution (just no answer yet).

Without the rules in chess that end the game if “blank” happens several times over or “blank” doesn’t happen for so many turns, there are definitely chess games that never terminate. To my knowledge, there is no brute force algorithm to determine which side, if any, can force a win in such an “unlimited” variant of chess. It might be solvable, but we don’t have a solution.

Right, well, that’s the sense in which “There is very few (if any) things that we can prove to be solvable, without effectively solving them as part of that proof” is trivial; we know how to brute-force solve any solvable problem (for suitably formalized notion of “solvable”): just keep enumerating through all possible proofs until you find one which settles the matter one way or another. Thus, if knowledge of the existence of an intractable brute-force solution to a problem counts as that problem being solved, whether or not that solution has actually been “carried out”, then all solvable problems have been solved.

What others said here on the subject…

But…

Is it possible that what you were thinking of is that it took a long time to decide (by proof) that pi is transcendental, as well as irrational? It seems to me that in the long view of mathematics history, it was fairly recent.

  • “Jack”

Well, the Bible says that Pi is three, which is a prime number… :slight_smile:

No, that just means that all problems known to be solvable have been solved. Consider the Goldbach conjecture, for instance: I can construct a brute-force algorithm that will eventually solve it, by stepping through every even number and then stepping through every prime number less than half of that number, and returning the first even number that it finds that isn’t the sum of two primes… Or at least, I can do that if there exists such a number. If there isn’t, then my program will just keep running 'til Doomsday, and I’ll never know whether there is no such number, or the program just hasn’t found it yet.
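That counterexample hunt is easy to sketch concretely. Here’s a minimal Python version; note the cap on the search, added so the demo actually terminates (the uncapped program, as the post says, might run ’til Doomsday):

```python
def is_prime(n):
    """Trial-division primality test -- slow but fine for a sketch."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_counterexample(limit):
    """Return the first even number in [4, limit] that is NOT a sum of
    two primes, or None if every even number checked is such a sum.
    The uncapped version (limit = infinity) halts only if the Goldbach
    Conjecture is false."""
    for even in range(4, limit + 1, 2):
        if not any(is_prime(p) and is_prime(even - p)
                   for p in range(2, even // 2 + 1)):
            return even
    return None

print(goldbach_counterexample(1000))  # None: no counterexample below 1000
```

Unsurprisingly, it finds nothing below 1000; the conjecture has been verified by computer search far beyond anything this toy loop could reach.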

Right; but why do you say that means “all problems known to be solvable have been solved” and not “all solvable problems have been solved”? I mean, I can live with that distinction, but I’m not sure I understand what you mean in drawing it.

Building on your example, let’s consider a slightly smarter brute-forcer; instead of just looking for counterexamples to the Goldbach Conjecture, we also enumerate “all possible proofs”, and check to see if any of them establish the Goldbach Conjecture as true. If the Goldbach Conjecture is “solvable”, one way or another, this brute-forcer will establish it.

(Of course, this depends on us formalizing a particular notion of “solvable”, so that we can enumerate through all the possible proofs in the system corresponding to that notion. But that is always the way… nothing is intrinsically, unavoidably unsolvable, but only unsolvable relative to a particular notion of what counts as a solution)
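The “smarter brute-forcer” amounts to dovetailing two open-ended searches – the counterexample hunt and the proof enumeration – and returning whichever finishes first. Here’s a hedged sketch in Python; the two searches are toy stand-ins (a real proof checker is well beyond a forum post), represented as generators that yield None while still working and a result once they succeed:

```python
from itertools import count

def dovetail(search_a, search_b):
    """Interleave two possibly non-terminating searches, one step each
    per round, and return (label, result) for whichever succeeds first.
    Loops forever if neither ever does -- just like the real thing."""
    for _ in count():
        a = next(search_a)
        if a is not None:
            return ("a", a)
        b = next(search_b)
        if b is not None:
            return ("b", b)

def finder(result, found_at_step):
    """Toy stand-in for a search that succeeds at a known step."""
    i = 0
    while True:
        yield result if i == found_at_step else None
        i += 1

# Counterexample hunt would succeed at step 7, proof search at step 3;
# the dovetailer reports the proof first.
print(dovetail(finder("counterexample", 7), finder("proof", 3)))
```

The point of the sketch is the control structure, not the searches: so long as at least one of the two searches eventually succeeds, the dovetailer terminates with a verdict.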

Just to make my claim clear: I do not claim that the Goldbach Conjecture has been solved. I claim that, if the Goldbach Conjecture is solvable, then it has been solved (in the sense of groman).

Your counterclaim, as I understand it, is that this implication is unsupported; all that can be supported is “If the Goldbach Conjecture is known to be solvable, then it has been solved.”

My response is “What should I take to be the distinction between supporting these two implications?”.

Eh, semantic quibbling aside, there is something interesting to be said about “Things we can demonstrate to be provable but which we haven’t proved”, along the lines of a result of Rohit Parikh’s:

Suppose T is some typical proof system; some proofs in it are short and some proofs in it are jaw-droppingly long. Pick some particular cut-off point and say that everything shorter than that is “feasibly provable”; longer proofs, while still technically proofs, are not considered “feasible”.

Consider the assertion “There is no feasible proof in T which establishes this assertion” [as before, we can use the diagonalization trick to eliminate the direct self-reference while achieving the same effect].

Now, we can enumerate all the feasible proofs in T (note that there are only finitely many of them), and check whether or not any of them actually prove our assertion; thus, we can, by brute-force examination, correctly establish the truth or falsehood of our assertion.

Supposing T is able to carry out the reasoning in this search, it follows that T either correctly proves or correctly disproves our assertion.

But the only way for our assertion to be false is for T to feasibly prove it; thus, either the assertion is true and T proves it, or the assertion is false and T both proves and disproves it (and is thus inconsistent).

So, no matter what, T proves our assertion.

Supposing T is able to feasibly carry out all the reasoning in this post, this means that T feasibly proves “T proves our assertion”.

But if T is consistent, then it must be the case that the assertion is true. Which means T has no feasible proof of the assertion.

Thus, if T is consistent, then T correctly feasibly proves “T proves our assertion”, but it has no feasible proof of our assertion itself.

Thus, we have an instance where we can feasibly prove the provability of a statement, without being able to feasibly prove that statement itself.

Ah, good point. If you construct your framework for proofs sufficiently formally, then you can just step through all proofs. But it still doesn’t get at the heart of the matter: No matter how long you let your exhaustive theorem-generator run, if it hasn’t terminated yet (i.e., found a proof or disproof of Goldbach’s conjecture), then there never comes a point at which you can say that the question is solved.

This is not the case in the chess example, incidentally. Even without the draw-forcing rules, chess is still inherently finite. If the history is not part of the state, then I can put a firm upper bound on the number of states of a chessboard at 13^64, since each square can have only one of 13 different states (I could, of course, find tighter upper bounds, but the important point is, it’s finite). Knowing that a chessboard has a finite number of states, I can then establish an upper bound on how long my brute-force chess solver has to run, and confidently state that after it has run for that long, chess will have been solved.
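The crude bound quoted above is easy to check directly – each of the 64 squares is either empty or holds one of 12 piece types (6 kinds × 2 colors), giving at most 13 states per square:

```python
# Crude upper bound on distinct chessboard positions (history excluded):
# 13 possible states per square, 64 squares.
upper_bound = 13 ** 64

# The bound is a 72-digit number, roughly 2 * 10**71.
print(len(str(upper_bound)))  # 72
```

As the post says, much tighter bounds exist (this one counts boards with, say, nine kings); finiteness is the only point being made.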

True, our ability to confidently state up-front an upper bound for the chess brute-forcer is an important distinction.

[Though…

For any program P that halts, I can, in some sense, state an upper bound for how long it takes to halt (e.g., “The (very tight) upper bound is the number yielded by the operation of running P and counting how many steps it takes to halt”). The natural objection is that this doesn’t count; it doesn’t explicitly name a number. But in some sense, it names one just as explicitly as, say, “The number yielded by the operation of taking 13 and raising it to the 64th power”; both cases, after all, are equally abbreviations for a number which would take far too long to write down in standard notation.

Of course, if we don’t have reason to believe that P halts, then we have no reason to believe that “the number yielded by the operation of running P and counting how many steps it takes to halt” is a description of any number at all.

Following this line of thought, the distinction between the ability to put an upper-bound on the search for a resolution to the Goldbach conjecture and the ability to put one on the search for a resolution to the chess-outcome problem becomes only that, in the latter case, we already know such a resolution exists, whereas in the former case, we don’t already know this. (But in both cases we know a method which is guaranteed to provide such a resolution if one exists, and we know a method which is guaranteed to provide an upper-bound on the runtime of this first method if such a resolution exists, and so on…)]

In A Beautiful Mind, there’s an anecdote about Nash putting that question on an Analysis midterm (as well as Fermat’s last theorem), because he thought someone might have a better chance of proving it if they were working under a lot of pressure.

Also, to the π-prime guy; it’s true that no one’s proved that e+pi or e/pi are irrational, much less transcendental. Maybe that’s what you were thinking of.

I suspect, though, that e^pi is known to be irrational; since e lends itself so naturally to exponentiation, it should be much easier to study.

Yes, e^π is even transcendental, by the Gelfond–Schneider Theorem (if e^a, b, and e^(ab) are all algebraic, with e^a ≠ 1, then b is rational; that is to say, if e^(ab) is algebraic and b is algebraic and irrational, then e^a is transcendental or equal to 1; setting a to π and b to i, so that e^(ab) = e^(iπ) = −1 is algebraic, gives the desired result).

Funny - reminds me of the true story of George Dantzig:

“In 1939, he arrived late to his statistics class. Seeing two problems written on the board, he assumed they were a homework assignment and copied them down, solved them, and handed them in a few days later. Unbeknownst to him, they were examples of (formerly) unproved statistical theorems.”

I ran into a new one today: the unique games conjecture. If true, this places some fundamental limits on our ability to approximate solutions to NP-hard problems unless P = NP.

I’ve got to admit the majority of this thread is going right over my head.

I do get the general gist that there are still problems to be solved though, and this leads me to ask - Can these problems be overcome with brute computer force? Will there always be new maths problems, or is it only a matter of time and computing power before everything to do with maths is known?

Well, the idea, ivan astikov, is this:

There’s no program which can uniformly give the correct yes/no answer to questions of the form “Does program P spit out ‘No’ when fed itself as input”?

Why? Well, suppose we asked such a program about itself; we’d be asking it “Do you spit out ‘No’ when asked about yourself”? Any answer it gives would have to be wrong.

Thus, we have one family of questions which no program can uniformly solve. The implications of this for your question I leave for you to draw [though I’ll just note that nothing in the above required that by “program”, I meant specifically “computer program”; you can formalize “program” in the manner of your choice and still apply the above reasoning].
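The diagonal argument above can even be acted out in code: hand me any claimed decider for “does P say ‘No’ about itself?”, and I can mechanically build a program it gets wrong. Here’s a minimal Python sketch, using Python functions as stand-ins for programs:

```python
def make_diagonal(decider):
    """Given a claimed decider for 'does program P say "No" about
    itself?', build a program that says 'No' about itself exactly
    when the decider predicts it won't."""
    def d():
        return "No" if not decider(d) else "Yes"
    return d

def decider_is_right_about_its_diagonal(decider):
    """True iff the decider correctly predicts its own diagonal."""
    d = make_diagonal(decider)
    actually_says_no = (d() == "No")
    return decider(d) == actually_says_no

# Whatever the decider answers, it's wrong about its diagonal;
# two trivial deciders suffice to illustrate the trap.
print(decider_is_right_about_its_diagonal(lambda p: True))   # False
print(decider_is_right_about_its_diagonal(lambda p: False))  # False
```

The real theorem replaces “Python function” with any formalized notion of program, but the self-defeating structure is exactly this.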

No, if something is unsolvable, a solution cannot be reached. Unsolved just means a solution hasn’t been found yet. There are both kinds. I would assume that most of the latter ones will be knocked off eventually.

Some pressing mathematical problems can be investigated with computers, and more power is always helpful (the Pentium bug was identified by Dr. Thomas R. Nicely, who was using computers for pure math research), but in general, the computers don’t solve the problems; they help mathematicians have insights into the problems.

However, I think math will never be exhausted - there will always be more problems to solve.

Perhaps one problem is that it sometimes seems like ivan astikov is thinking of all math problems as being something like “Calculate this number”. But much of math is about different kinds of questions; e.g., “Is it true that…?”. Granted, one can always view any question as some kind of calculation of a number if one wants to straitjacket it into that form, but perhaps thinking of things that way is a misleading perspective.

Once you start thinking about all the yes/no questions you might want to ask, you’ll quickly start hitting upon questions that you have no idea how to write a computer program to solve (e.g., “Are there infinitely many integers with such-and-such a property?” Well, it’s not like you could just have a computer program to count them all up and check…).

Of course, for any particular problem, there’s some computer program that gives the right answer to it (trivially; either the program that just spits the word “Yes” to the screen or the program that just spits the word “No” to the screen will give the right answer). But this is totally useless. The problem is, how does one come to know that a particular program gives the correct answer to a particular question?

So really, what we want is not that every question is correctly answered by some program. We want a single program that correctly answers lots of different questions. But, as I illustrated above, there are certain families of questions which no single program can give the right answers to all of. It’s not a matter of computing power or anything like that; it’s just for the same reason that you yourself cannot correctly give the right answer to “Does ivan astikov answer the last question of post #60 in this thread with ‘No’?”