Ah, don’t be too hard on yourself… I’ve been studying Introductory Real Analysis by A. N. Kolmogorov & S. V. Fomin (translated by Richard A. Silverman), which includes a little set theory. Trust me, your comments allowed me to see the forest for the trees. My main reason for studying it is trying to understand the theoretical underpinnings of Ito calculus. It’s not as hard as I thought to understand the concepts conceptually (because it all seems pretty obvious after you “grasp” the idea). The hard part is getting to that point through all of the false starts, wrong turns, etc. The big picture you provided allows me to make some course corrections along the way…
When you say you “study,” what exactly do you do?
Where do you stand on the role of computer-assisted proofs in mathematics? Do proofs involving gigabytes of data, checkable only by automated means, count as proven or not?
How much do you read outside of your area? For example, there seems to be a movement within computer science to define formal denotational semantics of programming languages via category theory and other forms of abstract mathematics. Do you read around the mathematical sciences to see whether your research may be applicable there, or does this not concern you?
Which of the remaining Millennium Problems do you expect to be solved first?
For your first and last questions, I imagine if you asked X many mathematicians, you’d get X many answers. (Well, with X not greater than 6…)
My own views are that it’s distressing how few mathematicians have familiarity with computer proof systems, and that, ideally, all proofs should be routinely formalized, typed up, and verified. Unfortunately, such systems as exist today are far from practical enough for this to be so routine; they involve too much tedious boilerplate and low-level shuffling in the formalizing of most real mathematical proofs. Still, that is the goal which we should be striving towards.
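To give a flavor of what I mean, here’s a toy machine-checked proof, written in Lean (one such proof assistant; the example is my own, not anything deep):

```lean
-- A machine-checked proof that addition of natural numbers is commutative.
-- The checker verifies that the term on the right really has the stated type.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Real proofs, of course, balloon far beyond this one-liner, which is exactly the boilerplate problem I’m complaining about.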
Of course, what you’re really getting at is the business of the proof of the four-color conjecture, and similar things. The controversy over that proof basically breaks down into two parts: first, should we trust the results of the computer program involved, when there could have been a bug in the program, the compiler, the hardware, etc.? All this is true (bugs are a fact of life), but not really specific to computers: human-written mathematical proofs are just as likely to contain bugs, human computations can suffer glitches (who hasn’t experienced a situation in which they keep making the same arithmetic error repeatedly?), etc. So if you’re looking for 100% reliability, you’re not going to get it from computers, but you’re not going to get it from humans either; it’s too lofty and unrealistic a notion of what proofs really do. Numerous are the times that a proof has been considered valid by mathematicians, only to be discovered to be flawed much later. The best you can get, as far as proof goes, and this holds for math or anything else, is something in a form that can be subjected to repeated scrutiny, looking evermore for the bugs, and eventually feeling confident enough that there aren’t any. And this works as well, or really better, with computer-formalized proofs than with human proofs. If you’re worried about compiler bugs or hardware bugs, you can keep writing new compilers and testing the proof out on new hardware. If you’re worried about bugs in the actual program of the proof itself, well, that program is something human-written, and just as manageable and amenable to human inspection as any other sort of proof a human can come up with.
The second part of the controversy with the four-color theorem is the business of the brute-force computation being so “large” that no human could ever hope to verify it step by step. Again, though, this isn’t really a problem specific to computers; a proof created in many parts by many human mathematicians, like the classification of the finite simple groups, suffers similarly. It’s always desirable to bring a lengthy and wild proof down to size and make it more manageable, both in terms of increasing our trust in its lack of errors and in terms of gaining something which can more readily impart “understanding”, but that doesn’t mean we have to reject the wild proofs entirely.
Perhaps the problem is with the binary view of proof acceptability: a proof either is valid or it isn’t. Well, certainly, with a fully formalized argument, in a Platonic sense, this is true, but it is a notion rather detached from the way mathematics is really done: almost no real mathematical proofs of any length are so fully formalized (though, as I said, I would like to see a greater degree of formalization and mechanical verification), and, for both those that are and those that aren’t, any process of proof-checking available to us will always be fallible. There’s just no way around that. So rather than a binary view of proof acceptability, we can grade proofs by how much trust we put in the assumption that they are bug-free: some proofs are very clearly buggy, some proofs are small and graspable and viewed as almost certainly bug-free, and various other more hirsute proofs lie somewhere on the continuum of reliability (“Oh, I’m pretty sure their argument was legitimate, though I wouldn’t bet my house on it”).
So… those are my views. Obviously, others will feel (very) differently.
I’m personally very concerned with connections between abstract research in logic and various potential applications in the denotational semantics of programming languages, but you’ve just happened to get lucky with where my interests lie. I think most mathematicians do read around a bit, into areas somewhat adjacent to theirs, not necessarily only to see if their work has applications, but also just out of pure intellectual interest in matters similar to those which they’ve found interesting enough to work on.
I have no idea which of the remaining Millennium Problems will go down first; the one I have the most familiarity with is P ?= NP (often called the greatest open problem in logic, and thus the one of most concern to me), and the one that’s been around the longest with the most attention paid to it is the Riemann Hypothesis (often called the greatest open problem in all of mathematics), but I’m really not aware of any rumblings anywhere that are on clear paths to blowing up into solutions to any of them. (That is why they’re open problems of such note, I suppose. No one has any real idea of how to go about proving/disproving any of them, so any work towards doing so is expected to involve seminal developments of new and productive constructs.) I’m kinda hoping the Goldbach Conjecture (which was, incidentally, bundled with the Riemann Hypothesis by Hilbert) gets solved soon, personally, though I think there are even fewer rumblings of progress on that problem than on any of the Millennium Problems. It’s just been around so long, though, and now that Fermat’s Last Theorem is down, its time has come.
Well, that and the connected but much, much older twin prime conjecture, which has been open for an embarrassing 2300 years or so.
I have a set of set theory questions. As I understand it, Gödel proved that within a complex enough mathematical system, there are things that are true that cannot be proven from within that system. Feel free to tell me I am completely wrong. Despite having some real mathematical training, I’ve never learned any real set theory, and my understanding of set theory is pretty much limited to the newspapers.
Question 1: What is a mathematical system? Does this imply that there are things that are true about real numbers that can only be proven from an encompassing system? And what would such a system be? The complex numbers? Or the set of general linear matrices? (I think I am asking what makes one system encompass another.) Are there examples?
Question 2: Does the existence of things that are true that can’t be proven imply the existence of experimental mathematics? For example, it is not clear to me why the usual application of math to physics couldn’t be reversed in special cases. Obviously this “truth” would be suspect, but not necessarily more so than a conjecture.
Thanks.
Yes, Nava, if you noticed, in answer to your question, “what is set theory?” I linked to the Wikipedia article on Set, not on Set Theory. Read that one.
All right. Now it’s my turn to ask a question. What’s your deal, Indistinguishable? You seem to know a bit more about large cardinals than the average bear…
I believe your username is a pun on ineffable, indescribable, subtle, etc., no?
I see. Double the amount of homework problems you do. This is how to study math: solve problems.
I suppose computer proofs are okay. I don’t really know much about them. But do you mean stuff like the 4-color theorem, or the stuff that Doron Zeilberger does with Maple?
I don’t read much outside my area, but I did take various classes before I started my thesis. More importantly, if during my research, it becomes clear that I need to learn more about area X, then I read more about area X.
I know very little about most of the Millennium problems.
This is correct. For example, take the natural numbers with only addition and multiplication. Then there is a statement about this system which is true but not provable. Naturally, I am not able to produce such a sentence, as that would implicitly provide a proof of its truth.
Not necessarily. And I think the notion you are describing is a submodel.
No. I think fast computers imply the existence of experimental mathematics.
So, let us presume there is something true about the natural numbers with only addition and subtraction that we have not been able to prove using number theory. Might it be provable in another branch of mathematics, such as a special case of a set theory proof? To me it would seem that you could turn such a special case into a number theory proof, so that set theory just provides a helping hand. Which brings up a related question. Are there proofs that some things are unprovable, without proving said thing is false, thus leaving open the possibility that it is true? (Yes, I understand proving something is not false is proving that it is true.)
Aah, poor choice of semantics on my part. Anyone who has solved some probability questions has undoubtedly done some “trials” in that sense. How about physical mathematics, instead? I was thinking that perhaps finding physical models for mathematical systems might help find mathematical truths without, or in conjunction with, experimental mathematics. Suppose I have very good reasons to believe a set of difficult-to-solve equations is a good model of reality. For example, QCD. I don’t know that anyone has proven the existence of a solution under the conditions describing a proton. Physicists don’t tend to do existence proofs. All protons know what mass to be (or, rather, what mass distribution to follow), so clearly there is a solution. Its existence might be provable, it might not.
The night before an exam I go over what is going to be on the test, like the vocabulary and formulas and what not. Then I have a two-hour break before my math class, so I go to the tutoring center and we do the different kinds of problems that will be on the test.
Now, this is minutes before the exam and I am still not able to actually go in and do the tests.
It’s frustrating. Sometimes I think I should go all the way back to algebra and what not and work my way back up.
And I’ll try that “do more math problems” thing. Couldn’t hurt I guess.
Apparently pretty far. Set Theory and Boolean Algebra have applications to Semantics and (it seems to me) Topology has applications to Syntax.
For the OP: The last math course I took was in community college (got an A). I’m interested in Calculus. What’s the best way to go about getting versed in that without enrolling in night school (I have a day job)?
Take a look at the fall '05 single variable calculus class from MIT. Following along with that, and picking up the background that you need, is probably the easiest way to do it on your own.
Yes, to your last question; this happens all the time, especially in set theory. But first let’s clarify what we mean when we discuss “unprovable” and what we mean when we discuss “true” and “false”.
The way Gödel’s incompleteness theorem works is that, for any given consistent formal logical theory (of suitable complexity; e.g., able to interpret and prove various very simple and obvious things about the natural numbers), there is some statement which can’t be proved or disproved from that particular logical theory. However, the statement could (and would) be provable from all kinds of other consistent formal logical theories; after all, we could always consider the new theory obtained by adding the statement as an axiom to the old theory, and this would trivially prove the axiom. But you could never come up with a single consistent formal logical theory that either proves or disproves every statement. Basically, for every consistent formal theory, there will be some statement it can’t prove or disprove, but there will be other consistent formal theories which do prove it and other consistent formal theories which do disprove it. So “unprovable” is always relative to some particular theory; there’s no such thing as just plain unprovable, rather, there’s unprovable from this theory, unprovable from that theory.
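In symbols (standard notation, nothing exotic): if a consistent theory T proves neither a statement φ nor its negation ¬φ, then

```latex
T \cup \{\varphi\} \vdash \varphi
\qquad\text{and}\qquad
T \cup \{\neg\varphi\} \vdash \neg\varphi,
```

and each of those extended theories is still consistent, so the same incompleteness phenomenon immediately starts over for each of them.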
Now, as for “true” and “false” … one thing that it’s important to note is that logicians are happy to study theories/systems of axioms without any regard for whether those axioms are actually true, in some sense; you can take any bunch of propositions, and wonder what can be proved from them. You don’t have to limit yourself to only studying the consequences of true theories. Consistency is the only real worry. After all, any consistent theory has some interpretation that makes it true. If I say “There is a number whose square is -1”, that can be interpreted to be true within the context of the complex numbers. But its negation can be interpreted to be true within the context of the real numbers. And this isn’t some special coincidence; whenever you have a consistent theory, you can find some structure and interpretation of the theory into that structure so that it comes out true. Thus we’re free to study all kinds of apparently conflicting theories, knowing that, as long as they’re consistent, they can each be interpreted in some different way so as to be true of different things. So it’s very difficult to take a statement in itself and say “Oh, this statement is true” or “Oh, this statement is false”, because there may be different ways to interpret it that make it come out differently. So, again, there’s no such thing as “true” or “false” in itself; there’s just “true” or “false” relative to a particular interpretation.
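To put that example in symbols, with φ the sentence in question:

```latex
\varphi \;\equiv\; \exists x\,(x \cdot x = -1), \qquad
\mathbb{C} \models \varphi \ \text{(witness: } x = i\text{)}, \qquad
\mathbb{R} \models \neg\varphi \ \text{(real squares are never negative)}.
```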
So, when we say there are true things which are unprovable, what we mean is, if you fix a particular formal theory and a particular corresponding interpretation, there is something which is true according to the interpretation, but not provable from the theory. And the specific reason it will be not provable from the theory will be that there is another interpretation corresponding to the theory in which it comes out false.
Now, re: set theoretical proofs of propositions about natural numbers, the most frequently discussed system for proving things about natural numbers is called Peano Arithmetic, and the most frequently discussed system for proving things about set theory is called ZFC. As it happens, ZFC proves everything about the natural numbers that PA proves (once you interpret numbers in a suitable way as certain sets), but also proves certain additional number-theoretic statements which PA cannot prove [if PA is consistent, that is. And very few people doubt the consistency of PA, since it amounts to the same thing as doubting the existence of a structure in which natural number arithmetic can be interpreted]. One good example of such a number-theoretic statement which can be proved in ZFC but not PA is Goodstein’s Theorem.
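To make Goodstein’s Theorem concrete, here’s a minimal Python sketch (the function names are my own) of how the sequence is generated: write the current value in hereditary base-b notation, replace every b with b+1, subtract 1, and repeat with the next base. The theorem says every such sequence eventually reaches 0; ZFC proves this, PA cannot.

```python
def bump(n, b):
    """Rewrite n in hereditary base-b notation, replacing every b with b + 1."""
    if n == 0:
        return 0
    result, power = 0, 0
    while n > 0:
        digit = n % b
        if digit:
            # Exponents are themselves rewritten in hereditary base b.
            result += digit * (b + 1) ** bump(power, b)
        n //= b
        power += 1
    return result

def goodstein(n, steps=8):
    """Yield the first `steps` terms of the Goodstein sequence starting at n."""
    b = 2
    for _ in range(steps):
        yield n
        if n == 0:
            return
        n = bump(n, b) - 1
        b += 1

print(list(goodstein(4)))  # [4, 26, 41, 60, 83, 109, 139, 173]
```

Despite how tame it looks here, the sequence starting at 4 keeps climbing for an astronomically long time before it finally collapses to 0.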
Pretty much all modern mathematicians accept the theory ZFC as the gold standard of acceptable proof, though, so even your average number theorist will consider Goodstein’s Theorem to be true. It tends to be only the logicians who worry about this “Provable in this system, not provable in that system” type of stuff.
As for your last sentence, proving something to be unprovable, without proving it to be false, it’s not that hard to see how this could be done. Consider a logical theory of “numbers”, that says only things like “addition is commutative”, “multiplication distributes over addition”, etc. If we could verify that everything in the system held in the complex numbers, we’d know the theory couldn’t prove “-1 has no square root”, since it has an interpretation under which that statement came out false. However, we wouldn’t right away have a proof from the theory of “-1 has a square root” either, since, for all we know, the theory also has an interpretation under which -1 has no square roots (like maybe the real numbers). What’s provable from a theory is all and only the things which hold in every interpretation of the theory. To show that X can’t be proven from a theory, we just show that there’s some interpretation of the theory where X comes out false. But to prove “NOT X” from the theory is essentially the stronger task of showing that X comes out false in all interpretations of the theory.
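In symbols, this is just Gödel’s completeness theorem at work:

```latex
T \vdash \neg X \iff \neg X \text{ holds in every interpretation of } T, \qquad
T \nvdash X \iff X \text{ fails in some interpretation of } T.
```

One counter-interpretation suffices for unprovability; a disproof demands failure across the board.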
Hope that helps.
There are some who hold the view that Gödel incompleteness phenomena force us to turn to physics or various other things to attempt experimental mathematics of this sort, whereby we eventually accept certain mathematical propositions (e.g., the Riemann Hypothesis) as now proved because they seem to fit into our view of the physical world, seem to be true in all the cases we’ve examined, have gone for a long time without being disproved, etc. Notably, Gregory Chaitin holds a view like this. I think it’s a very unpalatable and unnecessary view (I generally disagree with much of what Gregory Chaitin says and does…), but I’ll explain my thoughts on it at a later time.
Perhaps I’m just naively unaware, but what do semantics and syntax have to do with physics? [Though I’ll certainly admit that topology has applications in physics, such as in general relativity.]
That actually was one of several thoughts that I associated with this name before I finally decided to go with it (if I ever get the chance to name a particular type of large cardinal, I plan to go with “an insufferable cardinal”). In that vein, I was also thinking about indiscernibles from model theory, and a variety of things from outside of math. But mostly, the way it came to be was that I just kept switching from potential username to potential username in my head, jumping from one to another at a moment’s notice, and eventually decided to just settle on any old thing, nothing special, nothing that tried to be clever or sum up my personality or make me stand out. Just an indistinguishable username.
As for my deal, I’m a grad student interested in logic, but I haven’t yet decided what particular area of that to be focusing on. I’m actually in an interdisciplinary department dedicated to Logic itself, affiliated with the mathematics, philosophy, and computer science departments, which is nice, because I have interests in all of those areas. But at the moment, I’m just a generic young logician-in-the-making, with no particular area of focus or expertise.
Oh, and, sorry to triple-post, but just one tiny, minor, pedantic point to make. If you study the natural numbers and only talk about addition and subtraction, never mentioning multiplication, you are working in a system which is too weak for Gödel’s work to apply. There’s a very small and clean system called Presburger Arithmetic (basically Peano Arithmetic with all the stuff about multiplication ripped out) which is in fact complete (with respect to statements that only mention addition): every statement of that form is either proved or disproved by Presburger Arithmetic. You can write a computer program which, when fed an addition-only arithmetic claim, tells you whether it’s true or false in the natural numbers.
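As an illustration (my own toy example, leaning on the off-the-shelf Z3 solver rather than writing the decision procedure from scratch):

```python
# Requires the z3-solver package (pip install z3-solver). Z3's engine for
# quantified linear integer arithmetic decides Presburger-style sentences.
from z3 import And, Exists, ForAll, Implies, Ints, Or, prove

x, y = Ints('x y')

# "Every natural number is even or odd", stated using addition only.
claim = ForAll(x, Implies(x >= 0,
                          Exists(y, And(y >= 0,
                                        Or(x == y + y, x == y + y + 1)))))
prove(claim)  # prints: proved
```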
Other similar systems which are too weak for Gödel’s results to kick in and which happen to have nice completeness/decidability properties are natural number arithmetic with multiplication but not addition (Skolem Arithmetic) and real number arithmetic with addition and multiplication, but no ability to single out the natural numbers (sometimes called Tarski Arithmetic, though this name isn’t all that common). You basically need to have enough lying around to speak about natural numbers and addition and multiplication of them to be able to invoke Gödel’s incompleteness theorem (his proof used a technique of encoding statements about [what we now know to be] computer programs as statements about addition and multiplication of natural numbers).
I thought the questioner was looking for examples of other sciences to which advanced mathematics had applications. AFAIK, Topology is an advanced math.
I meant in general. The four colour theorem, Robbins algebras, etc., and general mechanised mathematics in something like Isabelle or Coq.
Thanks Indistinguishable for your answers, by the way.
Also, both of you: what attracted you to set theory/logic? Isn’t foundations of mathematics considered a bit of a hinterland, an area of which most mathematicians are ignorant?
>You’re looking at panel data…
This doesn’t look to me like the same thing. My data have two degrees of freedom associated with the underlying reality, for example position and time, just as time series data do. I have additional information that involves the extra degrees of freedom that are specifically only part of the error.
Fair enough. The way I read it, the sort of applications I was thinking about were more of the sort that men in white lab coats would worry about, but that does seem overly narrow. And, yeah, I’d say topology is an advanced math, but then, I think, when people say “advanced math”, they mean pretty much anything outside of the normal linear sequence from high school into calculus. “Advanced math” would be all the parts of math that most people aren’t exposed to unless they actively take a particular interest in it as mathematicians.
Sure, most mathematicians are ignorant of foundations, but then, almost any specific field you choose to work in, whether in math or otherwise, most people in the world will be largely ignorant of. You can’t really hope for everyone to share your particular passions.
It’s tricky to effectively communicate exactly why I’m interested in the formal study of logic. I think I can’t do anything for too long without feeling the urge to go meta. And I’d been interested in math for a long time, having (back in the day) had some romantic notion of it as the study of necessary, perfectly knowable truths. I thought about math a lot. So, eventually, I took the “meta” turn, and started thinking a lot about thinking a lot about math. Thinking about just what it is that math accomplishes and how it goes about accomplishing it. And math is really just the exemplar of all rigorous thought, just the best means to the end of discovering robust truths. So I started thinking more and more about the foundations of thought, knowledge, proof, etc., both in and outside of the context of math. I think anyone can understand that there are a lot of matters in there of general intellectual interest, no? Like you could take issues from there, and strike up a conversation with a layman with no particular formal background or technical experience in discussing them, and find that, nonetheless, the layman had strong opinions on the matter. Everyone does a lot of thinking, believing, reasoning, etc., and presumably has some interest, at some level, in the process. It all just strikes me as particularly fascinating.
It’s much easier to explain how I got interested in computer science. If you track down any CS major, and ask them how they first got onto that path, they’ll give you the same answer, no matter what their current field of specialty, no matter how far removed their interests ever got to be:
“I wanted to make videogames.”