Ask a mathematician

:stuck_out_tongue:

I think you have a point :smiley:

Thank you sir! That is a nice example of what I was thinking about. (Of course, now I want to read up on ordinals so that I can better understand the proof at the link you included.) If I understand correctly, you can indeed prove something is unprovable in a complex enough logic system, and yet prove it from another system. I was thinking that if I proved it in one system, that proof would be somehow “reducible” to the first system. (For example, that natural numbers could be represented as certain sets seems intuitive.) Of course, now I’m tempted to look up the proof that Goodstein’s Theorem is unprovable within Peano Arithmetic.

Now you reduce me to wanting to have a beer with you and a long discussion about such things. And of course, I want to examine Gregory Chaitin’s views to see if I agree or disagree…

One last question. At least, I think it’s the last. If you can indeed prove that something is unprovable within a given system, is it possible to prove that something is provable, but that the proof would require an infinite number of steps, and hence be “practically unprovable”? It would seem it must be, because if one could show something is not computable because it would take an infinite number of steps, then there would be some interpretation that maps incomputability to “practical unprovability”.

Whippersnapper! That was not the case for CS majors when I was in school, because there were no videogames! Actually, there wasn’t any kind of “video” (crt, lcd, etc.) associated with computers.

Yeah, I thought of putting some sort of age disclaimer on it, but figured “Hey, the old-timers probably got the urge later on too.”

The sort of proof systems to which the classic results (e.g., Goedel’s incompleteness theorems) apply are ones in which proofs are required to have only finitely many steps. Generally, if you start allowing non-finitary rules of inference, it becomes possible to achieve completeness. For example, consider Goodstein’s theorem. It says “For every natural number n, the Goodstein sequence starting at n eventually terminates.” Now, in every instance of this for a particular number n, it is possible to provide a proof in Peano Arithmetic (by actually constructing the full terminating sequence). Thus, PA can prove “The Goodstein sequence of 0 terminates”, “The Goodstein sequence of 1 terminates”, “The Goodstein sequence of 2 terminates”, “The Goodstein sequence of 3 terminates”, etc., all the way up. If we had infinitary inference, we could jump from those infinitely many individual steps to Goodstein’s theorem. However, PA (like most proof systems that get studied, and all proof systems that humans can actually use) does not have any sort of infinitary inference, and is in fact unable to prove Goodstein’s theorem in any way.
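To make the base-bumping concrete, here is a small Python sketch (my own illustration, with illustrative names like `bump_base`) that computes Goodstein sequences by the standard hereditary-base rewrite. Only tiny starting values terminate quickly:

```python
def bump_base(n, b):
    """Rewrite n in hereditary base-b notation, replace every b
    with b+1, and evaluate the result."""
    if n < b:
        return n
    total, exponent = 0, 0
    while n > 0:
        digit, n = n % b, n // b
        if digit:
            # the exponents are themselves rewritten hereditarily
            total += digit * (b + 1) ** bump_base(exponent, b)
        exponent += 1
    return total

def goodstein(n, max_steps=50):
    """Goodstein sequence starting at n: bump the base, then subtract 1."""
    seq, base = [n], 2
    while seq[-1] > 0 and len(seq) < max_steps:
        seq.append(bump_base(seq[-1], base) - 1)
        base += 1
    return seq

print(goodstein(1))  # [1, 0]
print(goodstein(2))  # [2, 2, 1, 0]
print(goodstein(3))  # [3, 3, 3, 2, 1, 0]
# goodstein(4) terminates too, but only after roughly 3 * 2^402653211 steps,
# which is why no computer run will ever show you that termination directly.
```

Each run of this program is, in effect, one of those individual PA-provable instances; no finite collection of such runs adds up to the full theorem.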

But let’s suppose we modify what you said a little, and replace “infinite number of steps” with “hobajillion many steps”, where hobajillion is finite but far larger than practical. Then there are plenty of “feasibility” results concerning proof lengths and such which touch on questions about this. One, in particular, seems the sort of thing you’re looking for. It’s very easy and cute to modify Goedel’s incompleteness proof, to get the following result: given any (consistent, strong enough, formal, can talk about natural numbers, etc.) theory T, there is some formula X(n) such that two things happen:

  1. T can prove “T can prove X(n) for each particular n”. The number of steps it takes to do this is relatively small.
  2. For each particular n, T can indeed prove X(n), but the number of steps it takes to do so is greater than n.

So, then, by considering the statement X(hobajillion), we have basically what you’re asking about: there is a small proof in T of “T proves X(hobajillion)”, but there are no small proofs in T of X(hobajillion). [This result is due to Rohit Parikh, who basically founded the formal study of feasibility].

Interesting, huh?

One last clarification on unprovability results. At least, I think it’s the last, though I’m always game. This post isn’t as interesting as the previous one, with its cute conclusion; this post is just designed to clear away some potential misconceptions and perhaps increase understanding.

It’s important to be clear that the fact that ZFC can prove something which PA can’t has nothing to do with the fact that ZFC talks about sets while PA talks about natural number arithmetic. Remember that PA is not the same thing as number theory; PA is just one particular collection of axioms of number theory. One can easily come up with purely number-theoretic systems which are stronger than PA (for example, the system PA + Goodstein’s Theorem). Or number-theoretic systems which are weaker than PA, or just plain different. And there are purely set-theoretic systems which are weaker than, stronger than, or just plain different from ZFC.

The only problem is, if you want someone to believe a fact because you’ve proved it from your pet system T, they’re going to ask you why they should trust T. At that point, you have to justify the axioms of T to them. The axioms of PA are all pretty simple things that almost everyone will agree are straightforwardly, obviously true of natural number arithmetic (for example, one axiom of PA is “For all x . x + 0 = x”), but stronger systems of number theory could toss in hairier axioms that aren’t so clearly true of natural number arithmetic. If you take Goodstein’s Theorem, or Fermat’s Last Theorem, or some crazy hairy thing to be an axiom of your number-theoretic system, and you want people to accept your results, you’ll need to come up with some convincing rationale for those axioms.

As it happens, a lot of people believe in a set-theoretical universe containing the natural numbers, and find the axioms of ZFC clearly, obviously, intuitively true of this set-theoretical universe. And so, for many people, you could convince them of Goodstein’s Theorem’s truth by taking the detour through ZFC. But it’s not the mere fact that the detour involves looking at set theory that gives it its power; rather, it’s the fact that people are willing to grant you certain very strong assumptions if you phrase them as about sets (assumptions like “There is an infinite set” and “Every set of sets has a union” which are contained in ZFC) which they don’t so clearly see as immediately reasonable when they’re phrased in an essentially equivalent, but opaque, manner as about number theory.

As for reducing proofs in system A to proofs in system B, the way it works is this: remember (as I tried to illustrate before with the systems of complex and real numbers) that a theory T can prove a fact X if and only if X is true in every structure which satisfies the axioms of T. [This relates a syntactic definition of proof (“Valid proofs are finite sequences of lines which proceed from each other by one of the following rules…”) with the natural semantic definition of logical consequence. That the two are equivalent is called soundness and completeness; as an amusing sidenote, the completeness part of this was established in Goedel’s doctoral thesis, and, yes, is called Goedel’s Completeness Theorem, the word “completeness” being used here in a different sense than in Goedel’s Incompleteness Theorem]. This gives us something of a reduction theorem: If ZFC proves something, and that something is actually true in every model of the axioms of PA, then PA can prove it too (though PA may only have a much longer proof). But if ZFC can prove something and PA can’t, then we know that there is a structure satisfying the axioms of PA (but not those of ZFC) where the fact fails to be true.

Thus, since PA doesn’t prove Goodstein’s Theorem, there is a structure which is a lot like the natural numbers, in that it still has x+0 = x and 0 as a minimum element and the Chinese Remainder Theorem and the Euler-Fermat Theorem and everything else provable from PA, but in which Goodstein’s Theorem fails. This structure basically has lots of extra elements above all the standard natural numbers, and for some of these extra elements the structure sees no terminating Goodstein sequence. The fact that ZFC does prove Goodstein’s Theorem tells us that this crazy structure can’t be nicely extended into something that satisfies all the axioms of ZFC. That these kinds of structures exist is precisely the reason why proofs can’t always be “reduced” from one system to another.

And now for something a fair bit simpler.

I don’t have any respectably real graduate work yet, but for my own investigations and dithering around, I’m fond of Haskell. I used to have a copy of Mathematica and used it occasionally for math problems of the more traditional sort that were pretty far from my area, basically just as a more powerful and conveniently-accessed substitute for my TI-89; however, I never got to be all that well-acquainted with it, and no longer have any access to it.

I might be a little late here in asking a question to Jamaika a jamaikaiaké. However, in Japan, they recently began broadcasts of the FOX television program Numb3rs. Have you seen it? If so, what do you think of the math presented in that program to solve crimes?

Maybe I can help…

Okay, so based on Jamaika’s answers, you have at least a decent understanding of limits.

Sometimes math teachers like to tell the story of how a theory or technique was developed. The story you sometimes hear is that in Isaac Newton’s time, there were a couple of problems that puzzled everyone. The answer to the first one is differentiation, and the answer to the second one is integration. And it turns out the two are related.

The first problem is that of trying to find the slope of a tangent line to the graph of a function. So, imagine you have a function x(t) which is the position of a car after t seconds. Imagine it on a track which has markers every meter. It starts at x=0, and moves at a steady speed (i.e. without slowing down or speeding up) to x=10, over the course of 10 seconds. If you were to graph x(t) with t on the x-axis and x on the y-axis, it would be a straight line connecting (0,0) to (10,10).

The slope of the line (remember slope? rise over run?) is 1, so the car moved at a rate of 1 m/s.

Now, what if the car starts from a dead stop, gradually accelerates to some higher speed, say 2 m/s, then gradually slows down to a stop again, still covering 10 m in 10 s. The average speed is still 1 m/s, but it’s also going much slower than that at some points, and much faster at others. The graph is some sort of S-shape. So let’s say we want to know the instantaneous speed at t=3, the speed that the speedometer would say at that moment. We draw a line which is tangent to the graph at t=3 and find the slope of that line. Such as the red line in this picture: Derivative - Wikipedia

That slope is the DERIVATIVE of x(t) at t=3. The way we find that number, other than just drawing a line and measuring the slope the best we can, is called DIFFERENTIATION and it involves limits, which is part of the reason Jamaika brought up the concept of limits.
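If you want to play with this numerically, here’s a short Python sketch. The S-shaped curve `x(t)` below is an assumed example (a standard “smoothstep” that covers 10 m in 10 s, starting and ending at rest; its top speed is 1.5 m/s rather than the 2 m/s above), and the tangent slope is approximated by the slope of a secant line through two very close points:

```python
def x(t):
    # assumed S-shaped position curve: starts and ends at rest,
    # covers 10 m in 10 s (not the exact curve from the post)
    u = t / 10
    return 10 * (3 * u**2 - 2 * u**3)

def slope_at(f, t, h=1e-6):
    # slope of the tangent line at t, approximated by the slope
    # of a secant line through (t-h, f(t-h)) and (t+h, f(t+h))
    return (f(t + h) - f(t - h)) / (2 * h)

print(x(10) - x(0))         # total distance covered: 10 m
print((x(10) - x(0)) / 10)  # average speed: 1.0 m/s
print(slope_at(x, 3))       # instantaneous speed at t=3: about 1.26 m/s
print(slope_at(x, 5))       # top speed, at the middle: about 1.5 m/s
```

So even though the average speed is 1 m/s, the speedometer at t=3 would read about 1.26 m/s, which is exactly the sort of thing the tangent-line slope captures.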

Does this make sense?

Who would do better on Deal or No Deal, Einstein or Scharenberg?

Not really a math question, but this seems like a thread where someone might know the answer …

A few months ago I read an article about some Russian math guy who was very close to solving some particularly significant math problem. The quirk is, as far as I could tell, that he’s a slightly weird former math student living with his mom, as opposed to an actual professor.

I would love to find the article again, but can’t remember the name of the guy or the name of the problem.

Does this sound at all familiar to anyone?

The mathematician in question is Grigori Perelman, and he solved the Pointcare Conjecture.

Thank you!

It’s the Poincaré Conjecture. Perelman has been a professor in the recent past, and he would be one now if he wanted to be. He just decided, for reasons of his own, to leave the math community.

To the first, I doubt it. I program, but I got into it from physics. My CS coworkers of the same age were old enough to recognize sweatshops by the time video games came out.

Absolutely to the second. The following post was interesting, also, btw.

All right so Calculus gives us functions to figure out the speed and distance at a set time if speed is not constant…and uhm…also lets us know the possible fastest and slowest speeds, and longest and shortest distances?

So like…what does this mean to me? Helps figure out where stars are and where they’re going and how long they’ve been there or something?

I think a lot of times my “problem” with math is the “how does this apply in the real world?” question. My boyfriend was getting all excited about some sort of contest for finding special prime numbers the other day, and when I asked him why prime numbers are important and why this particular set of prime numbers were important all he could come up with was “well I could get $50k if I found one.” :slight_smile: (he is not a mathematician, just a nerd)

How about a short list of practical uses for calculus? I think learning how to do calculus is beyond me, but knowing why it’s important with real-world examples would be most helpful.

This means that calculus can be used to study anything that’s changing (over time, or in relation to something else that’s also changing). The classic example is a moving object (moving = its position is changing), but you can also study changing populations, changing prices, changing pollution levels, etc. You can also study geometric curves and surfaces, like a roller coaster or a mountain range: as you go from one end to the other, your height is changing.

Note: If something’s moving at a constant rate (like if you’re driving at a steady 60 miles per hour, or the population’s growing at a steady 200 people per month), you don’t need calculus, because it’s regular and predictable enough that you can figure out everything you need to know without it. Very roughly, calculus treats less regular motion by chopping it up into infinitely small intervals and treating it as if the motion were occurring at a constant rate within each of those intervals; or as if you thought of a curve as infinitely many, infinitely short straight line segments all strung together.
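The chopping-up idea is easy to demonstrate on a computer, where the intervals are small but not infinitely small. Here’s a Python sketch with a made-up speed profile `v(t)`: we pretend the speed is constant over each little interval, add up the resulting distances, and watch the estimates settle down as the chopping gets finer:

```python
def v(t):
    # a made-up speed (m/s) that changes over time:
    # starts at 0, peaks, and falls back to 0 over 10 seconds
    return 0.6 * t - 0.06 * t**2

def distance(v, t_end, n):
    # chop [0, t_end] into n small intervals and pretend the speed
    # is constant on each one (sampled at the interval's midpoint)
    dt = t_end / n
    return sum(v((i + 0.5) * dt) * dt for i in range(n))

for n in (10, 100, 1000):
    print(n, distance(v, 10, n))
# the estimates home in on 10.0 meters, the exact distance traveled
```

The integral is precisely the limit of these sums as the intervals shrink to nothing.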

I’ve seen at least one textbook that describes calculus as “the mathematics of change and motion.” A pure mathematician might quibble with this—it is possible to study calculus theoretically without ever referring to anything changing or moving—but it gives you a good idea of how it has historically been thought of and used.

Here’s a simple problem that needs calculus to be solved: Suppose you’re looking to invest $10000 for 20 years, and you’re given a choice of two bank accounts. Account 1 earns simple interest at a rate of 8% per year, while account 2 earns compound interest at a rate of 5% per year. If you have the option of switching your money out at any given time, what’s the most you can end up with after 20 years?
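Under one reading of that problem (an assumption on my part: you start in the compound account and move the whole balance into the simple account exactly once, at a time t of your choosing, with simple interest then paid on that balance for the remaining years), you can brute-force the best switch time in Python; calculus finds the same t exactly by setting the derivative to zero:

```python
def final_value(t, principal=10_000, years=20):
    # assumed strategy: compound at 5% until time t, then move the
    # whole balance to the simple account, which pays 8% of that
    # balance per year for the remaining (years - t) years
    grown = principal * 1.05 ** t
    return grown * (1 + 0.08 * (years - t))

# brute-force search over switch times on a fine grid
best_t = max((t / 100 for t in range(0, 2001)), key=final_value)
print(best_t, final_value(best_t))  # about 12 years, about $29,452
print(final_value(0))               # all 20 years simple: $26,000
print(final_value(20))              # all 20 years compound: about $26,533
```

Either pure strategy leaves money on the table; the calculus question is exactly “where does the growth rate of one account stop beating the other?”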

So, I haven’t left or anything, it’s just that Indescribable has done so well answering questions. Leave well enough alone, I suppose :slight_smile:

Oh, and Numb3rs is silly, though I’ve only seen 1 or 2 episodes.

Who’s Scharenberg?

Here’s why I brought up limits. Even if you aren’t proficient at computing limits, it helps to understand what they are, because both derivatives and integrals are defined using limits.

Let’s consider the derivative. The derivative is meant to be the slope of a graph at a single point. But slope is only defined between two points on a graph:

Slope: Given (x,f(x)) and (y,f(y)) two points on a graph, the slope between them is (f(y)-f(x))/(y-x) or “rise over run”.
So how can we get this concept for just a single point? Well, if we fix one of the points, and let the other one get infinitely close to the fixed one, then we will get something like the slope at the first point. This is where limits come in. A limit allows us to formalize the notion of “let the other one get infinitely close to the fixed one.”

In other words, the derivative of the function f at the point (x,f(x)) is:

The limit as y approaches x of (f(y)-f(x))/(y-x)
As for applications, replace every instance of “slope” above with “velocity.” How fast was I running? You measure how long it took me to run 1 meter, say 2 seconds. Then you would say that I averaged 1/2 meters per second. How fast was I running at a given instant? Well, then you have to take infinitely many measurements and take the limit of the average velocities as the time interval approaches zero. Now, of course this can’t really be done (take infinitely many measurements?) but maybe this gives you an idea of what kinds of problems calculus tackles. Actually, ultrafilter is probably better at outlining direct applications of calculus. I’ve really gone off the theoretical cliff, and never knew all that much physics to begin with…
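You can watch that limit happen numerically. Here’s a Python sketch (the runner’s position function is a made-up example, a runner speeding up steadily): as the measurement interval shrinks, the average velocities home in on a single number, and that number is the derivative:

```python
def position(t):
    # hypothetical runner: position in meters after t seconds,
    # speeding up steadily
    return 0.5 * t ** 2

def average_velocity(t, dt):
    # average velocity over the time interval [t, t + dt]
    return (position(t + dt) - position(t)) / dt

# shrink the interval and watch the averages converge at t = 2
for dt in (1.0, 0.1, 0.01, 0.001):
    print(dt, round(average_velocity(2, dt), 4))
# 2.5, 2.05, 2.005, 2.0005 -> the limit is 2.0 m/s, the
# instantaneous velocity (the derivative of position) at t = 2
```

No finite measurement gives exactly 2.0, but the limit of the averages does, and that is precisely the difference quotient definition above.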