Does Every Math Concept Have A Proof?

Seems trivial, perhaps, but when (presumably) Newton invented Calculus, for example…how did he prove that every derivative is the instantaneous slope of a curve (for any specific point on the curve) and the integral is the area under the curve? Besides a few basic examples where it seems intuitive that it works, how can it be proven to be true for all cases?

{I can only WAG that the answer may reside in the mathematical definition of a derivative, for example, which many will recall is (a) the more rigorous approach for finding a derivative and (b) purely an academic exercise when compared to the more practical way of finding a derivative taught soon thereafter in Calc I.}

  • Jinx

My (admittedly limited) understanding is that it’s the opposite; mathematicians have demonstrated that it’s impossible to develop a completely provable mathematical system - at some point every system requires some unproven assumptions.

No.

First of all, there are axioms and definitions, which by definition (ow, stop throwing things!), aren’t proven. Axioms are things assumed to be true and definitions are just that. The derivative falls into the latter category. The derivative is the slope of the tangent line. If you calculate the derivative and it’s not, you calculated it wrong. There are proofs for why the power rule and such work, but I don’t want to go into them now.

As Little Nemo said, there are some things in any mathematical model that are true, but can’t be proven.

I would give more examples of things, but I don’t know that kind of stuff off the top of my head and I don’t want to look it up now.

Then there are conjectures, which are believed to be true, but have yet to be proven true or false.

One of the more tantalizing ones (because it seems so simple at first) is Goldbach’s Conjecture: Every even number greater than 2 is the sum of two primes. Nobody has ever found an example that proves it wrong, but nobody has managed to prove that it’s always true, either.
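
Just for fun, here’s a brute-force sketch in Python (my own toy check, nothing more) that hunts for a Goldbach pair for every even number up to 10,000. It finds one every time, but of course that’s evidence, not a proof:

```python
def is_prime(n):
    """Trial-division primality test; fine for the small numbers used here."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n):
    """Return one pair of primes summing to the even number n, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Every even number from 4 to 10,000 turns out to have a pair -- but checking
# finitely many cases is evidence, not a proof.
for n in range(4, 10001, 2):
    assert goldbach_pair(n) is not None

print(goldbach_pair(100))   # e.g. (3, 97)
```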

IIRC, a mathematical statement without a proof is a hypothesis. The self-consistent premises which form a set by which one makes all other mathematical conclusions are axioms.

Non-Euclidean geometries.

Because Euclid’s Fifth couldn’t be proven.

Gödel’s incompleteness theorems proved that any sufficiently powerful, consistent mathematical system contains statements that can’t be proven within it.

This is mostly what other posters have said, but (I hope) a bit more comprehensive and a bit clearer.

Every mathematical theory has a certain set of statements that we agree are true, which are called axioms (or, more rarely, postulates). Every theory also has a set of rules for deriving new true statements from old true statements, which are called rules of inference. If there’s a sequence of statements A₁, A₂, …, Aₙ where every Aᵢ is either an axiom or is a consequence of applying the rules of inference to some of the previous statements in the sequence, then Aₙ is a theorem of your theory, and the sequence A₁, A₂, …, Aₙ is a proof of that theorem.
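
To make the “sequence of statements” picture concrete, here’s a toy proof checker in Python. The axioms and the single rule of inference (modus ponens) are made up purely for illustration; real theories are much richer:

```python
# Toy formal system: a "statement" is just a string, the axioms are fixed,
# and the only rule of inference is modus ponens (from A and "A -> B", conclude B).
AXIOMS = {"P", "P -> Q", "Q -> R"}      # hypothetical axioms, purely for illustration

def follows_by_modus_ponens(stmt, established):
    """True if some already-established A and "A -> stmt" justify stmt."""
    return any(f"{a} -> {stmt}" in established for a in established)

def is_proof(sequence):
    """A sequence A1, ..., An is a proof if every Ai is an axiom or follows
    from earlier statements by the rule of inference; An is then a theorem."""
    established = set()
    for stmt in sequence:
        if stmt not in AXIOMS and not follows_by_modus_ponens(stmt, established):
            return False
        established.add(stmt)
    return True

print(is_proof(["P", "P -> Q", "Q", "Q -> R", "R"]))   # True: R is a theorem
print(is_proof(["R"]))                                 # False: no justification for R
```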

A theory is said to be consistent if there is some statement which is not a theorem, and complete if every true statement is a theorem. Completeness and consistency are both desirable properties, and a complete consistent theory would be a very nice thing to have indeed. In the early 1930s, Kurt Gödel showed that any theory which satisfies a certain set of conditions is either incomplete or inconsistent. In particular, the theory describing arithmetic over the natural numbers meets those conditions and therefore can’t be complete and consistent.

However, not every theory meets those conditions, and so there are some where every true statement is provable and every false statement is not. IIRC, the theory of addition over the natural numbers is one such theory (but don’t quote me on that).

**ultrafilter** has done a nice job of boiling down the issue of completeness and consistency (which I couldn’t do; IANAM, even though I read *Gödel, Escher, Bach* :) ), but I think we have veered off from the OP. There can be unprovable statements, but they do not become parts of mathematics. We do not simply accept things on faith in mathematics, except the axioms (my memory of this is fuzzy, but I think things like “a number is equal to itself” may be an axiom).

The first section of Newton’s Principia deals with derivatives and integrals. Newton uses geometric arguments throughout (interestingly, he begins with integrals, then moves to derivatives), so he is assuming all the continuous notions of Euclidean geometry.

Newton’s definition of a limit–while not mathematically rigorous–is pretty close to the intuitive one most Calculus students need:

Near the end he addresses one concern with the fact that he is describing ratios of quantities that themselves disappear:

Well, it’s nearly impossible to prove that it’s false. Therefore, it’s true. QED.

Thank you, thank you, just send the Fields medal sometime next week :P

One thing I want to point out: there’s still a lot of mathematics done based around unprovable statements.

One example in particular: the continuum hypothesis (CH), which says that the cardinality of the reals is equal to aleph-one (or omega-one, if you prefer), is independent of ZFC set theory. This means you can add CH (or, conversely, its negation) as an axiom to ZFC. Lots of mathematics has been done using both CH and its negation.

There are also many other axioms (also independent of ZFC) which either strengthen CH (the generalized continuum hypothesis) or weaken it (Martin’s axiom). Lots of mathematics has been done using these axioms, too.

Ideally, you would hope that by playing around with all these various axioms you would find some results that are either intuitively or aesthetically pleasing. In that case, it’s possible that a new axiom resolving CH may someday be adopted by consensus.

Before you can prove that, for example, the integral is the area under the curve, you have to define precisely what you mean by “the integral” and “the area under the curve” and then show that those two things are equal. (Or, you could just define the area under the curve to be the integral, and then show that, under such a definition, it fits with our intuitive idea of area and has the properties we’d want area to have.) In other words, your WAG is basically right, it seems to me.

In the early days of calculus (i.e. the days of Newton and Leibniz, and at least a generation or two afterwards), the subject wasn’t on as firm a logical foundation as it is today. Mathematicians weren’t as interested in what could be proved rigorously as they were in what worked and could be used to solve problems.

Ultrafilter wrote:

(bolding mine) That doesn’t match the colloquial definition of “consistent”. Nor does it match what Wikipedia says (“In mathematical logic, a formal system is consistent if it does not contain a contradiction…”, which is more or less the colloquial definition). Is there a word or two missing from what you wrote, or else, can you expound on why that’s an equivalent definition?

No, there are mathematical concepts that do not have a proof: definitions. It simply makes no sense to try to prove (or disprove) a definition; definitions are just agreements on the exact meanings of certain words.

It seems to me that you’re under the impression that the concept of a derivative is something that was just lying around and that, playing around with the concept, he came to the conclusion that “hey, a derivative just means the slope of the function”. And now you’re asking for his proof.

Well, in fact, it was just the other way around. Newton somehow became interested in the notion of slope and wanted to make rigorous statements about it. So he came up with a formal definition that captured his intuitions about this notion and called it a derivative. Nothing to prove there; it’s still just a definition. Apparently, this definition captures most people’s intuition about slope, since it became very popular. Using the definition, he then proceeded to prove a bunch of rules about it: the derivative of a polynomial, the product rule, etc. But these rules are nothing more than tools, making it easier to use derivatives. The heart of the matter is still that derivatives are about slope by definition.

So I think there are two subtle errors in your parenthesized statement: (a) the formal definition is not a way of finding the derivative, it’s a way of agreeing on what it is, and (b) the formal definition is not a purely academic exercise but an essential part of the math involved. In practice, using the rules is much easier, just like using a calculator is easier than working it out by hand. But forgetting the definition is as big an error as forgetting basic algebra.
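
To illustrate: if you grind through the limit definition numerically, it agrees with what the power rule hands you for free. A rough Python sketch (the function f(x) = x^3 and the step sizes are just my own choices):

```python
def difference_quotient(f, x, h):
    """Slope (f(x+h) - f(x)) / h, straight from the definition."""
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 3            # the power rule says f'(x) = 3x^2
x = 2.0

for h in (0.1, 0.01, 0.001, 0.0001):
    print(h, difference_quotient(f, x, h))

print("power rule:", 3 * x ** 2)   # 12.0, the value the quotients approach
```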

Regarding integrals, well, it’s just about the same story. We’re interested in the intuitive notion of the area under the curve and come up with a definition that captures this notion (something to do with Riemann sums, though some introductory texts skip this part). Using this definition of an integral, it then becomes possible to prove the familiar rule regarding the relation between the integral of a function and the primitive (antiderivative) of said function.
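
Here’s a numerical sketch of that story (a toy example of my own, not anything out of Newton): left-endpoint Riemann sums for f(x) = x^2 on [0, 1] creep toward F(1) - F(0), where F(x) = x^3/3 is a primitive of f:

```python
def riemann_sum(f, a, b, n):
    """Left-endpoint Riemann sum of f on [a, b] with n equal subintervals."""
    width = (b - a) / n
    return sum(f(a + i * width) * width for i in range(n))

f = lambda x: x ** 2
F = lambda x: x ** 3 / 3        # a primitive (antiderivative) of f

for n in (10, 100, 1000, 10000):
    print(n, riemann_sum(f, 0.0, 1.0, n))

print("F(1) - F(0) =", F(1.0) - F(0.0))   # 0.333..., the value the sums approach
```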

This should answer your question, but please note that statements about Newton probably have no historical merit whatsoever.

To expand on this, while the concept of the derivative can’t be proved, the formulas for differentiating various types of functions, or integrating them, can be proved.

The derivative is pretty much the instantaneous slope of a curve by definition.

Take any two points on a set of cartesian coordinates, defined by the ordered pairs (x1, y1) and (x2, y2).

The slope of the straight line that connects these two points is given by

m = deltaY/deltaX = (y2-y1)/(x2-x1)

Now, assume that the two points are not just randomly selected points, but in fact lie on the graph of some function. We can see a number of things:

  • y1 is f(x1)
  • x2 is x1 plus some interval h, so x2 is x1+h
  • Therefore, y2 = f(x2) = f(x1+h)

So if, to drop our “subscripts”, we let

x1 = x,
y1 = f(x),
x2 = x+h,

and

y2 = f(x+h),

our points are now (x, f(x)) and (x+h, f(x+h)).

Plugging these into the slope formula, we get

m = (f(x+h) - f(x)) / ((x+h) - x)

or (f(x+h) - f(x)) / h

So this formula gives the slope of any straight line connecting two points that lie on the graph of the function.

If we want to figure out which way the graph is “going” at any particular point, we want the straight-line slope for two points on the graph that lie so close together that the distance is negligible. In other words, we want h, the interval that separates the x coordinates of the points, to approach zero, on the realization that, being a function, the y values would similarly converge in most cases, bringing the points on the graph vanishingly close together.

So we define this instantaneous slope as

df(x)/dx = lim(h->0) of (f(x+h) - f(x)) / h

We call that the derivative, and voila!
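
And you can watch that limit happen numerically. A small Python sketch (the function and the point are arbitrary choices of mine); the secant slopes settle toward the known derivative as h shrinks:

```python
import math

f = math.sin        # any smooth function will do for this demo
x = 0.5             # a fixed point on the graph

# Slope of the secant line through (x, f(x)) and (x+h, f(x+h)) as h shrinks:
for h in (1.0, 0.1, 0.01, 0.001, 0.0001):
    secant_slope = (f(x + h) - f(x)) / h
    print(h, secant_slope)

print("cos(0.5) =", math.cos(0.5))  # the known derivative of sin at x = 0.5,
                                    # which the secant slopes approach
```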

It looks to me like they’re equivalent:

First, suppose the system has no contradictions. (Here I take “contradiction” to mean there is some statement A for which both A and not A are theorems). Now take some theorem A. Since the system has no contradictions, not A is not a theorem, and so the system is consistent by ultrafilter’s definition.

Now let the system be consistent by ultrafilter’s definition. Proof by contradiction: Suppose the system has a contradiction–there is a statement A for which both A and not A are theorems.

Let B be any statement. We can prove B as follows (by contradiction):

Assume not B.

Therefore A.

Therefore not A.

Contradiction, so our assumption must be false.

Therefore B.

Since B was arbitrary, every statement is a theorem.

But this contradicts that the system is consistent by ultrafilter’s definition, so our supposition (that the system has a contradiction) must be false. Therefore, the system has no contradiction.

…And so they seem to be equivalent. Though I’ll admit I may be missing some subtlety of general formal systems–they’re not something I deal with often.
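
For what it’s worth, the “from a contradiction, anything follows” step I leaned on can be checked mechanically in ordinary two-valued logic. A brute-force truth-table sketch in Python:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# Check that (A and not A) -> B holds under every assignment of truth values,
# i.e. in classical two-valued logic a contradiction entails any statement B.
print(all(implies(A and (not A), B)
          for A, B in product([True, False], repeat=2)))   # True
```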

Cabbage, you’re assuming that the system has some rule by which the statement “A and not A” is tautologically false. This is a useful rule for a formal system to have, but one could construct a system without such a rule. One can, in fact, construct a system which doesn’t have a concept of “not” at all. In such a non-not system, the “no contradictions” definition of “consistent” wouldn’t be meaningful, but the “exists unprovable statement” definition would still be meaningful.

Gotcha. I had a feeling I was overlooking something like that.