1+1 = 2 ?

Except that axioms are provable. The proof of an axiom consists of the statement of the axiom, and commenting that it is, in fact, an axiom.

There are, however, statements which cannot be proven. Gödel famously proved that in any self-consistent system sophisticated enough to encompass arithmetic, there must exist statements which are true, and yet which cannot be proven within that system. For instance, it is possible to express in the notation of such a system a statement G equivalent to “This statement cannot be proven within this system”. If statement G is true, then by its own truth, we can deduce that G cannot be proven. If, on the other hand, G is false, then it cannot be proven by virtue of being false, since a self-consistent system cannot prove a falsehood. Therefore, either way G cannot be proven. But that’s exactly what G is saying: G says that it can’t be proven, and in fact, it can’t be. So G is true. Ergo, there exists at least one statement in any given system which is true, but which cannot be proven in that system.
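
For anyone who likes the argument in symbols, here is a compact rendering (the notation is mine, chosen just for illustration: Prov_S(⌜G⌝) means “G is provable in system S”):

```latex
% The diagonal construction supplies a sentence G such that
S \vdash \bigl( G \leftrightarrow \neg\,\mathrm{Prov}_S(\ulcorner G \urcorner) \bigr)
% If S proved G, then Prov_S(G) would hold while G asserts its negation,
% so S would prove a falsehood. A consistent S therefore cannot prove G,
% which makes \neg Prov_S(G) true -- and that is exactly what G asserts:
\mathrm{Con}(S) \;\Longrightarrow\; S \nvdash G \ \text{ and } \ G \text{ is true}
```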

Now, that one’s a pretty pathological case, and aside from exposing a quirk of mathematics, it doesn’t really do much for us. But there are more useful examples. For instance, in geometry, exactly one of the following statements is true:

The sum of the angles of a triangle is exactly 180 degrees
The sum of the angles of a triangle is less than 180 degrees
The sum of the angles of a triangle is greater than 180 degrees

One of those three must be true, and the other two must be false. Which one’s the true one? There’s no mathematical way to know. Unless you take one of those statements, or its equivalent, as an axiom, you can’t prove it. You can, however, answer the question scientifically (or at least attempt to do so), by measuring the angles of real triangles.
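
To make the “measure real triangles” idea concrete, here’s a throwaway Python sketch (the function names are my own) that computes the angles of a geodesic triangle drawn on a sphere; the sum comes out above 180 degrees, because the answer depends on the space you draw the triangle in, not on logic alone:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def angle_at(a, b, c):
    """Interior angle at vertex a of the geodesic triangle abc on the unit sphere."""
    dot_ab = sum(x * y for x, y in zip(a, b))
    dot_ac = sum(x * y for x, y in zip(a, c))
    # Tangent vectors at a, pointing along the great circles toward b and c.
    t_b = normalize(tuple(bi - dot_ab * ai for ai, bi in zip(a, b)))
    t_c = normalize(tuple(ci - dot_ac * ai for ai, ci in zip(a, c)))
    cos_angle = max(-1.0, min(1.0, sum(x * y for x, y in zip(t_b, t_c))))
    return math.degrees(math.acos(cos_angle))

# One octant of the globe: the north pole plus two points on the equator.
A, B, C = (0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
print(angle_at(A, B, C) + angle_at(B, C, A) + angle_at(C, A, B))  # 270.0
```

On a flat plane the same measurement gives 180; the result is a property of the geometry, not something provable from the other axioms.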

You’re omitting things here. If you want to use “.9999… approaches 1”, then you also have to use “.3333… approaches 1/3”. If you want to use “1/3 = .3333…”, then you also have to use “1 = .9999…”.
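
The parallel is easy to check with exact arithmetic; a quick Python sketch (throwaway code, nothing canonical about the names):

```python
from fractions import Fraction

thirds = Fraction(0)
nines = Fraction(0)
for k in range(1, 11):
    thirds += Fraction(3, 10**k)  # partial sums 0.3, 0.33, 0.333, ...
    nines += Fraction(9, 10**k)   # partial sums 0.9, 0.99, 0.999, ...

print(Fraction(1, 3) - thirds)  # 1/30000000000 -- shrinking toward 0
print(Fraction(1) - nines)      # 1/10000000000 -- shrinking toward 0
print(3 * thirds == nines)      # True: the two sequences stand or fall together
```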

—bats eyes innocently—

I got corrected on this before, so let me share the wisdom with you: in the standard construction, N and Q and R are all different types, but there’s no reason why you couldn’t construct R out of whole cloth and still get all the right properties. So in that case, that all becomes a non-issue.

thus: “There are 10 kinds of people in the world; those who understand binary and those who don’t.”

(Man, I gotta find a t-shirt with that on it…)
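
(For the skeptical, the joke survives a quick check in Python:)

```python
print(int("10", 2))  # 2 -- the string "10" interpreted in base 2
```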

My question was a simple one, and I think that most of the answers handled it in a brilliant way, but in some respects none of you handled it. Or maybe I am looking for something that is not there. It feels like that sore on the roof of your mouth that would heal if you could stop tonguing it, but you can’t.

Math has spawned most of the technical achievements in history, but that does not prove anything.

The reason why I don’t really think maths is the right way to go to prove this (at first glance silly) question is that it is the very basis of maths. The only ones who came close to any kind of answer were Gilles and Bathas, since they understood my gut-feeling approach to the problem. I have done more maths than the average person, but I feel some kind of fundamental problem with 1+1=2. Why?

I think that no one can please me, but I am so glad so many tried. The few halfwits that tried to joke it away I understand, and I can’t think of any other place where I would pose such a question than here. I guess I wish there was a word or sentence so elegant that it would sum it all up. But I guess that it is not possible to be more elegant than:

1+1=2

All you have to do is ask

It’s not. The basis of math is either set theory or category theory, depending on who you talk to. Everything else is defined in terms of those objects.

It may be the first thing you learn, but don’t confuse that with it being fundamental. As Russell and Whitehead showed, arithmetic is actually pretty high-level stuff.

  • I’m slightly confused… it doesn’t look like anyone called Bathas has posted or been referred to in this thread.

  • I’m sorry that those of us who might have taken the more technical approach weren’t able to address your intuitive discomfort, but I can’t think of anything more to say if you can’t explain your problem a bit more clearly.

  • Hi, Opal! (Yes, I know I’m cheating. I don’t care.)

  • I agree with those who said that math is by its nature abstract, with reasoning and axioms being the only true test of what is ‘proven’ within the system. Math is useful in modelling the real world, but that in and of itself doesn’t make an equation true or false. It just makes it useful :wink:

In this case, 1 + 1 + 1 = 4 :slight_smile:

I must have misread a nickname, so it’s whoever is closer to that then :wink:

I can’t formulate a proper question, because what I looked for was discourse, and in this there might be some kind of answer. But I think if you read the last sentence of my last post, it sort of struck me what the real issue was. Some things are better left unexplained, since they are inherently beautiful and elegant. Or something like that. G’nite

I’m not sure you’re really being quite clear on what you’re looking for. This is my first post in this thread, and I’ll admit I haven’t read it in detail. I did read Giles’ post, since you referenced it (I don’t know who you’re referring to as “Bathas”). At any rate, I agree with Giles:

To ask for “a proof that 1+1=2” is meaningless until you’ve defined the semantic content of the symbols, “1+1=2”.

There are various correct ways of defining this (and by “correct” I mean both that they are logically consistent and that they coincide with our intuitive notion of what “1+1=2” means).
Russell and Whitehead did it one way, I’m sure others have done it other ways.

If you’re thinking in terms of the history of math, I’ll first give the caveat that I’m no expert there, but I feel pretty confident in saying that the original definition of “1+1=2” was an intuitive one, and that it’s true by definition.

The individual symbols, “1”, “2”, “+”, and “=” were taken to express (intuitively) two different quantities, an operation of “accumulating” (for lack of a better description off the top of my head), and an idea of “sameness”, respectively. I’m being very vague here, I know, but historically the foundations of math are intuitive and vague (by modern standards of rigor in math).

Given these intuitive definitions, 1+1=2 is a self-evident truth: “2” is defined to represent the quantity that results (i.e., “is the same as”, i.e., “=”) when a single object (whose quantity is represented by “1”) “accumulates” (“+”) with another single object (“1”).

In a modern, rigorous context, we need formal definitions of these symbols. As I mentioned, there are various ways of doing this so that we may then formally prove that 1+1=2.
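
To give a taste of what “formally prove” can look like today, here is a sketch in Lean 4 (one proof assistant among several; the names N, add, one, and two are my own), building Peano-style naturals from scratch rather than using anything built in:

```lean
-- Peano-style naturals built from scratch, instead of Lean's built-in Nat.
inductive N where
  | zero : N
  | succ : N → N

open N

-- Addition defined by recursion on the second argument.
def add : N → N → N
  | n, zero   => n
  | n, succ m => succ (add n m)

-- Numerals are just names for towers of succ.
def one : N := succ zero
def two : N := succ one

-- 1 + 1 = 2: both sides compute to succ (succ zero), so `rfl` closes it.
theorem one_add_one : add one one = two := rfl
```

All the content of “1+1=2” has been packed into the definitions; the proof itself is a single reduction.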

Nitpick: With discrete objects & natural numbers, this is true. With estimated quantities, not so much. I’m not remembering the formal way to put this, but…
1.0 + 1.0 = 2.0, with a possible deviation of just under 0.1 each way: 1.90 to 2.10, exclusive.
1.3 + 1.4 = 2.7, with a possible deviation of just under 0.1 each way: 2.60 to 2.80, exclusive.
0.6 + 0.9 = 1.5, with a possible deviation of just under 0.1 each way: 1.40 to 1.60, exclusive.
1 + 1 = 2, with a possible deviation of just under 1 each way: 1 to 3, exclusive.

So, with discrete units, 1 + 1 = 2 by definition, but with imprecise quantities, 1 + 1 = 2 only on average.
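
The formal gadget being reached for here is interval arithmetic: treat each rounded figure as the whole interval of true values that would round to it, then add endpoints. A minimal Python sketch (the rounding convention and names are my own assumptions):

```python
def interval(x, places):
    """All true values that could round (half-up) to x at this many decimals."""
    half = 0.5 * 10 ** (-places)
    return (x - half, x + half)  # lower inclusive, upper exclusive

def add_intervals(a, b):
    # Worst cases add: smallest + smallest, largest + largest.
    return (a[0] + b[0], a[1] + b[1])

print(add_intervals(interval(1.0, 1), interval(1.0, 1)))  # ≈ (1.9, 2.1)
print(add_intervals(interval(1.3, 1), interval(1.4, 1)))  # ≈ (2.6, 2.8)
print(add_intervals(interval(1, 0), interval(1, 0)))      # ≈ (1.0, 3.0)
```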

sverresverre: You are approaching the entire field of mathematics with the wrong assumptions, and you are wondering why you can’t get the answers you want to your questions. Your question has been answered just fine multiple times, but your reactions to those answers indicate to me that you aren’t getting something very fundamental about the distinction between mathematics and science.

Try this on for size: Mathematics is just a game and in a game, things mean what we want them to mean. We can prove things in mathematics by shuffling symbols around in predefined ways because we have defined the concept of proof and all of those symbol-shuffling techniques beforehand, and we can agree on the fundamentals of the symbol-shuffling game. There is no reason why mathematics has to match up with anything whatsoever in the real world. It is, after all, just a game.

Except it does. We’ve been jiggering the rules of the game long enough that we’ve come up with certain ways of playing it that yield useful predictions about how the real world will behave in certain situations. This is so effective that great minds have pondered why it should be so; Eugene Wigner wrote a famous paper on exactly this, “The Unreasonable Effectiveness of Mathematics in the Natural Sciences”. I think your skepticism comes from the fact that we’ve made up this game and somehow get to apply it to the real world, and that it’s somehow very deeply intertwined with science, to the point where you cannot reasonably discuss fields like particle physics without throwing around some pretty serious equations. That unreasonable effectiveness is why: mathematics has graduated from being a game to being a language, and it is a damned useful one in all of the hard sciences. (In fact, a ‘hard’ science is one where the predictions and results can be expressed numerically and as equations.)

I’m not even sure how to ask this question, but I’ll try.

I accept the fact that .999… = 1. Which means the numbers in the sequence “.9, .99, .999, …” are all <1, but gradually approach 1, and their limit (at infinity) is 1.

But there’s another sequence of numbers, “1.1, 1.01, 1.001, …”, which are all >1, but gradually approach 1, and whose limit (at infinity) is also 1.

If the <1 sequence is symbolized by the number “.999…”, how do we symbolize the >1 sequence? Somehow, “1.000…” doesn’t seem right. My brain is trying to put a “1” at the end, but obviously that’s not right either.

Help.

Oops, that was supposed to be a new thread. :smack:

I’m not quite **that** insane. Yet.

panache45, probably the simplest way to express the limit of such a sequence would be simply as:

lim 1 + 10[sup]-n[/sup], where the limit is as n goes to infinity.

We sometimes write “0.999…” to denote the limit of the .9, .99, … sequence, but there is no analogous decimal notation for the limit of this one.

The only way to write the limit as a decimal is as 1 or 0.999…; these are the only two decimal representations of 1. There is no decimal representation “1.000…1” (with a 1 after infinitely many zeros); it wouldn’t make sense, because there is no “omegath” digit in a decimal expansion. (For simplicity, the “omegath” digit can be taken to mean a digit with infinitely many digits preceding it.)
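
Numerically the two sequences are mirror images closing in on the same limit; a quick Python sketch:

```python
from fractions import Fraction

for n in range(1, 6):
    below = 1 - Fraction(1, 10**n)  # 0.9, 0.99, 0.999, ...
    above = 1 + Fraction(1, 10**n)  # 1.1, 1.01, 1.001, ...
    print(float(below), float(above))
# The pairs straddle 1 ever more tightly; both limits are exactly 1.
# Only the from-below sequence happens to have a second decimal name
# for its limit, namely 0.999...
```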

How about 2 - 0.999… ?

Actually, I think I see your problem. Evidently I wasn’t clear in my first post.

Mathematics doesn’t say anything about the real world. Mathematics can go from the Peano axioms to “1+1=2”. Mathematics then says that any model satisfying the Peano axioms also satisfies “1+1=2”. Mathematics doesn’t say anything about real-world examples of such models.

If you mean this as a real, physical question, ask a physicist. If you mean to ask the (deeper, unanswered) question of why mathematics is so “unreasonably effective” at describing the physical world, ask a philosopher. Mathematics has nothing to say.