.999 = 1?

On this message board observe, erik150x
Hath new ideas (so he suspects)
And as the Dopers work patiently to right 'em
Erik, this thread, and 0.999… proceed ad infinitum

ETA: Apologies to Swift . . .

Hey! In the last few days, while I wasn’t paying much attention to this thread, it has WAY surpassed the “My Problems With Relativity” thread in number of posts! There’s only 598 posts over there.

Looks like you jinxed it…

Are we still discussing whether the limit theorem is valid?

Sheesh!

The limit theorem of calculus has been formally proven.

Anyway, here it is: Fundamental theorem of calculus - Wikipedia

Sure, but that’s valid for the standard number theory that mathematicians use. Erik150x, if I’m correctly understanding him, seems to be arguing that he has a different formal system. Others are pointing out that his system doesn’t seem to have the desirable property of self-consistency, but he’s undeterred.

Actually we’re complaining that it’s not specified enough to determine its consistency.

erik150x is a self-styled sage and prophet, I have no doubt he will deliver unto us an even more perfect system of mathematics and formal logic that will validate his belief (which appears utterly unfounded only to the laymen) that 0.999… != 1.

Erik150x’s question, I believe, is really rooted in the psychology of math, not so much in its formalism. (Yes, there’s been a ton of discussion dissecting the alternative notation in excruciating detail. The ultimate motivation of all this was to find a notation that “feels” better to a person’s particular intuition about infinity.)

For some, there’s something that feels wrong about .999… = 1
“It’s not intuitive.”
“Yes, yes, yes, I see that there’s infinite digits but you can always place a digit before which means there’s always a tiny tiny tiny gap between .999…? and 1.0”

It’s similar to other concepts that have no obvious “physical” real world mapping that meet psychological resistance:

…, -4, -3, -2, -1 negative numbers – how can one have “negative” apples? Is it “antimatter” apples? It’s not “intuitive”

i = sqrt(-1) imaginary numbers – how can one have “imaginary” numbers? Just because it is a convenient concocted answer for cubic equations doesn’t mean it’s “intuitive”

4-dimensional, 10-dimensional, 1000-dimensional hyperspace – ok, there’s 3-dimensions of height, width, depth… what’s this nonsense of 1000 dimensions? It makes no sense…

x -> infinity … limits – unsatisfying constructions using infinity. Yes, we concede we use these rules with success and consistency, but at its core it feels “non-intuitive” and like hand-waving.

All those “strange” concepts are, I believe, the same flavor as his question.

One has to just let go of the idea that mathematics provides psychological satisfaction and common sense truths. Once past that hurdle, you only care about consistency, provability, etc. After that, one sometimes also has a bonus of psychological satisfaction. For the hardcore mathematicians, the self-consistency aspect is the only thing necessary to derive psychological satisfaction.

I like John von Neumann’s line: “In mathematics you don’t understand things. You just get used to them.”

I think this is an excellent point. Some concepts are a bit like Zen. Until you reach the point where you can experience it for yourself, you simply have to have faith that those to whom you have entrusted your education know better than you do. Personally I’m pretty comfortable with the concept of infinity, but there are any number of other concepts that still give me a migraine.

IANAMathematician nor a philosopher, but on definitions of formal systems:
Can I get away with saying “in my system 2+2=5, deal with it,” and see what the best and brightest mathematicians come up with?

I am certain this has been discussed forever (even by the Greeks?), and a few posts won’t settle the issue. But for me some way stations would be helpful.

This is a broadening of the current conclusions of this thread, I believe.

Leo

You’d need to say more about your formal system. You can have “2+2=5”; an easy case would be where “5” meant what “4” normally does. But you could also, say, have arithmetic with a largest number, such that at some point the successor of n = n. In such a case, it could be the case that 4=5, and that 2+2=4, and so 2+2=5. Of course, you won’t have the same properties as normal arithmetic if you did this, either. I’d have to look it up to say much more.
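The “arithmetic with a largest number” idea can be sketched in a few lines. This is a toy illustration, not any standard system; the cap of 4 is an arbitrary choice of mine:

```python
# A toy "arithmetic with a largest number": the successor of CAP is CAP
# itself, so every numeral past CAP collapses to CAP. In this system the
# numerals "4" and "5" denote the same value, and 2+2 equals both.
CAP = 4

def succ(n):
    """Successor function, saturating at CAP."""
    return min(n + 1, CAP)

def add(a, b):
    """Addition defined as b repeated successors."""
    for _ in range(b):
        a = succ(a)
    return a

# The numeral "5" is five successors of 0, which collapses to 4 here,
# so 2+2 "equals 5" in exactly the sense described above.
numeral_5 = succ(succ(succ(succ(succ(0)))))
assert add(2, 2) == numeral_5
```

As the post says, this system loses familiar properties of ordinary arithmetic (for instance, you can no longer cancel: add(4, 1) == add(4, 2) even though 1 != 2).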

The ability to do so is more or less standard practice for anybody who gets into intermediate level math. At its root, math is just a set with a bunch of symbols that define how things in that set relate to other things in the set.

For example, let’s say the set is the set of primes

{2,3,5,7…}

Now let’s say that the “+” operator is a binary operator (takes two arguments): it starts at the position of the first number (which must be a prime) and moves forward in the set a number of places equal to the second number (which, for simplicity, we’ll say must also be a prime; the number of places to move is just its value as a natural number).

So 2+2 means start at 2 (the first number in the set), then move forward two places – which skips 3 and goes to 5. So in this system 2+2=5, 2+3=7, 2+5=13, etc. (Importantly, addition in this system isn’t commutative: 2+5=13, but 5+2=11.)
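The toy “+” above is easy to mechanize. Here’s a rough sketch in Python (the helper names are mine, not anything standard, and the prime list is truncated at an arbitrary bound):

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes <= limit, in order."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [n for n, is_prime in enumerate(sieve) if is_prime]

PRIMES = primes_up_to(1000)

def toy_add(a, b):
    """Start at a's position in the prime list, step forward b places."""
    if a not in PRIMES or b not in PRIMES:
        raise ValueError("both operands must be prime")
    return PRIMES[PRIMES.index(a) + b]

toy_add(2, 2)  # 5
toy_add(2, 3)  # 7
toy_add(5, 2)  # 11, versus toy_add(2, 5) == 13: not commutative
```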

Now, what I just defined isn’t necessarily useful in any way. But, yes, you can define any sort of fun things. Sometimes they’re useful, sometimes they’re mathematical thumb twiddling. However, you can’t just define any silly thing, tell mathematicians to go at it, and expect them to come up with something practical or useful.

Thanks for the summary!

Leo, we’re entitled to posit any formal system we want. It will turn out to be useful, or not.

The advance of math has been a bit like the advance of science. We tend to think that science advances linearly, that with each new theory we just get a new tool in the box to add to the old ones. It’s not quite so, at least not during scientific revolutions. The new theories contradict the old ones. Do most scientists shout “Yay, a newer better theory” and adopt it? No, they go to their graves clinging to the old ones (in which they’ve invested so much of their time and energy). The new scientists, having to choose, pick the one that’s the most fruitful, and that one “wins”. (If you’re at all interested in this kind of thing, read Kuhn’s “The Structure of Scientific Revolutions”, which is worth reading just for the science-history anecdotes.)

With math, it’s not quite as cut and dried, because we often CAN have it both ways. For example, we had the 5 Peano postulates of geometry. The first 4 were basically definitions, and nothing made much sense in geometry without definitions of that kind. The 5th, the one that says that given a line and a point not on it, there’s exactly one line through the point parallel to the given line, always seemed more like a statement of reality than a definition. It puzzled many great thinkers.

Then a couple guys came along and tried to show that it was implicit in the other 4 postulates, by contradicting it (one guy said no parallels, the other said an infinite number of parallels) and hoping to eventually come up with another contradiction, which would prove that the 5th was implied by the other 4.

They “failed”. Well, they failed to come up with any contradictions, and instead came up with fully realized geometries that seemed totally bizarre but were internally consistent. As it turns out, these geometries even have practical applications. So does the original one. Three different geometries, each completely contradicting the other two, but all useful! Imagine that.

With math, you get to propose any consistent formal system, and it turns out to be useful or not.

With modern number theory, we get an interesting special case. It’s either incomplete (there are statements you can formulate but neither prove nor disprove within the system), or it’s inconsistent (i.e., useless). We tend to assume the latter, because it’s not useless. :wink:

But you have to do more than simply state the “new fact” or contradict the old one. You have to work out myriad details, and show that the new system has some interesting properties, for anyone to bother taking it seriously.

Oops: “assume the former”, not “assume the latter”.

I’ve noted before, possibly earlier in this thread’s long life, that the problem many people have with this is their conception of what 0.999… means.

To the layman, that seems to mean “sit down and write out nines, and don’t stop.” To them, the ellipsis denotes a process of doing something, so at any point there will always be a small difference. To the mathematician, the ellipsis means that all those nines are already there, on out to infinity.
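One way to make the “all the nines are already there” reading concrete: the partial sum with n nines is exactly 1 - 10^-n, so the gap to 1 shrinks below any positive bound. A quick sketch using exact rational arithmetic:

```python
from fractions import Fraction

def partial_sum(n):
    """Exact value of 0.99...9 with n nines: sum of 9/10^k for k = 1..n."""
    return sum(Fraction(9, 10 ** k) for k in range(1, n + 1))

# The gap between the n-nines truncation and 1 is exactly 10^-n,
# which is why the limit of the partial sums is 1.
for n in (1, 3, 10):
    gap = 1 - partial_sum(n)
    assert gap == Fraction(1, 10 ** n)
```

The layman’s “process” picture is the sequence of partial sums; the mathematician’s 0.999… names the limit of that sequence, which the shrinking gap shows is 1.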

I suspect you mean Euclid’s axioms of geometry. Peano’s axioms define the natural numbers (which is more appropriate given Leo’s question).

For Leo:
The above (and much earlier postings in this thread) give the main issue: is the system interesting or not? It isn’t hard to come up with systems that are either inconsistent without further work, or that are just plain dull or degenerate in some form. The inconsistent ones become dull because in order to make them consistent you end up just filling them with junk, and nothing interesting comes out.

Now take Peano’s axioms. 5 axioms, and we have the natural numbers. Add a handful of simple definitions or operations (like addition, multiplication, etc.) and you have simple arithmetic, and can quickly find yourself in interesting waters: prime numbers, the unique prime factorisation theorem, etc. The rationals follow quickly after, and then the whole thing explodes. A lifetime of interesting results – all from five axioms. Now that is interesting. The very careful addition of even just one more axiom in the right place – usually to explore some issue that is clearly interesting but isn’t provable one way or the other in the existing system – and it explodes again. The parallel postulate and the axiom of choice are good starters.
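To illustrate how quickly arithmetic falls out of just the successor operation, here is a hedged sketch of the standard recursive definitions (a + S(b) = S(a + b), a × S(b) = a × b + a), using ordinary Python ints as stand-ins for the Peano numerals:

```python
def succ(n):
    """The Peano successor operation."""
    return n + 1

def add(a, b):
    """Addition by recursion: a + 0 = a; a + S(b) = S(a + b)."""
    return a if b == 0 else succ(add(a, b - 1))

def mul(a, b):
    """Multiplication by recursion: a * 0 = 0; a * S(b) = (a * b) + a."""
    return 0 if b == 0 else add(mul(a, b - 1), a)

add(2, 3)  # 5
mul(3, 4)  # 12
```

Everything here is built from succ alone; the only primitive facts used are the two recursion equations in each docstring.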

As TATG alluded to, one reading of 2+2=5 is a violation of one of the axioms (via the definition of addition and of 2) – that no number has two distinct successors. Depending upon your reading of the new axiom, it may be that the entire system becomes degenerate, allowing you to prove all sorts of useless things that wipe out all the interesting ones. Or, with a more creative reading – such as Jragon’s – it might stay interesting, or at least a bit interesting.

Actually I had in mind systems of arithmetic which are non-trivial (not every sentence is provable), although not necessarily consistent (for some sentence, both A and ~A are provable). That is to say, I had in mind formal work that has actually been done (though I’d need to go and look it up to say more about it). I don’t know if anyone has constructed such an arithmetic whilst keeping it consistent.

In case the above is bewildering to anyone; you can have logics where A & ~A does not entail arbitrary B. Given one is treating the logic as a formal system with interesting properties, I don’t think this should scare anyone off. One might also want more from a logic, but in some cases you might just care about it doing formal work.

(erik, are you still here?)

If erik150x does not return to the thread, I, for one, will feel good. I will assume he learned what we tried so hard to teach him, and is taking some time to adjust his intuition accordingly.