The point is, if .9… < 1, sqrt(.9…) > .9…, and so we’re back to the question of what number lies between them.
Now that’s interesting in and of itself.
Axiom: sqrt(x) > x for all real x where 0 < x < 1.
Assuming .999… is a value that is strictly less than, and not equal to 1, but that no other real fits between them (as Phage and others suggest), then:
sqrt(.999…) has to be bigger than .999… since sqrt(x) is always bigger than x when 0 < x < 1… but it has to be less than 1, too, because otherwise its square would be 1 or larger.
So sqrt(.999…) itself is a number lying between .999… and 1, implying that .999… is not ‘the number that is next lower than 1 but that no other value can be between them’.
And then, that means sqrt(sqrt(.999…)) is between sqrt(.999…) and 1, too.
In fact, any number of ‘numbers’ could be thus invented that lie between .999… and 1.
This implies that either there is a separation between .999… and 1 that is nonzero, or that one of these values is identical to 1 or identical to .999…
But if one of those values is identical to .999… then its square and square root are .999… breaking the inequality that sqrt(x)>x for 0 < x < 1.
And if one of those values is identical to 1, then its square and square root are equal to 1, again breaking the inequality that sqrt(x) > x for 0 < x < 1 (and, as a side effect, proving 0.999… = 1 if we want a shortcut, but let’s go on).
We’re left with the inescapable conclusion that since we’ve walked into a contradiction, the original assumption that 0.999… is not equal to 1 is wrong.
So not only can .999… not be ‘the next lowest number below 1 such that no other value can come between them’, it is specifically equal to 1.
Just another of the many ways to prove the point.
I can use consistent mathematics to show 0.999… = 1. Can any of the naysayers come up with a set of consistent mathematics to show the opposite?
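The square-root walk above can also be checked numerically. This is just my own illustration, not part of anyone’s proof: for x = 1 − 10^−k, sqrt(x) always lands strictly between x and 1, so no candidate for ‘the number just below 1’ survives a square root.

```python
from decimal import Decimal, getcontext

# Illustration only: for x just below 1, sqrt(x) sits strictly
# between x and 1, exactly as the argument above requires.
getcontext().prec = 50  # work with 50 significant digits

for k in (1, 5, 10, 20):
    x = Decimal(1) - Decimal(10) ** -k   # x = 0.9...9 with k nines
    r = x.sqrt()
    assert x < r < 1                     # sqrt(x) > x whenever 0 < x < 1
    print(f"k={k:2d}  1 - x = 1e-{k}  1 - sqrt(x) ~ {1 - r:.3e}")
```

Pushing k higher just squeezes sqrt(x) ever closer to 1 without ever reaching it, which is the contradiction the argument turns on.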
I would like to apologize on behalf of classicists everywhere. We are surely embarrassed to have people like Tarantula rudely insult other people and promote falsehoods. (But what can we expect from people who can’t understand that .999… = 1 ?) People who study Greek and Latin literature are usually caring, kindhearted philanthropists. [However, since there is not a great demand for Latin scholars, there is little competition to get into the programs (relative to math, med school, etc.), and idiots can sneak in.]
I still say that “-atum” does not accurately translate what Euclid said. Tarantula said that the gerundive “-andum” is a translation inferior to the perfect participle. I feel that he is being intentionally dishonest, or seriously misunderstands what Euclid meant. Based on the grammar, and style of his proof, Euclid can only mean “the very thing which was to be demonstrated.” “Quod erat demonstrandum” is a fair rendering of this.
By the way, I am only a senior math/classics major, but I plan to continue both in grad school.
Wow… I just did a google on 0.999… = 1, and the number of pages was enormous. Seems this really is a long-raging debate.
IANA mathematician but I think I have something that is not exactly a proof but may help in convincing the scoffers.
I’ve always been taught that a repeating decimal ALWAYS represents a ratio of two integers, x/y. I don’t have the proof for this but every reference claims this and one of you mathematicians probably has a proof handy; hopefully one that doesn’t beg the question.
So, since it’s true that a repeating decimal always represents a ratio of two integers, and since it’s also true that 0.999… is a repeating decimal, then if follows that 0.999… must represent a ratio of two integers!
So, if 0.999… does not equal 1/1, what integer ratio does it represent?
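That challenge can be made concrete. Here’s a quick sketch of the standard multiply-and-subtract conversion (the function name is mine, just for illustration): it turns any repeating decimal into an exact ratio of two integers, and feeding it 0.999… yields exactly 1/1.

```python
from fractions import Fraction

def repeating_to_fraction(non_rep: str, rep: str) -> Fraction:
    """Convert 0.<non_rep><rep rep rep ...> to an exact Fraction using the
    multiply-and-subtract trick: if x = 0.abc(de), then
    10**5 * x - 10**3 * x is an integer, so x is a ratio of integers."""
    n, r = len(non_rep), len(rep)
    numerator = int((non_rep + rep) or "0") - int(non_rep or "0")
    denominator = 10 ** (n + r) - 10 ** n
    return Fraction(numerator, denominator)

print(repeating_to_fraction("", "9"))   # 0.999...  -> 1
print(repeating_to_fraction("", "3"))   # 0.333...  -> 1/3
print(repeating_to_fraction("1", "6"))  # 0.1666... -> 1/6
```

The only integer ratio 0.999… can represent turns out to be 9/9, which reduces to 1.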
:smack:
That should have been “then it follows that”, rather than “then if follows that”.
Wow. That’s brilliant. Short, neat, devastating, and utterly obvious once someone smarter than you points it out.
Now why couldn’t you have come up with that thought 6 pages ago?
P.S. If you’re so freakin’ brilliant, howcum your user name is misspelled?
::momentarily forgetting which side of the argument he is on The Great Unwashed foolishly types::
999...999
__________ , presumably
1000...000
Yep, the square root argument is the best one yet. Kudos, MC.
I don’t think so. I think your number would be something like 0.0…999… ?
I’m not sure if my notation is correct but what I mean is that you would have infinitely many zeroes to the right of the decimal before the infinite series of nines starts.
davidm,
Er, I don’t know what you mean, though for one second you did make me do a double take.
I was appealing to a “common sense” notion that if
9/10 = 0.9
99/100 = 0.99
999/1000 = 0.999
then,
infinitely many nines/1 followed by infinitely many zeros = 0.999…
It’s been a long thread, I mean day.
What I was thinking was this: 9/10 = .9, 9/100 = .09, 9/1000 = .009, etc. You add one zero for each power of ten. So for your number you’d have an infinite number of zeroes before the first nine. I know that it doesn’t work that way; that’s why I added the smiley. Actually, I believe that your number is infinity over infinity which I think is undefined.
I am by no means a mathematician, and am only an amateur classical scholar, but I agree entirely on this point. Further, -atum does not convey an appropriate meaning given the context. The only place in which that form might be appropriate (although not traditional) in a proof would be as a marginal note to an assertion proven elsewhere in the same work. However, other notations fill such roles already. The construction is not appropriate as a conclusion, and it has been clearly demonstrated that it is not used as one by mathematicians.
1/9 = 0.111…
2/9 = 0.222…
3/9 = 0.333…
4/9 = 0.444…
5/9 = 0.555…
6/9 = 0.666…
7/9 = 0.777…
8/9 = 0.888…
9/9 = 0.999…
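The table above can be checked by grinding out the long division digit by digit. A small sketch (the helper name is mine): each n/9 for n = 1…8 produces the repeating digit n, and the ninth row the pattern predicts, 9/9 = 0.999…, is exactly 1.

```python
def division_digits(num: int, den: int, count: int) -> str:
    """First `count` decimal digits of num/den (0 <= num < den), by long division."""
    digits = []
    rem = num
    for _ in range(count):
        rem *= 10
        digits.append(str(rem // den))  # next quotient digit
        rem %= den                      # carry the remainder
    return "".join(digits)

for n in range(1, 9):
    d = division_digits(n, 9, 12)
    assert d == str(n) * 12             # n/9 = 0.nnn...
    print(f"{n}/9 = 0.{d}...")
# The pattern says the next row is 9/9 = 0.999..., and 9/9 is exactly 1.
```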
This is all invalid using Phage-math, anyway.
Well, not to speak for Phage, but surely if you can’t perform the operation “(0.999… + 1) / 2” because 0.999… is not a “real” number, then sqrt(0.999…) would be similarly incalculable.
Yes, of course, Achernar, you’re right. But if you can’t perform operations that are valid for all reals, the discussion is moot, since 0.999… wouldn’t be a real of any kind, and its comparison to 1 would be meaningless.
It is one of Phage’s arguments that you can’t do math on 'em, but this doesn’t leave us with anything, including any of Phage’s objections of comparison or difference to 1.
kabbes said:
I agree except for the very last part “and don’t need to do the multiplication-and-subtraction thing anyway”. To do the multiplication, we have to know that the series converges. But that does not mean we have to know what the limit actually is. It is easy to prove that any decimal representation (with a finite number of digits left of the decimal, and a countably infinite number of digits to the right) always converges (let’s call this the Decimal Convergence Theorem).
So, seeing a number like 0.999…, we know from the DCT that it converges. Thus it has a limit, and so let’s calculate what that limit is. That’s where the simple algebra proof comes in. I think the only mathematical complaint that can be made about it is that there is no explicit mention of the fact that we know 0.999… converges. But even primary school students intuitively understand that decimal expansions converge. I see no need to mention the convergence theorem (just as we didn’t mention the theorem that allows us to subtract the equations).
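Both facts can be illustrated with exact rational arithmetic (my sketch, not kabbes’s): the partial sums of 0.999… are increasing and bounded above by 1, so they converge, and the leftover gap after n nines is exactly 1/10^n.

```python
from fractions import Fraction

# Illustration: partial sums of 0.999... are increasing and bounded
# above by 1 (so the series converges), and the gap to 1 after n
# nines is exactly 1/10**n, which shrinks toward 0.
partial = Fraction(0)
for k in range(1, 30):
    partial += Fraction(9, 10 ** k)  # append the k-th 9
    assert partial < 1               # every partial sum stays below 1
gap = 1 - partial                    # exact gap after 29 nines
assert gap == Fraction(1, 10 ** 29)
print(gap)
```

Since the gap is 1/10^n for every n, the only possible limit is 1, which is what the algebra proof then computes.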
When I was a graduate student I took Advanced Calculus, which was taught by a “cretin from Mars”. At the end of the course, he gave us a sheet of six propositions to prove. We were to prove them, and then come in and present the proof to him in person.
The entire semester’s grade would be based upon this presentation.
Before the test, Professor Martian stated that he gave “Mostly Cs, a few Bs, and maybe an A if you’re God”.
I did the work and went in for my presentation, feeling fairly cocky. I put the first proof up on the board, and he asked me to justify going from step three to step four, which was the transition from the general to the infinite case.
“Ummm…”, I replied.
“OK, put up the next problem.”
I did so.
“Again, how do you justify going from the general case in step three to the infinite case in step four?”
“Ummm…” This was not good.
“OK, put up the next problem.”
“Well, I’ve made the same assumption in each problem.” I was getting nervous.
He asked, “Well…can you give me the epsilon-delta definition of continuity and convergence?”
Finally! One I could answer, and I rattled off the definition verbatim.
“Very good. I’ll give you an A.”
You could have knocked me over with a feather. It wasn’t until I was taking abstract algebra the next semester that I finally realized that the reason he accepted my answer was that the key to justifying the transition from the general to the infinite case was indeed the epsilon-delta definition of continuity and convergence.
The professor should have flunked me for missing the entire point of the course, but he was a cretin from Mars. I eventually did complete my master’s in mathematics.
The point of this rather rambling post is to illustrate that, when you are dealing with infinite series, you have to apply different rules than when you are working with finite sequences. Phage doesn’t seem to grasp this.
Actually, Phage seems to be going out of his or her way to resist grasping this, with every fiber of his or her being. It puts me in mind of the definition of a neurotic vs. a psychotic – a psychotic thinks that two plus two equals five. The neurotic knows two plus two equals four – but hates it.
I (and the other math types will probably agree with me here) really don’t like relying on intuition. After all, it’s intuitive that .9… != 1, but that’s not true. Let’s be explicit in all of our steps.
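For the record, here is what being explicit looks like in this case (helper and variable names are mine, just for illustration): given any epsilon > 0, a truncation with enough nines lands within epsilon of 1, which is precisely the definition of the limit being 1.

```python
import math
from fractions import Fraction

def nines(n: int) -> Fraction:
    """0.99...9 with n nines, exactly: 1 - 10**-n."""
    return 1 - Fraction(1, 10 ** n)

# For each epsilon, N = floor(log10(1/epsilon)) + 1 nines suffice:
# then |1 - nines(N)| = 10**-N < epsilon.
for epsilon in (Fraction(1, 10), Fraction(1, 10 ** 6), Fraction(1, 7 ** 10)):
    N = math.floor(math.log10(1 / epsilon)) + 1
    assert 1 - nines(N) < epsilon   # within epsilon of 1
    print(f"epsilon = {epsilon}: {N} nines suffice")
```

No intuition required: the N is computed explicitly from epsilon, so every step is on the table.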
Wow. I just got it. I mean, really got it.
0.999… really does = 1
Man, I just saw, like, infinity. And I’m like “Whoa. Dudes and dudettes! My god. It’s, like, full of stars!”
Then I’m like you know, “Awesome.”
Peace, love, unity, respect.