Believe me, mathematicians know what QED means. Apparently you do not, since you have shown nothing.
Please describe where my proof resting on Dedekind cuts and the properties of rings fails. What's that? You can't? You don't even know what it's talking about? Why, fancy that.
But you've "read this thread". Well, that settles that, then.
Clearly not. Q.E.D. is usually appended to a proof, not an assertion, so there's something else you've read and haven't understood.
And on preview, what ultrafilter said, AND, what the hell is your caveat about? Was that English? No new rules were required for non-Euclidean geometries.
Tsk!
Mm, sorry, my memory of mathematical history was a little vague. I didn't mean to stir the pot, but it was controversial, and somewhat novel, to propose that you could do consistent geometry on curved surfaces. Triangles whose angles sum to something other than 180 degrees? Pish-tosh! I remember that much.
Breathe easy. I’m on the side of sanity here. Yeesh.
And you assume this from the few posts I’ve made here at the Straight Dope?
I know there are infinities in basically everything. There are infinitely many values between 1 and 2. If I press my thumb tips together, there are infinitely many points between the ridges of my fleshy pads. Between red and blue, there are infinitely many hues. There may be infinitely many ever-smaller bits of undetectable matter inside an atom. Please don't insult my understanding of what "infinite" means.
I just don’t understand how a non-terminating value (.999…) can terminate (1).
Non-standard analysis is a different beast altogether, and whatever conclusions you can draw about R* have no bearing on R. Besides, based on what little I know about R*, it should be the case that .9…* = 1*. There's a better introduction over at Ask Dr. Math.
1 doesn’t terminate. We actually should write 1.0000…, but we habitually suppress the repeating zero.
Look at the definition of decimal representations that I posted earlier, and at the Dr. Math article I linked to (the one about the OP, not non-standard reals).
Actually, space and the e/m spectrum are not “infinite.” Infinity does not exist in the “real world.”
For instance, people have calculated pi to millions of digits. This has absolutely no representation in the real world; the largest possible cosmic circle has a ratio of circumference to diameter – pi – that fewer than 1,000 decimal digits would specify to an accuracy finer than anything measurable at the microscopic scale.
And, again, if you believe that there is a positive number p between .999… and 1.0, please tell us what it is. Once you do, we simply keep appending 9s until the finite string falls within p of 1.0. No matter what p you name, we can show that .999… is closer to 1.0 than that. That's the definition…
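To make that challenge concrete, here's a throwaway Python sketch (mine, not anything posted earlier in the thread; the function name is made up) that takes any claimed gap p and finds how many 9s already beat it:

```python
# The "name your p" challenge in code: for any claimed positive gap p,
# finitely many 9s already land closer to 1 than p. Illustration only.
from fractions import Fraction
import math

def nines_needed(p: float) -> int:
    """Return n such that 1 - 0.99...9 (n nines) = 10**-n < p."""
    n = math.ceil(math.log10(1 / p)) + 1   # +1 guards against float rounding
    assert Fraction(1, 10) ** n < p        # exact check that the gap beats p
    return n

print(nines_needed(1e-6))   # 7: seven 9s are already within a millionth of 1
```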
Historically, the deniers are in interesting company: Charles Lutwidge Dodgson (Lewis Carroll) was also unclear on limits, and made the same argument (as quoted by Martin Gardner). He said that the difference between the infinite sum and the limit might be made very small, but that it was never eliminated.
What he, and our correspondents here, fail to realize is that the difference can be made arbitrarily small. It can be made smaller than anyone's challenge to it. It can be made smaller than any specified measure of smallness.
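In symbols, that's all the claim amounts to (my restatement, not new math): after $n$ nines the gap is exactly

$$1 - \underbrace{0.99\ldots9}_{n\ \text{nines}} = 10^{-n},$$

so for any proposed $\varepsilon > 0$, choosing $n > \log_{10}(1/\varepsilon)$ makes the gap smaller than $\varepsilon$.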
Maybe that's what it comes down to… simply pointing out that, on the real number line, there is no useful difference between 0.999… and 1 whatsoever. They are interchangeable in any equation in which a real number can be placed. Therefore, any perceived 'difference', for anyone who cares to claim there is one, is purely philosophical and has nothing to do with mathematics.
That doesn't resolve the philosophical problem this has been turned into, but you may not be able to do that anyway. Anybody interested in the mathematical properties of 0.999… can, however, safely and simply substitute 1, happy in the knowledge that all their formulae will work out.
(Consider this a cheap hack to defuse the issue – I'm still of my former opinion, proven countless times above.)
Seriously, the concept of limits and infinitesimals was very controversial when Newton and Leibniz invented their versions of calculus. But it is NOT true to say that calculus just assumes that 0.9999… is close enough to 1.0000… as to not make any practical difference. Calculus asserts that there IS no difference. You have to use some fancy tricks, but through calculus you can rigorously prove that they are equal. Not just infinitesimally close, but identical.
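For the record, the rigorous version is nothing more exotic than the geometric series (standard textbook material, not something I'm inventing here):

$$0.999\ldots \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}} \;=\; 9 \cdot \frac{1/10}{1 - 1/10} \;=\; 1.$$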
Now, in order to do that you have to accept some axioms. Like if a = b and b = c, then a = c, stuff like that. If you are willing to accept standard mathematical axioms, then you can rigorously prove that 0.999… = 1.0000… If you assert that 0.9999… != 1.000…, then you have to assert that some of our standard mathematical axioms should be rejected.
But if you do that, then most of mathematics collapses. Rejecting 0.999… = 1.000… means you have to reject 1+1 = 2. That's fine, if you want to do that. But this sort of math is not exactly useful, since you can't make any definitive statements in it. Prove it one way and 1+1 = 2; prove it another way and 1+1 = 3. What good is that kind of math? An internally inconsistent math isn't very interesting.
You know, it's interesting that all the proponents of 0.9999… != 1 never seem to bother to refute the proof I gave at the beginning. If they are right, then that proof is wrong. So where is it wrong?
<sound of silence>
Let’s review:
Let x = 0.9999…
Then 10x = 9.9999…
Subtract: 10x - x = 9.9999… - 0.9999…
So 9x = 9.0000…
Thus x = 9/9 = 1
Therefore, 0.9999… = 1
Q.E.D. (correctly used)
If 0.99999… != 1, then there has to be something wrong with this proof. What is it?
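(For anyone who wants to poke at it numerically first, here's a quick exact-arithmetic sketch of mine showing the finite truncations behaving exactly as the algebra says. It's an illustration, not a substitute for the proof.)

```python
# Finite truncations x_n = 0.99...9 (n nines), in exact rational arithmetic.
# The same 10x - x manipulation gives 9*x_n, and the gap 1 - x_n is exactly
# 10**-n, vanishing as n grows. Illustration only, not the proof itself.
from fractions import Fraction

for n in (1, 5, 10, 20):
    x = Fraction(10**n - 1, 10**n)   # 0.9, 0.99999, ...
    print(n, 10 * x - x, 1 - x)      # 9*x_n climbs to 9 as the gap shrinks to 0
```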
And to head things off, I can see two objections:
Step 2. Arguing that 10x != 9.9999… What’s the basis of this? Doesn’t multiplying by 10 shift the decimal point and leave all the digits alone? Why would it change any digits in this case?
Step 4. Arguing that 9.9999… - 0.9999… != 9.0000… But the decimal places of the two numbers are in one-to-one correspondence: for each decimal place in the first, there is a corresponding decimal place in the second. Since each decimal place is 9 for both numbers, you can perform the operation 9 - 9 = 0 at every decimal place, so the subtraction makes every decimal place 0. The conclusion that the difference is 9 is therefore valid (unless you want to postulate that at some point 9 - 9 != 0).
So let's hear it. Disprove this or get off the pot. Because as long as this proof stands, you cannot argue your case.
I'm betting, though, that this post, like all my others, will be ignored by the 0.9999… != 1 crowd. And I've learned from my years on the Internet that silence = "you made your point, so I'll ignore you."
Yes, Lemur866, I personally know that 0.999… and 1 are identical, by limits. I'm just not sure there's a way to convince people that the proofs we're using are logical, true, and time-tested on the real number line.
I just offered my suggestion as an alternative consideration, and it's effectively the same thing. No matter what mathematical operation (basic ones like multiplication and addition, or entire functions like sin, cos, whatever) you perform on 0.999…, the error e resulting from it not being identically 1.0 is meaningless, or can be made so by taking e as small as you like.
It comes to the same thing, but much more informally. It's not a proof, and it's not strictly correct to suggest that such a difference exists, but if people insist on claiming there is one, it's easy to counter by saying 'show me where it can possibly make the slightest bit of difference'.
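(One informal way to dramatize that challenge, offered purely as an illustration of finite precision rather than a statement about the reals: at ordinary machine precision, enough 9s are literally indistinguishable from 1.)

```python
# At IEEE double precision, twenty 9s already round to exactly 1.0, so no
# downstream float computation can tell the two apart. Illustration only.
import math

almost = float("0." + "9" * 20)
print(almost == 1.0)                       # True: no representable gap remains
print(math.sin(almost) == math.sin(1.0))   # True: identical results everywhere
```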
Eh, it'd be easier if people just accepted the basic high school algebra proof already shown.