I see, so my .0002 are very large by comparison.
I’ve never studied mathematics, other than algebra and trig, …
but in my ignorance I contend that .999999999… however far it’s carried out is not 1.
And I am not able to defend this.
And I will not be convinced otherwise.
Unless someone pays me, then it’s ok with me.
What’s a few .000000000000000000000000000000000000000000…1 between friends?
How many zeroes are there between the last one you wrote and the 1?
Bless you, Bobot.
I will be sending you roughly a trillion infinitesimal dollars in the mail shortly to not be swayed!
Could you please put that in a very large box, like a refrigerator box, so I know it’s from you?
Peace
bobot: There’s nothing wrong with using “0.999…” to mean something different from 1. It’s just not how mathematicians standardly use that notation. The way mathematicians standardly use infinite decimal notation has the result that “0.999…” is defined to be equal to 1. There’s not much to argue about, since it becomes, for them, true by definition.
Everyone else: There’s nothing wrong with using “0.999…” to mean something different from 1, except that it is nonstandard. There’s nothing wrong with saying the difference is 0.000000…1, with infinitely many zeros before the 1. This is perfectly formalized by the interpretation of infinite decimal notation as representing the hyperrational given by truncation to a fixed infinite number of decimal places, which probably captures quite well what bobot would end up sketching out, in their own amateur, inchoate way, were they pressed to spell out their intuitions.
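For what it’s worth, here is a minimal sketch of that truncation reading (my formalization, not anything bobot wrote): fix an infinite hypernatural number H and read “0.999…” as the decimal truncated at H places. Then, by the usual geometric-series algebra,

$$
0.\underbrace{99\ldots9}_{H\ \text{nines}} \;=\; \sum_{i=1}^{H} \frac{9}{10^{i}} \;=\; 1 - \frac{1}{10^{H}},
\qquad\text{so}\qquad
1 - 0.\underbrace{99\ldots9}_{H\ \text{nines}} \;=\; \frac{1}{10^{H}},
$$

which is exactly the “0.000…1 with infinitely many zeros before the 1” described above: a positive infinitesimal, not zero.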
Actually, Bobot, they don’t take up too much space; I will be sending them in a small envelope. Just in case you still can’t see them, there will be two marks inside, labeled .999… and 1. You will find them taped right between those two marks.
Agreed. It’s just that:
(1) It’s not the generally agreed “standard” definition of real numbers. If you want to argue about it with mathematicians, and not be laughed at, you need to make this clear, e.g., by saying you are talking about a kind of non-standard analysis.
(2) Even if you do talk about a system using infinitesimal numbers like .000…1 (i.e., the difference between .999… and 1), you need to make sure that your axioms (a technical word that logicians and mathematicians use for assumptions) and definitions build a consistent mathematical system. If it’s inconsistent (e.g., because you can prove that 1=0), then your system is useless.
Ok, I thought I had it understood until here. How can you say that there would be infinitely many zeros BEFORE the 1? Wouldn’t the 1 never show up because the zeros just keep coming? And then 1 at the end implies a definite termination at some point, which is what .999… does not do. It never terminates.
I’m not going to spend too long getting mired in this one, but I don’t know if anyone mentioned that if you take 0.9999… and multiply it by 10 to get 9.9999…, they both have the same number of 9s after the decimal point - an infinite number. You’d like to say that 9.9999… has one fewer, but that’s not how infinity works; there is no such thing as “infinity minus one” because infinity is not a number you can count up to.
Therefore 9.9999… - 0.9999… equals exactly 9 and not an infinitesimal more or less (because every 9 after the decimal point on one side has a corresponding such 9 on the other side), and we’re off to the races.
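Spelled out as the familiar algebra (a standard sketch, relying on the digitwise cancellation just described):

$$
x = 0.999\ldots,\qquad 10x = 9.999\ldots,\qquad 10x - x = 9.999\ldots - 0.999\ldots = 9,
$$

so 9x = 9, and therefore x = 1.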
I’m not expecting that this will settle a damn thing, but hey, at least I’ve had my 2 x 0.999… cents’ worth.
It doesn’t make sense in standard arithmetic, yes, but there’s no particular reason not to use the notation in an alternative math system. The notation is, after all, just some digits, some dots and another digit – perfectly finite. Note that even in standard arithmetic, 0.9999999… does not mean the 9s “keep coming”. The 9s are already there; there’s just an infinite number of them already there.
As a somewhat orthogonal example, what if I wanted to use an alternative ordering (using << instead of <) of the natural numbers where:
[ul]
[li]a << b means the same thing as a < b if a and b are both even or both odd.[/li]
[li]a << b is always true if a is even and b is odd.[/li]
[li]a << b is always false if a is odd and b is even.[/li]
[/ul]
Then we could write the natural numbers in order as:
0,2,4,6,8,…,1,3,5,7…
And the ellipsis in the middle doesn’t prevent us from considering the ordering to the right of it.
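A quick sketch of that ordering in code (my own illustration, not the poster’s; the helper name `ll` is made up). Sorting by the key (parity, value) realizes exactly the order written above:

```python
# Hypothetical illustration of the "<<" ordering described above:
# evens come first in their usual order, then odds in theirs.

def ll(a: int, b: int) -> bool:
    """Return True iff a << b under the even-before-odd ordering."""
    if a % 2 == b % 2:       # both even or both odd: usual order
        return a < b
    return a % 2 == 0        # even << odd always; odd << even never

assert ll(4, 7) and not ll(7, 4) and ll(2, 8)

# Sorting by (parity, value) lists the naturals in "<<" order:
print(sorted(range(10), key=lambda n: (n % 2, n)))
# -> [0, 2, 4, 6, 8, 1, 3, 5, 7, 9]
```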
Assuming the definitions are the ones people commonly use, is there any reason to use the notation .999… for anything other than showing that the result of an arithmetic operation produces an infinite series of 9s? Is there a reason at all? After all, the value is 1. Or is it ‘one’?
Okay, please try to keep it to the idiot level of conversation so I can understand.
How does it ever make sense to say .000000…1? There is an infinite number of zeros. By definition it can never terminate (even under an alternate system or any system). You literally never, ever, ever get to the point where the 1 appears. Because if it did appear, you would have a terminating, fixed and defined number. Which can’t happen because the zeros are infinite.
The ellipsis means it has ‘already happened’: an infinite number of 0s have ‘already passed’, so you do “get there”, by definition.
ETA: i.e., one defines ‘…’ to mean that an infinite number of them have ‘already passed’.
Then you have a contradictory definition. Infinity never passes.
To say it’s a contradiction, you have to show that defining things that way leads to a contradiction in that system, in that notation.
You can define whatever you want. Then you see if it leads to a contradiction within the system you’ve defined. I think the reason you believe the definition itself is contradictory is because you’re considering it within the ‘usual’ rules of numbers, which don’t necessarily apply to infinities, or at least can be defined not to.
It’s all about finding a suitable interpretation.
We can choose to interpret the word “infinity” like so: to say a property definitively holds of “infinity” is just to say that it holds of all natural numbers beyond some point. And to speak of any function of “infinity” is just to construct a term which you might later use in speaking of some property of “infinity”.
This is often not the way you would want to interpret that word, but it is also often a useful way to interpret that word. Different strokes for different contexts.
(Note that, on this particular interpretation of “infinity”, we will have that “infinity” is definitively greater than 0, greater than 1, greater than 2, etc. But “infinity” + 1 will be even greater than “infinity”, and “infinity”^2 will be greater than those, and so on. [We will also have that “infinity” fails to be definitively even and fails to be definitively odd, but is very definitively (either even or odd). If this last part makes you unhappy, we can go ahead and ascribe “infinity” properties at random beyond (but consistent with) the basic ones we’ve already settled, till there’s no uncertainty left.])
Now having done that, we can talk about 1/10^“infinity” and its decimal expansion.
In general, 1/10^n has a decimal expansion of the form 0.000…1, with the 1 in the nth decimal place – that is, n zeros before the 1, if you count the zero to the left of the decimal point.
And thus, we can definitively claim, on this interpretation of our mathematical language, that 1/10^“infinity” has a decimal expansion of the form 0.000…1 with “infinity” many zeros before the 1.
If you multiply it by 10, you get a decimal expansion with “infinity” - 1 (which is smaller, but still infinite in the sense of being greater than 0, greater than 1, greater than 2, etc.) many zeros before the 1. If you divide it by 10, you get a decimal expansion with “infinity” + 1 many zeros. If you square it, you get a decimal expansion with 2 * “infinity” many zeros.
All of this is perfectly consistent. It’s all given by our simple rule (to say something holds of “infinity” is to say that it holds of all sufficiently large natural numbers).
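Here is a toy model of that rule (my own construction, purely illustrative): represent each quantity as a sequence indexed by n, and say a comparison holds “definitively” if it holds for all n on a tail. The check below uses a long finite tail as a crude stand-in for “all sufficiently large n”:

```python
# Toy model of "a property holds of 'infinity' iff it holds of all
# sufficiently large naturals".  Quantities are functions of n.
from fractions import Fraction

def eventually(pred, start=100, span=100):
    """Crude finite proxy: check pred(n) on a long tail of naturals."""
    return all(pred(n) for n in range(start, start + span))

inf        = lambda n: n          # "infinity" as the identity sequence
inf_plus_1 = lambda n: n + 1
inf_sq     = lambda n: n * n
tiny       = lambda n: Fraction(1, 10 ** n)   # 1/10^"infinity"

assert eventually(lambda n: inf(n) > 2)                 # "infinity" > 2
assert eventually(lambda n: inf_plus_1(n) > inf(n))     # "infinity" + 1 > "infinity"
assert eventually(lambda n: inf_sq(n) > inf_plus_1(n))  # "infinity"^2 greater still
# 1/10^"infinity" is positive, yet below every fixed 1/10^k:
assert eventually(lambda n: 0 < tiny(n) < Fraction(1, 10 ** 5))
```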
It’s no different than if we’d notated it as a pair of numbers, for example: (0.999…, 1). “After” is not central to the concept, I think.
Aha! The game is afoot! This four-page (and counting…), multi-zombified thread, having beaten to death the discussion of what 0.999999… means (a discussion, by the way, that will not die)…
We are now moving into new territory! Now, let’s beat to death what 0.000000000000…1 means! What fun!
Okay, I can see a way to give a sensible interpretation of what 0.000000…1 would mean, analogous to 0.999999…
As with 0.999999…, we consider a sequence of numbers, each with one more digit:
0.1
0.01
0.001
0.0001
…
0.0000000000000000000001
…
0.000000000000000000000000000000000000000000001
ad infinitum.
Now, as with 0.99999…, we stand back and look at this sequence, and ask: Does this sequence approach a limit? If it does, then define the notation 0.000000000000…1 to be the value thus approached.
The result: Yes, it approaches a limit. The limit thus approached is 0. Thus, the notation 0.00000000000000…1, by this definition, is exactly zero. Not infinitesimally near zero. Exactly zero. And this approach, essentially the same way we got the meaning of 0.999999…, gives us a consistent result too: 1 - 0.9999… sure enough gives us 0.000000…1
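To make “approaches a limit” concrete, here is the standard epsilon phrasing of that claim (the conventional definition, my wording):

$$
\lim_{n\to\infty} 10^{-n} = 0
\quad\text{because for every } \varepsilon > 0,\ 10^{-n} < \varepsilon \text{ whenever } n > \log_{10}(1/\varepsilon).
$$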
In other words, 1 - 1 = 0
Hey, I knew that!
Of course, the only reason you can say it is “exactly” zero, rather than infinitesimally near zero, is because you’ve baked into your (or, rather, we’ve baked into our conventional) definition of “limit” that infinitesimals are zero.
That is, we can define another notion of “limit”, where the limit of a function f(n) as n approaches infinity is the value f(“infinity”), interpreted as in my last post. Then the limit will be 1/10^“infinity”, which will not be zero. It will only be infinitesimally close to zero.
Now, if you happen to decide that you no longer care to distinguish between values whose difference is infinitesimal, you will recover the conventional definition of limit. In some sense, that’s the motivation of the conventional definitions. Very often, one has the tools to show that two values are infinitesimally close, and does not care about infinitesimal differences, so one might as well identify them.
But if one did care about infinitesimals, for whatever reason (even no reason other than to explore the idea of infinitesimals), you could easily use a suitable notion of “limit” which would make everything go through quite similarly, but end up with “infinitesimally near zero” and not “exactly zero”.
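As a hedged sketch of that alternative notion (my notation: H stands for the infinite value of “infinity” from the earlier post, and lim* for the modified limit):

$$
{\lim_{n\to\infty}}^{*}\, 10^{-n} \;:=\; 10^{-H},
\qquad
0 < 10^{-H} < \varepsilon \ \text{ for every real } \varepsilon > 0,
$$

so the answer comes out “infinitesimally near zero” rather than “exactly zero”, just as described.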
So a lot of people mocked me and made fun of me… and perhaps to some degree I deserved it, because I was not well enough prepared to defend my position. Touché.
But now we have come full circle… if you look back at my original post, #35: http://boards.straightdope.com/sdmb/showpost.php?p=15347143&postcount=16
You’ll see I was basically saying what we have all mostly concluded here.
Built into the definition of calculus with limits is the very notion that .999… = 1.
The theory of Limits asks us to simply accept this as fact. One could argue it doesn’t actually ever say this is some absolute truth, but rather: if you want to do math without a lot of headaches, we suggest you make the assumption that .999… = 1, although it may not actually be true. Others may say that because it works so well, it must actually be true, a reasonable position for some. Yet Limit theory offers us no proof!
Great Antibob refuted my claim that there is no evidence, outside of using limits, that 10 x .999… = 9.999…
I re-post here:
We’re not just “accepting” that there is a 9 there.
Here’s an example of one of the “contexts” where infinity is actually defined.
Write 0.9999… in a different form:
sum(i = 1 to “infinity”) [9/10^i]
Note that the “infinity” in the index of the summation just tells us not to stop adding more terms ever.
Now, multiply this by 10:
10*sum(i = 1 to “infinity”) [9/10^i]
We can bring the 10 “inside” the sum:
sum(i = 1 to “infinity”) [10*9/10^i]
Now simplify:
sum(i = 1 to “infinity”) [9/10^(i-1)]
Written in a more ‘standard’ form, this is 9.99999…
There is no ‘0’ at the end at all. Nor are we “adding” any digits at all. We’re just multiplying by 10.
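The reindexing step spelled out (substituting j = i − 1; it is the same series throughout, so no digit is added or dropped):

$$
10 \sum_{i=1}^{\infty} \frac{9}{10^{i}}
= \sum_{i=1}^{\infty} \frac{9}{10^{i-1}}
= \sum_{j=0}^{\infty} \frac{9}{10^{j}}
= 9 + \sum_{j=1}^{\infty} \frac{9}{10^{j}}
= 9.999\ldots
$$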
I didn’t like it then and still don’t. This proof uses Limits. Without Limits you can’t do this; that’s what allows you to sneak in the extra 9 at the end [9/10^infinity]. Therefore, again, outside of assuming .999… = 1 via Limits, it is no proof at all.
I don’t reject Limit theory at all. In fact, as I have always interpreted it, it is a useful tool when you don’t really care about 1/infinity or other such situations. But it is not, and never has been, in my mind, some absolute truth.
It is also true that 99.999…% of the time it probably makes no earthly difference in any practical way whether they are equal or not. But some of us, from a purely mathematical, metaphysical point of view, would like to know if they REALLY do equal each other. Is that so wrong? Is it so wrong to believe they don’t? After all, tell me how you ever really get to 1 from .999…?
I keep hearing they are different notations for the same thing (well, with limits indeed they are), but Limits just make the assumption they are. For practical purposes this makes a TON of sense, but when you’re asking the question “is .999… really equal to 1?”, it makes no sense to say “of course they are, because I assume they are.”
I want to you all to know I have done the math, just finished a while ago and my hand is really cramped. But the answer I got to .999… was .999…