An Infinite Question

I’m sure he meant rational. In case there are any folks out there who might be confused, a rational number can be represented by a fraction while irrationals cannot. So 5/8 is rational, pi is irrational. 14/7 is rational, e is irrational.

GD, I would love to help you understand more of this but I honestly can’t decipher your latest question. If I ever stopped typing 9s, I’d end up with a number ever so slightly smaller than 1. I’m not following your assertion that it is equal to 2 or any other number.

It’s a consequence of the positional notation that we use in the decimal system. .99 is a representation of 99/100, but it more specifically represents 9/10 + 9/100. Likewise, .999 really means 9/10 + 9/100 + 9/1000. As you add more and more 9s to the end, you’re adding 9 divided by larger and larger powers of ten. What that means is that the difference between .99…9 and .99…99 is getting smaller as you lengthen the number of 9s hidden by the ellipsis. Furthermore, you can make the difference arbitrarily small by adding enough 9s to the end.
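To see this concretely, you can compute the partial sums exactly with Python's `fractions` module. This is just an illustrative sketch (`partial_sum` is a made-up helper name); the point is that the gap to 1 is exactly 1/10^n after n nines:

```python
from fractions import Fraction

# Partial sums 9/10 + 9/100 + ... + 9/10^n, computed as exact rationals.
def partial_sum(n):
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

# The gap to 1 is exactly 1/10^n, so it shrinks tenfold with each extra 9.
for n in (1, 2, 3, 10):
    assert 1 - partial_sum(n) == Fraction(1, 10**n)
```

Exact rationals are used here (rather than floats) precisely so that the "arbitrarily small difference" is visible without any rounding noise.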

Now, one of the more interesting facts about the real numbers–and this is something that they don’t teach in any high school math course that I know of–is that whenever you have a bounded, increasing sequence like this one, there’s a unique number that the terms of the sequence get arbitrarily close to. We refer to that number as the limit, and we define the value of an infinite decimal representation to be its limit. In the case of .999…, the limit is 1, and that’s the number the decimal represents.

Okay, I think I’ve got it now. My inner perfectionist insists that there’s still a remainder that can’t be found which separates .9~ from 1, but I’ll just give him a lollipop and move on to use this new knowledge for… something. I’m not sure how I could use it. But thanks for explaining it anyway, folks.

aceplace57 said:

Because that would have been more confusing and required more definitions to be explained?
statsman1982 said:

I think you mistyped here. I believe you meant “rational”, because 1/3 is written as the quotient of two integers: 1 and 3.

General Derangement said:

The very next line in the column is:

In other words, Cecil stated a hypothetical in which someone proposes that the tilde (~) represents a process, and then he pronounced that idea nonsense.

You misunderstand how mathematicians work. They have to make the definition of “no difference whatsoever” quantifiable in some way. The way they do that is to propose that for any arbitrarily small difference you care to name, the actual difference will be smaller than that. That means if you consider the 20th decimal place, the difference is smaller than that. If you then go to the 30th decimal place, it is smaller still. Go to the one-millionth decimal place, and it is still smaller. That is effectively saying there is no difference. There can’t be a difference, because if there were, you could name a threshold that the difference exceeded. But you can’t.
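That "name any threshold, the difference beats it" idea can be sketched in Python with exact rationals. `nines_needed` is a hypothetical helper name invented for this illustration:

```python
from fractions import Fraction

# For any proposed gap eps > 0, enough 9s push the difference below it:
# 1 - 0.99...9 (n nines) = 1/10^n, and we can always pick n with 1/10^n < eps.
def nines_needed(eps):
    # eps must be a positive rational for the loop to terminate
    n = 1
    while Fraction(1, 10**n) >= eps:
        n += 1
    return n
```

For example, to beat a proposed gap of 1/10^20, 21 nines suffice; name a smaller gap and the same search just runs a little longer.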

The problem is not with 1/3 or with 1. The problem is that that fraction cannot be represented by a terminating decimal in base 10. That is just the nature of the number system being used to represent the value in question. So mathematicians use the tilde (~) to represent “repeating forever and never ending”. They state it in formal terms, talking about infinite series and limits and whatnot, but that is what it means: “repeating forever and never stopping.” Any terminating value you use in a calculation is not equal to that value; it is just an approximation.

.9 is just an approximation.
.99 is just an approximation.
.999 is just an approximation.
.999,999,999,999,999,999,999,999 is just an approximation.
.999…9 is just an approximation.

They are useful for performing calculations, as long as you make sure you carry your approximation beyond the level of significance that you are considering.

The tilde represents “repeats forever”. That is a symbolic representation, not an approximation. 0.3~ is every bit the same as 1/3. It is just a different representation. It is not a process; it is a symbolic representation, the same way that π is a symbolic representation of the value pi. It is an exact quantity, but we can only approximate it for calculations.

pi is not 3.14159. It is a number that goes on forever without ever repeating (at least as far as we have been able to determine with supercomputers).

.9~ does not terminate. Any calculation you perform where you terminate the expansion is an approximation of .9~, not exactly equal to it. The only way to reach .9~, then, is to carry the ~ forever, which leaves no difference from 1.

ultrafilter said:

I learned about it in my high school Calculus class. Maybe you just hang out at the wrong high schools. :wink:

You don’t need the parenthetical part. The decimal expansion of pi does not repeat (in the sense of there being a specific sequence of digits that repeats over and over indefinitely once you get beyond a certain point in its decimal expansion). If it did it would be rational. Not only is pi irrational, it is transcendental.

Smaller than any nameable number works like this.

Can we find a place where the difference between .999~ and 1 is less than 1/1000000? Yes.

Can we find a place where the difference between .999~ and 1 is less than 1/10[sup]100000000[/sup]? Yes.

Can we find a place where the difference between .999~ and 1 is less than 1/10[sup]100000000[sup]10[sup]100000000[/sup][/sup][/sup]? Yes. And so on, for any finite number you can express no matter how small that number is.

The finite number is always nameable. Mathematicians can prove that if you can show the difference is smaller than every positive number, then the two numbers are in fact one and the same number. They must be identical, because otherwise you could name the difference between them as a number. But you can’t. Therefore there is no difference between them.

This is not precisely true. π/7 is a fraction, but it is irrational. A rational number is a number which can be expressed as a fraction of integers; irrationals are all real numbers which are not rational.

And while I hope that he meant “rational”, it’s not clear he did; he may be mixing up the concept of a decimal expansion that doesn’t terminate with irrational numbers (a common misconception among my high school students).

Yes, you’re right. I really did mean to write “and 1/3 is rational, but it is a repeating decimal, so it cannot be written as a finite decimal.”

The all-important point here. Thank you.

We would not even be asking this question if we used base 3… or base 9, being the square of 3. Or, for that matter, base 6. Or any other base that is evenly divisible by 3.

In base 3, we would simply have 0.1. Just as 10 in base III represents 3 rather than ten, and 100 in base III represents 9 rather than ten squared, the first figure after the point drops by one power of 3, so 0.1 in base III is one third.

Just sticking to base III for now, we would find that certain fractions like 1/2 and 1/5 would not come out even, but would repeat forever. 1/2 would come out to 0.1111111… or 0.1~ in base III.

In case you are wondering which base would have 1/3 come out to 0.3, it is base 9. 0.3 in base IX equals 0.33333…, or 0.3~, in base 10.
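Anyone who wants to experiment with other bases can do it with a short Python sketch. The `expand` helper below is made up for illustration; it emits the first few digits of a fraction between 0 and 1 in a given base:

```python
from fractions import Fraction

# Hypothetical helper: first `digits` base-`base` digits of a fraction in (0, 1).
def expand(frac, base, digits):
    out = []
    for _ in range(digits):
        frac *= base
        d = int(frac)      # the integer part is the next digit
        out.append(d)
        frac -= d          # keep only the fractional remainder
    return out

expand(Fraction(1, 3), 3, 4)   # [1, 0, 0, 0]  -> 0.1 in base 3
expand(Fraction(1, 3), 9, 4)   # [3, 0, 0, 0]  -> 0.3 in base 9
expand(Fraction(1, 3), 10, 4)  # [3, 3, 3, 3]  -> 0.3333... in base 10
expand(Fraction(1, 2), 3, 4)   # [1, 1, 1, 1]  -> 1/2 repeats forever in base 3
```

Note how the same fraction terminates in one base and repeats in another, which is exactly the point being made above.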

Really, General, if you want to ponder such mathematical matters and make any real progress, you have to disengage from the limitations of base-10 “dogmatism”, because neither number theory nor numerical analysis treats base 10 as special.

It’s just an historical accident that that is the base for our place-value system.

If by “historical accident” you mean the evolution of humans with ten fingers, yes. :stuck_out_tongue:

You mean eight fingers and two thumbs, so :stuck_out_tongue: right back atya’! :slight_smile:

But, seriously, haven’t we seen systems based on 20 and 12, and so on, albeit without developing into place-value systems?

Twenty because that is the total of fingers (in the broader definition) and toes. This has been called “barefoot arithmetic.” Twelve because ten fingers plus two feet add up to twelve. (A “shod” arithmetic?) We see the heritage of twelve-arithmetic in some measurements, such as the twelve inches used in the English system of measurement. ISTR that a troy pound was 12 and not 16 ounces. And who could not know of the 12 hours in each of the two cycles making up a solar day?

And didn’t the Babylonians favor counting by sixties?

Getting back to my crack about really having eight fingers, excluding thumbs of course, I think it would have been great if there had been a taboo or something against using thumbs in counting. We might have had base-eight arithmetic prevail in the world. The advantage of that is that base 8 so easily converts to the all-important base 2. I feel that base 16 would have been even better, because it is 2 raised to a power of 2, and so is used to help us humans read binary from computers. But I see that as by far the less likely to have developed as an alternate to actual history.
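The "easy conversion" claim is just the fact that one octal digit is exactly three bits and one hex digit is exactly four, so conversion to binary goes digit by digit. Two Python literals make the point (nothing assumed beyond the language itself):

```python
# Each octal digit maps to exactly 3 bits, each hex digit to exactly 4,
# which is why bases 8 and 16 convert to binary digit-by-digit.
n = 0o755                   # octal literal
assert n == 0b111_101_101   # 7 -> 111, 5 -> 101, 5 -> 101

m = 0xA3                    # hex literal
assert m == 0b1010_0011     # A -> 1010, 3 -> 0011
```

No such digit-by-digit shortcut exists between base 10 and base 2, since 10 is not a power of 2.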

So, yeah, I’m standing by what I said.

Nyah, nyah! :wink:

I think what DS may be getting at is the idea of infinitesimals.

First of all, what is an infinite set? Galileo first happened upon the question when he noticed that if you look at perfect squares then 1->1; 2->4; 3->9, etc. He knew that there are as many perfect squares as there are natural numbers, but that the perfect squares are a (proper) subset of the naturals, so paradoxically it appears that there are 2+4+6+8+… MORE natural numbers than perfect squares.

It wasn’t until Cantor (who was insane at the end of his life; the question is, did he understand infinity because of the insanity, or did delving into the nature of infinity drive him insane?) that an infinite set was defined as one that can be placed in 1-1 correspondence with a proper subset of itself. While this resolved the (apparent) paradox Galileo noted, it does lead to some counterintuitive results when we carry over ideas from finite sets into the realm of the infinite, as we realized when we started studying convergence vs. uniform convergence.

So let’s get back to DS’s post. Mathematically, 0.99999… = 1 is easy to prove. Suppose they’re not equal; then 1 - 0.99999… = d with d not equal to zero. Well, if d does not equal zero, then it must be bigger than 10^(-n) for some natural n.

For the math majors out there, this is more of an intuitive justification that a nonzero number must have a nonzero digit at some place value; n is the position of that place value. But what is 1 - 0.99999…9 with (n+1) 9s? It is 10^(-(n+1)). From here we can show that 1 - 0.9999… < 10^(-(n+1)) < 10^(-n), but we assumed that 1 - 0.9999… = d > 10^(-n). Therefore, by contradiction, d must equal zero.

But this sort of proof depends at some point on the idea of limits. Basically, 1 - 0.99999…9 [n decimal places] = 10^(-n), which tends toward 0 as n approaches infinity. However, limits never tell us what happens AT the limit point. For example, what is 0/0? The limit of x/x as x tends toward 0 is 1, so 0/0 is 1, right? :rolleyes: DS notices that 0.9999…9 < 1 for however long the string of 9s is, so intuitively 1 - 0.999999… = a non-zero number d.

But we just “proved” that d = 0. Actually, we proved that d is non-negative but smaller than any positive number, so we conclude it is zero. However, some mathematicians (albeit few) work with numbers called infinitesimals that are non-zero but smaller than any positive real number.

“Believe” probably isn’t the correct word to use here, since the hyperreals are a perfectly legitimate structure - just not one used by most mathematicians, since the reals themselves suffice to do analysis, and tend to be (in my opinion) easier to work with.

Hmm, I have just done some checking, and there has been an octal system, as described here, from Wikipedia.

The reason, of course, is different from my hypothetical of excluding thumbs.

Also, elsewhere, there are links to various other number systems, such as some based on 5 and even one that is 5-25.

Several people have mentioned infinitesimals/hyperreals/etc., but note, such concepts do not really help salvage the OP’s thinking. Even when working with hyperreals/infinitesimals/etc., one never does so by constructing a system in which decimal notations are unique, so that 0.9999… and 1.000… must be different by virtue of looking different. The only motivation for achieving such a thing would be a superstitious focus on the mere notation; it leads to an ugly system, not a useful one (Is 0.999… to be the largest number below 1? If so, why should 1 have the property that there is such a largest number below it, while other numbers (e.g., 0.999… itself) do not have this property? [And if not so, then what number would fall in between 0.999… and 1? And why isn’t this number what was meant by “0.999…” instead?]).

Every child comes to understand and not quibble with the linguistic convention that “1/5” and “2/10” mean the same thing, just expressed differently; two different notations that happen nonetheless to specify equal referents. There’s not some tiny difference between 1/5 and 2/10; they are exactly equal, despite my having written each differently.
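Python's `Fraction` type happens to make exactly this point: it reduces every input to lowest terms, so the two notations become literally the same value.

```python
from fractions import Fraction

# Two notations, one number: Fraction normalizes both to lowest terms.
a = Fraction(1, 5)
b = Fraction(2, 10)
assert a == b
assert (a.numerator, a.denominator) == (b.numerator, b.denominator)  # both 1/5
```

The notation "2/10" carries no residue of its un-reduced form once the referent is fixed, and the same is true of "0.999…" versus "1".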

The situation is the same with “0.999…” and “1”; they mean the same thing, they just express it differently. To understand this, one need simply know what they mean, and to know what they mean, one need simply ask those who actually use them what they mean by them (i.e., any old mathematician), and then it is obvious. There is no point quibbling with the mathematician; the mathematician’s claims are true by definition. This is not so much a question of mathematics as of lexicography.

One might propose investigating alternative systems crafted from alternative definitions, but, as I noted above, in this particular case those alternative systems are very ugly and not very useful (that, after all, being the reason mathematicians rarely talk about them in the first place).

I may have to use this next time I teach begging the question in logic. How can you show that they are actually the same?

Using limits? By using limits I can “prove” 0/0 = 1.

10 x 0.9999999… = 9.999999999… ? To justify that the digits after the respective decimal points match is to make the conceptual leap that taking a 9 from an infinite string of 9s still leaves the same number of 9s. Some may see why that is so, but to people not familiar with the intricacies of the cardinality of infinite sets, I can see why it looks like a bit of handwaving.
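The worry is legitimate for any finite truncation: the shifted copy really does have one fewer 9 after the point. A Python sketch (with a made-up `shortfall` helper) shows that for n nines, the "multiply by 10 and subtract" trick misses 9 by exactly the tail term 9/10^n, which only vanishes in the limit:

```python
from fractions import Fraction

# For a finite truncation s = 0.99...9 (n nines), the classic trick
# 10*s - s = 9 is off by exactly 9/10^n: the shifted copy has one fewer 9.
def shortfall(n):
    s = Fraction(10**n - 1, 10**n)   # 0.99...9 with n nines
    return 9 - (10 * s - s)

for n in range(1, 6):
    assert shortfall(n) == Fraction(9, 10**n)
```

So the algebraic proof is sound only for the genuinely infinite string, which is exactly where the appeal to cardinality (or to limits) sneaks in.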

If you want to be pedantic, the reason that 1 = 0.9999999999… is that 1 is the least upper bound of the sequence 1 - 10^(-k), and in analysis we define an infinite decimal to equal the LUB of its sequence of truncations. But I always had problems with that sort of definition, not least because (as I showed above) lim [x->a] f(x) = n does not necessarily mean f(a) = n.

Do I believe that 1 = 0.999999…? Yes, but I can see where the current rationales or proofs leave something to be desired, and at a certain point they are HIGHLY dependent on definitions to cover a gap when working with the numbers.

That’s basically how limits were defined in my high school calculus course.

I wanted to come back to this, not because of the discussion over why Base 10 beat out Bases 5, 8, 12, 20 or 60. I wanted to come back to it because it makes a fundamental error in understanding what it is that has the OP confused.

The OP is not confused about the meaning of 1/3. All your discussion of different bases shows is that the value 1/3 has different representations depending upon what base you use for your numbering system. But all that does is help him understand that we can prove 1/3 + 1/3 + 1/3 = 1, and that’s not his issue at all.

No matter what base you use, there is going to be some number expressible through the concept of infinite repetition of a digit, or some string of digits. Nor is the OP really confused about what “equal to” means, though he bit on that red herring rather quickly when it was placed in front of him. One might ask him what he thinks of the number .142857142857… as a representation of 1/7. After all, if we add that up 7 times, we’ll get the same .9999~ that he is so enraptured by. So your discussion of bases is of no help to our OP friend, as it is totally off topic.
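The 1/7 claim can be checked without trusting any infinite manipulation, using exact rational arithmetic alongside a finite decimal truncation (a sketch; the 12-digit truncation is an arbitrary choice):

```python
from fractions import Fraction
from decimal import Decimal, getcontext

# Exactly: 7 * (1/7) is 1, no matter what its decimal expansion looks like.
assert 7 * Fraction(1, 7) == 1

# Truncated: 7 * 0.142857142857 (two repeats of the period) yields the
# same finite string of 9s that the truncation argument produces.
getcontext().prec = 30            # plenty of precision, so no rounding occurs
seven_digits = Decimal("0.142857142857")
print(7 * seven_digits)           # 0.999999999999
```

Each extra repetition of "142857" in the truncation buys six more 9s in the product, mirroring how 7 × 0.142857~ gives 0.9~ exactly.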

No, the real issue is highlighted here:

What the OP isn’t understanding is the meaning of .9999999~. He thinks that, since there is some difference at every decimal place between the “9” and the “0” in 1.0000~, the two numbers must be different. You can see this in his statement that .9999~ “will never actually reach 1.” It doesn’t have to “reach” it because it is already there! :stuck_out_tongue: .99999~ isn’t some “process” of writing out the digit “9” infinitely many times. .9999~ is a decimal representation of the same concept as the digit “1” in Base 10. It only looks like a process because it’s what happens when certain types of arithmetic are used to create representations of certain other numbers, like, for example, 1/3 = .3333~.

But the OP remains unconvinced, perhaps, that this is true, that .99999~ is the same as “1”. It was here that I introduced not the concept of “infinitesimals”, but rather the idea that, for the numbers to be different, we would have to accept that there exists between “1” and “.9999~” some definable distance (and, thus, on the real number line, infinitely many intervening numbers). But, of course, we cannot find any such number, or establish any such distance. If we attempt to subtract .999999~ from 1, we get “0”. If we attempt to find some place at which we can insert some other set of digits, we find that the "9"s squeeze us out indefinitely. Thus, what appears to be true (.9999~ = 1) is, indeed, actually true.

We don’t even need to play with an infinite series to show this (though the lengthy version of what I’m about to do could use one):

If what we take GD to be saying is true, then 1 - .[some amount of zeroes]1 will equal .999999~, correct? Since the 9s continue to infinity, the zeroes before the 1 must also. In other words, the above expression:

.999[…]99~ = 1 - .00[…]01

can be rewritten as

.999[…]99~ = 1 - 10^(-n)

where n is just the number of 9s (1 - 10^(-3) = 0.999, for instance). Of course, we have infinitely many 9s, so we have to take the limit of the expression:

.999[…]99~ = 1 - lim[n->infinity] 10^(-n)
Of course, that limit evaluates to zero, so that reduces to:

.999[…]99~ = 1 - 0 => .999[…]99~ = 1

Of course, now the sufficiently steadfast can attempt to argue that the limit doesn’t actually equal zero, but we can play proof vs conjecture all day and they can always move the goalposts.