An Infinite Question

Oh, I wager most people who don’t already understand that 1 = 0.9~ wouldn’t agree that 1/3 = 0.3~ either. But, as you said, the problem is understanding the meaning of the notation. One can give proofs using assumed principles implicit in the design of the notation (e.g., the 10 * 0.9~ = 9.9~ = 9 + 0.9~ argument), but those principles may just as well be disputed by those who do not yet understand the meaning of the notation. Ultimately, at the bottom, there’s nothing for it but to actually provide the definition of how the notation is to be interpreted (as in my post #48 above, which provides such a definition stated in terms of simple inequalities and without any need of such general technical concepts as “limits”), beyond which hardly any proof of anything is required.

Actually, Indistinguishable, most high school students have no problem with saying that 1/3 = 0.3~ . After all, that’s what their little calculators sort of tell them, and they’re not so dense as to not understand that the result of dividing 1.0 by 3 keeps going and going and going.

The difference between 1/3 = 0.3~ and 1 = 0.9~ is that in the first case, there is nothing to compare the .3~ to that makes it look “off” or “different.” In the second case, since they cannot get a calculator to reproduce .9~ through any comparable fraction, and since it looks different, they assume that there must be some difference that isn’t being accounted for. After all, they usually understand that cheap calculators insist that 3 * 1/3 = .99999999 because they cannot handle the actual value for 1/3, and since their fancy TI-83s give that as 1, they assume that an answer like 0.9~ must simply be a funky, incorrect result in the same way the cheap calculator is incorrect.

I think that’s right: for most numbers there is only one decimal-fraction representation, and so there’s no apparent paradox. However, for terminating decimals (other than 0, which as usual is a special case), there are two representations, one ending with an infinite number of zeroes, and the other ending with an infinite number of nines. It’s just that it’s not usual to come across something like 1/4 = 0.24999999…: people only come across the simplest case, which is that 1 = 0.999999…

(And since people usually work with decimal notation, they don’t come across issues like 1 = 0.111111… in binary, or = 0.222222… in base-3 notation.)
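For instance, the 1/4 = 0.24999… case above can be checked with exact arithmetic. Here is a short Python sketch (my own illustration, not from the thread) that treats the infinite run of 9s as a geometric series and sums it exactly with `fractions.Fraction`:

```python
from fractions import Fraction

# 0.2499999... splits into a finite head, 0.249, plus the tail
# 9/10^4 + 9/10^5 + ..., a geometric series with ratio 1/10
# and exact sum (9/10^4) / (1 - 1/10) = 1/1000.
head = Fraction(249, 1000)
tail = Fraction(9, 10**4) / (1 - Fraction(1, 10))

print(head + tail)                    # 1/4
print(head + tail == Fraction(1, 4))  # True
```

Both decimal strings are names for the same rational number; exact fraction arithmetic makes that plain.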

Ah, good point. Change my “most” to “many”, though, since I still think “Well, 0.3~ isn’t equal to 1/3, it’s just under it, but never reaches it” is a point made by many 0.9~ = 1 deniers in many of these discussions.

Division by zero is undefined as long as you accept the field axioms (a “field” here meaning a commutative ring with identity in which every nonzero element has a multiplicative inverse). There are mathematical systems that do not include all the field axioms, but plain vanilla arithmetic is not one of them. As nearly as I can recall, they are all taught or tacitly assumed by 4th grade or so (US 4th grade corresponds to roughly nine years of age). If we are not discussing plain vanilla arithmetic, then of course we must define just what it is that we are discussing.

OK, wading in where angels fear to tread again.

Another way of looking at the issue is to go back to the question: why do these infinite-length representations exist anyway?

The answer lies in the unique prime factorisation theorem, sometimes known as the fundamental theorem of arithmetic. Simply put, for any natural number you select, there is only one set of prime factors of that number. So 20 = 2*2*5, 42 = 2*3*7, and most importantly for this discussion, 10 = 2*5 and 3 = 3.

It is never possible to come up with an alternate set of prime factors for any natural number.

If you want to express a fraction (or rational number) in decimal form, one of two things will happen. Either the denominator will have prime factors that are all 2s and 5s, or there will be some other prime in the denominator’s factors. If the only prime factors are 2 and 5, you can represent the number as a terminating sequence. Any other number and the representation is not terminating. Any.

So, 1/4 = 1/(2*2) = 0.25. Note that this is exact.
6/32 = (2*3)/(2*2*2*2*2) = 3/16 = 0.1875, and again, this is exact.

But, say, 4/7 = (2*2)/7 = 0.571428571428571428571428571428…
and note the recurring sequence 571428. Another thread pointed out the notation 0.(571428) to denote this. (When I was at school we used a superscript dot.) The parentheses are easier to read.
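That recurring block can be found mechanically with schoolbook long division: the digits must start repeating as soon as a remainder repeats, because from there the division replays itself. A small Python sketch (the function name is mine, just for illustration):

```python
def repetend(num, den, base=10):
    """Digits of num/den after the point, as (prefix, repeating cycle).

    Schoolbook long division: track remainders; when one recurs,
    the digits produced since its first appearance form the cycle.
    """
    num %= den
    seen = {}                     # remainder -> index where it first appeared
    digits = []
    while num and num not in seen:
        seen[num] = len(digits)
        num *= base
        digits.append(num // den)
        num %= den
    if not num:                   # remainder hit 0: terminating expansion
        return digits, []
    start = seen[num]
    return digits[:start], digits[start:]

print(repetend(4, 7))            # ([], [5, 7, 1, 4, 2, 8])  i.e. 0.(571428)
print(repetend(1, 4))            # ([2, 5], [])              i.e. 0.25 exactly
```

Since there are only den - 1 possible nonzero remainders, the cycle length can never exceed den - 1, which is why 1/7 repeats with period 6.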

Now, note that 2 and 5 are important only because they are the factors of the base we are using for the representation (decimal). We could easily represent the number in a different base. In base 2, aka binary notation, we have only one prime factor: 2. The only fractions that can be represented in a terminating form are those where the denominator is a power of 2.

So, in base 2, 1/4 = 1/(2*2) = 0.01, exactly.
1/8 = 0.001, and 3/8 = 0.011. All exact.
But 1/5 is 0.001100110011001100110011… = 0.(0011)
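The terminating-or-not rule is easy to check mechanically: a reduced fraction terminates in a given base exactly when stripping the base’s prime factors out of the denominator leaves 1. A Python sketch (function name mine):

```python
from math import gcd

def terminates(num, den, base=10):
    """True iff num/den has a terminating expansion in `base`:
    every prime factor of the reduced denominator must divide the base."""
    den //= gcd(num, den)         # reduce the fraction first
    g = gcd(den, base)
    while g > 1:                  # strip out all factors shared with the base
        while den % g == 0:
            den //= g
        g = gcd(den, base)
    return den == 1

print(terminates(1, 4))          # True  (4 = 2*2, both divide 10)
print(terminates(4, 7))          # False (7 is not a factor of 10)
print(terminates(1, 5, base=2))  # False (5 is not a factor of 2)
print(terminates(1, 3, base=3))  # True  (1/3 = 0.1 in base 3)
```

The same function answers the base-3 questions below: only denominators that are powers of 3 terminate there.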

How about base 3 then?

1/3 = 0.1 Exactly.
2/3 = 0.2 Also exactly.
1/3 + 2/3 = 0.1 + 0.2 = 1

1/2 in base 3 doesn’t terminate, since 2 is not a factor of 3 (whose only prime factor is 3 itself).
1/2 in base 3 is 0.111111111111111111111111111111111… = 0.(1)

Now how about that!!! Guess what is going to happen?

0.111111111111111111111111111111111… + 0.111111111111111111111111111111111…
= 0.(1) + 0.(1) = 0.(2) = 0.2222222222222222222222222…
and since 1/2 + 1/2 = 1, that means 0.(2) = 1
in base 3. Look familiar?
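You can verify the base-3 claim with exact fractions in Python (illustration mine): 0.(1) in base 3 is the geometric series 1/3 + 1/9 + 1/27 + …, whose exact sum is (1/3)/(1 - 1/3) = 1/2, and doubling every digit gives 0.(2) = 1.

```python
from fractions import Fraction

# 0.(1) in base 3 = 1/3 + 1/9 + 1/27 + ...: geometric, ratio 1/3,
# exact sum (1/3) / (1 - 1/3) = 1/2.
r = Fraction(1, 3)
one_repeating = r / (1 - r)

print(one_repeating)                  # 1/2
# 0.(1) + 0.(1) = 0.(2), i.e. 2/3 + 2/9 + 2/27 + ... = 1:
print(one_repeating + one_repeating)  # 1
```

Exactly the 0.(9) = 1 situation, just wearing base-3 clothes.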

It is also worth noting that in base 2:
0.111111111… = 0.(1) = 1. Which is identical to the expression for the sum of the infinite series 1/2 + 1/4 + 1/8 …

Which brings us full circle. 0.9999999… = 0.(9) is not a process. It is a notation for a sum of an infinite sequence. No different to any of the other infinite sequences above. If you mess about changing bases you can make any fraction drop in and out of having a terminating representation. You didn’t change a notation into a process or back again. (Not unless you define 1.(0) as a process too. Which means you define all numbers as an infinite process. Which we don’t. Or to put in the obverse: 0.999999… is just as much a process as 1.00000… - neither are.)
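That “sum of an infinite sequence” reading can be checked with exact arithmetic too. In this Python sketch (mine, for illustration), each partial sum of 9/10 + 9/100 + 9/1000 + … falls short of 1 by exactly 10^-n, and the full geometric series sums to exactly 1:

```python
from fractions import Fraction

# Partial sums of 0.(9): 0.9, 0.99, 0.999, ... = 1 - 10^-n exactly.
for n in (1, 3, 6):
    partial = sum(Fraction(9, 10**k) for k in range(1, n + 1))
    print(partial, 1 - partial)   # the gap to 1 is exactly 1/10^n

# The whole geometric series, ratio 1/10: sum = (9/10)/(1 - 1/10) = 1.
print(Fraction(9, 10) / (1 - Fraction(1, 10)))  # 1
```

No partial sum equals 1, but the number the notation names is the sum of the whole series, and that is exactly 1.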

Since exactness is what this question is all about, I have to point out that just because successive terms get smaller and smaller doesn’t mean that the series converges to a limit. The simple example is SUM(1/x), as x goes from 1 to infinity. The terms get smaller and smaller but the sum does not approach a limit - it keeps increasing without bound.
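To see that numerically, here is a quick Python sketch: the terms 1/n shrink toward zero, yet the partial sums keep climbing past any bound (they grow roughly like ln n).

```python
# Harmonic series: terms 1/n -> 0, but the partial sums never settle;
# H(n) grows roughly like ln(n) + 0.577, without any upper bound.
def harmonic(n):
    return sum(1.0 / i for i in range(1, n + 1))

for n in (10, 1000, 100000):
    print(n, harmonic(n))   # keeps climbing: about 2.9, 7.5, 12.1
```

Shrinking terms alone are not enough; convergence needs more.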

ultrafilter is saying differences between arbitrarily far apart terms eventually get arbitrarily small (in the sense that, for any positive epsilon, there is some point such that any two terms beyond that point have absolute difference less than epsilon), which is sufficient (and, for that matter, necessary) for convergence of the sequence (the Cauchy condition).

The part in bold is necessary, but not sufficient. Consider the partial sums of the harmonic series: the difference between successive partial sums approaches zero, but the series diverges.

It looks as though you were attempting to define a Cauchy sequence, but that is not the definition.
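The contrast is easy to see numerically. In this Python sketch (illustration only), successive partial sums of the harmonic series get arbitrarily close together, yet sums a factor of two apart always differ by at least 1/2, so the full Cauchy condition (arbitrary pairs, not just successive ones) fails and the series diverges:

```python
# Successive partial sums of the harmonic series differ by only 1/(n+1),
# but H(2n) - H(n) = 1/(n+1) + ... + 1/(2n) >= n * (1/(2n)) = 1/2,
# so the sequence of partial sums is not Cauchy and cannot converge.
def harmonic(n):
    return sum(1.0 / i for i in range(1, n + 1))

for n in (10, 100, 1000):
    print(harmonic(n + 1) - harmonic(n))   # successive gap: tiny, -> 0
    print(harmonic(2 * n) - harmonic(n))   # wide gap: stays near ln 2 ~ 0.69
```

This is exactly why the Cauchy condition quantifies over all pairs beyond a point, not just neighbours.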

As has been stated – it’s really about understanding the notation.

Look at it this way. We know that pi has an exact value. We just can’t denote that value using numerals. So instead we notate it using the pi symbol. We can approximate pi by writing 3.14 or 3.1415. Some folks have approximated pi out to thousands and thousands of decimal places. But that’s still an approximation. However, when you use the symbol for pi, you are representing the EXACT value that is not otherwise expressible.

Same thing here – we know what the exact value of 1/3 is, we just can’t express it using numerals. So we use the symbol ~ instead. We can approximate by saying .33 or .3333 or write out a hundred thousand 3’s, but that is still just an approximation. We use the symbol .33~ to represent the EXACT value of 1/3.

And since 3 x 1/3 equals one, so does 3 x .33~, and so on.

For fun, I took the Windows calculator on my computer (scientific format).

I entered “3”, then hit the “1/x” key, and get 0.333333333333333333 for some really long string.

Then I hit “1/x” again, and got 1.

Then I reentered 3 and 1/x, then multiplied that by 3 and got 1.

Then I manually entered some incredibly long string (40 or 50 digits) of 0.333…, longer than the field of display. When I hit the 1/x key, it gave me 0.99999… filling the display. Same thing if I manually enter the string 0.333… and then multiply by 3.

So the calculator knows that when I computed 1/3 using the 1/x key, that value is 1/3 and not just filling the buffer with that quantity of 0.333… But trying to manually enter the value, it treats it as a terminating decimal.
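The same behaviour is easy to reproduce in Python (my own sketch, not the Windows calculator’s internals): a fixed-precision decimal, like a calculator buffer, truncates 1/3, so multiplying back by 3 yields a string of 9s, while exact rational arithmetic gives exactly 1.

```python
from decimal import Decimal, getcontext
from fractions import Fraction

getcontext().prec = 30               # a 30-digit "calculator buffer"

third = Decimal(1) / Decimal(3)      # stored as 0.333...3, truncated
print(third * 3)                     # 0.999...9: the truncation shows through

print(Fraction(1, 3) * 3)            # 1: exact rational arithmetic
```

Any finite string of 3s is strictly less than 1/3; only the full infinite expansion 0.3~ names 1/3 exactly, which is what the calculator’s internal 1/x shortcut is effectively remembering.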

Giles said:

Interesting observation, and yes, there is no conceptual reason one couldn’t write 1/4 as 0.2499(9). In practice, though, there’s no reason to ever encounter that situation, and no practical reason to do so, and a very practical reason to not write it that way. 0.25 is simpler.

But even 0.9(9) isn’t a situation people encounter frequently. It only comes up for most people as a math puzzle that challenges their understanding of what it means to be a repeating decimal.

Surely you should get something close to 3 here?

Again, surely you should get something close to 3 here (if anything, slightly higher)?

It’s a fair cop and a nice counterexample. If I ever teach an analysis class, I’ll use it.

Ah, I see now ultrafilter had used the word “successive”. Had he not, he’d have the wiggle room to claim he was informally saying the right thing. :slight_smile:

Why are we using finite intervals? We need only state that 0.999~ is infinitely close to 1. I remember this from college math, an embarrassingly long time ago. Surely someone out there can whip out the differential equation to explain this. I seem to remember a certain someone’s theory, I’m guessing “The Fundamental Theorem of Calculus” by Sir Isaac Newton, 17th/18th Century AD.

limit as x->3 of x/3 = 1 (dx=0) can be proven in a vector space. May not work with nested algebras, rings, shames, or SDMB equations.

Simple for the young, I guess.

– Russ and it’s pouring rain here

And what in the name of Tycho M. Bass is that supposed to accomplish by way of explaining the issue to someone who is almost certain to be a calculus virgin?

I’m with you there! I took a few hardware classes when I was getting my CE degree, and doing binary-coded decimal was always one of those little things that annoyed me. I’m currently writing a science fiction story involving a race of aliens; one of them is an engineer, and I decided to give them eight fingers specifically so that she wouldn’t have to deal with BCD. (Or binary-coded duodecimal, or some other equally horrifying proposition.)

I’ve been doing a bit of research on numeric representation systems while trying to come up with an alien one, and I also happened across that bit about the Native American octal system (using the spaces between the fingers) that you mentioned. Definitely interesting, but there’s just something perverse about a number system in which you don’t even have enough hands to count your own fingers. :smiley:

For real utility base 12 is hard to beat. Divisible by 2, 3, 4, 6. Or, as the Babylonians understood (and our clocks still reflect), 60: divisible by 2, 3, 4, 5, 6, 10, 12, 15, 20, 30. 12 would have been so much better than 10. Any technology that requires its users to count binary on their fingers is only one step past banging the rocks together. We have machines that do that. That is the point of them.

There is a brand of evil chocolate biscuit here in Oz, the TimTam. It comes in boxes of 11, a prime number. Sharing a box fairly, unless you have eleven people, is impossible. I feel sure this is intentional. So for sheer dreadfulness, a number system based upon a prime number is difficult to better.

Thanx for expressing the differing bases point of view. I think it cements the concept of 0.(9) = 1

Do you also remember that we in the modern world don’t use Newton’s calculus? Rather, our calculus is descended from Leibniz. Nor do we use locutions like “0.999~ is infinitely close to 1”, which don’t have any meaning in standard math. Leibniz did say such things, but ever since Cantor mathematicians have handled infinite quantities in a more rigorous manner. Leibniz used infinitesimals, but would have preferred our way of thinking.

Limits are the answer.