.999 = 1?

One thing that is far from clear is the definition of “0.000…1”.

No, I don’t understand the notation you are using in that particular term.

I recommend you not use terms you cannot define. What’s the square root of blue?

One clean thing would be for the transfinite indices to not be well-ordered; rather, the indices would comprise some elementary extension of the semiring of natural numbers.

Then 0.000…5 + 0.000…5, with the 5 in the infinitieth place in each, would equal 0.000…10, where the 1 is in the (infinity - 1)th place and the last 0 is in the infinitieth place.

This is what we would get if we used the system outlined in this post.

(This would also involve not allowing arbitrary assignment of digits to transfinite indices (because then we’d be in a bind trying to compare pathological strings lexicographically, given the failure of the indices to be well-ordered), but, rather, a scheme under which arbitrary assignment of digits to finite indices induces a unique allowed extension of that assignment to transfinite indices, in such a controlled way that we can still do everything we want. But the machinery for this is simple. It’s all outlined in that post; in jargon, a free ultrapower construction)
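
(For the concretely minded, here is a bare-bones sketch of that extension step, in my own notation rather than anything spelled out in that post: fix a nonprincipal ultrafilter U on the naturals, take a transfinite index to be an equivalence class [f] of sequences f : N → N modulo agreement on a set in U, and extend a digit assignment d on finite indices by

[code]
d^*([f]) \;=\; \text{the unique } k \in \{0,\dots,9\}
\text{ such that } \{\, n : d(f(n)) = k \,\} \in \mathcal{U}
[/code]

Exactly one such k exists, because the ten sets {n : d(f(n)) = k} partition the naturals and an ultrafilter contains exactly one cell of any finite partition.)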

I haven’t looked through the entire thread: has anyone made the obvious “.999… is Indistinguishable from 1” joke yet?

Ludovic: joke? If there’s a joke, please tell!

I sort of approached the issue by saying it can be treated as a “legalistic” argument, i.e., a case in a court of law.

If the other guy says, “Yes, it is distinguishable,” then I can demolish the claim by refuting any specific given statement of where and how it is distinguishable.

Our correspondent dodges this by saying it is distinguishable…but without saying how, where, or why. “It just is.” That isn’t going to be convincing to this particular jury!

Bravo! :cool: Encore! :smiley:

One of the main posters in this thread is Indistinguishable. :cool:

Indistinguishable from what?

Also, another minor fix:

We need to add the rules that L and R both have elements, to ensure finiteness.

(Apologies for the error. Damn these finite edit windows…)

I have made my argument. I will simplify, because it seems to be too complex for some to comprehend. And really, that’s my bad; things should be stated as simply as possible.

Take a line segment which starts at point A and ends at point B.

Let’s say the line is of length 1, in whatever units you like.

Divide the segment in half. This marks the 1st decimal place, with value .0

Take the right half and repeat the above; this marks the 2nd decimal place: .00

Take the right half of this segment and repeat the above; this marks the 3rd decimal place: .000

The sizes of the segments we are creating can be represented by the very common infinite series often used in Zeno’s Paradox:

Sigma (i = 1 to infinity) [1/2[sup]i[/sup]]

or

1/2 + 1/4 + 1/8 + … + 1/infinity
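
(A quick numeric sanity check of those partial sums, my own Python sketch and not part of the argument itself: each partial sum is exactly 1 - 1/2[sup]n[/sup].)

[code]
# partial sums of 1/2 + 1/4 + ... + 1/2^n creep up on 1 from below
total = 0.0
for i in range(1, 31):
    total += 0.5 ** i
print(total)  # 0.9999999990686774 -- within 2^-30 of 1 after 30 terms
[/code]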

Each half represents (maps to) a zero in the decimal expansion:

.000…1

except the last, which we give the value 1.

Now, I have tried to leave no doubt about the fact that this series can be, and is, completed all the time. This is the only way you can move from point A to point B: you must complete an infinite series of half-distances to get from A to B. This is the heart of Zeno’s Paradox, which (I would like to thank Indistinguishable for reminding me) can be resolved without limits: traversing each half takes only half the time at constant speed, so traveling, say, 1 mile in 1 hour would go:

1/2 mile in 1/2 hour
1/4 mile in 1/4 hour
1/8 mile in 1/8 hour
.
.
.
1/infinity of a mile in 1/infinity of an hour

1 mile in 1 hour
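
(If you want to watch the books balance, here is a little tally of the half-distances and half-times, my own Python sketch, done in exact fractions:)

[code]
from fractions import Fraction

dist = time = Fraction(0)
for i in range(1, 21):
    step = Fraction(1, 2 ** i)
    dist += step  # miles: 1/2, 1/4, 1/8, ...
    time += step  # hours: at constant speed, half the distance takes half the time
print(dist, time)  # both 1048575/1048576 -- a whisker under 1 mile and 1 hour
[/code]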

We know this is possible because we do it any time we move from any given point to another point. You want to say space is not infinitely divisible? (OK… prove it.)

The number of digits in the decimal expansion of any repeating decimal is the very same infinity we are discussing here.

You can state the contradiction that there is no infinitieth decimal place, or that 1/infinity does not exist, until you are blue in the face. But I have just told you exactly how it does exist and how to find it, right at point B. So one of us must be wrong, and I have told you how I am right. Stop stating contradictions “by definition”… they are your definitions, and furthermore your definition contradicts what I have informally proved. You can say I just don’t understand infinity; well, I can say the same to you. Show where my proof is wrong. Just saying I am wrong is no argument.

Let me pose this to you: the set of Real Numbers is larger than the set of Rationals. How can this be? If the set of Rationals is infinite, how can some other set of numbers be larger? But it is. Remember, things aren’t always intuitive. You need to get past your closed mind on what infinity is and is not, or what infinitesimals are or are not.
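
(To make half of that concrete: the rationals can be listed in a single sequence, which is exactly what “countable” means, while Cantor’s diagonal argument shows no such listing can cover the reals. A small Python illustration of the listing, mine, using the usual walk along diagonals:)

[code]
from fractions import Fraction

def rationals():
    """Walk the diagonals p + q = 2, 3, 4, ... and skip repeats like 2/4."""
    seen = set()
    s = 2
    while True:
        for p in range(1, s):
            r = Fraction(p, s - p)
            if r not in seen:
                seen.add(r)
                yield r
        s += 1

gen = rationals()
print([str(next(gen)) for _ in range(10)])
# ['1', '1/2', '2', '1/3', '3', '1/4', '2/3', '3/2', '4', '1/5']
[/code]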

Anybody here a fan of Donald Knuth’s book “Surreal Numbers”? He comes up with a completely new way to define numbers and operations. And it’s in the form of a novel. (Martin Gardner said it was the first time a real mathematical idea had ever been published in the form of a novel.)

Re 0.000…1 – I want to know if we are sure that 0.000…1 + 0.000…1 = 0.000…2.

How can we be sure the sum isn’t 0.000…11 or 0.000…1000001 or anything similar? Is the ellipsis a representation of a fixed, definite (and yet indefinite at the same time) number of decimal places?

After all, one of the “properties” of infinity is that infinity + 10 = infinity. If the “1” is in the “infinitieth” decimal place, how can we be sure it is in the same “infinitieth”? I’m afraid that it would be too easy to construct contradictory examples, and thus that the term is not well-defined.

Re: Exapno Mapcase’s one-act screen play:

Indeed! Standing ovation!

And now something horribly serious if that’s still possible here.

Several people (notably Trinopus I think) have been demanding some meaningful definition – ANY meaningful definition – of the strange notation 0.000000000…1 (with that 1 being in the infinity’th place of course, having a place value of 1/infinity or something like that).

Actually I proposed a sensible (I think) definition of such a notation (conceptually being 0.0000… but with 1 in the “last” place), quite some large finite number of posts ago. And… wait for it… sorry, but my proposed definition of this notation does not support anything that erik150x has been trying to say, but instead comes out to be just 0, not some arcane infinitesimal.

Just as 0.999… is defined as the limit of the sequence:
0.9
0.99
0.999 etc.
I proposed that 0.000…1 be defined (if we really really need to define this at all) as the limit of the sequence:
0.1
0.01
0.001
0.0001 etc.
Well… wait for it… that’s just plain 0. So I’m satisfied to say that .999… equals 1 - 0.000…1, which is just 1 - 0, which is 1. I’m happy enough with that.
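
(To see the “that’s just plain 0” step numerically, a throwaway Python check of mine: the terms 10[sup]-n[/sup] fall below any positive tolerance you care to name, which is all that “the limit is 0” means.)

[code]
for n in (1, 5, 10, 50):
    print(n, 10.0 ** -n, 1 - 10.0 ** -n)
# 10^-n marches to 0 while 1 - 10^-n marches to 1;
# by n = 50 the float for 1 - 10^-50 is already exactly 1.0
[/code]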

I wonder if erik150x is familiar with Thomae’s Function, aka the Popcorn Function, which has the strange property of being

[QUOTE=Wikipedia on Thomae’s Function]
. . . continuous at all irrational numbers and discontinuous at all rational numbers.
[/quote]

It’s like trying to argue that .999… is or isn’t equal to 1 at EVERY point!

[sub]Or is that only at every rational point? Or every irrational point? OMG, I get so confused![/sub]
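
(For anyone who wants to poke at it, here is Thomae’s function restricted to exact rationals, a Python sketch of mine. Floats can’t hold an irrational exactly, so the “f = 0 at irrationals” half can’t be demonstrated this way.)

[code]
from fractions import Fraction

def thomae(x: Fraction) -> Fraction:
    """f(p/q) = 1/q with p/q in lowest terms; Fraction reduces automatically."""
    return Fraction(1, x.denominator)

print(thomae(Fraction(1, 2)))  # 1/2
print(thomae(Fraction(2, 4)))  # 1/2 -- same value, since 2/4 reduces to 1/2
print(thomae(Fraction(3, 7)))  # 1/7
[/code]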

That’s for sure!

I agree, the only real way to interpret the notation “.000…1” is:

lim (n->inf) 10[sup]-n[/sup]

This also has the benefit of making everything else with that notation work nicely:

.999…1 = .999… + .000…1 = lim(n->inf) [sum (k=0->n) {.9 * .1[sup]k[/sup]} + 10[sup]-n[/sup]] = .999… + 0 = .999…

ETA: Though I guess the question of what .999…0 is is still up in the air; lim (n->inf) [.999… + 0 * 10[sup]-n[/sup]] doesn’t seem to make much sense. Maybe lim (n->inf) [.999… + 90 * 10[sup]-n[/sup]]?
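
(A quick numeric check of the bracketed expression above, mine: it equals 1 + 9 * 10[sup]-(n+1)[/sup], so it squeezes down onto 1 from above.)

[code]
for n in (1, 5, 10, 15):
    val = sum(0.9 * 0.1 ** k for k in range(n + 1)) + 10.0 ** -n
    print(n, val)  # 1.09, 1.000009, ... closing in on 1
[/code]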

Cool. From the Wikipedia article, we see that Thomae’s function – almost a nonsensical object – “can be interpreted as a perspective drawing of Euclid’s orchard,” that latter object being almost sensical!

This is probably the critical problem: it is asserted without proof. Zeno’s paradox is a paradox because it contains a logical fallacy. It isn’t true, and it does not need to be resolved in order for motion to occur.

I would also commend The Hamster King’s post above. You have still not provided a useful definition of the notation 0.9999…0. As the post above shows, the notation is not consistent with the definitions of ordinary arithmetic, so any proof using it clearly does not apply to the real numbers. The proof is internally inconsistent and thus fails.

Jragon and Francis Vaughan, how about the “limit of a sequence” definition I gave in post #492, which would go like this:

First note that the notation .999…0, like others of the sort, isn’t already defined as any meaningful notation, leaving us free to give it any sensible definition we like, if a sensible definition can be found without raising any contradictions. (If it raised contradictions, it wouldn’t be a sensible definition, would it?)

Let .999999…0 be defined (and we will allow that the number of 9’s displayed and the number of dots displayed is immaterial) to be the limit of the sequence:
.90
.990
.9990
.99990 etc.
which seems like a perfectly fine approach to me.

What could go wrong?

ETA:
In general, if “x” is any one digit, and “y” is any one digit, and we wish to define:
.xxxxx…y (Example: .777777…4)
why not just follow this same approach?

Note that with examples like .0000…1 and .777777…4 the sequence approaches its limit from above! Does this bother anybody?

If you want to talk about 0.999…0, let’s say the result depends on how many 9s precede the 0. If n 9s precede the zero, then this is 1 - 10[sup]-n[/sup]. Then, plug in an infinite value for n.

If you live in a world where you pay attention to infinitesimal differences, the result will depend on which infinite value you plug in: if there is some particular infinite value designated “∞”, then it could be 1 - 10[sup]-∞[/sup]… but it could also be 1 - 10[sup]-(∞ - 1)[/sup], or 1 - 10[sup]-∞[sup]2[/sup][/sup], or all kinds of different things. You would have to specify exactly which infinite number of 9s preceded the 0 if you wanted to pin the result down. (Once you did so, of course, there would no longer be any ambiguity.)

Alternatively, if you live in a world where you don’t pay attention to infinitesimal differences, different infinite values for n won’t make any difference, and the result will be identified with 1 no matter what (this is all that “taking the limit” amounts to; ignoring infinitesimal differences and identifying the result with whatever Dedekind cut it induces independently of whatever infinite input is plugged in).

And all of this would work coherently either way. Even if you didn’t ignore infinitesimals, you’d have that 0.999…1 = 0.999… + 0.000…1 whenever the number of 9s in 0.999…1 matches the number of 9s in 0.999… and matches the number of 0s in 0.000…1. This would work out for infinite lengths just the same as it does for finite lengths.

And if you did ignore infinitesimals, the equation would work out if either their lengths matched up appropriately or their lengths were all infinite; in the latter case, you’d get 1 = 1 + 0.
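
(In symbols, writing H for that shared number of digits, finite or infinite — my notation, not anything established upthread:

[code]
\underbrace{0.9\dots9}_{H}1 \;=\; 1 - 10^{-H} + 10^{-(H+1)}
\;=\; \bigl(1 - 10^{-H}\bigr) + 10^{-(H+1)}
\;=\; \underbrace{0.9\dots9}_{H} \;+\; 0.\underbrace{0\dots0}_{H}1
[/code]

Taking the standard part of each side, i.e., discarding infinitesimals when H is infinite, collapses this to 1 = 1 + 0.)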

Sorry, I shall have to rethink this. It may approach its limit from below or above.

.77777…4 approaches .77777… from below.
.77777…8 approaches .77777… from above.
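
(A quick look at both sequences in exact arithmetic, my own Python sketch; .77777… itself is 7/9.)

[code]
from fractions import Fraction

def xs_then_y(x, y, n):
    """The decimal with n digits x followed by one digit y, as an exact fraction."""
    body = Fraction(x, 9) * (1 - Fraction(1, 10 ** n))  # .xx...x with n digits
    return body + Fraction(y, 10 ** (n + 1))

for n in (1, 3, 6):
    print(n, float(xs_then_y(7, 4, n)), float(xs_then_y(7, 8, n)))
# 1 0.74 0.78
# 3 0.7774 0.7778
# 6 0.7777774 0.7777778  -- the 4-tail stays below 7/9, the 8-tail above
[/code]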

There’s nothing wrong with it; it’s just the sequence (if you pardon the notation, let x and y each be a single digit, substituted into whichever decimal of the form .xxx…y you want)

a[sub]1[/sub] = .xy
a[sub]n[/sub] = a[sub]n-1[/sub] + (x - y)*10[sup]-n[/sup] + y*10[sup]-(n+1)[/sup]

It’s also ultimately basically the same as the sum-plus-limit definition I used; I don’t think there’s much of a problem with either definition. Both are pretty easy to extend to the general case (i.e., .xyzxyz…abc), though yours might be a bit harder to generalize to something like 3.14159…0.
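
(A sanity check of that recursion, my own Python sketch in exact fractions; each term should read as n copies of x followed by a single y.)

[code]
from fractions import Fraction

def a(n, x, y):
    """a_1 = .xy; a_n = a_{n-1} + (x - y)*10^-n + y*10^-(n+1)."""
    term = Fraction(10 * x + y, 100)
    for k in range(2, n + 1):
        term += (x - y) * Fraction(1, 10 ** k) + y * Fraction(1, 10 ** (k + 1))
    return term

for n in (1, 2, 5):
    print(n, float(a(n, 7, 4)))  # 0.74, 0.774, 0.777774
[/code]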

ETA: Mine does still have the problem with appending a zero to the end, but that can be fixed easily with the (hilariously tautological) construction:

<infinite decimal> + lim (n->inf) 10[sup]-n[/sup] - lim (n->inf) 10[sup]-n[/sup]