.999 = 1?

In terms of the real numbers and the standard interpretation of decimal strings into them, both 0.999… and 1.000… are representations of the same quantity, 1. One wouldn’t normally say either is any worse, less precise, or less legitimate than the other. An algorithm which produces 0.999… is just as legitimate as any other. The decimal system comes with no claim that every real is given a unique representation; the nonzero reals with terminating decimal expansions (i.e., nonzero multiples of powers of 10, negative powers included) have two representations, and the rest have one.
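(For concreteness, the familiar manipulation establishing the identity, which isn’t in the original post:

x = 0.999…
10x = 9.999…
10x - x = 9.999… - 0.999… = 9
9x = 9, so x = 1

This takes the arithmetic of infinite decimals for granted, which is exactly the point examined below, but it shows why the two strings must denote the same real.)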

In a sense, this is topologically unavoidable: the space of digit sequences is totally disconnected, while the real line is connected, so any continuous map from sequences of digits onto the real numbers must involve some doubling-up, or else there would be a discontinuity at the digit-changing border.

We can see the shadow of this phenomenon in another way: it’s not actually possible to perform arithmetic on infinite decimal strings in a total, computable fashion, in the normal sense in which that claim would be interpreted, specifically because the computation will get hung up at the border where a digit changes its value, which is to say, in the space of ambiguity between the D999… and (D + 1)000… representations.

For example, consider the problem of adding a string starting with 0.444… with lots of 4s (not known ahead of time to be infinitely many, just observed so far to be a lot) to a string starting with 0.555… with lots of 5s. If there ever comes a point where, say, the two strings suddenly simultaneously spit out 9s, the result will be greater than 1. If there ever comes a point where both strings spit out 0s, the result will be less than 1. But so long as you keep seeing 4s on the one and 5s on the other, you can’t rule out either of these scenarios as future possibilities. So you can’t, in finite time, decide even the first digit of your output (whether it should be 0 or 1).
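To make this concrete, here is a minimal sketch (mine, not from the thread; the function name and the cutoff parameter are just for illustration) of a Python procedure that tries to decide the units digit of such a sum, keeping exact rational bounds on what the unseen digits could still contribute:

from fractions import Fraction
from itertools import islice, repeat

def integer_part_of_sum(a_digits, b_digits, max_digits=50):
    # a_digits and b_digits stream the decimal digits of two reals in [0, 1).
    # Returns 1 or 0 once the digits seen so far force an answer, or None
    # if the streams stayed on the ambiguous border for max_digits digits.
    partial = Fraction(0)
    place = Fraction(1, 10)
    for a, b in islice(zip(a_digits, b_digits), max_digits):
        partial += (a + b) * place
        tail_max = 2 * place   # each unseen tail contributes at most place
        place /= 10
        if partial >= 1:
            return 1           # already at least 1, no matter what follows
        if partial + tail_max < 1:
            return 0           # even two all-9 tails cannot reach 1
    return None                # still stuck on the border

print(integer_part_of_sum(repeat(4), repeat(5)))   # None

On 0.444… and 0.555… the partial sum after n digits is 1 - 10^-n, with up to 2 * 10^-n still possible from the tails, so neither test ever fires: the procedure can never commit to a first output digit, exactly as described above.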

[Other representations of infinite-precision real arithmetic get around this problem (e.g., the computable analogue of Dedekind cuts, or Cauchy sequences with more freedom in how values are represented); it’s just that decimal strings, with their obligation to carry, are horrendously awkward for defining arithmetic.]

All this addresses the standard mathematical interpretation. As always, there are other ways of interpreting the notation, on which 0.999… might be taken to denote some quantity strictly smaller than 1. But this would not be the standard.

Thank you, Indistinguishable. As sometimes happens, it will take me a while to comprehend your answer :)

No problem.

BTW: “Imprecise” is the wrong word there. What I was referring to is the expression “2/2 = .999…”. Using the notation .999…, though accurate, is… well, I’m not sure what. Just unclear in some way.

If you allow this sort of manipulation to be a generally legal operation, you can show that the standard geometric sum formula works even for divergent series:

S = a + ax + ax^2 + ax^3 + ax^4 + …
S = a + x(a + ax + ax^2 + ax^3 + ax^4 + …)
S = a + xS
S - xS = a
S = a/(1-x)

So, series like 1-1+1-1+… (a = 1, x = -1, giving 1/2) or 1+2+4+8+… (a = 1, x = 2, giving -1) can be summed with this method. It doesn’t work for 1+1+1+1+…, though, since x = 1 puts a zero in the denominator.
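A quick way to play with this (a throwaway sketch; geometric_sum is just a name made up for this post) is to evaluate the closed form directly with exact rationals:

from fractions import Fraction

def geometric_sum(a, x):
    # The "sum" a/(1 - x) that the manipulation above assigns to
    # a + a*x + a*x^2 + ..., whether or not the series converges.
    if x == 1:
        raise ZeroDivisionError("pole at x = 1: 1 + 1 + 1 + ... gets no value")
    return Fraction(a) / (1 - x)

print(geometric_sum(1, -1))                              # 1/2
print(geometric_sum(1, 2))                               # -1
print(geometric_sum(Fraction(9, 10), Fraction(1, 10)))   # 1

The last line is this thread’s series: 0.999… read as 9/10 + 9/100 + … has a = 9/10 and x = 1/10, and the closed form hands back exactly 1, no divergence trickery required.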

To a mathematician, it’s perfectly clear. It’s just an instance of a value having multiple representations (much like 2/2 = 3/3 = 4/4; if values didn’t have multiple representations, it would be pointless to talk about equations in the first place :))

I agree, it can be surprising to a non-mathematician. It leads to threads like this one. But there are good reasons to use this convention regardless. It would be a pain, and largely pointless, to add in a special rule saying “Hey, any time you might produce an infinite trail of 9s, you don’t get to call that a well-formed decimal; instead, you have to specially check for it and do an ‘infinitary carry’ into a trail of 0s instead.” The moment you proposed such a rule, everyone else would say “Hey, wouldn’t it be convenient if we just let such things count as well-formed decimals already, meaning the same thing as what this rule would have us rewrite them as anyway?”. And then we’d end up back here.

It occurred to me that my comment (about infinite series) may have been too oblique.

erik150x has been trying to show there is a last place in the series “0.9~”. He has been doing so using an infinite series like [0, 1]: a series which does have a last place.

Infinite series with first and last places are fine, but not all infinite series are alike. For example, [0, 1) and 1, 2, … (the series of natural numbers) are infinite series without last members. So one problem here has been trying to model an infinite series without a last place using a series that does have one.

Yup. And in the same way, any series defined by a linear recurrence relation (with no nonzero constant solutions) would, on this account, sum to the value at x = 1 of the closed-form rational expression (with no pole at 0 or 1) whose Taylor coefficients match its terms, regardless of convergence in the standard sense.

So, for example, the general Fibonacci series whose first two values are a and b would sum to (a + (b - a)x)/(1 - x - x^2) evaluated at x = 1, which is to say, -b. In particular, letting the sum of the standard Fibonacci series be F, we would have

F           = 0 + 1 + 1 + 2 + 3 + 5 + ...
F - 0       = 1 + 1 + 2 + 3 + 5 + 8 + ...
F + (F - 0) = 1 + 2 + 3 + 5 + 8 + 13 + ... = F - 0 - 1

Adding the first two lines termwise gives the third, so 2F = F - 1, and thus the Fibonacci series sums to -1 (on this account, while of course diverging to infinity on the standard account).
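(If you want to check that closed form mechanically, here is a throwaway sketch, assuming sympy is available; none of this is from the posts above:

import sympy as sp

a, b, x = sp.symbols('a b x')
G = (a + (b - a) * x) / (1 - x - x**2)   # generating function for a, b, a+b, a+2b, ...

coeffs = sp.Poly(sp.series(G, x, 0, 7).removeO(), x).all_coeffs()[::-1]
print(coeffs)        # [a, b, a + b, a + 2*b, 2*a + 3*b, 3*a + 5*b, 5*a + 8*b]
print(G.subs(x, 1))  # -b

The first print confirms the Taylor coefficients obey the Fibonacci recurrence with initial values a and b; the second evaluates at x = 1 and gives -b, so a = 0, b = 1 yields the -1 above.)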

With the caveat that I’m not a mathematician, I think it’s clear and precise enough, but pointless and potentially confusing to the reader. If I saw somebody write 2/2=.999… I’d be looking through the proof frantically searching for why the hell that particular representation of the number was chosen. You have to admit, despite its clarity, it would be pretty damn odd for somebody to not use one of the common canonical representations in the general case.

Oh, sure. It would also be pretty odd to write “1.000000…” or “3^2 - 2^3” without some particular reason. Most of the time, you’d just write “1”. It would even be odd for a mathematician to write “0.3333…” where “1/3” would do.

The point of the infinite decimal representations is not to always use them (they are actually quite unwieldy for many, dare I say most, of the things most mathematicians are interested in); the point is only that they are theoretically there for those cases in which you might want to analyze things in terms of infinite decimal representations, even if you wouldn’t normally notate those same values in terms of those particular representations.

One also needs a lot of notation to represent each of the reals. (And infinite decimal notation gives you a lot.)

Yes, although any particular real you are actually interested in, you usually have some reason to be interested in which serves as better notation than describing its decimal expansion. For example, I could try to convey to you my interest in the particular number “1.41421356237…”, but unless you already were familiar with this patch of decimal-land, this particular notation would do little to convey the idea I had in my head; it has the right extension, but not the right intension. Rather, a much more direct expression of the concept I have in mind is “square root of two”. In fact, I don’t actually have any means of expressing an infinite decimal except through some indirect finite description anyway. It is only notation in a theoretical sense.

Of course, the concept of infinite decimal notation exists for a reason, and those reasons are along the lines of your post, mixed with a bit of historical caprice (e.g., the whole base ten thing). I don’t mean to deny that. I’m just pointlessly argumentative.

So never mind all that. Good night; I’m off to bed.

[I suppose I may be missing the point; you may just be making an observation about cardinality. :)]

Indeed. I’m not sure how one could argue with my post; it is a simple truth. You need either nondenumerably many syntactic objects (basic symbols like “1”, “+”, whatever), or the ability (loosely speaking) to put them into strings denumerably long, to represent each and every real.
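(A minimal sketch of the counting side of this, mine and not from the post: assuming a finite alphabet for simplicity (a denumerable alphabet needs a dovetailing tweak but gives the same count), the finite strings over it can all be laid out in a single list, which is what makes them only denumerably many.

from itertools import count, islice, product

def all_finite_strings(alphabet):
    # Enumerate every finite string over alphabet, shortest first.
    for n in count(0):
        for s in product(alphabet, repeat=n):
            yield ''.join(s)

print(list(islice(all_finite_strings("01"), 8)))
# ['', '0', '1', '00', '01', '10', '11', '000']

Since this single list catches every finite expression, finite strings can name at most denumerably many reals; hence the need for one of the two escape hatches above.)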

As luck would have it, I really only ever care about finitely many numbers at any one time, anyway. Be that as it may, there are going to be some numbers, relative to some notation, that cannot be described in finitely many symbols. (Blah blah, appropriate disclaimer about having only denumerably many symbols.)

Which means there are some reals you cannot express.

Yes. It was an observation about infinite decimal expansions and why we need them. It wasn’t meant to be anything other than an interesting observation that not everyone would be aware of.

Ah, but if I may be slightly more pointfully argumentative on my area of expertise, this assertion is ubiquitous, but doesn’t actually cohere. See, for example, Joel David Hamkins’ answer here.

Specifically, the argument that there are uncountably many reals goes like this: there is a fixed, definable procedure D such that, for any countable set of reals S, D(S) is a real not in S.
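(For concreteness, here is a minimal sketch of such a D, my own illustration rather than anything from the thread, using the usual trick of writing only the digits 4 and 5 so the dual-representation issue from earlier can’t interfere:

from itertools import islice, repeat

def diagonalize(enumeration):
    # enumeration[n] iterates over the decimal digits of the n-th real in S.
    # Yields the digits of a real differing from the n-th real at place n.
    for n, digits in enumerate(enumeration):
        nth_digit = next(islice(digits, n, None))
        yield 4 if nth_digit == 5 else 5   # never 0 or 9, so no 0.999... escape

S = [repeat(d) for d in range(10)]          # 0.000..., 0.111..., ..., 0.999...
print(list(islice(diagonalize(S), 10)))     # [5, 5, 5, 5, 5, 4, 5, 5, 5, 5]

Feed in any countable S and the output differs from its n-th member in the n-th decimal place, so it lies outside S.)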

Therefore, applying D to the set of reals expressible in some countable language L, we find that D(L) (to abuse notation slightly) is a real not expressible in L.

But clearly, anyone who can define L can define D(L). This argument can’t show that D(L) is undefinable simpliciter; it can only show that D(L) is undefinable for those who can’t define L.

If we were to apply this to our own language from within that language, it would produce the paradoxical result that we both can and cannot define D(our language) within our language.

The formal fallout is Tarski’s indefinability theorem: on the standard account of things, we can’t define the concept of “definability in L” in L. So we can’t even express the concept of “expressible (in our language)” in our language. We can’t hope to make the claim “There are reals which can’t be expressed in our language”, because the natural follow-up is “…for example, the one produced by diagonalizing against our language. Oh, wait, I’ve just expressed that, haven’t I?”.

All we can say on the standard account is that anyone who can express “expressibility in language L” is able to express something not expressible in language L (namely, “expressibility in language L”, which can also be coded up into a real)… but they must be doing this from within some other language. They can’t be consistently saying this of their own language.

(Now, we could hope to give a different account of things on which we can consistently define the concept of “expressible in our language” in our own language, but only with radical reformulation of our ambient perspective (particularly regarding the treatment of concepts like negation and implication); either way, the argument that we could say, of our own language, that we have established by Cantorian arguments that there is something inexpressible in that language, will not stand.)

(There is still the interesting observation of Cantor’s theorem; it’s just that that one line’s particular perspective on it doesn’t pan out, for subtle, underappreciated reasons. Everything else you said was, of course, correct.)

And blah blah Skolem’s paradox yada yada expressibility.

Let me dodge all of those issues and clarify. You said:

So your language is countable (assuming the right things about the number of syntactic objects). And make of that what you will with regard to what it can say. (Really, in context I was responding to a comment you made which, I took it, implied the countability of your language. The context was exactly that: comparing countable and uncountable sets. Make of that what you will with regard to expressibility; it is non-trivial, I’ll give you that. I’m not sure it is fair to consider me as wanting to say anything too substantive, as opposed to just pointing out that, on your description of your language, it seemed countable; and again, that is an interesting fact about language. And it is interesting for the sorts of reasons you mention.)

And I’ll give you that I am unduly prickly and talk too much in these threads.

(The “Cantor’s proof shows there is a real I can’t express; specifically, the one obtained by diagonalizing against my language” thing is one of my biggest pet peeves (and the door to the most interesting fallout from that line of thought), so I leapt all over the opportunity to discuss it. In general, I feel like I’ve come across as in conflict with you in our sparse interactions in this thread, but I’ve only meant to build non-rancorously on what you’ve said. You’re alright by me, and kumbaya, and such.)

I submit you can express any real; you just have to use the right set of characters (I suggest the set of Unicode characters):

“Y’know, those reals we can’t express with mathematical notation!”

ETA: For a specific one: “Y’know, that one?”

See? I just did it!