.999 = 1?

Well, not a geometric series, merely an infinite one.

The problem is that dealing with infinity often clashes with intuition. Cantor showed that one or the other had to go, and since infinity was a useful notion, intuition got the boot.

Here, we’re asking how to give meaning to an infinite string of digits. Well, how do we give meaning to a finite string of digits? The answer is by treating them as fractions, in the following manner:

167.8945 = 1∙10[sup]2[/sup] + 6∙10[sup]1[/sup] + 7∙10[sup]0[/sup] + 8∙10[sup]-1[/sup] + 9∙10[sup]-2[/sup] + 4∙10[sup]-3[/sup] + 5∙10[sup]-4[/sup]
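To make the recipe concrete, here's a quick Python sketch (my own illustration, not part of the argument above) that rebuilds 167.8945 from exactly those digit-times-power-of-ten terms, using exact fractions so no rounding sneaks in:

```python
from fractions import Fraction

# Each digit of 167.8945 paired with its power of ten, as in the expansion above.
digits_and_powers = [(1, 2), (6, 1), (7, 0), (8, -1), (9, -2), (4, -3), (5, -4)]

total = sum(Fraction(d) * Fraction(10) ** p for d, p in digits_and_powers)

print(float(total))                    # 167.8945
print(total == Fraction("167.8945"))   # True: the digit sum is the number
```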

If you’re content to deal with only rational numbers, you can extend to an infinite string without worrying about limits, because each rational number will produce a repeating sequence or terminate after some point. However, if you wish to address a number like √2 or [symbol]p[/symbol], more work is necessary.

This raises the question of how we can treat such a number. In the case of √2, we can easily find a sequence of numbers approaching √2 from either side. For example, since 1[sup]2[/sup] < 2, 1.4[sup]2[/sup] < 2, 1.41[sup]2[/sup] < 2, etc., we can define a sequence of finite decimals 1, 1.4, 1.41, 1.414,… each of which is smaller than √2. Similarly, each of 2, 1.5, 1.42, 1.415,… is larger than √2. This implies that the “correct” expression should look like 1.41421…
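For what it's worth, the squeeze just described is effectively an algorithm. Here's a rough Python sketch (the function name and the integer-only comparison are my own choices): at each step, pick the largest next digit whose square still stays below 2.

```python
def sqrt2_digits(places):
    """Build the decimal expansion of sqrt(2) one digit at a time."""
    approx = ""
    for k in range(places + 1):
        for d in "9876543210":
            candidate = approx + d
            # Test (candidate / 10^k)^2 < 2 using integers only, to avoid rounding.
            if int(candidate) ** 2 < 2 * 10 ** (2 * k):
                approx = candidate
                break
    return approx[0] + "." + approx[1:]

print(sqrt2_digits(5))  # 1.41421
```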

How can we interpret this? Well, let’s try the same way as earlier, so √2 = 1∙10[sup]0[/sup] + 4∙10[sup]-1[/sup] + 1∙10[sup]-2[/sup] + 4∙10[sup]-3[/sup] + 2∙10[sup]-4[/sup] + 1∙10[sup]-5[/sup] + …
Now, we just add them up, right? Not so fast. Addition as defined for rational numbers is a “binary operation,” which means that it takes in two numbers and spits one back out. By repeating the process, associativity tells us we can add any finite collection of numbers. But there are infinitely many numbers to add in the expansion above. Because of this, we must define a new meaning for addition, one which allows us to add infinitely many things. This is easier said than done.

For example, we can add any finite number of ones: 1 + 1 + 1 + … + 1. If we try to carry the sum on forever, 1 + 1 + 1 + 1 + 1 + …, we clearly run into difficulty, since the running total eventually exceeds any number you care to name.

So what’s the difference between this and our earlier expansion? One obvious difference is that the summands in the expansion of √2 get closer and closer to zero. So let’s just agree to throw out any infinite sum where that doesn’t happen. That’s not quite enough, since the summands of 1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + … shrink to zero, yet the sum runs into the same problem as the infinite sum of 1s. How can we differentiate between this and the expansion of √2? This is a lot trickier. The most common method, the introduction of limits, was established in the middle of the 19th century. Only then was the question of what an infinite decimal expansion actually means made into a well-posed question.
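To see the distinction numerically, here's a sketch in plain Python floats (the helper name is mine): the summands of both series shrink to zero, but only one of the partial-sum sequences settles down.

```python
def partial_sums(term, n):
    """Running totals of term(1) + term(2) + ... + term(n)."""
    total, out = 0.0, []
    for k in range(1, n + 1):
        total += term(k)
        out.append(total)
    return out

# Harmonic series: summands shrink to zero, yet the totals keep climbing.
harmonic = partial_sums(lambda k: 1.0 / k, 10**6)
print(harmonic[999], harmonic[-1])    # roughly 7.49, then 14.39: no settling

# Expansion of sqrt(2): the totals stabilize almost immediately.
digits = [1, 4, 1, 4, 2, 1, 3, 5, 6, 2]
sqrt2 = partial_sums(lambda k: digits[k - 1] * 10.0 ** (1 - k), len(digits))
print(sqrt2[-1])                      # about 1.414213562
```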

Hey Senegoid,

Thanks for all your responses. You’re obviously a very intelligent person. You don’t have to talk down to me like that. I have an MS in Computer Science, studied physics for quite some time, and have always had an avid interest in all the sciences, including math. I know perfectly well intuition is not always right. It’s one of my favorite things about Einstein’s theories of Relativity.

I lost sight of, or perhaps never totally internalized, the modern definition of decimal notation relying on limits, so I did not see how .999… necessarily equals 1. I wasn’t arguing simply because it was not intuitive to me; I had read some of the proofs given here and elsewhere and just still wasn’t convinced. But if the very definition of rational numbers more or less includes this, then it is silly to argue.

So are you trying to beat into my head that we should all just accept things without question?

I see that President Gentle wrote just about the same essay that I just did, about defining a new meaning for addition of infinite series, and at just the same time!

GMTA!

Oh yes I do! The Devil makes me do it!

Can you prove that?

I have a few very intelligent people too!

:confused:

So what I gather from PJG’s explanation and from some other articles is that the definition we give to rational numbers in some way elegantly helps us talk about and use irrational numbers. Before this, up to the mid-1800s, many mathematicians used infinitesimals, but they were “messy” or “troublesome”? The introduction of limits provided not only a consistent way of describing rationals and irrationals… it was elegant. Now, I suppose one can argue the relative elegance of the limit definition… I will not! Perhaps one can at least say it is vastly more elegant than infinitesimals. I have also read somewhere that the move from infinitesimals to limits was motivated greatly by the general dislike of the very vague notion of an infinitesimal. I certainly won’t argue that either, but at some level in the back of my mind I had this formulation that .999… = 1 was in essence a preference for limits over infinitesimals, and perhaps not a necessary one but a preferred or more convenient one. To some degree this idea sticks with me, but I guess I need to further understand these issues.

I know I can be irritatingly stubborn.

I once had a two-week-long discussion of the chicken-or-egg question with a biologist. We were both arguing from an evolutionary perspective, yet could not agree. He said chicken; I said egg. It was an interesting discussion, and in the end I think we agreed the question is not well enough defined (what exactly counts as a chicken, and what as an egg) to answer. It is interesting to me to see how these things often come down to semantics, or how you define things, which in many cases can be somewhat arbitrary.

Vagueness is exactly it. The problem with infinitesimals is that they weren’t given a rigorous grounding until Robinson created non-standard analysis in the 1960s.

Infinitesimals worked fine for Newton and Leibniz. For example, the argument for giving the derivative of y=x[sup]2[/sup] is fairly elegant and standard:

If x increases by some nonzero amount [symbol]D[/symbol]x, then y increases by the difference between (x + [symbol]D[/symbol]x)[sup]2[/sup] and x[sup]2[/sup]. This is [symbol]D[/symbol]y = (x[sup]2[/sup] + 2x[symbol]D[/symbol]x + ([symbol]D[/symbol]x)[sup]2[/sup]) - x[sup]2[/sup] = 2x[symbol]D[/symbol]x + ([symbol]D[/symbol]x)[sup]2[/sup] = [symbol]D[/symbol]x(2x + [symbol]D[/symbol]x).

This means that the slope is [symbol]D[/symbol]y/[symbol]D[/symbol]x = 2x + [symbol]D[/symbol]x. But this implicitly uses the fact that [symbol]D[/symbol]x is nonzero, or else we couldn’t divide by it.

Now, in order to find the derivative, Leibniz argued that [symbol]D[/symbol]x could be replaced by zero, so the derivative is the expected 2x. The problem is that the process required [symbol]D[/symbol]x to be simultaneously zero and nonzero. There is obviously no such number. Mathematicians tried to hold on to this formulation for a century, but it became more and more difficult as calculus evolved into multiple dimensions and the complex numbers.
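A quick numeric illustration of why the recipe gives the right answer anyway (this is a modern, limit-flavored check, not Leibniz’s own reasoning): the quotient [symbol]D[/symbol]y/[symbol]D[/symbol]x = 2x + [symbol]D[/symbol]x closes in on 2x as [symbol]D[/symbol]x shrinks, without [symbol]D[/symbol]x ever being zero.

```python
def slope(x, dx):
    # The difference quotient for y = x**2, valid only for nonzero dx.
    return ((x + dx) ** 2 - x ** 2) / dx

x = 3.0
for dx in (0.1, 0.001, 0.00001):
    # Prints about 6.1, 6.001, 6.00001 (up to float noise): heading to 2*x = 6.
    print(dx, slope(x, dx))
```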

Cauchy introduced the limit as an attempt to introduce rigor into the foundations of calculus, and other mathematicians picked up the ball and ran with it. The introduction of rigor allows the formulation of questions to ask that couldn’t be conceived prior. It is certainly possible to put infinitesimals on a similar footing, and in fact, as mentioned above, they were in the 1960s. However, this required the introduction of a new number system - the hyperreals. There’s been at least one introductory calculus textbook centering on this system to the exclusion of limits, but IMO, students actually have an easier time understanding the algebra of limits than they do the algebra of the hyperreals.

Long story short: infinitesimal arguments are fine, but the historic arguments lack the rigor that is expected in modern (post-1900) mathematical work. (I’ll leave aside for the moment the issue as to why such rigor would be desired.)

I missed this in my last post. This is almost exactly the point: in mathematics, definitions are everything. It’s not so much an issue of semantics as the fact that it is impossible to pose some mathematical questions without a firm definition.

Mathematically, it’s not so arbitrary: you ask whether your definition behaves in all the expected ways. So a working definition of an infinite sum should be associative and commutative, if at all possible. Also, it would be nice if the definition agrees with other accepted cases. As I mentioned earlier, we don’t need limits to deal with rational numbers. However, once we have come up with a workable definition for an infinite decimal expansion, it had better give the expected results when applied to rational numbers.
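As a small sanity check of that last requirement (my own example, in exact arithmetic): defining the repeating decimal 0.333… as the limit of its partial sums does recover the rational number 1/3, since the leftover gap shrinks by a factor of ten at each step.

```python
from fractions import Fraction

third = Fraction(1, 3)
partial = Fraction(0)
for k in range(1, 11):
    partial += Fraction(3, 10 ** k)   # add the next 3/10^k digit
    print(k, third - partial)         # 1/30, 1/300, 1/3000, ... toward 0
```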

Now, there might be multiple ways to do this. In that case, mathematicians will explore any such way that is discovered, and it could be possible to come up with multiple extensions of the same system which don’t necessarily give the same mathematical object. For example, Euclid developed a consistent geometry using five postulates. However, if we replace the parallel postulate with something else, we can get a consistent geometry which differs from Euclidean geometry in interesting ways. For instance, in hyperbolic geometry, the area of a triangle depends entirely on the measures of its angles (and can never exceed [symbol]p[/symbol] in the most common models).

Indeed… mathematics is the last place I would like to see any arbitrary definitions. I may not be a mathematician, but I hold it very dear to me. It seems so pure and uncontaminated by the messiness of the “real” world; it lies outside of it in a Plutonian sense, I think. In theory it could exist on its own, whatever that would mean exactly… but the rest of the world could not exist without it. It dictates the framework in which everything must occur. At least this is the way I view it.

In this regard, limits may be the best we have to build on… yet something about their definition does not yet seem elegant enough to me. As pointed out to me earlier (unnecessarily, I might add), I know we can’t always trust our intuitions on these matters, but something about it just doesn’t sit right with me and never has. It seems a little bit forced. It’s like our “normal” rules of math break down at the limit, and we can intuitively see what the answer should be, but have to add in these extra rules regarding limits to arrive at an answer. To me this smacks of something still missing in our understanding, rather than limits being a fundamental Plutonian construct of math. Maybe it’s just me.

Going back a bit to the idea of a “proof,” I sometimes view it as a legal argument.

Suppose that .9999… < 1. Well, what is the difference? Where is the difference? Explicitly, it isn’t in the first four decimal places. Is it in the fifth decimal place? No, because I can easily extend the description to .99999… Okay, is it in the 50th decimal place? No, because (with more effort than I care to expend) I can extend the description to 50 places.

Any time you say, it is in the “nth” decimal place, I refute this by extending the description to n+1 places.

In “legalistic” terms, I’ve got you beat. You can’t describe the difference, because any time you try to state it, I can eliminate it.
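That game can even be scripted. In exact arithmetic (my own framing of the argument above), whatever decimal place n you point to, the truncation with n+1 nines already pushes the remaining gap past it:

```python
from fractions import Fraction

def gap_after(n):
    # 0.99...9 with n+1 nines; the gap to 1 is exactly 1/10^(n+1).
    nines = Fraction(10 ** (n + 1) - 1, 10 ** (n + 1))
    return 1 - nines

for n in (4, 50):
    print(n, gap_after(n))  # 1/100000, then 1/10^51: the difference keeps retreating
```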

This has led some people to “constructive mathematics,” where a number isn’t actually “a number” unless it can be constructed. Infinite expansions cannot be constructed (because I don’t have “forever” in which to write down more 9’s). There is a counting number n[sub]1[/sub] and a counting number n[sub]2[/sub] such that n[sub]1[/sub]/n[sub]2[/sub] is close enough to [symbol]p[/symbol] for any useful purpose. (Can any real-world engineer possibly require knowing [symbol]p[/symbol] to more than 20 decimal places?)

Hmm, perhaps the word you’re going for is platonic?

[quote=“Trinopus, post:153, topic:27517”]

Going back a bit to the idea of a “proof,” I sometimes view it as a legal argument.

Any time you say, it is in the “nth” decimal place, I refute this by extending the description to n+1 places.

In “legalistic” terms, I’ve got you beat. You can’t describe the difference, because any time you try to state it, I can eliminate it.
[/quote]

People will hate me for this… but okay, what if I want to say there is no “space” between them, that they are literally “touching” while not occupying the same space/place on the number line?

An obvious response to this would be, well if there is no difference between A and B, then A - B = 0 and therefore A = B.

But what if I were to say, well, actually A - B = 1/infinity? Perhaps there is a subtle distinction between saying there is no space between two things and saying they occupy the same space. But points have no dimension, right? So that leads us back to: they must be in the same space again. What if I argue points do have a dimension? Say, 1/infinity? Perhaps this is a number we don’t fully understand (a bit of a Planck’s constant of the math world?). I don’t know; it probably leads to contradictions. I’m sure many great mathematicians have already been down this road and reported the awful conditions. But I haven’t, and I find it interesting to toy with these ideas.

Yes, thank you. LOL don’t know why I said plutonian.

That depends on how you choose to interpret decimal notation into the hyperreals. You could interpret a.bcd… as the hyperreal corresponding to <a, a.b, a.bc, a.bcd, …> (i.e., as “a.bcd… truncated to the omega-th decimal place”, where omega is the canonical hypernatural <0, 1, 2, …> which generates the whole structure). Then 0.999… would be different from 1; their difference would be 1/10^omega (i.e., “0.000…1, with omega many 0s before the 1”). And this is probably a fair formalization of the intuition which erik150x is, in their own way, attempting to express and explore.

Of course, this wouldn’t be the usual way of interpreting infinite decimal notation. But it’s a not unsensible one, for that context and erik150x’s purposes.
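To make the sequence picture slightly more tangible, here's a toy Python sketch (it ignores the ultrafilter machinery entirely, so it illustrates the idea rather than constructing the hyperreals): represent a number by its sequence of decimal truncations and subtract coordinate-wise.

```python
from fractions import Fraction

def truncations(n):
    """First n truncations of 0.999...: <9/10, 99/100, 999/1000, ...>."""
    return [Fraction(10 ** k - 1, 10 ** k) for k in range(1, n + 1)]

nines = truncations(5)
diff = [1 - t for t in nines]   # coordinate-wise 1 - 0.999...
print(diff)                     # [1/10, 1/100, ...]: nonzero everywhere, i.e. "1/10^omega"
```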

My 2 cents, and 2 tenths…

I work as a machinist. If I’m machining a feature, say a slot width, and the print calls for the slot to be .999" +/- .0002", and I make the slot 1 inch wide, guess where the part goes? In the scrap bin. Why? Because .999 does not equal 1.

It’s not just you. Some of the greatest mathematicians of the 19th century felt the same way.

And you have a very good point. There’s a reason that infinitesimals stuck around for so long, despite their lack of rigor. They captured the intuition of the mathematicians using them. However, they proved to be stifling in the long run.

Believe it or not, the limit definition makes intuitive sense to working mathematicians. It does an excellent job of encapsulating everything that ought to be true about the real numbers. But more importantly, it generalizes to other situations in a very simple fashion. Limits make sense wherever we have a geometric situation with a concept of “nearness.” In fact, in the most abstract situation, the definition is actually much simpler. It’s only because we add so much more to the geometry in specializing to the real number system that the definition becomes more complex. But since the abstract definition is so intuitively nice, there’s not much in the way of worry about the general concept of limits. There is worry about teaching limits, however, since the formal [symbol]e[/symbol]-[symbol]d[/symbol] definition of a limit is extremely daunting to a freshman calculus student.

The title of the thread is unfortunate here. No one is arguing that 0.999 = 1, but that 0.999999999… = 1, where the sequence of 9s never stops.