.999 = 1?

Sorry to be coming into this late, but:

Actually, this is not a conventional formal definition of an infinite sum. If we were forced to use it, we would have to conclude that the value of any infinite sum is unknowable, since one can never finish adding the terms.

The more conventional definition of that notation is:

sum(i = 1 to “infinity”) [9/10^i] := lim (N->infinity) sum(i = 1 to N) [9/10^i]

which permits you to get a value for this sum in finite time, but forces you to use limits.
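As a rough illustration of what that definition buys you (a minimal sketch in Python; the cutoff values of N and the use of exact fractions are my choices for the demonstration, not part of the definition itself):

```python
from fractions import Fraction

# Partial sums of 9/10 + 9/100 + ... + 9/10^N, computed exactly.
for N in [1, 2, 5, 10]:
    partial = sum(Fraction(9, 10**i) for i in range(1, N + 1))
    print(N, partial, float(1 - partial))  # the gap from 1 is exactly 1/10^N
```

Each finite partial sum falls short of 1 by exactly 1/10^N, and the limit definition identifies the whole sum with the single value that shortfall closes in on, namely 1.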

I’d take you one step further, and suggest that without limits, the notation .9999… doesn’t actually mean anything at all. Of course, using notation that doesn’t make sense without limits, and then rejecting the concept of limits after the fact, can lead to inconsistencies.

erik150x, I never mocked or made fun of you. The thing you need to know is that people using the conventional definitions of limits and “real numbers” aren’t wrong; they’re just using definitions that are in tune with what they are interested in. They are absolutely correct, according to their definitions, which is as correct as it gets in mathematics.

The only thing they are guilty of is persisting to use mathematical language in accordance with definitions which you are not that interested in. But this is alright; this is their prerogative. (And there are good reasons for them to be interested in those particular natural definitions; they present a convenient and useful system of calculation. But you seem to already accept this, which is great.)

On the other hand, what they need to understand is that you aren’t really wrong, either. For the most part, you’ve seemed to me not particularly crankish (there are those who are a lot worse). You’ve been focusing on a rather natural and useful idea, albeit you, as a non-mathematician, may have struggled to express or formalize it. You are also absolutely correct, according to your definitions, which is as correct as it gets in mathematics.

The only thing you are guilty of is persisting to use mathematical language in a non-conventional way. But this is alright; this is your prerogative. (And this is also often how mathematics progresses). So long as you are willing to acknowledge the non-conventionality of your use of the language and understand the conventional use and how it relates to yours, all is well.

This thread is full of people saying things that aren’t incorrect, except to the extent that they flatly proclaim that other people are incorrect.

[I would also note that arithmetic ignoring infinitesimals and arithmetic paying attention to infinitesimals are not disconnected, such that there is any sense in falling into one “camp” or the other; they are as closely related as modular and integer arithmetic, or the studies of linear and analytic functions. Both these systems shed light on each other. Both can be used to help analyze the other. The better we understand either, the better we understand both. There’s no reason either one should be ignored in favor of the other, except the vagaries of personal interests.

For example, epsilon-delta arguments are all about the passage between the world where we don’t and the world where we do ignore infinitesimal differences, even if they are not often presented this way. And understanding this fact will give you a better understanding of the foundations of calculus, make you sharper at recognizing how it relates to other topics in mathematics, etc. In mathematics, there is a tendency for knowledge to interconnect. It’s all good. You ignore this at your peril.]

Keep in mind that that’s what mathematics is. The only way anything is mathematically true is by assumption (either directly or indirectly). Every fact in mathematics is like the fact that bishops move diagonally: it’s true because, and only because, you made it a rule of the game, and having done so it is unquestionable, except insofar as you can use different rules if you’re interested in looking at some other game.

Nah; there is no hatred in mathematics. (Okay, yeah, we all hate “word problems.” But…)

Seems to me you’re trying to define a number that has only some of the properties of “zero” but not all of its properties. It has no width…but it has a little width.

But, as I argued, from a legalistic standpoint, you can never “win” in describing this number, because I just shrug and add another “0” in front of it. No matter how you describe it, I can always trump it.

I have a trivial mechanism by which I can “construct” counter-examples, but you have no means by which you can “construct” the number you’re looking for.

And, as I’ve been suggesting, if someone disagrees, you just ask them to point to the exact decimal place where this differs from “zero.” The tenth decimal place? The millionth? The googolplex-to-the-googol-squared place? I simply smile, and say, “Plus one…”

Oh, right: we hate those, too!

I’m not sure I understand what you are getting at, Trinopus, but I will say, on the interpretation I gave above, 1 - 0.999… will differ from 0 in the “infinity”th place after the decimal point. And, yes, dividing this by 10 will produce an even smaller number, differing only in the (“infinity” + 1)th place. And squaring that will produce an even smaller number, differing only in the (“infinity”^2 + 2·“infinity” + 1)th place.

That’s alright. That’s just how this interpretation runs. It’s not the conventional interpretation of either the notation or the ambient numeric system, but it’s another one. It takes 0.999… as running to “infinity” many 9s, but not “infinity” + 1 many 9s. We can make sense of that, if that’s something we wish to make sense of.
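For anyone who wants to see the finite-stage analogue of that arithmetic, here is a small sketch (Python; N is a finite stand-in for the “infinity” many 9s, which of course no finite number actually is):

```python
from fractions import Fraction

N = 20                                        # finite stand-in for "infinity" many 9s
x = 1 - Fraction(1, 10**N)                    # 0.999...9 with N nines
gap = 1 - x
print(gap == Fraction(1, 10**N))              # True: the gap sits in the Nth place
print(gap / 10 == Fraction(1, 10**(N + 1)))   # True: dividing by 10 pushes it to the (N+1)th place
```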

Am I correct to assume that you still think there should be a 0 added to the “end” of the infinite string of 9s for the answer to 10x0.999…?

Or, if 0.999… is multiplied by 10, the infinite string of 9s will have one less 9, or at least a different “infinite” number of 9s to the left of the decimal point and thus cannot be used for later calculations like 9.999…-0.999… since the decimal places do not match and/or the “end” digit is uncertain?

If so, I would like to ask you if you can find the flaw(s) in the following (in which I try to see if 10x0.999… really does result in having a 0 at the end):

I start off assuming three premises:

  1. When two numbers have the exact equal infinite number of decimals places to the right of the decimal point (hereon denoted as [A], with [A-1] being “infinite decimal places-1” or some such), the two numbers can be used in calculations together since all the decimal places match.

  2. When two numbers with exact equal infinite number of decimal places to the right of the decimal point are added together, the result will also have the same infinite number of decimal places, since addition does not shift the decimal point in any way.
    x[A] + y[A] = z[A]

  3. A number multiplied by 10 is equal to ten of the number being added together.
    10x = x+x+x+x+x+x+x+x+x+x

((The above is non-rigorous on the definition of “number”, but please be kind in assuming what I meant.))

Anyway, let’s start with:

x= 0.9999…[A]

I calculate x+x, which I can do thanks to Premise 1.

Calculation is as follows (the * are for spacing purposes):

*0.9999…[A]
+0.9999…[A]


*0.
*1.8
*0.18
*0.018
*0.0018
*0.00018

  • etc.

*1.9999…[A]

Premise 2 applies on the result.

I now calculate (x+x)+x, also known as x+x+x.

Using the same calculation method above, the result is 2.9999…[A].

I repeat this until I have added up 9 of the x with the result of 8.9999…[A].

I now add the tenth x:

*8.9999…[A]
+0.9999…[A]


*8.
*1.8
*0.18
*0.018
*0.0018
*0.00018

  • etc.

*9.9999…[A]

Given Premise 3, 10x0.9999…[A] equals 9.9999…[A].

Wait, there has been no shift of the decimal point during any step, thus nowhere for the 0 to be added, and nowhere to add an “extra” 9 either. Where have I gone wrong?
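For what it’s worth, the same column addition can be checked mechanically on a finite truncation (a sketch in Python; N = 8 decimal places is an arbitrary stand-in for [A], and exact fractions replace the hand-carried columns):

```python
from fractions import Fraction

N = 8                                   # finite stand-in for [A] decimal places
x = Fraction(10**N - 1, 10**N)          # 0.99999999, exactly
total = sum([x] * 10)                   # ten copies added together, per Premise 3

# write the result to exactly N decimal places, without rounding
digits = total.numerator * 10**N // total.denominator
print(f"{digits // 10**N}.{digits % 10**N:0{N}d}")   # prints 9.99999990
```

At every finite truncation the Nth place does end up holding a 0; whether that observation carries over to a string with no last place is precisely what the rest of this exchange is about.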

Response to Monimonika:

  .999…9
+ .999…9
= 1.999…8
+ .999…9
= 2.999…7
+ .999…9
= 3.999…6
+ .999…9
= 4.999…5
+ .999…9
= 5.999…4
+ .999…9
= 6.999…3
+ .999…9
= 7.999…2
+ .999…9
= 8.999…1
+ .999…9
= 9.999…0

But there is no last 9 in .999… (as usually defined). After every 9 in that decimal fraction there is still an infinite number of 9s to go.

Not sure why you think you can ignore what happens at the “end” of this infinite string of 9s?

Granted, even I have some uneasiness about either your version or my version. But in my mind, mine feels more correct to me. There is no getting around the fact that we are talking about infinitesimals here, and I have no perfect theory of them, maybe not even a good one. I have yet to see any clear reason to say they don’t exist, or to accept some ultimate truth based on limits (which purposely ignore them, with good reason) that .999… is really the same thing as 1 from our most basic understanding of the idea…

9/10 + 9/100 + 9/1000 +… + 9/infinity

Doubtless I’m phrasing it poorly, but I’m taking my cue from epsilon-delta proofs, which begin, “For any epsilon > 0 there exists a delta > 0 such that…” I always viewed those as a “burden of proof” argument. If you give me an epsilon, I will construct a delta, which fits the criterion. As if in a court of law, you say, “Is it within one part in a million?” and I can say, “Yes it is, and here’s the delta that works.” “Is it within one part in a trillion?” “Yes, and here is the new delta that turns the trick.”

I never have to prove that it works “for all epsilon.” I only have to show that it works “for any given epsilon.” I have an algorithm that I use to construct my new delta from the epsilon someone else gives me.

Thus, with .9999… = 1. Tell me where you think the difference might be, and I will, upon the instant, show, by simply putting in a few more 9’s, that the sum is closer to 1 than the difference you proffered.

Like two lawyers arguing in court.
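A sketch of that “give me an epsilon, I hand you back the goods” routine (Python; the helper name nines_needed and the particular epsilons are mine, purely for illustration):

```python
from fractions import Fraction
import math

def nines_needed(epsilon):
    # Return an N such that 1 - 0.99...9 (N nines) is smaller than epsilon.
    return max(1, math.ceil(-math.log10(epsilon)) + 1)

for eps in [Fraction(1, 10**6), Fraction(1, 10**12), Fraction(1, 10**100)]:
    N = nines_needed(eps)
    gap = Fraction(1, 10**N)        # exactly 1 - 0.99...9 with N nines
    print(N, gap < eps)             # True every time: the constructed N does the job
```

No single N works for every epsilon, and none has to; the rule that produces an N from each epsilon you hand me is the whole content of the claim.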

Monimonika: I like it! By adding the numbers, you appear to have nicely gotten around the matter of “the extra decimal place.”

Giles, your point is well taken. Neither my version nor Monimonika’s makes perfect sense. It is not a simple issue to deal with. But if the proof were as easy as Monimonika makes it out to be, it would have been made long ago. People accused me of thinking I am smarter than all the great mathematicians in history. I certainly do not even compare myself… There is no proof I know of that establishes .999… = 1 in a very elementary manner. If there were, it would be all over the internet. Every one I have seen uses limit-based calculus. If you’re going to use limit-based calculus, then you have already made the assumption that they are equal, so what’s the point of even considering the question?

Now… for me the problem with using limit-based calculus as the answer is that it doesn’t really say anything about HOW or WHY .999… = 1. Nothing supports its conclusion, other than that it lets us avoid the very question of whether .999… = 1 and, if so, how? If not, what is 1 - .999…? It simply avoids the question altogether.

Trinopus,

What you are saying is that you can prove .999… is as close to 1 as you wish, but you cannot prove it EQUALS 1.

If there were a proof… it would have been given long ago. Instead we are asked to accept the definition of a limit, which says that if you can prove a number is close enough to another number with arbitrary accuracy (just short of infinity), then let’s just call it that number. It is in effect a very minuscule rounding off at the infinity decimal place.

We can and have. You refuse to accept such proofs, which says a lot about you and nothing about the proofs.

It’s not being ignored because it does not exist: there is no end to the infinite string of 9s. If you start counting the natural numbers 1,2,3,… you can go on forever: there is no last natural number. Similarly, an infinite decimal fraction has no last digit.

[quote=“Indistinguishable, post:182, topic:27517”]

erik150x, I never mocked or made fun of you. The thing you need to know is that people using the conventional definitions of limits and “real numbers” aren’t wrong; they’re just using definitions that are in tune with what they are interested in. They are absolutely correct, according to their definitions, which is as correct as it gets in mathematics.

ME: My comments regarding being mocked were not directed towards everyone here. I have enjoyed very much the ones who have given me at least some respect. In fact you have given me the most, and I appreciate it, mostly because you have had the most respectable responses. (Some) on here were very foolishly spouting off about my ignorance without even understanding the very basis for their own belief in the matter of .999… = 1.

Where is the proof without using limit-based calculus?

You do understand that limit-based calculus starts out with the assumption, without proof, that they are equal? It never makes any claim to such a proof.

You perhaps should read more of this thread.

So how do you propose to begin your addition?

Mine always starts at the last decimal place. Where does yours start? :wink:

You start at the beginning, but recognise that every step is a partial sum.

The number .999… is the limit of the series .9, .99, .999, .9999, …, so to add it to itself, you get:

.9 + .9 = 1.8
.99 + .99 = 1.98
.999 + .999 = 1.998
.9999 + .9999 = 1.9998

.999…999 + .999…999 = 1.999…998

So, given enough steps, you can have as many 9s as you like after the decimal point, and the value of the final 8 can be as small as you like. With limits, “as many as you like” is an infinite number, and “as small as you like” is zero. The limit of the sum is 1.999… (i.e., a 1, then a decimal point, then an infinite number of 9s), which is equal to 2, because if you go far enough you can make it as close as you like to 2.
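The same partial sums, computed exactly (a small Python sketch; the range of n and the exact-fraction bookkeeping are just for the demonstration):

```python
from fractions import Fraction

for n in range(1, 6):
    x_n = 1 - Fraction(1, 10**n)          # 0.9, 0.99, 0.999, ...
    s_n = x_n + x_n                       # 1.8, 1.98, 1.998, ...
    print(n, float(s_n), float(2 - s_n))  # the gap from 2 is exactly 2/10^n
```

The gap from 2 is 2/10^n at every step, which is exactly why the limit of the sums is 2.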

I wish to express that I do not reject the idea of using limits. They are a spectacular creation allowing us to progress mathematics in many ways. However, there are at least two different ways I think you can interpret limits:

  1. Limits express the ultimate truth in the real number system about the issue of .999… = 1, and that truth is that they are indeed precisely the same.

(It would seem, from what I gather on here, that at least many are being taught this principle.)

At some point, not sure when this occurred, and I don’t ever remember being taught it, we started incorporating this limit definition right into the very definition of numbers. Wow, that blew me away. Either I am just old, or things have changed recently, or maybe I skipped class that day. Very possible.

  2. You can view limits as a tool, a very, very powerful, wonderful tool, but nonetheless just a tool which helps us avoid these troublesome questions about things like whether .999… = 1.

I think within most applications of math it doesn’t really matter which view you take. I don’t really know what practical matter actually involves the question of whether .999… = 1. There could be some; I just don’t know of any. But for me, I do not see how they are equal from a fundamental standpoint of the sequence:

9/10 + 9/100 + 9/1000 + … + 9/infinity

At the end of the sequence all I see is .999… I don’t know how you get to 1. Just saying it is so by the definition of a limit basically just says: trust us… they are the same.

They could be equal… but I have not seen anything to make me think they are, and I have never seen a proof. Of course I won’t get one from limit-based calculus, because it just assumes they are equal. What right is there to do so, other than that it works quite nicely for the vast majority of mathematics?

If you believe limit-based calculus expresses some ultimate truth in this matter, OK. When someone asks you if .999… = 1, just tell them that in standard analysis of real numbers we assume it to be true, and in fact we define the numbers that way. I leave it to you to tell them why. But don’t give them any proof, because you have none.

Very good. Now do that same analysis with ten of them (you got 1.999…998 by adding only two; you need ten to simulate multiplication by 10). Notice how you have an 8 at the end now? What do you think you will have in that exact decimal place after adding ten of them?

It’s not that you’re old or things have changed recently; these definitions have been standard since the 1800s.

Rather, the problem is that most people are never taught anything about them. The typical math curriculum for anyone who isn’t a math major never (or just barely, as in “It gets half a page in the part of the textbook we’ll never be looking at”) actually explicitly defines “real numbers” and the interpretation of infinite decimal notation, even while expecting students to be comfortable with these.

This is maybe not the worst thing in the world. You can get a lot done with “real numbers” without being too formal about them; indeed, for many purposes, it would be premature to fix their formalization.

But it does lead to people (quite naturally) making up their own meanings for infinite decimal notation, and then being surprised and argumentative when their particular reconstructed definitions turn out to conflict with others’ or with the never-mentioned standard.