.999 = 1?

The way I see it, .999… is a “REPEATING” number. If a number repeats to infinity, it can’t be subtracted without rounding.

If N = 0.9999…

10N - N = 9.9999… - 0.9999… is not a valid operation without rounding, because you can’t subtract a number that repeats to infinity, even from a similar number that repeats to infinity.

If the order of operations says to multiply N by 10 first, then 10N is a repeating number as well. You can’t act on it with another operation without rounding.

If there is no end to it, you can’t subtract from it.

So, 0.999… does not equal 1!

That’s my 0.1$ anyway,

Joe

I think you’re confusing the number itself with the representation (way of writing that number).

Plus, see the post directly above yours.

As Thudlow Boink and Francis Vaughan stated, we’ve covered this repeatedly. You can certainly subtract a number that repeats indefinitely from another one. You put the numbers after the decimal point into a one-to-one correspondence. When you subtract, every single one of them cancels out and nothing is left. It’s quite basic.
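
A minimal sketch of that one-to-one cancellation, assuming Python and its fractions module, with finite truncations standing in for the full infinite decimals (nines_after_point is just an illustrative name):

    from fractions import Fraction

    def nines_after_point(n):
        """0.999...9 with exactly n nines, as an exact fraction."""
        return sum(Fraction(9, 10**k) for k in range(1, n + 1))

    for n in (1, 5, 20, 100):
        # 9.999...9 minus 0.999...9: the digits after the decimal point pair
        # off one-to-one and cancel, leaving exactly 9 at every truncation level.
        assert (9 + nines_after_point(n)) - nines_after_point(n) == 9
    print("every fractional digit cancels; the difference is exactly 9")

However far out you carry the truncation, nothing is ever left over after the decimal point, which is all the one-to-one correspondence argument is saying.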

But maybe think about it this way. If it can’t be done, then why does every single mathematician say that it can? They’re usually the ones who are doing the correcting. Try dividing by zero and see how fast they’ll jump down your throat. So why aren’t they correcting this? Why is it stated only by amateurs who don’t quite grasp the meaning of infinite representation? (Or of rounding, for that matter.)

One more thing. Disproof of math is only possible by using math. Saying verbally that something can’t be done isn’t very convincing. If you can’t show mathematically that it’s wrong - and many posters on this thread have given a taste of what the real math behind the subject is like, so you can read for yourself what it would take to make the argument - then you probably don’t understand it well enough to make any statement about it. Most people don’t, and that’s no slur on them. The mystery is why anyone who doesn’t know math tells professional mathematicians that they are wrong. That one has always baffled me.

You could try to read the thread first. I realize it’s quite long, but you could actually read the first few pages - the rest is just a continual re-hashing of the same ideas.

Here’s the basic problem in your thinking: if two numbers are not equal, then there is a difference between them. Please tell me what the difference is between 0.99999… and 1.000. Also, if two numbers are not equal, then there is another number that’s between them. What number is between 0.99999… and 1.000?

.000…1

And if your response is that .000…1 is not a number, please provide references to support your claim. Remember, you didn’t say it had to be a real number; you simply said number. Once you are done providing those references, you can provide further references to support the claim that .000…1 is not a valid concept that can be seen as the difference between the concepts .999… and 1.

I could be a smartass, and say, “(1 + 0.9999…)/2”, but that’s a number distinct from 1 and from 0.9999… if and only if 1 and 0.9999… are distinct from each other.

Which of course they aren’t, since it’s easy to show that for any epsilon > 0, |1 - 0.9999…| < epsilon. And if they were distinct, there would be a value of epsilon for which that inequality would be false.
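
To make that concrete, here is a minimal sketch, assuming Python and its fractions module (nines_needed is just an illustrative name): for any epsilon > 0, some finite truncation 0.99…9 already lands within epsilon of 1, so the gap cannot be any positive number.

    from fractions import Fraction

    def nines_needed(epsilon):
        """Smallest n such that 1 - 0.99...9 (n nines) is below epsilon."""
        n, gap = 1, Fraction(1, 10)       # 1 - 0.9 = 1/10
        while gap >= epsilon:
            n, gap = n + 1, gap / 10      # each extra 9 shrinks the gap tenfold
        return n

    for epsilon in (Fraction(1, 100), Fraction(1, 10**6), Fraction(1, 10**50)):
        n = nines_needed(epsilon)
        truncation = 1 - Fraction(1, 10**n)   # 0.99...9 with n nines
        assert 1 - truncation < epsilon
        print(f"epsilon = {epsilon}: {n} nines already land within epsilon of 1")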

For all that is cute and cuddly, READ THE THREAD. Or, put better:

This has been hashed out multiple times already just in this thread alone. I have no clue why you think other people should do the leg work (AGAIN!) when you haven’t likewise shown the common courtesy of examining it the first N times it was presented in this very thread.

Have you read the entire thread first?

It’s not about 1/infinity or limits or sums or what have you. It’s all about notational convention.

Here’s my standard reply to this debate:

Keep in mind that one must distinguish between notation and what that notation represents. Different notation can represent the same entity, as in, for example, the equality of “1/3” and “2/6”: they are not equal as notation, but the fractions they denote are equal.

Now, does “0.9999…” denote the same thing as “1”?

Well… first off, a disclaimer: of course, one could invent an interpretation of this notation on which they denoted different things, just as one could invent an interpretation of notation on which “1/3” and “2/6” denoted different things (for example, they denote different dates…). But I’m not going to talk about that sort of thing right now. Instead, I’m going to talk about the standard, conventional interpretation of infinite decimal notation, the one that mathematicians mean when they use this notation, and the one which justifies the claim that “0.9999…” denotes 1.

When a mathematician gives an infinite decimal as notation for a number, what they mean by it is this: the* number which is >= the rounding downs of the infinite decimal at each decimal place, and <= the rounding ups of the infinite decimal at each decimal place. This is the definition of what infinite decimal notation means; it’s true because we say it is, just as the three letter word “dog” refers to a particular variety of four-legged animal because we say it does.

So, for example, when a mathematician says “0.166666…”, what they mean, by definition, is “The number which is >= 0, and also >= 0.1, and also >= 0.16, and also >= 0.166, and so on, AND also <= 1, and also <= 0.2, and also <= 0.17, and also <= 0.167, and so on.” What number satisfies all these properties? 1/6 satisfies all these properties. Thus, when a mathematician says “0.16666…”, what they mean, by this definition, is 1/6.

Similarly, when a mathematician says “0.9999…”, what they mean, by that same definition, is “The number which is >= 0, and also >= 0.9, and also >= 0.99, and also >= 0.999, and so on, AND also <= 1, and also <= 1.0, and also <= 1.00, and also <= 1.000, and so on.” What number satisfies all these properties? 1 satisfies all these properties. Thus, when a mathematician says “0.9999…”, what they mean, by definition, is 1.

[*: Of course, when one says a thing like “THE number which is…”, this may be taken to involve an implicit claim that there is a unique such number. So when mathematicians use infinite decimal notation, they also generally have a very particular number-system in mind in which these uniqueness claims are all justified. But, there are many other number-systems (just as useful ones, or even more useful ones, for many purposes; the world is diverse and our mathematical analyses needn’t be shoehorned into “one size fits all” form) in which there may be no number or many different numbers satisfying such systems of constraints; in such contexts, infinite decimal notation is generally less useful as a way to denote numbers, though it can still be used in essentially the same way to denote certain intervals instead.]
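
A minimal sketch of that definition, assuming Python and its fractions module (round_down and round_up are just illustrative names for the truncations described above): check, out to as many places as you like, that 1/6 sits between every rounding down and rounding up of 0.16666…, and that 1 sits between every rounding down and rounding up of 0.99999….

    from fractions import Fraction

    def round_down(digits, place):
        """Truncation of 0.d1 d2 d3 ... at the given decimal place."""
        return sum(Fraction(d, 10**(k + 1)) for k, d in enumerate(digits[:place]))

    def round_up(digits, place):
        """Truncation plus one unit in the last place kept."""
        return round_down(digits, place) + Fraction(1, 10**place)

    one_sixth = [1] + [6] * 49      # the digits of 0.16666..., out to 50 places
    all_nines = [9] * 50            # the digits of 0.99999..., out to 50 places

    for place in range(1, 51):
        assert round_down(one_sixth, place) <= Fraction(1, 6) <= round_up(one_sixth, place)
        assert round_down(all_nines, place) <= 1 <= round_up(all_nines, place)
    print("1/6 and 1 satisfy every tested constraint")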

Furthermore, your argument using density is applicable (once again) only to the Real Numbers. The whole point is that you cannot apply such arguments when you are expanding the scope of the discussion to distinguish concepts that the system in question has not accounted for, any more than I can decree that red x blue = white and then use that decree to support or exclude future claims.

You are on the right track.

.999… is not a number or even a quantity; it is a sequence that involves the concepts of infinity and infinite precision.

In order to understand its relationship to the simpler concept of the integer 1, you cannot simply invoke arithmetic operations to try to arrive at an understanding, any more than you can multiply blue times red and come up with something meaningful. This is why the equation .999… = 1 is nothing more than a trivial anomaly caused by the way the real number system is defined.

Incidentally, the real number system does allow for the operations of addition and multiplication of these sequences. How do you add or multiply two infinite sequences? If I add .111… and .111…, how do I see what the answer is?
It cannot be calculated rigorously; the best we can do is step back, trust that the pattern repeats itself all the way out to infinity, and do our best to see what the answer is. When we write down .222… as the answer, we are in fact saying that we can add the last terms of those two sequences at infinity to come up with an answer.

By definition, there is no last term. Infinity means neverending. Cantor’s entire revelatory work was about how to handle expressions that did not end, precisely to avoid the confusion and imprecision that you are showing. If you read the thread - and not doing so is an insult to the many mathematicians that have put in hours of wonderful work explaining this - you would see the necessity for clear definitions of terms that you are fudging.
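
Here is a minimal sketch, assuming Python and its fractions module (repeating is just an illustrative name), of how the standard definition adds 0.111… and 0.111…: only finite truncations are ever touched, and they already pin the sum down completely.

    from fractions import Fraction

    def repeating(digit, n):
        """0.ddd...d with n copies of one digit, as an exact fraction."""
        return sum(Fraction(digit, 10**k) for k in range(1, n + 1))

    for n in (1, 5, 20, 100):
        partial_sum = repeating(1, n) + repeating(1, n)
        assert partial_sum == repeating(2, n)                  # truncations agree digit by digit
        assert 0 < Fraction(2, 9) - partial_sum < Fraction(1, 10**n)
    print("the sum is pinned to 0.222..., i.e. 2/9, with no last term in sight")

No step in that calculation ever mentions a last term; the answer is determined entirely by the finite truncations.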

And by your use of the meaningless term “at infinity”, I am again assuming you wrote the PDF you linked to.

And again, it is NOT meaningless.

Tru dat: the ‘point at infinity’ has a defined meaning in projective geometry.

‘At infinity’ does NOT have a defined meaning in the arithmetic of infinite sequences and sums.

Mixing a bit of geometry with analysis here.

The geometric “at infinity” is different from the decimal representation “at infinity”. It has meaning in the geometric sense (literally a different definition).

So, the original point stands. It IS meaningless in the decimal representation sense, as we typically define decimal representations of numbers.

As noted (repeatedly) in this thread, that doesn’t preclude a non-typical definition of decimal representation that allows “at infinity” to make sense.

Is this better? Limits at infinity.

Acknowledged. I confess that I have not read the entire thread yet, as it is something like 200 letter-size pages long, but if that is the present consensus prerequisite for commenting here, I will play by the rules…over and out.

Not particularly. It’s one of those cases where we use vernacular language fuzzily to describe a mathematical phenomenon.

Nothing is actually happening “at infinity” in this case. If we went at it with full-bore mathematical formalism, the sentence would be expressed differently. But for pedagogical purposes, it’s easier (though not 100% correct) to say “at infinity”. Not everybody wants or needs to go into all the abstraction and formalism, though; as this thread shows, those are incredibly important when somebody DOES want to discuss a topic in detail.
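
For the record, the fully formal version would read something like this: 0.999… is defined as the limit, as n grows without bound, of the finite sums 9/10 + 9/100 + … + 9/10^n, and that limit equals 1. The n in the limit only ever takes finite values; no term is evaluated “at infinity”.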

The purpose at that site is to teach students, not to be absolutely rigorous in use of mathematical terms. I certainly don’t have a problem with that, since full formalism is often confusing to students while a less-than-strictly correct description captures most of the concepts without getting bogged down in the details. That said, a good teacher should also explain at some point that many of the finer points are being glossed over.

Since we’ve been going this whole time talking about defining new systems with axioms that do whatever you want, I’m left with a question – have there been any useful systems defined where division by a multiplicative zero (or whatever you want to call an element satisfying x * y = x for all y*) exists? I mean, never mind that it makes little to no sense to divide something into zero parts (aka totally annihilate it); I’m just curious if it’s been defined in any useful system.

  • As opposed to multiplicative identity – x * y = y for all y.

Look, whether you like it or not, the term is used with monotonous regularity and in formal contexts, e.g., here. And that was just the first semi-scholarly hit I happened to find.

Now if you want to try to say it’s just meaningless argot and no one actually means what they say when they use it, fine.

But you’re wrong.

[quote=“Indistinguishable, post:788, topic:27517”]

Have you read the entire thread first?
When a mathematician gives an infinite decimal as notation for a number, what they mean by it is this: the* number which is >= the **rounding downs **of the infinite decimal at each decimal place, and <= the rounding ups of the infinite decimal at each decimal place. This is the definition of what infinite decimal notation means; it’s true because we say it is, just as the three letter word “dog” refers to a particular variety of four-legged animal because we say it does.

QUOTE]

Thanks for clarifying…
1 does not equal 0.99999…

They are not the same, but to make math easier to understand they are rounded and taken to be the same.
Just like if we call a dog’s tail a “leg,” the dog now has 5 legs because we say he does, not because the tail is really a leg. Ignoring the difference doesn’t mean there’s no difference.

At the end of the day the real question is…what difference does it make (pun intended) :smack:
Joe