Straight Dope Message Board > Main .999 = 1?

#101
08-04-2012, 11:33 AM
 Lumpy Charter Member Join Date: Aug 1999 Location: Minneapolis, Minnesota US Posts: 10,933
What's 9/9? You could say 1; or you could divide it like this:

Code:
```
  0.99999
9|9.00000
  81
   90
   81
    90
    81
     90
     81
      90
      81
```
etc.
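As an aside (not from the thread): the division above can be sketched in code. The function below is a hypothetical `lazy_divide` of my own; at each step it takes the largest quotient digit that leaves a strictly positive remainder, which is exactly the choice the division above makes, so 9/9 comes out as 0.999... instead of 1.000. The same trick turns 1/2 into 0.4999....

```python
def lazy_divide(num, den, places):
    # Long division that never lets the remainder reach zero: at each
    # step take the largest quotient digit leaving a strictly positive
    # remainder. This reproduces the division in the post, where 9/9
    # comes out as 0.99999... rather than 1.00000.
    q = 0
    while num - (q + 1) * den > 0:    # integer part, deliberately undershooting
        q += 1
    r = num - q * den
    digits = [str(q), "."]
    for _ in range(places):
        r *= 10
        d = 0
        while r - (d + 1) * den > 0:  # largest digit keeping the remainder > 0
            d += 1
        digits.append(str(d))
        r -= d * den
    return "".join(digits)

print(lazy_divide(9, 9, 5))  # 0.99999
print(lazy_divide(1, 2, 5))  # 0.49999: the same trick applied to 1/2
```

Ordinary long division takes the largest digit leaving a remainder >= 0; relaxing that to > 0 is the whole difference between the 1.000... and 0.999... expansions.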
#102
08-04-2012, 11:35 AM
 OldGuy Charter Member Join Date: Dec 2002 Location: Very east of Foggybog, WI Posts: 2,088
Remember: It's turtles all the way down.
#103
08-04-2012, 11:39 AM
 erik150x Guest Join Date: Aug 2012
Quote:
 Originally Posted by Exapno Mapcase Once again, this is wrong. Utterly, completely, factually wrong. No mathematicians would ever describe this as a trick. It is a proof.
You need to read more. You say that, but don't really know it. You can find lots of debate on it. Sure, if you're in the camp that says .999... = 1 you probably accept it, but I have seen some who, even though they believe .999... = 1, don't accept that as a rigorous proof.
#104
08-04-2012, 11:44 AM
 Francis Vaughan Guest Join Date: Sep 2009
Quote:
 Originally Posted by erik150x Your presupposing that 10 x .999... = 9.999... can you prove that?
Great Antibob did just that in post 95.
#105
08-04-2012, 11:44 AM
 The Second Stone Guest Join Date: May 2008
At the risk of starting something new, is:

.999... < 1 ?
#106
08-04-2012, 11:49 AM
 erik150x Guest Join Date: Aug 2012
Quote:
 Originally Posted by Exapno Mapcase It's an infinite number of 9s because I'm using the definition of what an infinite number is. That you don't know the definition of what an infinite number is pretty much sums up what everybody has been trying to explain to you.
There is no such thing as an infinite number. Infinity is an abstract concept, yet you claim .999... is an infinite number? I'd like to see the textbook that says that. It has an infinite number of decimal places. What happens when you shift an infinite number of decimal places? Do you actually learn that somewhere? Or did someone show you a proof involving 10 x .999... and you accepted it?

Which set has a greater number of elements:

Natural Numbers or Rational Numbers?

and also

Rational Numbers or Real Numbers?
#107
08-04-2012, 11:50 AM
 Francis Vaughan Guest Join Date: Sep 2009
Quote:
 Originally Posted by The Second Stone At the risk of starting something new, is: .999... < 1 ?
The irony is that one of the biggest puzzles in early mathematics was how an infinite series could sum to any finite value at all. The intuition was that it should clearly be itself infinite in value, being made from an infinite number of non-zero terms. But there it is, finite.
#108
08-04-2012, 11:50 AM
 erik150x Guest Join Date: Aug 2012
Quote:
 Originally Posted by The Second Stone At the risk of starting something new, is: .999... < 1 ?

yes. if you ask me. ;-)
#109
08-04-2012, 11:55 AM
 erik150x Guest Join Date: Aug 2012
Quote:
 Originally Posted by Francis Vaughan The irony is that one of the biggest puzzles in early mathematics was how an infinite series could sum to any finite value at all. The intuition was that it should clearly be itself infinite in value, being made from an infinite number of non-zero terms. But there it is, finite.
I am well aware of the summation of an infinite series which converges. It relies on the concept of limits for its answer, which again, for the umpteenth time, simply asks you to accept that if you can prove a function approaches some number with arbitrary precision, then it actually equals that number; that step is never proved. It is a fundamental principle of calculus, and indeed very practical, but nonetheless not really a proof in and of itself that .999... = 1. If you don't understand that, then you don't understand the definition of a limit.
#110
08-04-2012, 11:59 AM
 Francis Vaughan Guest Join Date: Sep 2009
Quote:
 Originally Posted by erik150x I am well aware of the summation of an infinite series which converges. If you don't understand that then you don't understand the definition of a limit.
I wasn't replying to any of your posts.
#111
08-04-2012, 12:03 PM
 Francis Vaughan Guest Join Date: Sep 2009
Quote:
 There is no such thing as an infinite number. Infinity is an abstract concept,
So are the reals and the imaginary numbers. Possibly the negative integers and some might argue zero.

If you claim there is no such thing as an infinite number, would you care to proffer a definition for an infinite series? One that we might all agree on?
#112
08-04-2012, 12:09 PM
 erik150x Guest Join Date: Aug 2012
Quote:
 Originally Posted by Francis Vaughan Great Antibob did just that in post 95.

sum(i = 1 to "infinity") [9/10^(i-1)]

Written in a more 'standard' form, this is 9.99999........

So now you are summing not to infinity but (i -1), I hardly see how that is 9.999..., or at least a given. Why not just start out with 10^(i-1), or (i-2) or (i-3) or (i-4)...

It would seem your division by 10 is almost meaningless? But I would contend that in the process you are adding [9/10^infinity] to your answer, which you may call zero, but I don't.

I'll give you the best attempt yet. ;-)
#113
08-04-2012, 12:15 PM
 erik150x Guest Join Date: Aug 2012
Quote:
 Originally Posted by Francis Vaughan So are the reals and the imaginary numbers. Possibly the negative integers and some might argue zero. If you claim there is no such thing as an infinite number, would you care to proffer a definition for an infinite series? One that we might all agree on?
What I am trying to point out here is there is room for uncertainty.

The set of Natural numbers is countably infinite. So is the set of Rationals, in fact exactly the same ordinal for that matter. See Cantor's diagonal. But the set of Reals is uncountably infinite and thus larger. What kind of sense does that make? I don't really know, but Cantor proved it. So when you start talking about how to manipulate an infinite number of 9s, multiplication-wise or otherwise, you are in very uncertain territory.
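The countability claim mentioned here can be made concrete with a short sketch (my own illustration, not from the thread, using Python's `fractions` module): walking the antidiagonals p + q = 2, 3, 4, ... visits every positive rational at some finite position, which is the pairing behind "the rationals are countable".

```python
from fractions import Fraction

def rationals(k):
    # First k positive rationals in the zigzag (antidiagonal) enumeration:
    # walk p + q = 2, 3, 4, ... and skip duplicates like 2/2 = 1/1.
    seen, out, s = set(), [], 2
    while len(out) < k:
        for p in range(1, s):
            q = s - p
            f = Fraction(p, q)
            if f not in seen:
                seen.add(f)
                out.append(f)
        s += 1
    return out[:k]

print(rationals(6))  # 1, 1/2, 2, 1/3, 3, 1/4 in that order
```

Every positive rational p/q appears by the time the walk reaches antidiagonal p + q, so the map from position to rational is onto; that is the whole content of "same cardinality as the naturals". The uncountability of the reals (the actual diagonal argument) is a separate proof.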
#114
08-04-2012, 12:17 PM
 erik150x Guest Join Date: Aug 2012
Quote:
 Originally Posted by Francis Vaughan I wasn't replying to any of your posts.
Sorry, I am a little on the defensive here, as you might be able to gauge.

My apologies.
#115
08-04-2012, 12:21 PM
 erik150x Guest Join Date: Aug 2012
Quote:
 Originally Posted by erik150x What I am trying to point out here is there is room for uncertainty. The set of Natural numbers is countably infinite. So is the set of Rationals, in fact exactly the same ordinal for that matter. See Cantor's diagonal. But the set of Reals is uncountably infinite and thus larger. What kind of sense does that make...? I don't really know, but Cantor proved it. So when you go start talking about how to manipulate an infinite number of 9s multiplication-wise or other. You are in very uncertain territory.
"So is the set of Rationals, in fact exactly the same ordinal for that matter."

I meant ...the same cardinality for that matter.
#116
08-04-2012, 12:23 PM
 Francis Vaughan Guest Join Date: Sep 2009
Quote:
 So when you go start talking about how to manipulate an infinite number of 9s multiplication-wise or other. You are in very uncertain territory.
That is trivial. Colloquially we mean Aleph Null when we say infinity. That is all. Go back in all the arguments and substitute Aleph Null where needed. We do not ever need any other transfinite numbers for this discussion. Having done that, answer my question - define an infinite series.

Quote:
 So now you are summing not to infinity but (i -1),
Which by definition is infinity. That is the critical point. Infinity + 1 is infinity. Infinity - 1 is infinity. Infinity * infinity is infinity. It simply doesn't change the meaning of the expression to use the "- 1". If you don't accept that infinity behaves like this, say so, but then offer your own explanation of what these expressions mean. (And again, note that by infinity I mean Aleph Null.)
#117
08-04-2012, 12:35 PM
 RealityChuck Charter Member Join Date: Apr 1999 Location: Schenectady, NY, USA Posts: 32,979
Quote:
 Originally Posted by erik150x What I am trying to point out here is there is room for uncertainty. The set of Natural numbers is countably infinite. So is the set of Rationals, in fact exactly the same ordinal for that matter. See Cantor's diagonal. But the set of Reals is uncountably infinite and thus larger. What kind of sense does that make...? I don't really know, but Cantor proved it. So when you go start talking about how to manipulate an infinite number of 9s multiplication-wise or other. You are in very uncertain territory.
Let me get this straight. You're willing to accept Cantor's proof, even though you don't seem to know what it is, while an infinitely more simple proof of .99999... = 1 can't be right?

Let's face it, you can throw terms around, but you really are shaky on your knowledge of math.

Also, the number of digits in .99999... is aleph null, since each digit position is indexed by a whole number. Bringing in C is just a smokescreen without meaning.
#118
08-04-2012, 12:54 PM
 Indistinguishable Guest Join Date: Apr 2007
Quote:
 Originally Posted by erik150x Many thanks to Indistinguishable and Senegoid for well thought out responses. Re: "Similarly, when a mathematician says "0.9999...", what they mean, by that same definition, is "The number which is >= 0, and also >= 0.9, and also >= 0.99, and also >= 0.999, and so on, AND also <= 1, and also <= 1.0, and also <= 1.00, and also <= 1.000, and so on." What number satisfies all these properties? 1 satisfies all these properties. Thus, when a mathematician says "0.9999...", what they mean, by definition, is 1" Pretty stupid notation, why not just say "1". Seriously is the some Mathematical Authority which has proclaimed this definition? What if I just want to talk about the geometric series 9/(10^x ) for x = 1 to Infinity and I do not want any assumptions that say well intuitively if we get arbitrarily close to some number we'll just call it that number. I'm not to say that this is not reasonable to make such assumptions, but they are assumptions not proofs. What is the smallest number less than 1? If I say it is .999... as defined by the geometric series geometric series 9/(10^x ) for x = 1, what proof can be offered I am wrong with out the assumptions made by a limit process? I don't know why any one would find that mathematical trickery like saying 10 x .999... is 9.999... where one must add this mysterious 9/infinity to the end is comforting. I would argue it is the opposite, a trick is something to fool you into believing something other than the actual reality. I will repeat once more that when you use the Limit process you are making an assumption that x is close enough to n, that we will just call it n. Perhaps it is n, but it is not a proof is it? I don't need a proof that 1 -1 = 0. I can accept that a given. But to me it seems many have taken the result of a limits as a mathematical practicality to the conclusion that they are in fact some undeniable truth. Okay so you can say by the axioms of the real number system, .999.... 
must be 1. Fine but the real number system is not perfect is it? Does it describe (-1)^(1/2)? Does it describe 1/infinity? I can accept that by the rules proclaimed by the axiom of Real Numbers or whatever .999... must be 1. But can some one tell me what 1/infinity is or why it is not logical to assume 1 - .999... would be 1/infinity?
I said I would leave having said my piece. However, I want to point out one more thing:

Nothing you've said in this post is really objectionable.

We CAN make up mathematical systems which have infinitesimals, and they ARE useful for some purposes. We could even decide to have some interpretation of non-terminating decimal notation into those systems on which 0.999... = 1.

For example, one simple system is like so: let's say a hyperrational is a non-terminating sequence of rationals; for example, <0, 0, 0, ...> or <2, 3, 4, 5, ...> or <3, 3.1, 3.14, 3.141, ...>. All operations you can think of will be done component-wise, so, for example, <2, 3, 4, 5, ...> + <3, 3.1, 3.14, 3.141, ...> = <5, 6.1, 7.14, 8.141, ...>, and max(<2, 3, 4, 5, ...>, <3, 3.1, 3.14, 3.141, ...>) = <3, 3.1, 4, 5, ...>.

And in the same way we can talk about hyperintegers and hyperbooleans (Yes or No values) and hyper-anything else you like, and operations between them.

Finally, we'll consider two hyper-whatevers to be equal so long as their components are equal from some point on. Thus, <2, 3, 4, 5, ...> = <3, 3.1, 4, 5, ...>. Put another way, we'll consider a hyper-boolean to be straight-up Yes just in case its components are all Yes from some point on; thus, the question "Is <2, 3, 4, 5, ...> greater than <4, 4, 4, 4, ...>?" has the hyperboolean answer "<No, No, Yes, Yes, Yes, Yes, Yes, Yes, ...>", which amounts to straight-up "Yes".

This system acts a lot like ordinary arithmetic. But it has infinite and infinitesimal values. For example, <2, 3, 4, 5, ..> is infinite, in the sense that it is larger than 0, larger than 1, larger than 2, larger than 3, etc. It is larger than any standard integer. And its reciprocal <1/2, 1/3, 1/4, 1/5, ...>, conversely, is infinitesimal; positive but smaller than 1/n for any standard integer n.

And there's a natural way to interpret non-terminating decimal notation into this system: interpret a.bcd... as <a, a.b, a.bc, a.bcd, ...>. So 0.999... becomes <0, 0.9, 0.99, 0.999, ...>, and 1.000... becomes, of course, <1, 1, 1, 1, ...>. And are these equal? <No, No, No, No, No, ...>. The difference between them is the infinitesimal value <1, 0.1, 0.01, 0.001, ...>; that is, 1/10^infinity, where "infinity" is the canonical infinite value <0, 1, 2, 3, 4, ...>.

This system probably captures very closely the intuitions you yourself are trying to express. For example, it has a value halfway between 0.999... and 1: <0.5, 0.95, 0.995, 0.9995, ...>. This value, alas, has no representation in ordinary decimal notation, but we couldn't have expected it to. Still, it's there and acts exactly like you'd want it to.
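A toy version of this construction can be sketched in code (my own illustration, not Indistinguishable's notation). Caveat: "equal from some point on" is undecidable for arbitrary sequences, so the sketch only samples components; it is a picture of the idea, not the full ultrapower construction.

```python
from fractions import Fraction

class Hyper:
    # A toy hyperrational: a sequence of rationals given as a function
    # from index n to the n-th component. Operations act component-wise.
    def __init__(self, f):
        self.f = f

    def __add__(self, other):
        return Hyper(lambda n: self.f(n) + other.f(n))

    def __sub__(self, other):
        return Hyper(lambda n: self.f(n) - other.f(n))

    def components(self, k):
        # Sample the first k components (we cannot inspect all of them).
        return [self.f(n) for n in range(k)]

nines = Hyper(lambda n: 1 - Fraction(1, 10) ** n)  # <0, 0.9, 0.99, 0.999, ...>
one   = Hyper(lambda n: Fraction(1))               # <1, 1, 1, 1, ...>
eps   = one - nines                                # <1, 1/10, 1/100, ...>

# Every sampled component of the difference is positive: in this system
# 0.999... sits strictly below 1, with an infinitesimal gap 1/10^infinity.
print(eps.components(4))
```

Each component of `eps` is exactly 1/10^n, so the difference is positive at every index but eventually smaller than 1/m for any fixed integer m, which is the definition of an infinitesimal given above.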

And this system IS useful, and used to do nontrivial mathematics. If names matter, it provides the underpinnings of "(Robinson-style) nonstandard analysis".

So your line of thought is not useless, and not fundamentally broken.

HOWEVER:

For many purposes, people don't care to discuss infinite values, and don't care to distinguish between values that are infinitesimally close.

If we restrict ourselves to the finite hyperrationals, and stop distinguishing between hyperrationals that are infinitesimally close (so, for example, <1/2, 1/3, 1/4, 1/5, ...> would be treated as equal to <0, 0, 0, 0, ...>), then we get... the standard system of "real numbers", in all its Archimedean glory. And, in particular, 0.999... and 1 become equal, because they are infinitesimally close.

So that's why "real numbers" are useful: they model reasoning at any level where you aren't actually concerned with drawing fine distinctions between values which are infinitesimally close. And at that level of coarseness, 0.999... and 1 are going to be equal.

If you want to draw finer distinctions, you can, and you can make up rules for doing so. For example, the above. These rules will have their drawbacks so far as interfacing with decimal notation goes (no longer will every value have a decimal representation; no longer will it be possible to multiply values by 10 simply by shifting their decimal representation [this will only work for values with finite decimal representations]), but that can be alright.

In mathematics, it's up to you what you want to model and how you want to model it. Always.

The only thing is that you need to know and understand others' conventions when talking to them. And other people very often are talking about real numbers and very rarely are talking about hyperrationals.

Last edited by Indistinguishable; 08-04-2012 at 12:57 PM.
#119
08-04-2012, 01:00 PM
 KarlGauss An old man in a dry month Charter Member Join Date: Mar 2000 Location: Between pole and tropic Posts: 5,917
To erik150x:

Forgetting about all the proofs and definitions and "arguments", do you honestly believe that you have discovered something that's been missed by every mathematician who's ever lived?

#120
08-04-2012, 01:03 PM
 Indistinguishable Guest Join Date: Apr 2007
Quote:
 Originally Posted by Indistinguishable We CAN make up mathematical systems which have infinitesimals, and they ARE useful for some purposes. We could even decide to have some interpretation of non-terminating decimal notation into those systems on which 0.999... = 1.
Er, I of course meant "on which 0.999... doesn't equal 1".

Last edited by Indistinguishable; 08-04-2012 at 01:05 PM. Reason: And now, having said my piece, I leave...
#121
08-04-2012, 01:11 PM
 CookingWithGas Charter Member Join Date: Mar 1999 Location: Tysons Corner VA Posts: 8,995
erik150x apparently joined this board just today to awaken a thread that was 12 years old and stonewall everyone who has posted factual information to support the original assertion. Why he feels that this is worth 40 posts in the same thread in the same day I have no idea. And clearly there is no factual argument that will sway him so I for one am saving my energy for arguments of opinion rather than of fact.
#122
08-04-2012, 01:13 PM
 Senegoid Guest Join Date: Sep 2011
Quote:
 Originally Posted by Exapno Mapcase Once again, this is wrong. Utterly, completely, factually wrong. No mathematicians would ever describe this as a trick. It is a proof.
(Referring to the 10x=9.999... method of proving that .999... = 1)

Well, maybe mea a little bit culpa. I called it a "trick" several posts up. Actually, that was a bit of a rhetorical trick.

I was making the point that infinitely long decimal numbers obtain meaningful values by definition -- that is, they are defined as the sum of an infinite series, the very meaning of which exists because we've defined a meaning for it. You don't need any kind of "proof" for that.

To be sure, defining that the sum of an infinite series is the limit of the sequence of partial sums doesn't tell you anything about what that actual value is. You still have to find a way to compute that. You could call your computation a "proof" if you want -- that was the sense I was trying to convey in calling it a "trick."
#123
08-04-2012, 01:15 PM
 jtgain Guest Join Date: Jul 2007
Quote:
 Originally Posted by The Second Stone At the risk of starting something new, is: .999... < 1 ?
No. It's so infinitely close (literally) to 1 that math has defined it as equal to one for any practical, theoretical, or any purpose whatsoever. Making this assumption allows many other advanced mathematical calculations. If we need to be hyperliteral and say that it is really less than one, we foreclose a bunch of other math for no good reason at all.

As I said earlier, should Lowes, instead of having a pile of 2X4s, have individual SKU numbers for each cut that is thousandths of an inch off, just to be technically accurate?
#124
08-04-2012, 01:19 PM
 jtgain Guest Join Date: Jul 2007
Missed the edit window:

IOW, see that "<" symbol you used? That's a math symbol. Thus, the science of mathematics gets to define what that symbol means, and if it decides that < means something more than hyperliterally less, then that's what it means.

There's really no more of a trick to it than this. It's close enough, so math says it's equal.
#125
08-04-2012, 01:53 PM
 CookingWithGas Charter Member Join Date: Mar 1999 Location: Tysons Corner VA Posts: 8,995
Quote:
 Originally Posted by jtgain As we do that we see that the number we get from 1/0=infinity.
This is incorrect. The value of 1/x approaches infinity as x approaches 0. However, division by 0 is undefined by mathematics so 1/0 is a meaningless value.

Quote:
 Originally Posted by jtgain However, this fudge by the science of mathematics is so fine that it works for any calculation of anything in the universe. If you want to argue that .9999 is not truly, really and absolutely, 1, then I would agree and I think posters here would as well, but the argument is so pedantic that it serves no useful purpose. It is soooo close to 1 that for all purposes it is 1, so what's the point of not saying it's 1? And by sooo close, I mean infinitely close.
Incorrect. It is exactly equal to 1. This is an artifact of number theory and the base 10 system. For example, the number 1/3 is expressed in base 10 as 0.333... but in base 3 it is 0.1. It is quite an exact value. If you multiply 0.333... x 3 in base 10 you will get 0.999..., which is 10 x 0.1 in base 3, which is exactly 1.0 (base 3). Not just really, really close, but exactly the same.
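The base-3 claim is easy to check with exact rational arithmetic (a sketch of mine using Python's `fractions.Fraction`; `to_base` is a hypothetical helper, not a standard library function):

```python
from fractions import Fraction

def to_base(frac, base, places):
    # Fractional-part digits of a rational number in the given base,
    # by repeatedly multiplying by the base and peeling off the integer part.
    digits = []
    for _ in range(places):
        frac *= base
        d = int(frac)
        digits.append(d)
        frac -= d
    return digits, frac  # a leftover of 0 means the expansion terminated

print(to_base(Fraction(1, 3), 10, 5))  # digits repeat 3 forever, leftover 1/3
print(to_base(Fraction(1, 3), 3, 5))   # exactly 0.1 in base 3, leftover 0
```

The same number either terminates or repeats depending only on whether the base shares the denominator's prime factors, which is the point being made: the endless 3s (and the 0.999... they produce when tripled) are features of base 10 notation, not of the value 1/3.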

Quote:
 Originally Posted by jtgain In mathematics, things that are .99999.... are 1 because that's good enough for anything in the universe. Is there a reason not to make the statement that .999...=1?
No, it's not because it's infinitely close; it's because it is exactly the same.

Quote:
 Originally Posted by jtgain The proof of .999..=1 is that it is close enough for government, private sector, and any other type of work such that there is no discernible difference between the two anywhere in the universe, so much so that one is in fact equal to the other.
The proof does not show merely that there is no discernible difference. There is no difference even in theory--they are the same.

Quote:
 Originally Posted by jtgain No. It's so infinitely close (literally) to 1 that math has defined it as equal to one for any practical, theoretical, or any purpose whatsoever.
It's not arbitrarily defined as the same because it's close enough. It is in fact the same as a consequence of number theory.

Quote:
 Originally Posted by jtgain There's really no more of a trick to it than this. It's close enough, so math says it's equal.

Last edited by CookingWithGas; 08-04-2012 at 01:54 PM. Reason: added blue word to correct opposite meaning as intended.
#126
08-04-2012, 02:18 PM
 Great Antibob Guest Join Date: Feb 2003
Quote:
 Originally Posted by erik150x So now you are summing not to infinity but (i -1), I hardly see how that is 9.999..., or at least a given. Why not just start out with 10^(i-1), or (i-2) or (i-3) or (i-4)...

My original sum went from i = 1 to i = infinity (and I clearly stated summations were one of the few contexts for which infinity was well defined, and I can expound on this, if you'd like). The sum at the end went from i = 1 to i = infinity.

The indices never changed. They are the same indices from before. You are refuting a statement that wasn't even made in the first place.

Also, to address another one of my mathematical pet peeves:

A number with an infinite decimal expansion is NOT the same as an 'infinite' number.

Pi has an infinite decimal expansion. Pi is not "infinite". It has a value. It happens to be greater than 3 and less than 4. That's hardly "infinite". Just because a number has a value that cannot be expressed finitely using a decimal expansion does not mean it does not have a definite and finite value.
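The index bookkeeping in the summation argument can be checked exactly on partial sums (a sketch of mine with exact rational arithmetic, not part of the original post): the n-term partial sum of 9/10^i falls short of 1 by exactly 10^-n, and multiplying it by 10 yields 9 plus the (n-1)-term sum, the same index shift as in the proof.

```python
from fractions import Fraction

def partial(n):
    # Partial sum 9/10 + 9/100 + ... + 9/10^n, i.e. 0.99...9 with n nines.
    return sum(Fraction(9, 10 ** i) for i in range(1, n + 1))

for n in (1, 5, 20):
    s = partial(n)
    # The gap to 1 is exactly 10^-n; it shrinks toward 0 as n grows.
    assert 1 - s == Fraction(1, 10 ** n)
    # Multiplying by 10 shifts the index: 10*s = 9 + (sum with one fewer term).
    assert 10 * s == 9 + partial(n - 1)
    print(n, 1 - s)
```

Every finite truncation has a finite value below 1, and the limit definition assigns the full series the unique number the gaps close in on, which is 1. No finite check can replace the limit, but the exact arithmetic shows there is nothing hidden in the "shift by one index" step.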
#127
08-04-2012, 02:31 PM
 jtgain Guest Join Date: Jul 2007
Quote:
 Originally Posted by CookingWithGas Incorrect. It is exactly equal to 1. This is an artifact of number theory and the base 10 system. For example, the number 1/3 is expressed in base 10 as 0.333..... but in base 3 it is 0.1. It is quite an exact value. If you multiple 0.333..... x 3 in base 10 you will get 0.999.... which is 10.0 x 0.1 in base 3 which is exactly 1.03. Not just really, really close, but exactly the same.
I see, and I believe that you have convinced me. 1/3 x 3 = 1. No question. Just because the only way we can express 1/3 in base 10 numerals is .333333... does not mean that it is slightly (even infinitely slightly) less than 1/3. It is in fact 1/3.

Is that the long and the short of it?
#128
08-04-2012, 02:39 PM
 Great Antibob Guest Join Date: Feb 2003
Quote:
 Originally Posted by jtgain I see, and I believe that you have convinced me. 1/3 X 3=1. No question. Just because the only way we can express 1/3 is base 10 numerals is .333333.... does not mean that it is slightly (even infinitely slightly) less than 1/3. It is in fact 1/3. Is that the long and the short of it?
Yes. That's it exactly.

The rest of this started out as a double post, but I guess I took a bit too long to write it out:

Quote:
 Originally Posted by CookingWithGas This is incorrect. The value of 1/x approaches infinity as x approaches 0.
This is mostly but not precisely correct.

1) 1/x has no well defined limit. If we define a "right-sided" limit, this statement is true. But as x approaches 0 from the left hand (negative) side, the value of 1/x does not become unbounded in the same way as it does from the right hand (positive) side. You get +inf vs -inf, in other words.

2) We don't need to even use the word "infinity" for this limit. It's an incredibly useful shortcut, but it's not strictly necessary. We can simply say the limit becomes unbounded.

In a more formal way, say we have a sequence 's', so that the terms are s_i, with i a positive integer. If for every M > 0, there exists an integer I such that for i > I we have that all s_i > M, then we say the limit of the sequence exists and is unbounded.

To go through that definition thoroughly is the subject of at least a half hour lecture, and I'd want to refine it a bit if I ever presented it to a class, but it gets at least 90% of the way there.

If we want to use the word "infinity" as in the extended real number system with +inf and -inf, we say that the limit of s_i above is +infinity, if this is the case. Likewise, we can make a similar limit definition for -inf.

There are so many connotations associated with the word "infinity" itself that we run into problems when our "common sense" notion of infinity runs up against what is actually defined.

We don't really need to use the word "infinity" itself. Make up a different word. Say "bignum". We define "bignum" such that "bignum" > r, if r is a standard real number. And "-bignum" < r if r is a standard real number. Of course, "bignum" is my stand-in for "infinity" but with less of the normal baggage.
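The "for every M > 0 there exists an index I" definition can be exercised with a tiny sketch (mine, not Great Antibob's; `index_beyond` is a hypothetical name):

```python
def index_beyond(M, s):
    # First index i with s(i) > M. For a monotone increasing sequence,
    # every later term also stays above M, which is what the unbounded-limit
    # definition requires: no bound M survives past some finite index.
    i = 1
    while s(i) <= M:
        i += 1
    return i

s = lambda i: 10 ** i  # an unbounded, monotone increasing sequence

for M in (5, 10 ** 6, 10 ** 100):
    I = index_beyond(M, s)
    print(M, "->", I)  # however large M is, some finite index works
```

The point of the definition is exactly this quantifier order: the challenge M comes first, and the response I may depend on it; "infinity" (or "bignum") is just shorthand for the fact that the game can always be won.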

Last edited by Great Antibob; 08-04-2012 at 02:41 PM.
#129
08-04-2012, 04:28 PM
 Exapno Mapcase Charter Member Join Date: Mar 2002 Location: NY but not NYC Posts: 20,943
Quote:
 Originally Posted by KarlGauss To erik150x: Forgetting about all the proofs and definitions and "arguments", do you honestly believe that you have discovered something that's been missed by every mathematician who's ever lived? I am very interested to hear your answer.
I'm very curious about this myself. We see this behavior regularly here, especially on topics about physics and math. There's no disgrace in admitting that you don't understand relativity or QM or, in this case, infinities. They're extremely tricky, they're totally counter-intuitive or contrary to "common sense," and it took the best people in the professions many years to work out the details. If you're not in the field and haven't given it a ton of study, you'll never come up with the right answer by just thinking about it. You have to do the heavy math. Yet we seldom get people coming in and saying that they don't understand a point and could someone please give them an explanation they can try to understand. Much more often, we get posters like erik, who insist that every mathematician in the entire world is wrong. And, like erik, they lack even the most basic understanding of what they're saying and why their arguments are immediately dismissible.

So why take the attitude that not only are all the true professionals wrong, everybody in the thread explaining the correct answer in detail are also wrong? That just alienates everybody volunteering to help and it drives experts like Indistinguishable out of the thread. Is this simply a mode of learning? Are teachers familiar with this? I'd think it would be incredibly frustrating if many students took the attitude that all the textbooks are wrong until the teacher can somehow prove them right. I don't see how anyone could successfully teach at all under such conditions. However, we see it so often that it can't be simply an individual aberration. It baffles me.

Quote:
 Originally Posted by Senegoid (Referring to the 10x=9.999... method of proving that .999... = 1) Well, maybe mea a little bit culpa. I called it a "trick" several posts up. Actually, that was a bit of a rhetorical trick. ... You could call your computation a "proof" if you want -- that was the sense I was trying to convey in calling it a "trick."
And technically it is not a formal proof either. It's more of a demonstration of a proof, but it has the same status as saying that if 9x = 9, then x = 1. Which is hardly a trick. Arguing as erik does that you can't know the answer given by multiplying 10 x .99999~ is ignoring that the rules of multiplying infinities have been in place for 150 years or so since Cantor worked them out in exquisite detail. If mathematicians don't know how to work a simple problem like that then why would he take their word for any other bit of math whatsoever? Again, it's a baffling argument.
#130
08-04-2012, 04:34 PM
 erik150x Guest Join Date: Aug 2012
Quote:
 Originally Posted by Great Antibob We're not just "accepting" that there is a 9 there.

Here's an example of one of the "contexts" where infinity is actually defined.

Write 0.9999..... in a different form:

sum(i = 1 to "infinity") [9/10^i]

Note that the "infinity" in the index of the summation just tells us not to stop adding more terms ever.

Now, multiply this by 10:

10*sum(i = 1 to "infinity") [9/10^i]

We can bring the 10 "inside" the sum:

sum(i = 1 to "infinity") [10*9/10^i]

Now simplify:

sum(i = 1 to "infinity") [9/10^(i-1)]

Written in a more 'standard' form, this is 9.99999........

There is no '0' at the end at all. Nor are we "adding" any digits at all. We're just multiplying by 10.
-------------------------------------------------------------

You start out with... [9/10^i]
end with [9/10^(i-1)]

No, you never changed (i = 1 to infinity), but it is implied by the above changes, and though I can't prove what's wrong here, I can't help but feel a subtle trick is at play which removes the [9/10^(infinity)] originally present.

I do realize (infinity - 1 = infinity), so your argument is still valid, and I do admit you have me at a loss to explain what's wrong. It is a wholly unsatisfying argument to me, but I guess that's my problem.

To the many people who have taken my debate here seriously. I thank you.

To the many people on here who think I have made some important ingenious insight that no one else ever thought of... you misunderstand me. This question comes up over and over because it bothers many, many people, including some of the great mathematicians in history. I am certainly not and never will be close to that. However, I understand a good deal more than you give me credit for. The people who have taken the time to address my concerns about .999... = 1 in a serious manner understand my plight, I think, as they have probably struggled with it personally at one point or another.

I have not to this point really seen something that makes me say, oh yes, of course .999... = 1. Though Great Antibob's demonstration of 10 x .999... being equal to 9.999... gives me pause for thought. I don't really like the fact that we have to bring (infinity - 1) into it, but I can't say it's wrong either.

Many mathematicians have used and still do use infinitesimals (albeit not in the real number system). The idea of 1/infinity is not a ridiculous notion or certain madness, as some here would say. Many proofs make use of a limit, which is in the end an unproven assertion. It may provide nearly infinite accuracy, but when you're talking about the difference between 1 and .999..., "almost infinitely accurate" hardly seems enough. Great minds over a great length of time have defined the real number system as a finely tuned but not infallible system. If you understand Gödel's incompleteness theorem, then you know no system is infallible. I did not come on here to try to change the mathematical world, simply to find someone who could convince me that .999... = 1. Or perhaps to find that we say it is so by convention, but ultimately there is no proof. Perhaps, as some have suggested, I need to find a number system that suits my taste better.

Thanks once again to all who took the time to 'seriously' discuss this.

Last edited by erik150x; 08-04-2012 at 04:36 PM.
#131
08-04-2012, 04:50 PM
 DrCube Guest Join Date: Oct 2005
I haven't read every post in this thread, but most of them.

I want to say that erik isn't necessarily wrong. He doesn't strike me as your average high schooler who just can't swallow what his math teacher told him about .999... equalling 1. He's apparently thought a lot about this, and has gotten to the heart of mathematics.

He's right that there is no proof. He's right that it all boils down to limits. Here's the deal: the part of mathematics we're talking about here doesn't have anything to do with proofs. It's about definitions. The definitions are inspired by our intuition about numbers and measurement. Calculus was simultaneously the shining pinnacle of math and its biggest black eye for 300 years, until a rigorous definition of limits was settled on in the mid-19th century. The definition of limits was inspired by our "wishy-washy" notions about calculus throughout that time period. Afterwards, calculus was made rigorous and became analysis. Make no mistake, 0.999... DOES equal 1 under conventional real analysis. And under the same system, neither "infinity" nor "1/infinity" is a number.

These definitions make proofs possible. You can't prove "this equals that" without rigorously defining "this" and "that" (and "equals" for that matter). As with any bit of logic, you're welcome to dispute the underlying definitions and dismiss the conclusions accordingly.

Here's where you're wrong, erik: You demand a proof of something there is no proof of. You're demanding a proof of limits where there is only a definition. Accept it or not. IF you accept the definition of limits (and the definition of decimal numbers based on limits), then 0.9999.... = 1. It can be proven and has been in this very thread.

If you don't accept it, that's your call. But do us a favor, okay? DEFINE 1/infinity. Work out the consequences of that definition along with other mathematical definitions that you DO accept. Are the consequences consistent? What exactly DOES 0.999... mean in your system of math? And ultimately, are the consequences of your definition interesting? Are they useful? If so, congrats, you're doing real math, the kind mathematicians do, not engineers or scientists or businessmen.

But right now, it seems like you're saying "I don't accept your definitions so I'll argue over it". That doesn't work. People who don't agree on the same premises can't argue logically with each other, only emotionally. So what you need to do is put your money where your mouth is and give us the premises you DO accept. Honestly, if you don't accept the definition of limits and decimal notation of numbers in this case, there's not anywhere else to go. So if you want this thread to go anywhere, put up your definitions so we can argue with them. Otherwise, this thread is just a one-sided "nanny nanny boo boo, I don't believe you" type of thing.

Just please, please stop demanding proofs of definitions. It doesn't make sense.
#132
08-04-2012, 05:18 PM
 erik150x Guest Join Date: Aug 2012
Thanks, Dr. Cube for a very satisfying response.

My bad was indeed not comprehending that .999... = 1 is built into the works and thus a given instead of something proven.

I will definitely get back to you all when I finish my incompleteness theorem on limits and the decimal notation system. It might be a while, though, so be patient. ;-)

I also hope everyone heeds your advice similarly and stops trying to provide proofs for definitions, which only serve to confuse poor lost math souls like myself.

Last edited by erik150x; 08-04-2012 at 05:21 PM.
#133
08-04-2012, 05:23 PM
 DrCube Guest Join Date: Oct 2005
I should mention that you have your work cut out for you. A system of math WAS developed that defines and uses "infinity" and "1/infinity", called hyperreals, or "non-standard analysis".

In that system 0.999... STILL equals 1, so you're going to have to come up with something new. Basically, you're going to have to redefine what decimal numbers mean and hope what you come up with is consistent.
#134
08-04-2012, 05:31 PM
 DrCube Guest Join Date: Oct 2005
Quote:
 Originally Posted by erik150x I also hope everyone heeds your advice similarly and stops trying to provide proofs for definitions, which only serve to confuse poor lost math souls like myself.
To be fair, the definitions are about limits and decimal numbers. IF you accept the definition of the decimal representation of real numbers, which is based on limits, THEN you can prove that 0.999... = 1. Which is what everybody in this thread was trying to do.

What you were saying is "prove it without limits" which is basically equivalent to "redefine decimal numbers". If you want to come up with a new definition of decimals, that's your job, not ours.
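To make DrCube's conditional concrete: under the limit definition, "0.999... = 1" just means that for every epsilon > 0 there is an N whose partial sum (N nines) lies within epsilon of 1. A minimal Python sketch of that claim (the function name `witness_N` is my own, and exact fractions keep the arithmetic honest):

```python
# Sketch: 0.999... as the limit of its partial sums s_N = 0.9...9 (N nines).
# For any epsilon > 0 we can exhibit an N with 1 - s_N < epsilon, which is
# exactly the claim "the limit of the partial sums is 1".
from fractions import Fraction

def witness_N(eps):
    # 1 - s_N equals exactly 10**-N, so take the first N with 10**-N < eps.
    N = 1
    while Fraction(1, 10**N) >= eps:
        N += 1
    return N

for eps in (Fraction(1, 10), Fraction(1, 10**6), Fraction(1, 10**12)):
    N = witness_N(eps)
    s_N = sum(Fraction(9, 10**k) for k in range(1, N + 1))
    assert 0 < 1 - s_N < eps   # within epsilon, for every epsilon we try
```

No finite partial sum equals 1, but the definition asks only that the partial sums eventually beat every epsilon, and they do.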
#135
08-04-2012, 05:51 PM
 erik150x Guest Join Date: Aug 2012
Quote:
 Originally Posted by DrCube To be fair, the definitions are about limits and decimal numbers. IF you accept the definition of the decimal representation of real numbers, which is based on limits, THEN you can prove that 0.999... = 1. Which is what everybody in this thread was trying to do. What you were saying is "prove it without limits" which is basically equivalent to "redefine decimal numbers". If you want to come up with a new definition of decimals, that's your job, not ours.
Duly noted. I was not aware that currently the very definition of decimals depends on the definition of a limit.

What I am wondering as I write this: was that a choice of necessity or convenience? If a necessity, why? For example... and I'm NOT doubting it's true... I would like to see an example of how not defining repeating decimals in this way leads to failure or contradiction. Honestly not trying to argue the point; I just would like to see a demonstration of why it is better for all if we define .999... or any other .xxx... as the limit of a geometric series. I will certainly look into it myself, but if there is something quick that makes this point obvious, that would be nice to see.
#136
08-04-2012, 06:00 PM
 Leo Bloom Member Join Date: Jun 2009 Posts: 4,565
Quote:
 Originally Posted by erik150x To put this more succinctly...

Saying that the sum of the geometric series .999... converges to 1, is to say that there is a limit for that sum which is 1. Saying that there is a limit for that sum is saying that it can be proven that .999... is as close to 1 as I wish to prove. Yet you cannot prove it IS one. Right?

Well, heck, I could have told you .999... is as close to 1 as you can get without any calculus. Well that's not counting the new number I came up with, described as:

(1 - .999...)/2 + 1 - .999... ...

which is halfway between .999... and 1.[ital. Leo]

Quote:
 Originally Posted by Surgoshan Well, how do you distinguish between two points (in this case one and .999...)? Well, if there's a point between them, then they're separate points, otherwise they're the same point, right? Okay, no equations, just the simple fact that there is not a single point between .999... and 1. Not a one.[ital. Leo] [snip]
This rings a bell in my layman head to the matter of point fields in modern physics. If this isn't a big hijack, could someone give a quick comment?

Last edited by Leo Bloom; 08-04-2012 at 06:01 PM.
#137
08-04-2012, 06:20 PM
 President Johnny Gentle Guest Join Date: May 2007
Quote:
 Originally Posted by Leo Bloom This rings a bell in my layman head to the matter of point fields in modern physics. If this isn't a big hijack, could someone give a quick comment?
There's no physics involved. "Point" just means "number" here. This is because we're fundamentally dealing with the geometry of the real number line, and the arithmetic properties of the numbers are secondary. When working in geometric situations, the word "point" is much more commonly used.
#138
08-04-2012, 06:28 PM
 Jragon Member Join Date: Mar 2007 Location: Miskatonic University Posts: 7,069
Quote:
 Originally Posted by Leo Bloom This rings a bell in my layman head to the matter of point fields in modern physics. If this isn't a big hijack, could someone give a quick comment?
Physics is a bit different from pure mathematics: in physics there is a quantifiable "minimum interval" for most things, usually called the "Planck <x>". So the Planck length is the minimum possible distance between two points, a limitation the physical universe has but math doesn't share.

Okay, okay, that's theoretical. But the idea that spacetime is ultimately discrete at the smallest level is at least one popular interpretation of the notion of the Planck length (another is that smaller distances exist, but it's basically impossible to tell either way, even with perfect instruments). There's also a bunch of caveats I don't understand, such as "if large extra dimensions exist[...] the planck length has no fundamental physical significance" (Wikipedia), but either way, there may or may not be a fundamental difference between a literal physical point and the continuity of the real numbers, depending on whether the universe is discretized at the level of the Planck length.

Last edited by Jragon; 08-04-2012 at 06:28 PM.
#139
08-04-2012, 06:33 PM
 Senegoid Guest Join Date: Sep 2011
Quote:
 Originally Posted by erik150x Duly noted. I was not aware that currently the very definition of decimals depends on the definition of a limit. I am wondering at the time of writing this was that a choice of necessity or convenience? If a necessity, why? For example what I would like to say... and NOT doubting its true... but would like to see an example of how not defining repeating decimals in this way leads to failure or contradiction. Honestly not trying to argue the point, just would like to see a demonstration of why it is better for all if we define .999.. or any other .xxx... as the limit of a geometric series? I will certainly look into it myself, but if there is something quick that makes this point obvious here, that would be nice to see.
So, we've all been trying to beat into your skull the importance of NOT relying on intuition, just as our math teachers beat it into our skulls.

Here's a more practical approach: Intuition can sometimes work well for giving us some prospective starting points for some line of mathematical development. For example, in the days of Newton and Leibniz, they developed intuitive notions of limit, continuity, and differentiability, and then went on to develop the entire Calculus on top of that. And throughout the whole body of Calculus, they developed formulas that seemed to work. That is, where the same things could be computed by older simpler formulas, the new-fangled formulas always gave the same answers. Then, they used the same techniques to develop formulas for things that couldn't have been computed before, like areas under strangely shaped curves. But then how would you ever know if the formulas were giving you the right answers? Well, you could chop up the area into little squares to get an approximate answer, and note that the formulas always came real close to that. But dammit, mathematicians wanted to PROVE that their formulas worked, and for centuries they COULDN'T!

They needed precise definitions just to know what they were working with. Without that, there could be no tools for proving things.

The general pattern went like this:
IF you have [certain conditions], THEN you know (or can prove) that you have [certain other conditions] along with it.

So you had to know exactly what certain conditions you have to begin with.

So, they had an intuitive idea of limits and continuity. But they didn't know exactly how to describe it. That is, they didn't know exactly what condition they had, in a way that they could use to develop proofs of anything. What exactly did they need to prove? Like Justice Potter Stewart's observation that he couldn't define pornography, but he knew it when he saw it. (And look how debatable THAT has always been!)

Finally, someone came up with a precise definition of a limit that seemed to cover everything that everybody always intuitively "knew". And surprise, surprise: It was arcane! It was the epsilon-delta definition. It took a while to wrap one's mind around -- but it clearly expresses all the conditions that everyone seemed to mean (or wanted to mean) when they talked about limits. From this, a precise definition of continuity was built. And a precise definition of differentiability. By stating precisely what conditions you have when you have a limit, you then have some facts that you can build proofs with.
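For reference, that epsilon-delta definition, in its standard modern form (my transcription, not Senegoid's wording):

```latex
% The epsilon-delta definition of a limit:
\lim_{x \to a} f(x) = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\; \exists \delta > 0 :\;\;
0 < |x - a| < \delta \;\Rightarrow\; |f(x) - L| < \varepsilon
```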

Note that once you start doing that, it can cease to be intuitive. The epsilon-delta definition wasn't intuitive, and took a few centuries to come up with. The definition of continuity was likewise counter-intuitive: who'da thunk that you would first define continuity at a single point, and only then over an interval?

Okay, that's why you need definitions. Not just any old definitions (as you may have been taught). Definitions that you can actually do useful work with. And that's also why you can't just give glib definitions to things like 1/0 or 1/infinity. Hey, anything divided by itself = 1, right? So let's just define 0/0 = 1 so then it will act like any other n/n, and a whole lot of problems go away!

I'll do a separate (maybe shorter) post on how these ideas apply to infinitely long decimal fractions.

ETA: Oh, and by the way: so how well DID those integration formulas for curvy areas work out after all? Well, it was hard to say. It turned out nobody really had a definition for "area", so the integration formulas didn't really have any "right" or "wrong". But they seemed to give intuitively correct (or at least close) answers. So, mathematicians did the mathematical thing: they DEFINED the area to be whatever answer those formulas gave! Suddenly, as if by magic, all those integration formulas were exactly, unfalsifiably, precisely right!

Last edited by Senegoid; 08-04-2012 at 06:37 PM.
#140
08-04-2012, 06:59 PM
 Senegoid Guest Join Date: Sep 2011
Okay, so now: What do you gain by defining decimal fractions as the limit of a sum of terms, that you couldn't have done before?

Well, as we've discussed already, an infinite decimal (and let's just be clear: by infinite decimal, we mean one with infinitely many digits, not a fraction of infinite value. Okay?) doesn't have any meaning that you can get at in the "usual" way. A finitely-long fraction is defined as the sum of a specific sequence of terms. We discussed that already.

It seemed to make good sense, intuitively and even empirically, to think of an infinite decimal as the sum of infinitely many terms. But (again, as mentioned above), it's not so easy.

And again, as mentioned, adding up an infinite series is NOT like ordinary addition. It doesn't work. It needs to be defined, in some way that gives a satisfying result. Here's your first clue that there's a problem: Addition of infinitely many terms is not necessarily commutative!

It's true! The most obvious examples (and the only ones I can remember all these years later) come up with alternating series -- where the terms are alternately positive and negative. Suppose you try to add up all the positive terms into one sum, and all the negative terms into a separate sum, and then add those. It might not work! Or suppose you just wrote the series with all its positive terms first, followed by all its negative terms. Wait a minute! If you wrote all the positive terms first, there are infinitely many of those, and you'd never even get to the negative terms! Oops.

Clearly, we need to have some definition of what the sum of an infinite series is! A definition that gives us the "right" answer (in cases where we can independently determine the right answer), and that seems to agree with our intuition of what it means to add up a bunch of numbers, and so on. (But it turned out, we had to give up on keeping it commutative.)
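A small numerical sketch of that non-commutativity (my own illustration in Python, using the classic alternating harmonic series rather than any example from the posts above):

```python
# The alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ... converges to
# ln 2, but rearranging it (two positive terms per negative term) changes
# the value of the sum: the order of the terms genuinely matters.
import math

def alternating(n_terms):
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged(n_blocks):
    # Blocks of (1/odd + 1/next_odd - 1/even): same terms, different order.
    total, odd, even = 0.0, 1, 2
    for _ in range(n_blocks):
        total += 1 / odd + 1 / (odd + 2) - 1 / even
        odd += 4
        even += 2
    return total

print(alternating(10**6))   # approaches ln 2 (about 0.693)
print(rearranged(10**6))    # approaches 1.5 * ln 2 (about 1.040)
```

Same terms, two different sums -- which is why a definition (partial sums and their limit, order preserved) had to be chosen.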

That's where the limit of a sequence of partial sums came in. We don't know how to add infinitely many terms (we don't even know what that means, yet), but we can add up finitely many terms. So create that sequence of partial sums (as discussed already) and see where it goes. If the sequence approaches a limit, then define the sum to be that limit. That's how it's done. ETA: And, this process makes fairly clear, I think, that if you re-arrange the terms, thus changing the sequence of partial sums, all bets are off about what the limit might be, if any. The order of the terms does matter!

This gives a definition that we can actually work with. That's what you're missing if you don't have that definition. If you have an infinite series that converges, then you have a limit. And we know by now what a limit is and how to work with it. Bingo! That gives us the meaning and the tools to work with series that we didn't have before. We have tools for computing (or "proving") the value of such a series. And when we apply those tools, one of the results is ... wait for it! ... .99999.... = 1

Last edited by Senegoid; 08-04-2012 at 07:03 PM.
#141
08-04-2012, 07:00 PM
 President Johnny Gentle Guest Join Date: May 2007
Quote:
 Originally Posted by erik150x Duly noted. I was not aware that currently the very definition of decimals depends on the definition of a limit. I am wondering at the time of writing this was that a choice of necessity or convenience? If a necessity, why? For example what I would like to say... and NOT doubting its true... but would like to see an example of how not defining repeating decimals in this way leads to failure or contradiction. Honestly not trying to argue the point, just would like to see a demonstration of why it is better for all if we define .999.. or any other .xxx... as the limit of a geometric series? I will certainly look into it myself, but if there is something quick that makes this point obvious here, that would be nice to see.
Well, not a geometric series, merely an infinite one.

The problem is that dealing with infinity often clashes with intuition. Cantor showed that one or the other had to go, and since infinity was a useful notion, intuition got the boot.

Here, we're asking how to give meaning to an infinite string of digits. Well, how do we give meaning to a finite string of digits? The answer is by treating them as fractions, in the following manner:

167.8945 = 1·10² + 6·10¹ + 7·10⁰ + 8·10⁻¹ + 9·10⁻² + 4·10⁻³ + 5·10⁻⁴
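That finite positional rule is easy to check mechanically; a short Python sketch (the helper name `decimal_to_sum` is hypothetical, and `Fraction` keeps everything exact):

```python
# Interpret a finite decimal string as the sum of digit * 10**position
# terms, exactly as in the expansion of 167.8945 above.
from fractions import Fraction

def decimal_to_sum(s):
    int_part, frac_part = s.split(".")
    total = Fraction(0)
    for i, d in enumerate(reversed(int_part)):   # ones, tens, hundreds, ...
        total += int(d) * Fraction(10) ** i
    for i, d in enumerate(frac_part, start=1):   # tenths, hundredths, ...
        total += int(d) * Fraction(1, 10**i)
    return total

assert decimal_to_sum("167.8945") == Fraction(1678945, 10000)
```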

If you're content to deal with only rational numbers, you can extend to an infinite string without worrying about limits, because each rational number will produce a repeating sequence or terminate after some point. However, if you wish to address a number like √2 or π, more work is necessary.

This raises the question of how we can treat it. In the case of √2, we can easily find a sequence of numbers approaching √2 from either side. For example, since 1² < 2, 1.4² < 2, 1.41² < 2, etc., we can define a sequence of finite decimals 1, 1.4, 1.41, 1.414, ... each of which is smaller than √2. Similarly, each of 2, 1.5, 1.42, 1.415, ... is larger than √2. This implies that the "correct" expression should look like 1.41421...
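The squeeze just described can be run as a digit-at-a-time procedure; a Python sketch (my own illustration, with exact fractions so the comparisons against 2 are not polluted by floating point):

```python
# Build √2's decimal expansion one digit at a time: at each step keep the
# largest next digit whose resulting prefix still squares to less than 2.
from fractions import Fraction

def sqrt2_digits(n_digits):
    approx = Fraction(1)   # 1**2 < 2 < 2**2, so the expansion starts "1."
    step = Fraction(1)
    digits = "1."
    for _ in range(n_digits):
        step /= 10
        d = 0
        while (approx + (d + 1) * step) ** 2 < 2:
            d += 1
        approx += d * step
        digits += str(d)
    return digits

print(sqrt2_digits(5))  # 1.41421
```

Every prefix it emits is below √2, and the next candidate is above it -- the same squeeze as the two sequences in the post.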

How can we interpret this? Well, let's try the same way as earlier, so √2 = 1·10⁰ + 4·10⁻¹ + 1·10⁻² + 4·10⁻³ + 2·10⁻⁴ + 1·10⁻⁵ + ...
Now, we just add them up, right? Not so fast. Addition as defined for rational numbers is a "binary operation," which means that it takes in two numbers and spits one back out. By repeating the process multiple times, associativity tells us we can add any finite number of terms. But there are infinitely many terms to add in the expansion above. Because of this, we must define a new meaning for addition, one which allows us to add infinitely many things. This is easier said than done.

For example, we can add any finite number of ones: 1 + 1 + 1 + ... + 1. If we try to carry this sum on forever, 1 + 1 + 1 + 1 + 1 + ..., we clearly run into difficulty, since the result is larger than any number it is possible to conceive of.

Well, what's the difference from our earlier sequence? One obvious difference is that the summands in the expansion of √2 each got closer and closer to zero. So let's just agree to throw out any infinite sum where that doesn't happen. That's not quite enough, since 1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + ... actually runs into the same problem as the infinite sum of 1s. How can we differentiate between this and the expansion of √2? This is a lot trickier. The most common method was established in the middle of the 19th century: the introduction of limits. Only then was the question of what an infinite decimal expansion actually meant made into a well-posed question.
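The harmonic example above can be seen numerically (a quick Python sketch of my own): the terms shrink to zero, yet the partial sums pass any bound you name.

```python
# Terms 1/k tend to zero, but the partial sums of 1 + 1/2 + 1/3 + ...
# grow without bound (roughly like ln n), so "terms shrink to zero" is
# not enough to make an infinite sum meaningful.
def harmonic_partial(n):
    return sum(1 / k for k in range(1, n + 1))

print(harmonic_partial(10**6))  # already past 14, and still climbing
```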
#142
08-04-2012, 07:07 PM
 erik150x Guest Join Date: Aug 2012
Hey Senegoid,

Thanks for all your responses. You're obviously a very intelligent person. You don't have to talk down to me like that. I have an MS in Computer Science, also studied physics for quite some time, and have always had an avid interest in all the sciences, including math. I know perfectly well intuition isn't always right. It's one of my favorite things about Einstein's theories of relativity.

I lost sight of, or perhaps never totally internalized, the modern definition of decimal notation relying on limits. I did not see how .999... necessarily equals 1. I wasn't arguing simply because it was not intuitive to me; I had read some of the proofs given here and others and just still wasn't convinced. But if the very definition of decimal notation includes this, more or less, then it is silly to argue.

So are you trying to beat into my head that we should all just accept things without question?
#143
08-04-2012, 07:07 PM
 Senegoid Guest Join Date: Sep 2011
I see that President Gentle wrote just about the same essay that I just did, about defining a new meaning for addition of infinite series, and at just the same time!

GMTA!
#144
08-04-2012, 07:13 PM
 Senegoid Guest Join Date: Sep 2011
Quote:
 Originally Posted by erik150x Hey Senegoid, Thanks for all your responses. You're obviously a very intelligent person. You don't have to talk down to me like that. I have an MS in Computer Science, also studied physics for quite some time and have always had an avid interest in all the sciences including math. I know perfectly well intuition isn't always right. It's one of my favorite things about Einstein's theories of relativity.
Oh yes I do! The Devil makes me do it!
#145
08-04-2012, 07:17 PM
 erik150x Guest Join Date: Aug 2012
Quote:
 Originally Posted by Senegoid Oh yes I do! The Devil makes me do it!
Can you prove that?
#146
08-04-2012, 07:21 PM
 Ludovic Charter Member Join Date: Jul 2000 Location: The Black Parade is dead! Posts: 21,618
Quote:
 Originally Posted by erik150x Your obviously a very intelligent person.
I have a few very intelligent people too!
#147
08-04-2012, 07:38 PM
 Leo Bloom Member Join Date: Jun 2009 Posts: 4,565
Quote:
 Originally Posted by Senegoid .... GMTA!
#148
08-04-2012, 07:44 PM
 erik150x Guest Join Date: Aug 2012
So what I gather from PJG's explanation and from some other articles is that the definition we give to rational numbers in some way elegantly helps us talk about and use irrational numbers. Before this, in the mid-1800s, many mathematicians used infinitesimals, but they were "messy" or "troublesome"? The introduction of limits provided not only a consistent way of describing rationals and irrationals... it was elegant. Now, I suppose one can argue the relative elegance of the limit definition... I will not! Perhaps one can at least say it is vastly more elegant than infinitesimals. I have also read somewhere that the move from infinitesimals to limits was motivated greatly by the general dislike of the very vague notion of an infinitesimal. I certainly won't argue that either, but at some level in the back of my mind I had this formulation that .999... = 1 was in some essence the preference for limits over infinitesimals, and perhaps not a necessary one but a preferred or more convenient one. To some degree this idea sticks with me, but I guess I need to further understand these issues.
#149
08-04-2012, 07:52 PM
 erik150x Guest Join Date: Aug 2012
I know I can be irritatingly stubborn.

I once had a two-week-long discussion of the chicken-or-egg question with a biologist. We were both arguing from an evolutionary perspective, but still could not agree. He said chicken; I said egg. It was an interesting discussion, and in the end I think we agreed the question is not well defined enough to answer without first saying exactly what counts as a chicken and what counts as an egg. It is interesting to me to see how these things often come down to semantics, or how you define things, which in many cases can be somewhat arbitrary.
#150
08-04-2012, 08:27 PM
 President Johnny Gentle Guest Join Date: May 2007
Quote:
 Originally Posted by erik150x I have also read somewhere that part of the move from infinitesimals to limits was motivated greatly by the general dislike of the very vague notion of an infinitesimal.
Vagueness is exactly it. The problem with infinitesimals is that they weren't given a rigorous grounding until Robinson created non-standard analysis in the 1960s.

Infinitesimals worked fine for Newton and Leibniz. For example, the argument for the derivative of y = x² is fairly elegant and standard:

If x increases by some nonzero amount Δx, then y increases by the difference between x² and (x + Δx)². This is Δy = (x² + 2xΔx + (Δx)²) - x² = 2xΔx + (Δx)² = Δx(2x + Δx).

This means that the slope is Δy/Δx = 2x + Δx. But this implicitly uses the fact that Δx is nonzero, or else we couldn't divide by it.

Now, in order to find the derivative, Leibniz argued that Δx could be replaced by zero, so the derivative is the expected 2x. The problem is that the process required Δx to be simultaneously zero and nonzero. There is obviously no such number. Mathematicians tried to hold on to this formulation for a century, but it became more and more difficult as calculus evolved into multiple dimensions and the complex numbers.
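The limit-based escape from that "zero and nonzero" bind can be sketched numerically (Python, my own illustration): keep Δx (`h` below) strictly nonzero and watch the difference quotient for y = x² approach 2x.

```python
# The difference quotient ((x + h)**2 - x**2) / h equals 2x + h exactly
# whenever h != 0; the limit as h -> 0 is 2x, with no need to ever set
# h equal to zero.
from fractions import Fraction

def slope(x, h):
    return ((x + h) ** 2 - x ** 2) / h

x = Fraction(3)
for h in (Fraction(1, 10), Fraction(1, 1000), Fraction(1, 10**6)):
    print(slope(x, h))   # 2x + h, closing in on 6 as h shrinks
```

Each quotient is computed with a genuinely nonzero h; the derivative is defined as the limit of these values, which sidesteps the contradiction.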

Cauchy introduced the limit as an attempt to introduce rigor into the foundations of calculus, and other mathematicians picked up the ball and ran with it. The introduction of rigor allows the formulation of questions to ask that couldn't be conceived prior. It is certainly possible to put infinitesimals on a similar footing, and in fact, as mentioned above, they were in the 1960s. However, this required the introduction of a new number system - the hyperreals. There's been at least one introductory calculus textbook centering on this system to the exclusion of limits, but IMO, students actually have an easier time understanding the algebra of limits than they do the algebra of the hyperreals.

Long story short: infinitesimal arguments are fine, but the historic arguments lack the rigor that is expected in modern (post-1900) mathematical work. (I'll leave aside for the moment the issue as to why such rigor would be desired.)
