Okay… I've just gotten into an argument with my roommate about the fundamentals of calculus. He claims that he can disprove the concept of calculus by claiming limits don't exist. He uses some proof which I don't remember the entirety of, but which is based on the "proven" theorem that 0.9 repeating equals 1. I have problems with this theorem. Point nine repeating approaches 1 as the number of nines approaches infinity, which is the definition of a limit in and of itself, I think. Can someone help me out? He claims that .9 repeating equals one is well documented, but he's not in any state to find any documentation right now, and I can't find anything on the web.
x = 0.999…
10x = 9.999…
10x - x = 9.999…-x
10x - x = 9.999…-0.999… (x = 0.999…, remember)
9x = 9
x = 1
and that's the way it is…
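If it helps to see the same trick with finitely many nines, here's a quick sketch in Python (using exact fractions, so no rounding is hiding anything): with n nines, 10x - x = 9 - 9*10^-n, so x = 1 - 10^-n, and the leftover 10^-n shrinks as n grows. Only the full infinite string of nines leaves no remainder at all.

[code]
from fractions import Fraction

# x = 0.99...9 with n nines, kept as an exact fraction
for n in (1, 2, 5, 10):
    x = Fraction(10**n - 1, 10**n)   # 9/10, 99/100, 99999/100000, ...
    print(n, x, 10*x - x, 1 - x)     # 9x = 9 - 9*10^-n; remainder = 10^-n
[/code]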
Yeah, .9 repeating does equal one; it’s come up several times before on the board. It always seems to result in long threads, simply because it surprises many people who then refuse to believe it, for whatever reason. Here are a few I found on a quick search.
http://boards.straightdope.com/sdmb/showthread.php?threadid=82064
http://boards.straightdope.com/sdmb/showthread.php?threadid=32760
http://boards.straightdope.com/sdmb/showthread.php?threadid=15863
http://boards.straightdope.com/sdmb/showthread.php?threadid=15832
Anyway, so how is your roommate using this to “disprove” calculus?
I don’t think it’s possible to claim that limits don’t exist, because they are defined so simply. Possibly, your roommate doesn’t know what a limit is. At least, if he’s like other people I know who have claimed similar things, then he doesn’t know what a limit is. We have to look at what is meant by 0.999… Start with the sequence:
{0.9, 0.99, 0.999, 0.9999, 0.99999, … }
0.999… is defined as the limit of this sequence.
Does that mean that 0.999… is defined as the last number of this infinite sequence? No; that would be silly. Yet I get the impression that that’s what some people think a limit is. In fact, the limit of this sequence is not in the sequence at all! The limit is defined (roughly) as the number which the sequence gets arbitrarily close to. In this case, it can be shown (and in my opinion, it’s pretty clear) that the sequence gets arbitrarily close to 1. So, what 0.999… is defined as turns out to be 1.
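As a quick numeric illustration of "arbitrarily close" (a sketch; the epsilons are just my own arbitrary picks): whatever ε you name, every term of the sequence past some point is within ε of 1.

[code]
from fractions import Fraction

# For each epsilon, find the first term of {0.9, 0.99, 0.999, ...}
# whose distance from 1 drops below epsilon (exact arithmetic throughout).
for eps in (Fraction(1, 10), Fraction(1, 1000), Fraction(1, 10**6)):
    n = 1
    while 1 - Fraction(10**n - 1, 10**n) >= eps:   # the gap is exactly 10^-n
        n += 1
    print(f"eps = {eps}: term #{n} is the first within eps of 1")
[/code]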
Now, it’s possible to show this using straightforward algebraic techniques, and if you read those previous threads, you’ll see it done. (Whether people in those threads accept the proofs is another matter.) I personally don’t think it’s a good idea to deal with infinite sequences without bringing Calculus into the picture, but in this case, it works out fine.
As others have shown, 0.999… does equal 1. I don’t see how this invalidates the concept of a limit though. The definition of a limit that I learned was that for a function f(x), a limit, L, exists at a particular value of x (call this value c) if it can be shown that as x approaches c:
1. the value of f(x) converges to a limit L such that |x-c| < delta and |f(x) - L| < epsilon
2. epsilon is nonzero whenever delta is nonzero.
To apply this to your roommate's example:
Let f(x) = x/9. We’re looking for the limit at x = 9, and f(9) = 1. So,
Let c = 9.
Let L = 1.
First, investigate what happens at x = 8.9. |x-c| = 0.1, |f(x)-L| = 0.0111… delta is nonzero and epsilon is nonzero. So the limit of (x/9) as x -> 9 seems to exist. Repeat for other sample values of x as necessary to convince yourself that anytime delta is nonzero, epsilon will be nonzero as well.
Now, take x = 9, which is basically what your roommate seems to be arguing. At exactly x = 9, yes, epsilon is indeed equal to zero; i.e. 0.999… = 1. However, at x = 9, delta is also zero. Since delta is zero at precisely x = 9, there's no reason to expect or require that epsilon will also be zero. So part 2 of the definition is not violated, and the limit exists.
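Here's that probing done numerically (a sketch; the sample x values are just ones I picked): as x closes in on 9, |f(x) - L| shrinks right along with |x - c|.

[code]
# Probe f(x) = x/9 near c = 9 with candidate limit L = 1.
f, c, L = lambda x: x / 9, 9, 1

for x in (8.9, 8.99, 8.999, 9.001):
    print(f"x = {x}: |x-c| = {abs(x - c):.4g}, |f(x)-L| = {abs(f(x) - L):.4g}")
[/code]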
All this really shows is that it's important in calculus to distinguish between the limiting value of a function as it approaches a point and the true value of the function at that specific point. In this case, the limiting value and the value of the function at the point are the same, but this is not the case for all functions.
Rereading my post, I realized I was being sloppy with my notation. Part 2 should really read: 2. epsilon exists whenever delta exists. In my mind I usually turn those less-than signs into equals and just call |x-c| delta and |f(x) - L| epsilon, so I can then say epsilon should be nonzero when delta is nonzero. Gets you to the same conclusion, but it isn’t strictly correct.
Cation writes:
> He claims that .9 repeating equals one is well documented
> but he’s not in any state to find any documentation right
> now, and I can’t find anything on the web.
When your roommate sobers up, ask him to carefully explain his proof that limits don’t exist and then please post that explanation to this thread. There’s no point in us trying to find the holes in a proof that we haven’t even been given. We also don’t need to rehash the fact that .999… equals 1, since that’s been done on many other threads. We just need to hear his proof that limits don’t exist.
Just bring up discontinuous functions that are undefined at a point but have a definite value very, very close to that point. Unfortunately, I can't think of an example right now… but I've done a bunch… maybe the real math guys will come up with one.
Perhaps his roommate does not believe in limits to drinking.
There's a limit if f(x) converges to a limit? Sounds circular to me.
So constant functions don’t have limits (epsilon is always zero)?
Jman: how about (e^x-1)/x?
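That one fits the bill: (e^x - 1)/x is undefined at x = 0 (you'd be dividing zero by zero), yet it has a perfectly good limit of 1 there. A quick numeric sketch:

[code]
import math

g = lambda x: (math.exp(x) - 1) / x   # undefined at x = 0 itself

for x in (0.1, 0.01, 0.001, -0.001):
    print(f"g({x}) = {g(x):.6f}")     # closes in on 1 from both sides
[/code]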
“There’s a limit if f(x) converges to a limit? Sounds circular to me.”
“So constant functions don’t have limits (epsilon is always zero)?”
Here’s how I’ve always heard it. Hopefully this will clear up those two objections:
Given: a function f(x), and a point x₀. If there exists a finite L such that: for any ε > 0, there exists an δ such that 0 < |x - x₀| < δ implies |f(x) - L| < ε, then the limit of the function f(x) at x₀ is defined as L. Otherwise, the limit there is undefined.
I know I said it was defined simply earlier, and it really is! It’s just hard to say in English.
The Ryan, you are absolutely correct, which is why the concept of a limit is not defined specifically by saying delta and epsilon are nonzero, but rather, by saying that they exist and are nonzero such that |x-c| < delta and |f(x) - L| < epsilon. As I said, that was just my poor way of thinking about limits; sorry for the sloppy notation.
To address your two questions, the definition isn’t circular in reasoning. Basically, it’s not “there’s a limit if f(x) converges to a limit at a point c”, it’s “there’s a limit if f(x) converges to a limit at a point c that meets the requirements of the delta/epsilon definition.” Going back to my example of f(x) = x/9 as x -> 9, there exists a limit, and that limit is 1. Not 2, or 3, or 0.5, but precisely 1.
Constant functions do have limits. Using the appropriate definition, |f(x) - L| = 0 < epsilon. You can choose any positive value of epsilon you want, since |f(x) - L| = 0 will always be below it, so the limit exists.
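Spelled out, for anyone following along: take f(x) = k and candidate limit L = k at any point c. For every ε > 0, any δ > 0 whatsoever does the job, because 0 < |x - c| < δ gives |f(x) - k| = 0 < ε. So the limit of a constant function exists at every point and equals k.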
Rather than bumble my way further through a poor explanation, I offer up this site which explains the definition of a limit far better than I can.
Achernar's definition of a limit is correct, only he forgot to include, "there exists a δ which is greater than zero".
Technically, the way he had it written (omitting the “greater than zero”), any function would have every possible limit everywhere.
Yes indeed! A subtle yet important point. Thank you Cabbage. I’m actually surprised that I got even close to the correct definition. Also, I see, I messed up the indefinite article. Let me try again:
Given: a function f(x), and a point x₀. If there exists a finite L such that: for any ε > 0, there exists a δ > 0 such that 0 < |x - x₀| < δ implies |f(x) - L| < ε, then the limit of the function f(x) at x₀ is defined as L. Otherwise, the limit there is undefined.
One thing I want to point out, which may not be obvious, is that if the limit exists, it is unique. Also, in general, to prove that a limit exists, you have to give an L, and then, for an arbitrary ε > 0, construct a δ > 0 and show that 0 < |x - x₀| < δ implies |f(x) - L| < ε.
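To make that concrete, here's a worked instance using the x/9 example from earlier in the thread (the choice δ = 9ε is my own; any smaller δ works just as well): to show the limit of x/9 as x -> 9 is 1, take any ε > 0 and set δ = 9ε. Then 0 < |x - 9| < δ implies |x/9 - 1| = |x - 9|/9 < δ/9 = ε, which is exactly what the definition demands.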
I may be missing something significant here, but…
any number of digits of the form 0.999… will always have some remainder when subtracted from 1.
I believe all that's been proven is that the LIMIT of this series is 1.
If I'm just repeating what Achernar and others have stated, I apologize.
Exactly right (as long as we’re taking “number” to mean finite).
See, that’s the thing:
.9 is different from 1.
.99 is different from 1.
.
.
.
.99999999999999 is different from 1.
.
.
.
and so on.
Now, suppose we talk about .9 repeating; i.e., the 9’s go on forever. It’s not so clear that this is different from one, now is it? In fact, as you’ve mentioned, in this thread it’s been proven that the limit of this series is 1. “What the hell does that mean?”, you, and others, may ask.
So what does it mean when you have a decimal number whose digits (after the decimal) never stop? That's the real question, I believe. What does π "mean"? The digits never stop, do they? Does it really ever settle on any particular value?
Yes, and a way to think of it is to be aware that the real numbers are, to put it in layman’s terms, all of the different possible limits of the rationals.
Take π again, for example. By the standard construction of the real numbers, we can think of π as the limit of a sequence of rational numbers:
3, 3.1, 3.14, 3.141,…
Think of each digit as giving you a bit more information on the value of π:
3…So we now know π is between 3 and 4 (inclusive).
3.1…Now we know π is between 3.1 and 3.2.
3.14…So π is between 3.14 and 3.15.
3.141…π is between 3.141 and 3.142.
And so on. Each successive digit gives us more information on the value of π. Taken as a whole, these infinitely many decimal digits precisely determine π; they tell us exactly which rationals are larger than π, which rationals are smaller than π; they tell us exactly where π fits on the real number line. The actual value of π is the limit of this sequence.
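Here's that narrowing in code (a sketch; I'm truncating Python's math.pi, so only the first dozen or so digits are trustworthy):

[code]
import math

# Each truncation of pi's decimal expansion pins pi into a shrinking interval.
for digits in range(5):
    lo = math.floor(math.pi * 10**digits) / 10**digits   # truncate after 'digits' places
    hi = lo + 10**-digits                                 # one step up in the last place
    print(f"{lo:.{digits}f} <= pi <= {hi:.{digits}f}  (width 10^-{digits})")
[/code]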
Similarly, the actual value of .9 repeating is the limiting value of the sequence of its truncations (as I described above for π):
.9…We know .9 repeating is between .9 and 1 (again, inclusive).
.99…Now we know it’s between .99 and 1.
.999…It’s between .999 and 1.
.
.
.
.99999999999999…So .9 repeating is between .99999999999999 and 1.
And so on. This sequence shows that .9 repeating is bigger than any rational number less than one. It’s also smaller than any rational number larger than one. These two criteria show that .9 repeating is, in fact, precisely one.
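You can watch the first of those two criteria play out with exact fractions (a sketch; the sample rationals q are my own picks): for any rational q < 1, some finite truncation 0.99…9 already beats it, so .9 repeating can't be any smaller than q.

[code]
from fractions import Fraction

# For any rational q < 1, some truncation 0.99...9 already exceeds q.
for q in (Fraction(9999, 10000), Fraction(999999, 1000000)):
    n = 1
    while Fraction(10**n - 1, 10**n) <= q:   # truncation with n nines
        n += 1
    print(f"{q} < 0.{'9' * n}")
[/code]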
Sounds like the OP’s roommate is confused about the distinction between a number and its representation. There’s only one real number with a value of 1, but there are many different ways of writing it. Same with any other real number.
Using the delta-epsilon method, how would you prove that sin(1/x) doesn't have a limit as x approaches 0?
You would have to prove the negation of this statement: There exists a finite L such that: for any ε > 0, there exists a δ > 0 such that 0 < |x| < δ implies |sin(1/x) - L| < ε.
The negation of that statement is: For every L there is some ε > 0 such that: for all δ > 0, there is an x such that 0 < |x| < δ and |sin(1/x) - L| ≥ ε.
In this case, for every L, ε = 1/2 will fit the bill. Then, it doesn't matter what value of δ you pick, there will always be an x with 0 < x < δ and |sin(1/x) - L| > ε. Let me demonstrate:
Firstly, let N be the least integer greater than 1/δ. Since δ > 0, this is guaranteed to be finite, and we have 1/N < δ.
If L < 0, then pick x = 1/(2πN + π/2) < 1/N < δ. And |sin(1/x) - L| = |sin(2πN + π/2) - L| = |1 - L| = 1 - L > 1 > 1/2 = ε.
If L ≥ 0, then pick x = 1/(2πN + 3π/2) < 1/N < δ. And |sin(1/x) - L| = |sin(2πN + 3π/2) - L| = |-1 - L| = 1 + L ≥ 1 > 1/2 = ε.
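To see why that construction works (a sketch, with the deltas being my own arbitrarily small picks): inside any interval (0, δ) you can always land on points where sin(1/x) is exactly +1 and points where it's exactly -1, so no single L can be within 1/2 of both.

[code]
import math

# However small delta gets, (0, delta) contains points where sin(1/x) = +1
# and points where sin(1/x) = -1.
for delta in (1e-2, 1e-4, 1e-6):
    N = math.floor(1 / delta) + 1               # least integer > 1/delta
    x_hi = 1 / (2*math.pi*N + math.pi/2)        # sin(1/x_hi) = +1
    x_lo = 1 / (2*math.pi*N + 3*math.pi/2)      # sin(1/x_lo) = -1
    print(f"delta = {delta}: sin(1/x_hi) = {math.sin(1/x_hi):+.3f}, "
          f"sin(1/x_lo) = {math.sin(1/x_lo):+.3f}")
[/code]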
That was pretty nifty, Achernar. Although I would've probably figured it out myself as t -> infinity.