So I have this [horrible] math book, and from it (and my professor) I’m trying to learn about functions of several variables. Unfortunately, it has a tendency to not go into specifics, something I really hate in a textbook. Specifically, I’m looking at the definitions of limits of functions of several variables. The book gives the example of
As you can probably see, this equation has a removable discontinuity at (0,0). The limits lim[sub]x->0[/sub]f(x,0) and lim[sub]y->0[/sub]f(0,y) exist, but the function is not continuous at (0,0). Therefore, partial derivatives exist with respect to both x and y, even though the function itself is not continuous. Is it possible to have a function whose partial derivatives exist for all variables but which is itself discontinuous, without the discontinuity being removable?
I really hope this makes sense. It’s way too late, but I hate going to sleep not understanding something I’m supposed to know.
If I recall my calculus correctly, the only types of discontinuities for real functions are removable discontinuities (like the one you mentioned) and poles (where |f(x, y)| goes to infinity as you approach a point (x[sub]0[/sub], y[sub]0[/sub])). In the case of a pole, the partial derivatives shouldn’t exist at that point. Still, you should wait for someone who remembers this a little bit better before deciding the matter is settled.
In multiple dimensions, you need more than just that the limit is the same along the paths x=0 and y=0. For a limit of a multi-dimensional function to exist, the limit has to exist and be the same for every path to the point. Think about that for a bit, and you’ll also have the answer to the question you actually asked.
To reinforce this, the limit has to exist and be the same along every path to the point, not just the straight lines. There exist functions that go to zero along every straight line through the origin but go to 1 along the parabola y=x[sup]2[/sup], for instance.
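For concreteness (this is my own example, not necessarily the one anyone has in mind): take f(x,y) = 2x[sup]2[/sup]y/(x[sup]4[/sup]+y[sup]2[/sup]) with f(0,0)=0. Along any line y=mx you get f(x,mx) = 2mx/(x[sup]2[/sup]+m[sup]2[/sup]), which goes to 0 as x->0, and f is identically 0 along both axes. But along the parabola y=x[sup]2[/sup] you get f(x,x[sup]2[/sup]) = 2x[sup]4[/sup]/(2x[sup]4[/sup]) = 1 everywhere. So the limit at the origin does not exist, even though every straight-line approach says 0.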
This is my biggest complaint about the way calculus instruction has gone. When I took calc 1 we did the real definition of continuity for functions of a single variable, so when we got to multivariable calculus we could do the real definition there. In practice, a condition phrased in terms of the limit along “every (rectifiable) path” is impossible to verify directly. But students today don’t get the real definition of a limit in calc 1, so we can’t do the real definition in calc 3.
Okay, so here it is. Where I write “e” read “epsilon”, and where I write “d” read “delta”.
The function f(x,y) has limit L at (x[sub]0[/sub],y[sub]0[/sub]) if and only if:
For every e>0 there exists a d>0 such that |sqrt((x-x[sub]0[/sub])[sup]2[/sup]+(y-y[sub]0[/sub])[sup]2[/sup])|<d implies |f(x,y)-L|<e
This means that if the limit exists and equals L, then I can be sure the function takes a value as close as I want to L by taking my input to be close enough to (x[sub]0[/sub],y[sub]0[/sub]).
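Just to show the definition in action (a trivial example of my own choosing, not from the book): to verify that f(x,y) = x+y has limit 0 at (0,0), given any e>0 take d = e/2. If sqrt(x[sup]2[/sup]+y[sup]2[/sup]) < d, then |x| < d and |y| < d, so |f(x,y)-0| = |x+y| <= |x|+|y| < 2d = e. Since I can produce such a d for every e, the limit is 0.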
Note that the function you quoted is an example of such a function, so yes.
If the partial derivatives exist and are continuous in a neighbourhood of a point, then the function itself must be continuous at that point. In your example, the partial derivatives are not continuous at (0,0).
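In case a concrete instance helps (this is the textbook-standard example, which may or may not be the one your book used): take f(x,y) = xy/(x[sup]2[/sup]+y[sup]2[/sup]) with f(0,0)=0. Both partials exist at the origin, since f is identically 0 along both axes, so f[sub]x[/sub](0,0) = f[sub]y[/sub](0,0) = 0. But along the line y=x the function is constantly 1/2, so f is not continuous at (0,0). And sure enough, the partials fail to be continuous there: away from the origin f[sub]x[/sub] = y(y[sup]2[/sup]-x[sup]2[/sup])/(x[sup]2[/sup]+y[sup]2[/sup])[sup]2[/sup], which along the y-axis equals 1/y and blows up as y->0.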
Out of curiosity, what’s the textbook? (I’ve taught multi-variable calc a few times and have strong opinions about one particular text; I’m curious if it’s the same book.)
Quite frankly you need to look at a larger sample set. I’ve taught both courses many times in the United States and have always discussed the epsilon-delta definition.
(And to be perfectly honest usually I move on as quickly as possible, after spending about a lecture bridging the gap from epsilon-deltas to the intuitive idea of a limit. For students who aren’t going on to major in mathematics the epsilon-delta definition has very little practical utility and is complicated enough to be a genuine obstacle to understanding. It’s like the foundation of a house; important, yes, but you don’t need to look at it every day unless it’s your job to do so.)
One small correction, since the value of f at the point itself (if it’s even defined there) shouldn’t matter for the limit: the function f(x,y) has limit L at (x[sub]0[/sub],y[sub]0[/sub]) if and only if:
For every e>0 there exists a d>0 such that 0<|sqrt((x-x[sub]0[/sub])[sup]2[/sup]+(y-y[sub]0[/sub])[sup]2[/sup])|<d implies |f(x,y)-L|<e
Maybe it’s my physics/engineering background, but the epsilon-delta definition always made better sense to me. In plain English, it’s:
If you give me the error you’re willing to tolerate on a measurement of the function at a location, I’ll tell you how close to the location you need to be to stay within that error. If I can do that for any error you give me, no matter how small, then there’s a limit, otherwise not.
Any budding scientist or engineer had better be able to understand that, because error tolerance is a fundamental concept they’ll have to grok sooner or later.
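A toy single-variable instance of that reading (numbers of my own invention): say the “measurement” is f(x) = x[sup]2[/sup] and the location is x=3, where f is 9. If you tell me you can tolerate an error of 0.1 in the output, I can answer: stay within about 0.016 of x=3, because then |x[sup]2[/sup]-9| = |x-3||x+3| <= 0.016*6.016, which is under 0.1. Hand me any tolerance e (up to a reasonable size), and d = e/7 does the job the same way; that back-and-forth is exactly the epsilon-delta game.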
I thought that the real definition was “A function f is continuous iff, for every open subset S of the range, f[sup]-1[/sup](S) is an open subset of the domain”. Shame on you, Mathochist, for teaching your students things they’ll just have to unlearn later.
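(To be fair, for real functions that definition works out to the same thing as epsilon-delta. A quick illustration of what it says, using an example of my own: for f(x) = x[sup]2[/sup], the preimage of the open interval (1,4) is (-2,-1) union (1,2), which is indeed open.)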
I agree. In fact, error tolerance was one of the applications I had to learn when I was learning this. It’s simply being left off of more and more syllabi as time goes on.
And IMO the heavy emphasis on epsilon-delta rigor needs to die quickly, particularly at the high school level, but even in the introductory college undergrad curriculum. Not because it isn’t rigorous or part of good mathematics instruction, but because it sacrifices time that could be spent on the utility of calculus rather than on intricate proofs of theorems that hardly ever again see the light of day in Calculus instruction.
Over-emphasis on the fine points of limits turns Calculus into drudgery, and quite frankly the intuitive definitions are good enough for most applications. Of course, students interested in pure mathematics (and a few of the brighter souls in the applied classes, such as those in engineering and physics) will want to know the details. But these students often do better figuring it out on their own–they don’t really need the teacher to achieve this.
If I were king of the calculus teachers (unfortunately an elected position, and I’ve never been able to muster enough votes), I’d tie high-school AP calculus to the expected senior Physics class and keep the esoterics of continuity/limits to a minimum. Some basic knowledge of these concepts is essential, but really, when are students going to run into a function like Jamaika a jamaikaiaké describes, one that changes its value based on the rationality of the argument? Or another favorite of calculus nerds y=sin(1/x)? If Newton could patiently develop mathematical physics without ever dealing with limit theorems, why can’t students do that today?
Sorry, this is GQ and not GD, but I just couldn’t resist. I’m studying to change my career into education; perhaps the student-focused viewpoint advanced by the program has clouded my judgement.
Taking Calc and Physics at the same time really isn’t all that common. Most high school students don’t take calculus at all (or at least until college), and physics is a junior class, at least in Virginia. Unless, of course, you’re talking about AP Physics, which most people even at my Governor’s School don’t take.
True, but that really cripples what you can teach in physics. Even teaching them at the same time is difficult enough: You really need to start with a firm grounding in calculus to make any real progress in physics.
Hey, I took AP Physics junior year, and AP Calc senior year. Granted, I learned some basic calculus in my physics class, but still, it’s not all that uncommon, at least at my high school.
I’m not saying that I disagree with the benefits of eliminating it. What I’m saying is that the collateral damage is the complete inability to properly discuss limits in multivariable calculus.