This has bugged me for quite some time. In algebra (and more advanced) classes, it can be… well, not common, but not rare to have an equation such as y=x^2/x. In many cases, we’re encouraged to cancel and get y=x.
This… bothers me, because it seems to me you’re implicitly changing your function’s domain from (-inf,inf) - {0} to (-inf,inf). Now, this didn’t go completely unmentioned way back in pre-calc, we definitely talked about asymptotes and holes in graphs, but it was always in the context of “when graphing x^2/x… but you should generally just cancel the terms, and then there will be no hole!”
I never understood why you’re allowed to transform (x^n)/(x^m) where m<=n (you don’t get this problem when m>n, since a negative exponent still excludes zero) to x^(n-m); it seems like implicitly changing the domain of your function should be problematic, or at least considered. Why do we say that these two statements are equivalent when you have to invoke a domain change to do it? Is it just that it usually doesn’t matter? Just bad teaching?
Qualification: Obviously x^(n-m) = (x^n)/(x^m) where x is in (-inf,inf)-{0} for both expressions, but the way we were instructed to do it, without fail, implicitly changed the domain of x^(n-m) to also include zero. In other words, f(0) shouldn’t have worked for either, but we always treated it as if the x^(n-m) expression could magically handle f(0) now.
What you are talking about is a removable singularity. And yes, a very careful treatment would take note of them and convert “x^2/x” to “x, when x != 0, undefined otherwise”.
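A quick illustrative sketch of that hole in plain Python (nothing specific to any textbook’s treatment): the two forms agree everywhere, right up until you try to evaluate at zero.

```python
def original(t):
    # direct translation of x^2/x; undefined at t == 0
    return t**2 / t

def simplified(t):
    # the cancelled form, x; quietly well-defined at t == 0
    return t

print(original(2.0), simplified(2.0))   # 2.0 2.0 -- they agree away from zero
print(simplified(0.0))                  # 0.0 -- the hole has been filled

try:
    original(0.0)
except ZeroDivisionError:
    print("x^2/x is undefined at x = 0")
```

The cancellation is invisible until the one input where it isn’t.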
A context in which it makes sense to remove the removable singularity is when you are manipulating series expansions of a known analytic function. You might come up with some cancellation of that type, and you are basically saying, “the expression only correctly represents the underlying function for x != 0, but we know that the underlying function is analytic, so we can infer the missing point by continuity and fill the hole”.
You probably wouldn’t like the way math is taught in physics or engineering courses. These finer points are interesting mathematically, but when the equations apply to a physical system, it is almost always OK to perform the mathematical manipulations that are verboten when there are singularities or other pathology.
To a mathematician, the pathology is often the fun part.
Also, one of the most important and frequent kinds of manipulations you do with functions in higher math & real-life applications, it seems, is integrating them. Single-point discrepancies like that don’t affect the value of the integral, so it’s typical to just ignore them or hand-wave them away. For true mathematical purists, of course, that’s just outright fraud!
Another example, where some specific points may or may not matter, comes up in a lot of those Calculus theorems you studied – you know the ones, where they make some statement about an interval, carefully specifying whether the interval is closed (i.e., includes its end-points) or open (i.e., omits its end-points). Frequently, the proofs depend on how that’s specified. Sometimes, a stronger statement can be proved depending on whether you include or exclude the end-points. But when the time comes that you actually have to integrate over that interval, it doesn’t make any difference and nobody gives a hoot.
Have you studied Differential Equations yet? There are other kinds of cases too. A lot of differential equations, when you go to solve them, have a singular “trivial solution” like f(x) = 0 in addition to any other real useful solutions, and they are typically of no interest, often ignored, and often not even mentioned.
ETA: And yes, it’s always kinda bugged me too. I’m one of those “mathematical purists” I guess. I always tried to explicitly note “where x != 0” or other similar restrictions wherever that comes up. Just covering my ASCII, I always felt.
This also comes up all the time in analyzing polynomial functions, where, for purposes of the analysis, the constant term is always treated as c[sub]n[/sub]x[sup]0[/sup].
A polynomial in x is always defined for ALL x, including 0, but that treatment excludes 0 from the domain, since 0[sup]0[/sup] is undefined there. The universal resolution to this problem always seems to be: Just ignore that and plunge ahead!
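For what it’s worth, most programming languages quietly adopt the “plunge ahead” convention by just defining 0^0 = 1, which is exactly what makes the c·x[sup]0[/sup] treatment of the constant term work at x = 0. A minimal Python sketch:

```python
# Python defines 0**0 as 1 -- the usual polynomial convention
print(0**0)   # 1

def poly(coeffs, t):
    # evaluate c_0 + c_1*t + c_2*t**2 + ...; the k == 0 term is c_0 * t**0,
    # which survives at t == 0 only because 0**0 == 1
    return sum(c * t**k for k, c in enumerate(coeffs))

print(poly([5, 2, 3], 0))   # 5 -- the constant term comes through intact
print(poly([5, 2, 3], 2))   # 21 = 5 + 2*2 + 3*4
```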
This also seems to be resolved by defining “analytic functions” (already mentioned above by leahcim), which entails some obscure definition that I and the whole class only kinda-sorta-half-understood, but which seemed to boil down to saying “here is the kind of situation where we can carefully define our way around the situation and thus ignore it”.
ETA: Do I get bonus points for spelling leahcim right?
Yes, worrying about these kinds of finer points of math was (almost) completely ignored in my engineering classes. Occasionally there’d be a handwave about how this technique isn’t quite mathematically rigorous, but worked in our situation. And that’s why I found this SMBC comic hilarious.
Analytic means “has a Taylor Series around every point, and that Taylor series actually converges to the function, at least in a neighbourhood of the point”. It is a theorem that for functions from the complex numbers to the complex numbers, if you know an analytic function on some open set, you can infer its values anywhere else on the complex plane through Analytic Continuation.
It is common to have a Taylor series that converges only in a small region, but to mean, instead of strictly the function defined by that Taylor series, the function extended from that region through analytic continuation. In that case, papering over removable singularities from the power series is all copacetic.
(BTW, one of my favorite counterexamples of something is the function f(x) = exp(-1/x^2) if x != 0, 0 if x == 0, which is differentiable an infinite number of times at zero, but nevertheless is not analytic, because the Taylor series so derived is identically zero).
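A numerical sketch of why that function is so stubbornly flat (assuming nothing beyond the definition above): f(x)/x^n → 0 as x → 0 for any fixed n, which is why every Taylor coefficient at zero comes out to zero.

```python
import math

def f(t):
    # the classic flat function: smooth everywhere, every derivative at 0 is 0
    return math.exp(-1.0 / t**2) if t != 0 else 0.0

# f vanishes faster than any power of x near 0; watch f(x)/x^5 collapse
# as x shrinks -- the ratios plunge toward zero:
for x in (0.5, 0.2, 0.1):
    print(x, f(x) / x**5)
```

The same collapse happens for x^50 or any other power, which is exactly the statement that every Taylor coefficient at the origin vanishes.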
Do people typically spell it wrong? If they do, I hadn’t noticed. Mostly people just think my RL name is “Leah”.
Are you one of those Dopers who reverse the spelling of a RL name to get their username? Or is it just a coincidence that leahcim is “michael” spelled backwards?
Dangit, leahcim, I was going to post that example! Of note, that discontinuity is removable in the real numbers, since the limit as you approach the origin along the real axis is indeed unambiguously zero. But it’s not a removable discontinuity in the complex numbers, since you can get other limits at the origin by following other paths in the complex plane. It’s yet another demonstration of the “reality” of the complex numbers that this poor behavior in the complex plane (the essential singularity) results in a poor behavior in the real numbers (a Taylor series that converges everywhere, but not to the function).
Oh, and The Lurker Above, I too find that SMBC comic hilarious. What’s really funny, though, is watching the mathematicians go through the insane convolutions to show that the bastard notations physicists use really do work, and just mean completely different things from what the physicists think they mean.
Oh holy shit, that makes the problem I had in this thread make so much more sense. I had two math teachers in a row who insisted you couldn’t treat dy and dx in dy/dx as separate variables, and then I went into a physics class where they did so without explaining why you could suddenly do that. There’s actually a legitimate disagreement about it. I’m not crazy and stupid, people were actually just telling me two completely different things!
So I’ve been looking up the laws of exponents (a[sup]m[/sup]/a[sup]n[/sup] = a[sup]m-n[/sup] et al) in a few Intermediate/College Algebra textbooks I have handy. Most of them include the condition that a is nonzero, or they say something like “where defined” or “assuming denominators are not zero.”
One of my other favourite cases (the details of which I can’t remember off-hand) was a particular solution method of an ODE over the reals that resulted in a power series for the solution, which was guaranteed to converge in a region around zero up to the nearest discontinuity in the coefficient functions. Despite the fact that there were no discontinuities on the real axis, the resulting series nevertheless only converged between -1 and 1, because one of the coefficient functions had a pole at i which was blocking the circle of convergence from expanding further.
So basically you had a solution to an ODE solely in the real numbers being affected by something that happens to one of the functions way out in the complex plane.
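The same phenomenon shows up with a much simpler function than any ODE coefficient (a standard illustrative example, not the specific equation described above): 1/(1+x^2) is perfectly smooth on the whole real line, yet its Maclaurin series only converges for |x| < 1, because of the poles at ±i.

```python
def f(t):
    # smooth for every real t, but has poles at t = +i and t = -i
    # out in the complex plane
    return 1.0 / (1.0 + t**2)

def partial_sum(t, terms):
    # Maclaurin series of 1/(1+x^2): 1 - x^2 + x^4 - x^6 + ...
    return sum((-1)**k * t**(2 * k) for k in range(terms))

# inside |x| < 1 the series homes in on f...
print(f(0.5), partial_sum(0.5, 50))

# ...but at x = 2 the partial sums blow up, even though f(2) is a tame 0.2:
print(f(2.0), partial_sum(2.0, 10))
```

The real function never misbehaves anywhere, but its power series is still held hostage by those complex poles a distance 1 from the origin.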
Bad teaching. Pre-calc is math. In math the domain matters. There may be applications where it doesn’t matter, but to me, as a math teacher, the hole is the best part. If you don’t explicitly state the original domain along with the transformed function, you’ll lose points.
I will boldly say, even in mathematics, it is very frequently the case that transforming “x^m/x^n” to “x^(m - n)” and allowing interpretation at x = 0 [or removing removable singularities more generally] captures correctly the dynamics of the ambient situation motivating the problem. [If your goal is to solve the equation “x^m y = x^n z” by dividing both sides by “x^m”, you may complain that you reach “y = x^(n - m) z” and fail to acknowledge the solution “x = 0”. But the problem here isn’t that you rewrote “x^n/x^m” into “x^(n - m)”; the problem is that you carried out that division in the first place [or, rather, that you carried out the division without acknowledging the possibility of the denominator being zero].]
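A tiny sketch of that lost-solution trap (a brute-force grid check in plain Python, purely illustrative): solving x^2 = x directly keeps both roots, while dividing through by x first silently discards x = 0.

```python
candidates = range(-3, 4)

# solve x**2 == x directly: both roots survive
roots_direct = [t for t in candidates if t**2 == t]         # [0, 1]

# divide both sides by x first, leaving x == 1: the root at 0 is gone
roots_after_division = [t for t in candidates if t == 1]    # [1]

print(roots_direct, roots_after_division)
```

The rewrite itself is harmless; it’s the unguarded division that eats the solution.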
One shouldn’t think of “x^m/x^n” as automatically carrying a different domain than “x^(m - n)”. Neither comes with its domain specified [am I talking about integers? Complex numbers? Quaternions? Values modulo 7? Values modulo 8?]. Domain specification is a separate matter (often merely implicit, and very often it is fruitful to eventually realize that the domain in which one’s reasoning makes sense can be generalized further than one originally had in mind).