why was I not told 1/0 = infinity

well then how come in algebra I have been taught that when you have x/x they cancel out and equal 1. For example, if I had the expression

xpq/xyz, then I could use cancellation to have it = pq/yz

You were taught that that only applies when x is not 0. If you believe you were taught anything else, you weren’t paying attention.

That’s true for all x, except x=0.

Or were taught badly.

It’s been my experience that many beginning calculus students misunderstand the meaning of a limit. Some seem to think we’re actually defining 0/0, or infinity/infinity, or at least defining those expressions in particular cases. In fact, we’re not.

Take, for example, the function f(x) = (x[sup]2[/sup] - 1) / (x - 1). This function is undefined when x = 1, and no discussion of limits is going to change that fact. If we “plug in” x = 1, we get 0/0 (a meaningless expression), which we call an indeterminate form, and all that means is that we’ve got more work to do; no conclusion has been reached at all. Ultimately, we can determine that the limit as x -> 1 of f(x) is 2. But it’s important to remember that we haven’t defined 0/0 = 2; nor have we said, “Well, in this particular case, 0/0 = 2.” 0/0 is still undefined, f(1) is still undefined; the limit notation simply means that, as x gets arbitrarily close to 1, f(x) gets arbitrarily close to 2. No more than that. We can, if we like, continuously extend our function f by defining f(1) = 2, but that, in fact, is a separate matter. Even in that case, the expression (x[sup]2[/sup] - 1) / (x - 1) remains undefined when x = 1.
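If you want to watch that happen numerically, here’s a quick Python sketch (my own illustration, not from the thread) that tabulates f near x = 1:

```python
# f(x) = (x^2 - 1) / (x - 1) is undefined at x = 1,
# but its values approach 2 as x approaches 1.
def f(x):
    return (x**2 - 1) / (x - 1)

for x in [0.9, 0.99, 0.999, 1.001, 1.01, 1.1]:
    print(x, f(x))  # outputs cluster around 2

# f(1) itself raises ZeroDivisionError: the limit exists, f(1) doesn't.
```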

thanks everybody, especially ultrafilter, for helping me out with all of this. I am going to talk to my math teacher on Monday and get some clarification on the matter.

P.S. I found this on google http://documents.wolfram.com/v5/TheMathematicaBook/AdvancedMathematicsInMathematica/Numbers/3.1.8.html

And…

Oddly enough, my Windows calculator says, “Error: Positive Infinity” for 1/0. For -1/0, it says, “Error: Negative Infinity.”

By the way, suppose that we’ve agreed that 0/0 = 1. Then 2/3 = 2/3 * 1 = 2/3 * 0/0 = (2 * 0)/(3 * 0) = 0/0 = 1. See why that’s a problem?

I think it should be pointed out that the IEEE standard for floating-point numbers includes a few things that aren’t really numbers: NaN (not a number), Infinity, -Infinity, and the redundant -0. The fact that they include these doesn’t mean they consider them actual numbers or anything; they’re included to give some idea as to what caused the error.
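You can poke at those special values directly. Here’s a small sketch using NumPy (assuming it’s installed; plain Python raises an exception on float division by zero rather than following the IEEE convention):

```python
import numpy as np

# IEEE 754 semantics: these yield special values, not exceptions.
with np.errstate(divide='ignore', invalid='ignore'):
    print(np.float64(1.0) / np.float64(0.0))   # inf
    print(np.float64(-1.0) / np.float64(0.0))  # -inf
    print(np.float64(0.0) / np.float64(0.0))   # nan (NaN, "not a number")
    print(np.float64(1.0) / np.float64(-0.0))  # -inf (the "redundant" -0 at work)

print(np.float64(0.0) == np.float64(-0.0))     # True: -0 compares equal to 0
```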

Yeah, but… why not just state that division by zero is impossible – or, that it’s undefined? Why the strange “Positive Infinity” error? I can’t think of a more direct way of conveying the message that division by zero is undefined than saying that such an operation is “undefined.” Or even something as simple as “Cannot divide by zero” would suffice. Why the weird “infinity” message? I get the feeling that the OP is Bill Gates.

Incidentally, my version of the Windows calculator yields the following:

1/-0 = 1

Huh???

You have it completely backwards.
It would not equal infinity; it would equal NOTHING. Not zero, which is a number, but NOTHING. There is no number, nor any infinite collection of numbers, that would make the statement true. There simply is no possible result from the operation.
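Spelled out in symbols (my own restatement of that point):

```latex
% Defining 1/0 = x would require a solution to
0 \cdot x = 1 \qquad \text{(no number } x \text{ works)}
% while defining 0/0 = x would require
0 \cdot x = 0 \qquad \text{(every number } x \text{ works)}
```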

I’ve also had problems with the idea of 0/0 being undefined. I guess it depends which way you view the problem. If 0/anything is 0, why should 0/0 be any different? On the other hand if anything/0 is undefined, why should 0 be any different? I always felt that 0/0 should be 0, but that may be because I see the numerator first and stop. In the end, I accept it as one of those exceptions that prove the rule.

I can see potential reasons for it; say, someone inputs distance travelled and time taken, and the program has to calculate speed. If a variable can hold “+oo” or “-oo” it could be parsed by an output function as “a positive speed too large to be measured” or “a negative speed too large to be measured”, since we can assume a zero time is a positive time too small to measure.

Of course, a lot of the time, you just see any error and fail, but sometimes an infinity is useful to propagate through your program.
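Here’s roughly what that might look like, as a hypothetical Python helper (the function and names are made up for illustration):

```python
import math

def average_speed(distance: float, seconds: float) -> float:
    """Map a zero elapsed time to a signed infinity, treating it
    as 'a positive time too small to measure'."""
    if seconds == 0.0:
        return math.copysign(math.inf, distance)
    return distance / seconds

v = average_speed(100.0, 0.0)
print(v)        # inf
print(v > 1e9)  # True: the infinity still compares sensibly downstream
print("too fast to measure" if math.isinf(v) else f"{v:.2f} m/s")
```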

But that’s just not the case. 0/(anything except 0) is 0. If you were taught 0/anything is 0 in math, I wonder if you were taught “I before E” in English. :slight_smile:

Actually, it’s more like “except after C.”

I know. That was my whole point. :slight_smile:

If 0/anything is 0, why should 0/0 be any different? On the other hand if anything/0 is undefined, why should 0 be any different?

This too has been tackled here before. Although 0/0 is generally indeterminate, in many mathematical contexts it makes a reasonable convention to define 0/0=1, as it removes a discontinuity in various useful functions. See the sci.math FAQ on 0/0 for more on this.

You mean 0^0, right, not 0/0?

Achenar is right. That link talks about 0^0. That’s a different case because there you actually do find that f(x)^g(x) -> 1 as x -> some limit where f(x) = 0 and g(x) = 0 and both functions are analytic. [Read “->” as “goes to” here.]

However, a similar statement does not hold for 0/0… The limit of f(x)/g(x) [as x -> some limit where f(x) = 0 and g(x) = 0 and both functions are analytic] can be anything. For example, if f(x) = x and g(x) = x, the limit is 1. If f(x) = 2x and g(x) = x, the limit is 2. If f(x) = x^2 and g(x) = x, the limit is 0. If f(x) = x and g(x) = x^2, the limit is infinity (actually, either + or - infinity depending on the direction from which you take the limit). So, unlike the case of 0^0, there is no obvious value to choose for 0/0 even as a convention.
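If you want to check those, SymPy will confirm each limit (a quick sketch, assuming SymPy is installed):

```python
from sympy import symbols, limit

x = symbols('x')

# Every quotient below is "0/0" at x = 0, yet the limits all differ:
print(limit(x / x, x, 0))              # 1
print(limit(2*x / x, x, 0))            # 2
print(limit(x**2 / x, x, 0))           # 0
print(limit(x / x**2, x, 0, dir='+'))  # oo   (from the right)
print(limit(x / x**2, x, 0, dir='-'))  # -oo  (from the left)
```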

To be fair though, this fact has been missed by some pretty smart people when the context gets more complicated. Supposedly, Einstein once made an error in a calculation by performing division without considering the alternate possibility that the expression he was dividing by could be 0.

This sort of quasi-math is a result of deep philosophical problems with Calculus. Well, not problems as far as getting a consistent system is concerned; problems as far as the underlying math making any sense.

A few posts above, people were asking about why x/x = 1. They had the impression this was true for all x, when it’s only strictly true for x != 0. The problem is probably rooted in the Calc concept of infinitesimals. That is to say, Calc deals with problems of continuity by assuming that there exist numbers (which they call differentials) that “approach zero” or “get smaller without bound.” You know these boys as “dx” and “dy”.

The whole key to Calculus (other than recognizing that differentiation and integration are the same thing in reverse) is treating differentials as > 0 for purposes of division, but =0 for purposes of addition.

The confusion about x/x = 1 comes directly out of this. If you’ve ever differentiated by hand, then you know that you make liberal use of dx/dx = 1 (or maybe it’s called “h” nowadays… I forget) when you’re reducing the equation. When you simplify the expression, say to “x + dx” maybe, the teacher will tell you just to drop the dx because it’s zero. Strictly speaking, it ISN’T zero; it’s just an infinitesimal. You can see how this could be confusing.
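For instance, the hand computation of the derivative of x[sup]2[/sup] does exactly that two-step dance (my rendering, with h playing the role of dx):

```latex
\frac{d}{dx}\,x^2
  = \lim_{h \to 0} \frac{(x+h)^2 - x^2}{h}
  = \lim_{h \to 0} \frac{2xh + h^2}{h}   % here h \neq 0, so dividing by h is legal
  = \lim_{h \to 0} (2x + h)
  = 2x                                   % here h is dropped as if it were 0
```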

There’s a sort of muddy argument Leibniz and Newton used to explain this. Newton went on about “Mouns” and other philosophical nonsense, too, but that’s neither here nor there. The notion is that these infinitesimals represent a change in the value of x, and they can be as small as you want, but they will never be zero. So you are free to divide by them. But how can you then just drop them when you’re adding? How come dx acts like zero in addition?

Because there are always two ways to represent a number in a decimal system. Every terminating decimal can be written two ways. Some examples: 1 = 0.999…, 1.33 = 1.329999…, or 2345 = 2344.99999… See the pattern? I won’t prove this is true here; someone else can do that (the proof is simple). The point is that, conceptually, there is a “value” in between nothing and something in math. What I’m saying, VERY ROUGHLY, is that 1 - 0.999… = dx.
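(For completeness, the simple proof for the 1 = 0.999… case goes like this:)

```latex
\text{Let } x = 0.999\ldots \\
10x = 9.999\ldots \\
10x - x = 9.999\ldots - 0.999\ldots = 9 \\
9x = 9 \;\Rightarrow\; x = 1
```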

So the confusion about x/x = 1 ends up cutting to the very root of what I think is the most fascinating and deep subject in mathematics…

Any comments from people who know more about this?

-C