Zero is a natural number

Almost forgot: welcome! (though I’m sure Dex or someone will be around shortly to say that)

Just for the record, if we do not define 0[sup]0[/sup] = 1, you can’t use the binomial theorem to expand (a + 0)[sup]1[/sup].
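To make that concrete: expanding (a + 0)[sup]1[/sup] by the theorem produces the term a·0[sup]0[/sup], which must equal a. A quick numeric check, sketched in Python (the helper name binomial_expand is mine):

```python
from math import comb

def binomial_expand(a, b, n):
    # Sum_{k=0..n} C(n,k) * a**k * b**(n-k); when b == 0, the k == n term
    # is a**n * 0**0, so the identity needs the convention 0**0 == 1.
    return sum(comb(n, k) * a**k * b**(n - k) for k in range(n + 1))

# Python itself adopts 0**0 == 1, so (a + 0)**1 expands correctly:
print(binomial_expand(5, 0, 1))  # 5, matching (5 + 0)**1
```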

There are plenty of cases where the general rule x[sup]0[/sup]=1 works for x=0, just as there are plenty of cases where the general rule that x/x=1 does. But, in general, if you allow x[sup]0[/sup] into your work, you can prove that 1=2.

Can you please give an example of such a proof?

Well, it depends what rules of arithmetic you’re willing to give up. A good example is “I want to make x[sup]a[/sup]x[sup]b[/sup]=x[sup]a+b[/sup] for all x, a, b.”

If so, then 0 = 0·0 = 0[sup]1[/sup]·0[sup]-1[/sup] = 0[sup]0[/sup] = 1. So 0 = 1; add 1 to both sides and get 1 = 2.
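The forbidden step here is the appearance of 0[sup]-1[/sup]; a Python interpreter makes the objection concrete, since zero to a negative power is just division by zero:

```python
# The step 0 = 0**1 * 0**(-1) quietly uses 0**(-1), i.e. 1/0;
# evaluating it directly makes the problem explicit:
try:
    0 ** -1
except ZeroDivisionError as e:
    print("0**-1 raised", type(e).__name__)
```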

I prefer to say

0[sup]0[/sup] =
exp(0 ln(0)) =
exp(-0·∞) =
exp(-0/0)

And thus it’s undefined.
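The same indeterminacy shows up in floating-point arithmetic, where 0 · (−∞) has no sensible value and IEEE-754 reports nan (a minimal illustration):

```python
import math

# 0 * ln(0) is an indeterminate form; IEEE-754 arithmetic flags it as nan:
x = 0.0 * float("-inf")
print(x, math.isnan(x))
```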

Alternatively, look at what happens to x[sup]y[/sup] as (x,y)->(0,0).

Wait, that’s not even a rule.

Your second step points that out. 0 is not 0[sup]-1[/sup].

Besides, I’m pretty sure that John meant allowing 0[sup]0[/sup], not x[sup]0[/sup], since x[sup]0[/sup] is defined for all x except x=0.

Actually, it is a rule, but his application, which involves raising zero to a negative power, involves yet another forbidden operation. Zero to a negative power is equivalent to dividing one by zero. Zero to the zero is equivalent to dividing zero by zero, and any textbook example of the paradoxes that arise when you divide zero by zero will apply equally well.

That’s an interesting one, because if you approach the origin along any straight line (other than the line x=0), you’ll find a limit of 1. In fact, I’m pretty sure that if you follow any polynomial path to the origin, you still get one. Nonetheless, there still exist paths to the origin which will give you any limit you like, between 0 and 1.
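This is easy to see numerically; here is a sketch (the second path below is one of my own choosing) showing a straight line giving 1 while another path pins the value at 0.5 all the way to the origin:

```python
import math

def f(x, y):
    return x ** y

# Along the straight line y = 2x, the values approach 1:
for t in [1e-2, 1e-4, 1e-8]:
    print(f(t, 2 * t))

# Along the path y = ln(0.5)/ln(x), x**y equals 0.5 at every point:
for t in [1e-2, 1e-4, 1e-8]:
    print(f(t, math.log(0.5) / math.log(t)))
```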

This shows why epsilon-delta proofs are essential to a “real” understanding of limits in multiple variables. Unfortunately, many (if not most) schools end up taking a shortcut.

Basically, the standard is to be able to prove a limit exists with a certain value by building it up from constant, sum, product, and quotient limit rules; or to prove it doesn’t by finding two (usually polynomial) paths with different limits. We do teach examples where all straight lines agree, but a parabolic path doesn’t, for instance. This is a good example of how horribly wrong things can (and in fact “almost always” do) go.
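A standard textbook instance of “all straight lines agree but a parabola disagrees” is f(x, y) = x²y/(x⁴ + y²); a quick numeric check:

```python
def g(x, y):
    # Every straight line through the origin gives limit 0,
    # but along the parabola y = x**2 the value is identically 1/2.
    return x * x * y / (x ** 4 + y * y)

for t in [1e-1, 1e-3, 1e-6]:
    print(g(t, 3 * t), g(t, t * t))  # first column -> 0, second is 0.5
```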

Look at what happens to it in the left half-plane. Why worry about a discontinuity at the origin?

I don’t think any of these present reasons for believing that 0[sup]0[/sup] is undefined. First, we only ask that x[sup]a[/sup]x[sup]b[/sup]=x[sup]a+b[/sup] for all x, a, b such that both sides of the equation are defined. This prevents the step of Shade’s “proof” where we pass from 0[sup]1[/sup]·0[sup]-1[/sup] to 0[sup]0[/sup] (not to mention the step that RM Mentock pointed out where we passed from 0 to 0[sup]-1[/sup]).

The same problem applies to Mathochist’s “proof”. We can only use the identity “x[sup]y[/sup] = exp(y ln(x)) for all x, y” when both sides of the equation are defined. Since ln(x) isn’t defined when x=0, this equation can’t be used to argue against (or for) the definedness of 0[sup]0[/sup].

Finally, the observation about limits just means that we give up hope that f(x,y) = x[sup]y[/sup] is continuous at the origin. That doesn’t mean that it has to be undefined at the origin.

See the sci.math FAQ entry on 0[sup]0[/sup] for more discussion.

Well, I think the germ of x[sup]y[/sup] at the origin is an interesting pathology, more so than that along the branch-cut.

The first “proof” was more an idea for why something must go wrong, and that it’s intimately tied to 0/0.

The second does prove that it must be undefined, or rather that no definition suffices, because for any number in C there is a path in C[sup]2[/sup] such that x(t)[sup]y(t)[/sup] converges to that number as (x,y)->(0,0) along it.
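For a concrete instance of that claim: take any target w ≠ 0 and follow x(t) = t, y(t) = log(w)/log(t); then x[sup]y[/sup] is identically w while (x, y) → (0, 0). A numeric sketch using the principal branch (the function name is mine):

```python
import cmath, math

def power(x, y):
    # Principal branch: x**y = exp(y * log(x)); fine for x != 0.
    return cmath.exp(y * cmath.log(x))

# A path to (0,0) along which x**y is constantly -1:
target = -1 + 0j
for t in [1e-2, 1e-4, 1e-8]:
    y = cmath.log(target) / math.log(t)  # y -> 0 as t -> 0
    print(power(t, y))  # stays at -1 (up to rounding)
```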

For my next trick:
[ul]
[li]Rotando and Korn’s rationale is unconvincing to me because, outside of complex analysis, almost all interesting structures are less than analytic. C[sup]infinity[/sup] manifolds are nice; analytic ones are boring. The fact that only one side of the limit is being taken shows that the composed function f(x)[sup]g(x)[/sup] is far from analytic itself.[/li]
[li]Graham, Knuth, and Patashnik are arguing from the perspective of combinatorialists and computer scientists. Yes, in combinatorics 0[sup]0[/sup]=1 makes most formulae work out more elegantly. This doesn’t mean that it should be defined as such without specific commentary.[/li]
[li]Kahan just rehashes R&K.[/li]
[li]The next “rationale” is an argument from authority: “Consensus has been built around 0[sup]0[/sup]=1. Take our word for it.”[/li]
[li]Libri recounts both arguments again.[/li]
[/ul]
So, your link basically only advances two arguments, neither of which is very convincing outside its specific field. The upshot is that 0[sup]0[/sup] may well be taken to have the value 1, but only with specific mention of the fact and the reason why this is being done.

Also, none of this really responds to what touched off this branch: my refutation that “differential calculus was developed … just to deal with forms like 0/0”. I think if anything it shows that forms like 0/0 haven’t been “dealt with”. In certain situations there’s an obvious value to use to ignore them, but in general it gets hairy.

What do you mean by “no definition suffices”? True, no definition suffices to make x[sup]y[/sup] continuous, but why throw the baby out with the bath water? Just because we can’t assign a value which makes the function continuous doesn’t mean we can’t assign any value at all. It certainly doesn’t mean that any contradictions result from assigning the value 0[sup]0[/sup] = 1 (as evidently claimed by you and Shade).

Functions are defined by convenience. Having a function be continuous is a great convenience, but once there is no hope of that, we can turn to other considerations when deciding how to define the function. In some cases, assigning any value at a particular point leads to contradiction (as with 0/0), but no one has shown that that is the case with 0[sup]0[/sup]. You have already conceded that 0[sup]0[/sup] = 1 is convenient in some fields. Unless you know of some other assignment that confers some convenience in another field, why do you resist that definition?
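One such convenience, for what it’s worth: n[sup]k[/sup] counts the functions from a k-element set to an n-element set, and there is exactly one function from the empty set to itself (the empty function). A sketch (the brute-force counter is my own):

```python
from itertools import product

def num_functions(domain_size, codomain_size):
    # Count maps by enumerating every assignment of a codomain value
    # to each domain element.
    return sum(1 for _ in product(range(codomain_size), repeat=domain_size))

print(num_functions(0, 0))  # 1: the empty function, matching 0**0 == 1
print(num_functions(3, 2))  # 8 == 2**3
```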

I never claimed that any contradiction would arise, but you must concede that no contradiction will arise from choosing any other value either. Mind you, the function is not merely discontinuous at zero. The germ of x[sup]y[/sup] at the origin is extremely pathological.

Because it’s highly non-canonical. If you want to use it as a rule, you must state it and explain why you’re using it in this situation. This is the way mathematics works: I don’t have to show another area where a different definition would be more elegant (though I’m sure that I could construct a pathological example); I just have to show that an alternate definition would not lead to contradiction. If it’s not naturally universal, don’t say it’s a universal feature.

Actually, what I most advocate is leaving it specifically undefined. That is: something which seems to evaluate to 0[sup]0[/sup] must be considered carefully to see what its “real” value is. Not everything in the form 0[sup]0[/sup] is equal to 1, as your earlier link clearly admits (C[sup]infinity[/sup] is not a good enough hypothesis for the analytic argument).

Just in case, I’ll reiterate again that I’m not saying not to use 0[sup]0[/sup]=1, but that when you use it you should make specific note of the fact and explain why it’s justified.

Almost forgot: x[sup]y[/sup] is not defined “by convenience”. It is defined as

x[sup]y[/sup] = exp(y log(x))

where exp and log are defined as the algebraic exponential for the field C and its inverse on a suitable Riemann surface, respectively. This definition is the closest to an analytic function extending the definition by iterated multiplications on (Z[sup]+[/sup])[sup]2[/sup]. That function has no continuation to the origin. This is similar to the case of the extension of the factorial function on N to C, and its behavior at the negative integers. The character of the germ is far worse, of course.
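That definition can be sketched directly (a toy version only; cmath’s principal branch stands in for the log on its Riemann surface):

```python
import cmath

def cpow(x, y):
    # x**y = exp(y * log(x)) on the principal branch; log(0) does not exist,
    # so this definition simply has no value at x = 0.
    return cmath.exp(y * cmath.log(x))

print(cpow(2, 0.5))  # ~ sqrt(2)
try:
    cpow(0, 0)
except ValueError:
    print("undefined at x = 0")
```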