Is there anyone out there who can provide a mathematical proof of the above statement? Or is it the case, as my professor says, that this is true because it’s the definition, and that’s that?

If so- FEH. I hate arbitrary numbers made up to balance equations…

<grumble, grumble> I’ll bump this one Monday morning when all the mathematicians are back. Have a great weekend!

Hmmm… IIRC, defining 0!=1 is useful for series solutions. Anyway, I’m looking in my “Mathematical Physics” book, and I have here that the factorial function is defined as (with the G printed as a capital gamma in my book):

s! = G(s+1) = int(p^s * exp(-p) * dp, 0, inf)

The text says the Gamma function can be used to define factorials of negative and noninteger numbers. The section then goes on to demonstrate the Method of Steepest Descent for evaluating the integral.
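A quick numerical check of that definition, using Python's standard-library `math.gamma` (a sketch; the book itself presumably evaluates the integral analytically):

```python
import math

# The factorial defined via the Gamma function: s! = Gamma(s + 1).
def factorial_via_gamma(s):
    return math.gamma(s + 1)

# At the nonnegative integers this agrees with the ordinary factorial,
# including s = 0, where Gamma(1) = 1 gives 0! = 1:
for n in range(6):
    print(n, factorial_via_gamma(n), math.factorial(n))

# The definition also covers non-integers, e.g. (1/2)! = sqrt(pi)/2:
print(factorial_via_gamma(0.5), math.sqrt(math.pi) / 2)
```

So 0! = 1 falls straight out of Gamma(1) = 1 rather than being bolted on separately.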

So I guess my response to the OP is that 0! isn’t arbitrarily set to 1 while all the other whole-number factorials come from a formula; rather, 0!=1 because that is the value the factorial function, defined this way over the real numbers, takes at zero.

You might want to look up the gamma function, which is the continuous analog of the factorial function. I believe it is defined everywhere except at zero and the negative integers. I would tell you more, but I can’t quite remember it right now.

As for the factorial function: 0! is equal to 1 because n! = n * (n-1)! for n > 0. If 0! were equal to anything else, then the entire sequence of factorials would be thrown off. In particular, n! would not be equal to P(i, i=1 to n), where P denotes the product operator. This is the reason for the definition. The fact that 0! = 1! is not particularly significant.
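That recurrence argument can be sketched directly: the recursion n! = n * (n-1)! needs a base case to terminate, and 0! = 1 is precisely the base case that makes n! come out as the product 1 * 2 * ... * n.

```python
# n! = n * (n-1)!, with 0! = 1 as the base case under discussion.
def factorial(n):
    if n == 0:
        return 1  # any other base value would rescale the whole sequence
    return n * factorial(n - 1)

print([factorial(n) for n in range(6)])  # [1, 1, 2, 6, 24, 120]
```

If 0! were set to anything other than 1, every entry in that list would be multiplied by it, and n! would no longer equal the product of 1 through n.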

This shouldn’t bother you–all definitions are arbitrary. I took a class in predicate logic last spring where we discussed whether a particular sentence was true for all possible meanings of the “greater than” symbol.

Of course, there may be better reasons–anyone who wishes to correct this is more than welcome.

Besides the reasons already given, there’s the “choose” function: let C(n,r) be the number of ways to choose r objects from a set of n distinct objects. For example, C(3,2)=3, since there are three ways to choose two objects from a set of three (the first and the second, the first and the third, or the second and the third). Now whenever 1 <= r <= n-1, the equation C(n,r) = n! / ((n-r)!r!) holds. But we can also define C(n,r) when r=0 or r=n: C(n,0)=C(n,n)=1, since there’s only one way to choose no things or n things from a set of n things. And if we define 0! to be 1, then the formula for C(n,r) still works.
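A small check of that formula at the edge cases (using the stdlib `math.factorial`, with the name `choose` just for illustration):

```python
import math

# C(n, r) = n! / ((n - r)! * r!); with 0! = 1 this formula covers
# the edge cases r = 0 and r = n without any special-casing.
def choose(n, r):
    return math.factorial(n) // (math.factorial(n - r) * math.factorial(r))

print(choose(3, 2))            # 3, as in the example above
print(choose(3, 0), choose(3, 3))  # 1 1: one way to take nothing or everything
```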

And then there’s the Taylor series. The Taylor series for the function e[sup]x[/sup], for example, is 1 + x + x[sup]2[/sup]/2! + x[sup]3[/sup]/3! + … . Every term after the first has the form x[sup]n[/sup]/n! for some n; if we define 0! to be equal to 1, then the first term has that form too.
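A partial sum of that series illustrates the point (a sketch; the cutoff of 20 terms is an arbitrary choice for double precision):

```python
import math

# Partial sum of the Taylor series e^x = sum over n >= 0 of x^n / n!.
# With 0! = 1, the n = 0 term (which is just 1) fits the same
# x^n / n! pattern as every other term.
def exp_series(x, terms=20):
    return sum(x**n / math.factorial(n) for n in range(terms))

print(exp_series(1.0))  # close to math.e
print(math.exp(1.0))
```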

Neither of these is a proof…your professor is correct that 0!=1 by definition.
It’s just that the definition makes an awful lot of sense.