Why does 0! = 1 ?

I can’t wrap my head around this (we don’t define, say, n/0, but we do define 0!) and hope there is a relatively simple, clear reason that a non-mathematician can understand.

This is really two questions. First, my limited understanding of a factorial is that it is the product of all positive integers less than or equal to n. Zero (again through my limited understanding) is neither positive nor negative – hence the first positive integer is 1, so multiplying by zero won’t enter the picture and render the factorial moot.

So then, why is 0! defined as 1? What is the mathematical rationale for that?

Second, I understand the absurdity of i, but have seen instances where formulas require taking the square root of a negative number. Is there such an instance where 0! is required in a formula or solution?

Sorry if my question is poorly phrased or grossly obvious. I loves me some math and the various abstractions it entails, but I am wholly lacking in the precise vocabulary and in-depth understanding.

Thanks,

Rhythm

Because it makes other formulas that involve ! work out if we define 0! to be 1.

Because 1 is the “starting point” when you’re talking about multiplication.

Because it preserves the rule that n!*(n+1) = (n+1)!, or that n! / n = (n-1)!
For example, 3! / 3 = 2!, 2! / 2 = 1!, and 1! / 1 = 0!
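
To see why that last step needs a value for 0!, here’s a quick Python sketch (just an illustration, not anything from the posts above) of the usual recursive definition; the base case 0! = 1 is exactly what lets n! = n * (n - 1)! bottom out:

```python
def factorial(n: int) -> int:
    """n! via the recursion n! = n * (n - 1)!, with 0! = 1 as the base case."""
    if n == 0:
        return 1  # 0! = 1: without this base case the recursion would never stop
    return n * factorial(n - 1)

# The stepping-down rule n! / n = (n - 1)! from above:
assert factorial(3) // 3 == factorial(2)  # 3! / 3 = 2!
assert factorial(2) // 2 == factorial(1)  # 2! / 2 = 1!
assert factorial(1) // 1 == factorial(0)  # 1! / 1 = 0! = 1
```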

Not why exactly, but some examples of why it’s good that it does here: Factorial - Wikipedia

Off the top of my head: in the formulas for combinations and permutations, and in the formula for the nth term of a Taylor series.
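
To make the combinations case concrete (a small illustrative sketch; the helper name n_choose_k is mine): the formula C(n, k) = n! / (k! (n - k)!) only gives the right answer at k = 0 and k = n because 0! = 1.

```python
from math import comb, factorial

def n_choose_k(n: int, k: int) -> int:
    # Illustrative helper: the standard combinations formula n! / (k! * (n - k)!)
    return factorial(n) // (factorial(k) * factorial(n - k))

# Choosing all 5 items, or none of them, can be done in exactly one way,
# and the formula only says so because factorial(0) == 1.
assert n_choose_k(5, 5) == comb(5, 5) == 1
assert n_choose_k(5, 0) == comb(5, 0) == 1
```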

Near as I can tell, 0! = 1 because mathematicians consider the product of zero numbers to be one. Looks like a lot of math operations work out best if we accept that premise. But why?

From that link, I think I can distill an explanation something like this.

We know that

A*B = A*B*1

for any A and B. So now, if we “divide out” the A*B’s from the above equation, we get something like

(nothing) = 1

where by “(nothing)” I mean to denote no numbers at all.

I’m not entirely satisfied with that explanation, since it relies, I’m afraid (but not sure), on confusing the distinction between operations on numbers and activities concerning written numerals, but if that worry went past you, then I wouldn’t worry about it.

-FrL-

ETA Maybe I should have put the explanation this way instead?

A = A*1 <—Identity property
A*(nothing) = A*1 <—read “A times no other numbers multiplied together equals A times one”
(nothing) = 1 <—read “no numbers multiplied together equals one.”

0! is an instance of an empty product, i.e. the multiplication of no numbers; its numerical value is 1 for essentially the same reason that the empty sum is taken to be 0: each is the identity element of its operation (the wiki article elaborates on that somewhat).
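
Python happens to follow the same convention, which gives a quick way to poke at it (purely illustrative; math.prod needs Python 3.8 or later):

```python
from math import prod  # Python 3.8+

assert prod([]) == 1  # empty product: multiplying no numbers gives 1
assert sum([]) == 0   # empty sum: adding no numbers gives 0

# Define n! as the product of 1..n; 0! is then the product of nothing, i.e. 1.
def factorial(n: int) -> int:
    return prod(range(1, n + 1))

assert factorial(0) == 1
assert factorial(4) == 24
```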

Alternatively, you can turn to the Gamma function as a generalized notion of factorial; since Gamma(n) = (n - 1)! and Gamma(1) = 1, 0! = 1.
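
For anyone who wants to check the Gamma route numerically, here’s a tiny sketch (again, just an illustration) using Python’s math.gamma:

```python
from math import factorial, gamma, isclose

# Gamma(n) = (n - 1)! at the positive integers, so Gamma(1) = 0! = 1.
assert isclose(gamma(1), 1.0)
for n in range(1, 8):
    assert isclose(gamma(n), factorial(n - 1))
```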

Another way of thinking about it: n! is the number of ways of arranging n different things, so 0! would be the number of ways of arranging 0 things. How many ways can you arrange 0 things? Well, it’s 1, because there’s just the empty arrangement: ().

Note: if the n things are the first n positive integers, then:
The 1! way of arranging 1 thing is: (1)
The 2! ways of arranging 2 things are: (1,2) and (2,1)
etc.
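
itertools makes that counting argument concrete (another throwaway sketch): asking for every arrangement of zero things returns exactly one arrangement, the empty one.

```python
from itertools import permutations
from math import factorial

# There is exactly one arrangement of zero things: the empty tuple ().
assert list(permutations([])) == [()]

# More generally, the number of arrangements of n things is n!.
for n in range(5):
    assert len(list(permutations(range(n)))) == factorial(n)
```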

Thanks for the insight here – I have a lot of reading to do (there are now several open Wiki pages to be gone through, though I’m not sure how deep into the Taylor series I’ll get, save renting several PotA movies). I think an initial block was how “you can’t divide by zero, so don’t even try… it’s undefined!” was drilled into my head at an early age. Or maybe I’d make a lazy mathematician and I’d use that as a cop-out left and right. “You can’t take A * , that’s undefined!”

Aren’t we all glad I’m not in charge of mathematical progress?

Anyway, thanks for the replies!

I (heart) the Dope

Perhaps a slightly more intuitive way of dealing with the empty product is using exponentiation:

Given that x[sup]m[/sup]/x[sup]n[/sup] = x[sup]m-n[/sup]

and x[sup]m[/sup]/x[sup]m[/sup] = 1,

it follows that 1 = x[sup]m[/sup]/x[sup]m[/sup] = x[sup]m-m[/sup] = x[sup]0[/sup],

and with the interpretation of exponentiation as successive multiplication, x[sup]0[/sup] is the same as multiplying no numbers.
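
The same bookkeeping can be checked numerically (a throwaway Python sketch, nothing more): the quotient rule forces x[sup]0[/sup] = 1 for any nonzero x.

```python
# x**m / x**m equals x**(m - m) = x**0, which forces x**0 == 1 for nonzero x.
for x in (2, 3.5, 10):
    m = 4
    assert x**m / x**m == x**(m - m) == x**0 == 1
```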

Somebody has to say this.

There’s nothing absurd about i. It’s as “real” and as meaningful and as useful and as absolute as any integer or fraction or irrational number. Integers are just a special case of the more complete complex numbers. They’re the truncated, scrawny version of reality. If anything, integers are absurd, not i.

Nah, integers are cool. Sometimes you have to count discrete objects, you know? But rational numbers are freaky because I don’t think they really exist in the real world. They’re like nice neat approximations though. But, depending on your point of view, real numbers in general are quite absurd. Most people don’t realize it, but if you have a problem with i, you haven’t really thought about real numbers. Threads like these (and the .9999…=1? threads) illustrate that well, I think.

In this context, I’d think “absurd” means not being a sensible model of anything in the real world.

Natural numbers (i.e., the nonnegative integers) are not absurd, because they are a model of counting discrete objects in the real world, e.g., 3 is a good model of a herd of three goats.

Real numbers are a good model of distances in the real world: if you take a unit of length, such as a cubit or metre, then not all distances are an integer multiple of the unit.

It is harder to find something that imaginary numbers are a model of. They are a useful model of some phenomena in electromagnetism, but early mathematicians such as Pythagoras and Euclid would not have seen a use for imaginary numbers in the real-world objects they studied. So, for me, imaginary numbers are not absurd, but integers are far less absurd than they are.

Is to.

It’s an imaginary number. While it may make perfect sense eventually, and have plenty of good uses, it’s still the type of thing Douglas Adams would have written.

:stuck_out_tongue:

Pythagoras and Euclid didn’t see a use for integers in the real world either. But they were wrong.

The only reason people think imaginary numbers are more absurd than integers is because they learned integers in the 3rd grade and imaginary numbers in high school or college. They’ve just had more time to get used to the idea of negative numbers than the square roots of negative numbers. Plus most people have less of a reason to use imaginary numbers in real life than integers. Some of us do use imaginary numbers in real life though and there is really no reason to think they are more absurd than integers.

Math is entertaining to me. Learning about numbers is like watching a really involved soap opera and learning about how crazy all the characters are and their interactions with each other. That’s how I used the term ‘absurd’ in my last post. I think it is a fool’s errand to try and rank numbers in order of absurdity though.

Did you mean to say “a use for real numbers”?

-FrL-

The point may be that the ancient Greeks certainly used the natural numbers, but they may not have developed the negative integers. I suspect that the first real use of negative integers was in bookkeeping and accounting, where they are useful to model liabilities versus assets. I don’t think the ancient Greeks got to that level in bookkeeping: that sort of model was developed in the late middle ages.

I think that this is a good point. What numbers “make sense” changes as our view of the purpose of numbers changes.

When we think of numbers as corresponding to countable collections of objects, only positive integers (i.e., “natural numbers”) make sense.

When we think of numbers as corresponding to countable parts of objects, positive rational numbers make sense.

When we think of numbers as corresponding to distances, positive real numbers make sense.

When we think of numbers as corresponding to assets we hold and debts we owe, negative numbers also make sense.

When we think of numbers as solutions to equations, then imaginary and complex numbers make just as much sense.

But in school we spend a long time thinking of numbers as all those other things before we think of them as solutions to equations, so for many people that last case never feels quite “natural” – even though those equations often represent real things about the world, just as much as the fact that your bank balance is negative may represent an unfortunately real fact about the world.

Is it the same reason x[sup]0[/sup] = 1?

Yes. No xs, all multiplied together, equals one.

-FrL-

Or equivalently, to preserve the relationship x[sup]n[/sup] = x[sup]n + 1[/sup]/x. (Thus, x[sup]0[/sup] = x[sup]1[/sup]/x = x/x = 1)

Whichever explanation for 0! = 1 you prefer, I think you can give an analogous explanation for x[sup]0[/sup] = 1
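
And to close the analogy with something runnable (purely illustrative): stepping both rules down to zero lands on 0! = 1 and x[sup]0[/sup] = 1 in exactly the same way.

```python
from math import factorial

x = 7
for n in range(3, -1, -1):
    # Step down: n! = (n + 1)! / (n + 1) and x**n = x**(n + 1) / x, all the way to n = 0.
    assert factorial(n) == factorial(n + 1) // (n + 1)
    assert x**n == x**(n + 1) // x

assert factorial(0) == 1 and x**0 == 1
```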