Why does 0! = 1 ?

That makes more sense than the way it was explained to me:

10[sup]3[/sup] = 1000
10[sup]2[/sup] = 100
10[sup]1[/sup] = 10
10[sup]0[/sup] = 1

Can anyone think of a real world example?

And are there negative factorials? (-n)!

Yes. As someone said above, the number of permutations of n objects is n!. In particular, the number of permutations of 0 objects is 1. Is this real world enough?

No negative integer has a factorial, by any mainstream definition, simply because you’d want (-n)! * (-n + 1) * (-n + 2) * … * -1 * 0 = 0! = 1, which would mean something * 0 = 1. However, the Gamma function does readily give a natural extension of factorial to all complex numbers other than negative integers (specifically, define n! as Gamma(n+1)).
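If you want to poke at this yourself, Python's standard library exposes the gamma function; here's a quick sketch of mine (not from anyone above) showing that Gamma(n+1) matches n! at the non-negative integers, extends to non-integers, and blows up at the negative integers:

```python
import math

# Gamma(n + 1) agrees with n! at the non-negative integers:
for n in range(6):
    assert abs(math.gamma(n + 1) - math.factorial(n)) < 1e-6

# It extends factorial to non-integers, e.g. "(1/2)!" = sqrt(pi)/2:
print(math.gamma(0.5 + 1))   # ≈ 0.8862

# But negative integers are poles of Gamma: "(-1)!" = Gamma(0) is undefined.
try:
    math.gamma(0)
except ValueError:
    print("Gamma(0) is undefined, as expected")
```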

No, the gamma function (the continuous version of the factorial function) blows up at negative integers.

There is a ready “natural” interpretation of complex numbers as “real world” quantities that somehow isn’t generally stressed to the layman: complex numbers correspond to two-dimensional vectors (the idea of the complex plane), which in turn correspond to transformations of 2d space via simultaneous scaling and rotation (specifically, a vector’s polar coordinates express it in terms of scaling and rotation of the unit vector at angle 0). Addition of complex numbers is just addition of vectors; multiplication of complex numbers is just composition of transformations (or can be thought of as a kind of vector multiplication which adds angles and multiplies lengths).
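The "multiply lengths, add angles" picture is easy to check numerically; a small sketch (my own illustration, using Python's built-in complex numbers):

```python
import cmath
import math

# A complex number packages a scaling (magnitude) and a rotation (angle);
# multiplying two of them multiplies magnitudes and adds angles.
z = cmath.rect(2, math.radians(30))   # length 2, angle 30 degrees
w = cmath.rect(3, math.radians(60))   # length 3, angle 60 degrees

r, phi = cmath.polar(z * w)
print(r)                   # ≈ 6.0  (2 * 3)
print(math.degrees(phi))   # ≈ 90.0 (30 + 60)
```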

As I understand it, the electrical engineering applications largely come from this latter viewpoint, where a complex number is a combination of magnitude and phase information.

As others have pointed out, there’s no division by zero involved in getting to 0!; you only start dividing by zero when you try to calculate (-1)!.

That is, n! = (n+1)!/(n+1), so we have that 0! = 1!/1 = 1, but (-1)! = 0!/0 = ? = “don’t even try… it’s undefined!”
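Here's that downward recurrence as a few lines of Python, for the curious; the last legal step lands on 0! = 1, and the very next one is literally a division by zero:

```python
# Run the recurrence n! = (n+1)! / (n+1) downward from 3! = 6:
f = 6.0
for n_plus_1 in (3, 2, 1):
    f = f / n_plus_1        # 3! -> 2! -> 1! -> 0!
print(f)                    # 1.0, i.e. 0! = 1

# One more step would be (-1)! = 0!/0:
try:
    f / 0
except ZeroDivisionError:
    print("(-1)! would require dividing by zero")
```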

And the same goes for x*0 = 0, for "All As are Bs" being true when there are no As, for "A implies B" being true when A is false, for the empty intersection being the collection of everything, etc.

The general idea is that empty monoidal products are identity elements (well, it’s a little more general [e.g., empty compositions in any category are identity maps], but this seems an appropriate level of abstraction for now).

To put it in more accessible terms, suppose you have some operation that you can apply to finite lists of things to get other things (e.g., sum, product, intersection, composition, concatenation, what have you); we’ll denote it by brackets, so that [1, 2, 7] = 10 when the operation is addition. Suppose also that your operation is “associative”, by which I mean that any two expressions built up from your operation which list the same inputs in the same order give the same result. (For example, with addition, [1, [2, 3]] = 6 = [[1, 2], 3]). In that case, you can carry out the following reasoning:

[A, []] = A = [[], A] (since these expressions all have the same inputs in the same order).

Thus, [] is what is known as an identity element: when “combined” with anything else, it leaves it unchanged. This uniquely specifies which element it must be, since any two identity elements must be the same (as seen by considering the result of “combining” the two of them).

For addition of numbers, of course, the identity element is 0, so the empty sum must be 0. For multiplication of numbers, the identity element is 1, so the empty product is 1. For disjunction of truth values, the identity element is False, so the empty disjunction is False. For concatenation of strings, the identity element is “”, so the empty concatenation is “”. And so forth… Whenever an operation looks associative, it probably has the right structure to be fruitfully thought of as having a 0-ary instance as its identity element, per the above argument.
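Programming languages bake these conventions in, as it happens; a quick standard-library Python check (my own sketch) of exactly the examples above:

```python
import math
from functools import reduce

assert sum([]) == 0          # empty sum -> additive identity
assert math.prod([]) == 1    # empty product -> 1 (the 0! convention)
assert any([]) is False      # empty disjunction -> False
assert all([]) is True       # empty conjunction -> True
assert "".join([]) == ""     # empty concatenation -> empty string

# A fold makes the pattern explicit: seed with the identity element.
assert reduce(lambda a, b: a * b, [], 1) == 1
print("every empty operation gives its identity element")
```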

(Note: The argument I give is similar to Frylock’s, but has the significant difference that it does not depend on ‘cancellability’ of terms from equations (which fails for disjunctions/conjunctions of truth values, unions/intersections of sets, etc.))

He was making an analogy: We defined one thing but not the other.

Oh, on re-reading, I think your interpretation is correct. My apologies.

Eh, you shouldn’t worry about it either; rather than worrying that your argument is too mired in the irrelevant details of little marks of ink on paper, you can think of “multiplication-expressions” as natural, abstract mathematical objects in their own right (e.g., as rooted finite trees whose leaves are labelled with numbers, with every internal node denoting the product of its children), and then your argument remains a “clean” one.

(I know I’ve serial-posted this thread to all hell by now, but just to relate the mandatory anecdote which somehow no one else has posted yet, I first misread the title as “Why does 0 != 1 ?”. That might also have been an interesting conversation, though along very different lines…)

That link explains something I was wondering about: when pressing the factorial button on my calculator, it returned answers for non-integer numbers, e.g. 5.2! was somewhere between 5! and 6!, but I couldn’t work out how it was calculated based on the definition of factorial I knew.
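For what it's worth, that interpolated value is the gamma function again; a quick sketch (assuming the calculator computes Gamma(x+1), which is the usual choice):

```python
import math

val = math.gamma(5.2 + 1)    # the usual meaning of "5.2!"
print(val)                   # ≈ 169.4, between 5! = 120 and 6! = 720
assert math.factorial(5) < val < math.factorial(6)
```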

Not really. Any other examples?

What is your objection to it? What, for you, would be a real-world example of the fact that, e.g., 5! = 120? Of course, all math is abstraction, so this is really just a matter of “What applications is factorial good for? Alright, now observe what the appropriate value of 0! would be for those applications”.

Anyway, here’s another example (though it’s really the same one): put n marbles in a jar, each with a different label. Shake it up, and pull marbles out one by one. Now put them all back in, and do it again. What’s the probability that you’ll get the same result both times? 1/n!. If the jar is empty, what’s the probability that you’ll get the same result both times? 1. Thus, 1/0! = 1; ergo, 0! = 1.
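The marble experiment can be checked exhaustively for small jars; a brute-force sketch of mine, with `itertools.permutations` standing in for pulling the marbles out in order:

```python
from itertools import permutations
from math import factorial

# P(two random draw-orders of n labelled marbles agree) = n!/(n!)^2 = 1/n!
for n in range(5):
    orders = list(permutations(range(n)))
    matches = sum(1 for a in orders for b in orders if a == b)
    prob = matches / len(orders) ** 2
    # holds at n = 0 too: one (empty) order, so the probability is 1
    assert abs(prob - 1 / factorial(n)) < 1e-12
```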

Here’s another contrivedly “real world” example: suppose you have a bunch of cars on separate lanes: Car 0, Car 1, Car 2, Car 3… . They’re electronically hooked up so that, at any moment, Car X + 1’s speed is controlled by Car X’s position; specifically, Car X + 1’s speedometer (in MPH) will match Car X’s odometer (in miles). As for Car 0, it stays at rest forever. All the cars have their odometers set to 0 to start with except for Car 0, whose odometer is set to 1. Then, we let them loose for 1 hour. After that, what happens? Well, the odometer on each Car N will read 1/N! at that point. But what will the odometer on Car 0 read at that point? Obviously, having started at 1 and never moved, it will read 1. Thus, 1/0! = 1; ergo, 0! = 1.
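You can even simulate the cars numerically; this is a rough Euler-method sketch (the step size and number of cars are arbitrary choices of mine), and the odometers do creep toward 1/N!:

```python
from math import factorial

CARS = 6
odo = [1.0] + [0.0] * CARS   # Car 0 pinned at 1; the rest start at 0
dt = 1e-4                    # hours per simulation step
for _ in range(int(1 / dt)): # simulate one hour
    # update back-to-front so each car sees the previous car's old reading
    for n in range(CARS, 0, -1):
        odo[n] += odo[n - 1] * dt   # speed of Car n = odometer of Car n-1

for n, reading in enumerate(odo):
    print(n, round(reading, 4), "vs 1/%d! =" % n, round(1 / factorial(n), 4))
```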

It relies on the analogy of there being 1 set of zero things, which isn’t intuitively more obvious than there being zero sets of zero things. I was hoping there would be some other real world application that would involve a different analogy. But I guess factorials deal with whole numbers and therefore wouldn’t have any real world applications that didn’t boil down to discrete counting.

This boils down to the same thing, but seeing it as a probability is slightly more intuitive to me than seeing it as a number of sets. Thanks.

That’s not a really useful example, because you are defining 0 and 1 to be equivalent in the problem itself. What would happen if car zero were set to 0 or to 25? Would 0! be equal to some other number than 1?

If you set it to something other than one, then the rest of the cars would not end up reflecting the sequence of factorial numbers. The way to get them to do so is to set car zero to 1.

-FrL-

What’s unintuitive about it? If you have a finite set, there’s exactly one subset with zero elements.

For perhaps a clearer example, consider the number of ways to permute a list of length n. For any non-zero n, it’s clearly n!, and there’s only one permutation of a zero-length list, so why not define 0! = 1?
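Python's standard library agrees, for what it's worth: `itertools.permutations` of a zero-length list yields exactly one (empty) permutation, and the counts match n! across the board:

```python
from itertools import permutations
from math import factorial

print(list(permutations([])))   # [()] -- one permutation of nothing
for n in range(6):
    assert len(list(permutations(range(n)))) == factorial(n)
```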

Someone unfamiliar with set theory is not likely to think it intuitive that the empty set is a subset of a non-empty set. (Heck, lots of students take at least a week or so to get past problems with the notion of the empty set itself.) A subset of a set should have some of the set’s elements in it, “intuitively” speaking.

Again, the idea of permuting nothing at all is not intuitive outside set theory.

Once you grasp the definitions, you see how it has to work out this way, but IME, there is often a struggle here when introducing these ideas to others.

-FrL-

I was afraid this objection might be raised. As Frylock said, if you start Car 0’s odometer off at something other than 1, then Car N’s odometer won’t end up reading 1/N!. Specifically, if you start Car 0’s odometer at k, then Car N’s odometer will end up reading k/N!. (So another way to define N! is as the invariant ratio between Car 0’s odometer and Car N’s odometer after the hour is up, no matter where Car 0 is started, still yielding 0! = the ratio between Car 0’s odometer and itself = 1).

Because, as Frylock pointed out, the mathematical definition of a set does not behave the same way as the real-world, common-language idea of a set, which is basically ‘a group of objects’. And intuitively, saying that there is “no way to arrange a group of nothing” makes as much sense as, if not more than, “there is one way to arrange a group of nothing”.

This may have to do with the fact that the focus is on the contents of the set in real life, whereas in math the focus is equally on the “container”. I guess if I think about it this way, it does make more sense. It doesn’t seem logically rational to me to talk about the number of ways to arrange nothing, but it does make some sense to me to think about the number of ways to arrange a box full of nothing. I guess that’s the empty set. Is there such a thing as a non-set? Analogous to both no object and no container? lol

The other reason it’s conceptually difficult is that it combines the ideas of continuity (factorials are a series of sorts) and discreteness (they are a limited set of whole numbers). Kind of like, when graphing an equation, if instead of a continuous line you were only allowed to graph a series of dots spaced equally apart. It’s harder to tell where the next dot will go than where a curve would go.