Why does 0x0 not equal 0/0?

I give different kinds of explanations at different times, but I’ve been going with the pattern explanation for multiplications in this thread, so I’ll go with a pattern explanation for factorials as well. (Thudlow Boink already covered this, but, nonetheless, I’m going to do it again, because I’ve become fond of writing out these tables for some reason…)



—————————————————————————
| 1 | 2 | 3 | 4 | 5 | 6 |
—————————————————————————
| 1 | 2 | 6 | 24|120|720|
—————————————————————————


Here we have a factorial table. The top row’s pattern here is obvious. As for the bottom row, look at this pattern: going to the right always multiplies you by the cell diagonally to your up-right (for example, in going from 6 to 24, we multiply the 6 by the 4 to its up-right). And going to the left always divides you by the cell right above you (for example, in going from 24 to 6, we divide the 24 by the 4 above it).

Suppose you wanted to extend this table to the left preserving this pattern. What should go to the left of the bottom-left 1? Well, our pattern says, going to the left means dividing by the cell above you. So we should get 1/1 = 1. We get this table:



—————————————————————————————
| 0 | 1 | 2 | 3 | 4 | 5 | 6 |
—————————————————————————————
| 1 | 1 | 2 | 6 | 24|120|720|
—————————————————————————————


And so, you see, we have 0! = 1, if we’re to continue these nice patterns.
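If it helps to see that extend-the-table move mechanically, here’s a quick Python sketch of the same idea (purely an illustration; the variable names and the range are just choices for the example):

```python
# Factorial row for n = 1..6: going right multiplies by the next n.
row = {1: 1}
for n in range(2, 7):
    row[n] = row[n - 1] * n   # ends up {1: 1, 2: 2, 3: 6, 4: 24, 5: 120, 6: 720}

# Going left divides by the cell right above you, so the entry to the left
# of the bottom-left 1 is that 1 divided by the 1 above it:
row[0] = row[1] / 1
print(row[0])   # 1.0, i.e. 0! = 1
```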

(Circling back to this thread, going to the left once more, we would get (-1)! = 1/0 = ∞. And, indeed, we’ll have ∞ as the value of n! for each negative integer n. Yes, it’s very reasonable to think of x/0 for nonzero x as ∞ in many circumstances; this is the concept of projectively extended numbers (or affinely extended numbers in some circumstances where it’s useful to distinguish positive and negative infinities; anyway, I won’t get into all that just now).)

Ah, I knew Thudlow Boink had already beaten me to the punch, but now Whack-A-Mole too! Anyway, it’s a good explanation. I forgot all about the fact that we could do color-coding here.

This makes sense to me. Also the table extension. Too bad that when we get to negative numbers the robots’ heads explode.

robots?

When you ask the robot “what number do you multiply by 0 to get 1” the robot’s head asplodes.

It’s only a problem for negative integers. But it turns out that there’s also a way to define factorials for non-integers that makes sense, and that turns out to work for both positive and negative numbers. For instance, (1/2)! = pi^2/6, and (-1/2)! = pi^2/3 (so (-1/2)! * 1/2 = (1/2)! ).

For anyone who’s interested, there’s a formula for a function called the Gamma function that arises naturally in some contexts in calculus, and someone noticed that this function has the property that Gamma(n) = Gamma(n-1)*n for all n, just like factorial does, and that if you take the Gamma function of an integer, you always get the same thing as a factorial. Further, it’s been proven that Gamma is the only smooth function for which this is true (where “smooth” is defined in an appropriate way). So it’s natural to say that it actually is the factorial of a non-integer.

I want you to multiply all the pieces that make up 2.3! :smiley:

There’s nothing special about the shift over by one in the Gamma function; there’s nothing special about it even in the integral that’s sometimes given for it. It somewhat annoys me that everyone keeps talking about it as though that arbitrary historical quirk of a shift by one is what’s key to extending the definition of factorial to fractions. (Bernoulli, Euler, etc., didn’t even start with that shift; it was introduced later for dumb reasons.)

Anyway, for what it’s worth, here’s how the factorial function at values more general than whole numbers is defined:

Suppose you wanted to extend the factorial function to arbitrary arguments. How might you do it?

Well, of course, there are a million ways to do it. (Where “a million” = “infinitely many”). You could say the factorial function is the normal thing at natural numbers, and sqrt(7) everywhere else. This wouldn’t be a very useful extension, but it technically qualifies.

What would make a more useful extension, then? Well, we want an extension that preserves the key properties of the factorial function. For example, that n!/(n−1)! = n.

This still isn’t enough to pin down an extension, though. There are still infinitely many extensions of that sort.

But there’s another interesting property of the factorial function: n!/(n−r)! is the number of ways to pick a sequence of r items from n choices, with no repetition. This is similar to, albeit less than, the number of ways to do it if you allow repetition, n[sup]r[/sup]. And as n gets larger and larger, the probability of repetition gets negligible; we find that the ratio between n!/(n−r)! and n[sup]r[/sup] approaches 1 as n grows large while r is held fixed. (For that matter, the same thing happens to the ratio between n−r and n).

In other words, if the difference between a and b is held fixed while their individual values grow large, then the ratio between a!/b! and b[sup]a−b[/sup] approaches 1.
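A quick numerical sanity check of that claim, if anyone wants one (the choice of r = 3 and of the values of b is arbitrary):

```python
# a!/b! with a = b + r is just the product (b+1)(b+2)...(b+r); compare it to b**r.
r = 3
for b in [10, 100, 1000, 10000]:
    falling = 1
    for k in range(1, r + 1):
        falling *= b + k
    print(b, falling / b ** r)   # the ratio creeps toward 1 as b grows
```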

This is a very useful property. If we continue to demand this for our extension, on top of the basic 0! = 1 and n!/(n−1)! = n, we will pin down a unique function, like so:

n! = n!/(n+d)! × (n+d)!/d! × d!

If d is a natural number, then n!/(n+d)! and d! are easy to calculate as rising products; combining these two factors produces 1/(n+1) × 2/(n+2) × … × d/(n+d).

Furthermore, our newest demand is that the middle factor, (n+d)!/d!, become replaceable with d[sup]n[/sup] as d grows large.

Thus, we have that n! is the limit, as the natural number d grows large, of d[sup]n[/sup] × 1/(n+1) × 2/(n+2) × … × d/(n+d). And this definition makes sense for all kinds of n, not just natural numbers. (When n is a negative integer, there will be a division by zero, but for all other complex numbers, this limit will be well-defined)
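For what it’s worth, here’s a small Python sketch of exactly that limit (the function name and the particular values of d are just choices for the example):

```python
def factorial_limit(n, d):
    # d**n * 1/(n+1) * 2/(n+2) * ... * d/(n+d), for a natural number d
    result = d ** n
    for k in range(1, d + 1):
        result *= k / (n + k)
    return result

# The approximations settle down as d grows, even for non-integer n:
for d in [100, 10000, 1000000]:
    print(d, factorial_limit(0.5, d))    # creeps toward 0.886227...
print(factorial_limit(4, 1000000))       # ~24.0, so whole numbers still work out
```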

This defines the usual extension of the factorial function to arbitrary inputs. As we demonstrated, this is the unique way to do so while satisfying our key properties. As it turns out, other definitions accomplish the same effect (and therefore are equivalent to this one); for example, mathematicians will often note that one can define n! via “analytic continuation of the integral from x = 0 to infinity of x[sup]n[/sup]/e[sup]x[/sup] dx”. But there’s no need to introduce the general factorial via this complicated definition when the above simple definition is available instead.
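And, just to connect that integral form back to the same numbers, a rough numerical check (the cutoff at x = 40 and the step count are arbitrary choices for the illustration):

```python
import math

def factorial_integral(n, upper=40.0, steps=400000):
    # Crude Riemann-sum estimate of the integral of x**n / e**x from 0 to
    # infinity (truncated at `upper`); for n > -1 that integral equals n!.
    h = upper / steps
    total = 0.0
    for i in range(1, steps):   # the endpoints contribute essentially nothing here
        x = i * h
        total += x ** n * math.exp(-x)
    return total * h

print(factorial_integral(0.5))   # ~0.8862, matching the limit above
print(factorial_integral(3))     # ~6.0, i.e. 3!
```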

One last note (reiterating what I said up top): mathematicians will also often talk about the so-called “Gamma (Γ) function”. The Gamma function is just this extension of the factorial function, shifted over by one. The shifting over by one is of no importance at all. It’s just a stupid historical convention. So don’t worry about it. All that actually matters is the argument above, constructing and establishing the uniqueness of a suitable interpretation of factorial for general (non-integer) inputs.

This works for fractions, complex numbers, even for matrices, all sorts of things. Its reciprocal converges to a finite value everywhere, even at negative integers (where it becomes zero); thus, we might also say the factorial is well-defined as the reciprocal of zero (i.e., infinity in the projectively extended numbers) at negative integers (and in similar ways for matrices or linear operators with negative integer eigenvalues…), and is well-defined as a finite quantity everywhere else.

Which is precisely why I refrained from mentioning it in my previous post. Well, that, and I can never remember which way the shift goes. But I think I was careful enough about my phrasing to make my post true even with the shift.

Oh, wow, you’re right! Sorry; I read your post too quickly, and saw what I’m accustomed to seeing instead of what was actually there.

Of course, you said “Gamma(n) = Gamma(n-1)*n for all n, just like factorial does” instead of “Gamma(n) = Gamma(n - 1) * (n - 1), just like factorial shifted over does”. I was caught off guard! [But note that the recurrence in the form you gave it isn’t true of the standard shifted Gamma; it’s only true of a Gamma made to match the factorial on-the-nose]

I retract any carping about Chronos’s post (who is generally insightful and on-the-mark about these things anyway), but some may still find it useful or interesting to see the definition of the generalized factorial given elementarily, as above.

There is the mnemonic that the usual Gamma function has a pole at 0. That should pin down the direction of the shift-by-1.

Alas, on more careful reading, I have now noticed some minor new carping I must make about Chronos’s post: it gave wrong values for (1/2)! and (-1/2)!. The actual values are (1/2)! = sqrt(pi)/2, and (-1/2)! = sqrt(pi).
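(For anyone who wants to double-check those with a computer: Python’s standard library happens to ship the shifted Gamma, so n! = math.gamma(n + 1).)

```python
import math

# math.gamma is the shifted-by-one Gamma function, so n! = gamma(n + 1):
print(math.gamma(0.5 + 1))    # 0.886226... = sqrt(pi)/2,  i.e. (1/2)!
print(math.gamma(-0.5 + 1))   # 1.772453... = sqrt(pi),    i.e. (-1/2)!
```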

[Perhaps you were thinking of Zeta(2) = pi^2/6?]

Is Zeta(2) something about sums of reciprocals of something? That might be what I was thinking of.

Yes, zeta(x) = sum from n = 1 to infinity of n[sup]-x[/sup].

Yup, and so, in particular, Zeta(2) = 1/1^2 + 1/2^2 + 1/3^2 + 1/4^2 + …, which turns out to be pi^2/6, which was first deduced by Euler (much more rigorously than people often give him credit for), solving the so-called “Basel problem”. You can see the way this solution works here.
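(And a three-line check of that, if anyone wants to watch the partial sums creep up on pi^2/6:)

```python
import math

total = sum(1 / n ** 2 for n in range(1, 1000001))   # first million terms of the series
print(total)              # 1.6449330...
print(math.pi ** 2 / 6)   # 1.6449340...
```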

Damn good thing too. It’s the only way we’ll be able to remain in charge of Earth.

Or, said another way: ! is a composite operation built out of multiplications, and the multiplicative identity over the integers (or reals) is 1.

A very valuable property of an arithmetic is to have the degenerate case(s) of composite operation(s) collapse to the relevant identity. That keeps the set closed under the operation. In non-technical terms, it avoids kinks and gaps at the expense of maybe creating what looks like, but isn’t really, an exception. Or, as **Indistinguishable** has said several times here, it preserves the pattern.

The fact that k/0 and 0/0 don’t do that is equivalent to the integers (and reals) not being closed under division.
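One tiny illustration of that “degenerate case collapses to the identity” idea, for what it’s worth (this uses Python’s math.prod, available since 3.8):

```python
import math

# A product over no factors at all collapses to the multiplicative identity, 1,
# which is one more way of reading 0! = 1.
print(math.prod([]))          # 1
print(math.prod([1, 2, 3]))   # 6, i.e. 3!
```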

Uh, guys. I think we’ve long-since lost our OP here.

Happens most times in math threads. Oh well.

While there are obviously some very highly educated people posting here, I think something much more to the OP’s point (and understanding) has been entirely overlooked by everyone who replied.

That is, that numbers are like “words” in an entirely symbolic language, used to REPRESENT ideas.

In this case, zero is not a THING. When you have a zero, you have a REPRESENTATION of nothing.

The OP seemed to think from the beginning that if he could see a number written on the page in front of him, then there was SOMETHING there. It was a zero.

The confusion wasn’t so much between English and Math, as it was between what is, or is not, and the way we talk about what is, or is not.

My simile: if our OP saw the name Lisa, written on the page, and was asked “is Lisa where you can see her?” he might have said “yes,” because he was confusing the WORD SYMBOL for Lisa, with Lisa the person.

It’s understanding the difference between having ZERO, and having A ZERO.

This is likely to lead down a rabbit hole of semantic quibbling, but I don’t agree that zero is NOTHING, nor that a “0” is a representation of NOTHING. Zero is a NUMBER. It can be used like other numbers. There are 3 pencils on my desk. There are 0 elephants on my desk. You can add it and multiply it and take its square root. It behaves in all ways like a number. It’s not nothing.