Is it a fundamental rule that a function applied to a null set equals the identity? With multiplication we have n^0 = 1, but does [null] + [null] = 0?
I’d say the car analogy is significantly different and doesn’t boil down to discrete counting. However, if you want an analogy even further along the “continuous/discrete axis” (as, on preview, I see you might):
Suppose you have a car with the property that, at time t, its speed is e^(z*ln(t) - t). Start it off at time t = 0 and let it run for as long as you like… it will asymptotically approach a certain distance from its origin. How far will that distance be? Turns out, some simple calculus will show, if z is a positive integer, the distance will be z!.
What if z is 0? In that case, some further simple calculus will show that the distance will be 1. Thus, compelling grounds to take 0! = 1.
(Obviously, this is just a very, very thin disguise put on the definition of the Gamma function)
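If you want to see the car analogy in action without doing the calculus, here’s a quick Python sketch that just adds up the distance numerically (the function name and step counts are mine; the speed formula is the one from the post):

```python
import math

def total_distance(z, t_max=60.0, steps=200_000):
    # Numerically integrate the car's speed e^(z*ln(t) - t) = t^z * e^(-t)
    # from t = 0 out to t_max, by which point the speed is negligible.
    dt = t_max / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt  # midpoint rule also sidesteps t = 0 exactly
        total += (t ** z) * math.exp(-t) * dt
    return total

print(total_distance(3))  # close to 6, i.e. 3!
print(total_distance(5))  # close to 120, i.e. 5!
print(total_distance(0))  # close to 1, the "compelling grounds" for 0! = 1
```

Same disguised Gamma function, just measured with a stopwatch instead of an integral sign.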
I don’t know what you mean by your first sentence. As for your second, perhaps you shouldn’t conflate sets and numbers; if “[null]” means “a set of numbers that happens to be empty”, then it is not, itself, the sort of thing you would use in an addition to get a numeric result. You can turn it into a number by asking for the sum of all the numbers in that set, the product of all the numbers in that set, etc. Depending on how you turn it into a number, the answer will change.
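To make the “depending on how you turn it into a number, the answer will change” point concrete, here’s a tiny Python sketch (the variable name is mine):

```python
import math

numbers = []  # "[null]": a set of numbers that happens to be empty

# The empty sum is the additive identity:
print(sum(numbers))        # 0
# The empty product is the multiplicative identity:
print(math.prod(numbers))  # 1
```

So the empty set of numbers “sums to” 0 but “multiplies to” 1; the set itself isn’t either number.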
Yes, it looks like thinking of sets as “boxes” is putting you on the right conceptual track. An empty set is like a box that contains nothing, as you note. It’s also vital to note the distinction between a sandwich, and a box that contains a sandwich (i.e., a set containing one thing isn’t just the same as that thing itself). If you can keep those two points in mind, you’ll avoid the most common mistakes in understanding the mathematical concept of “set”.
(I probably don’t need to point this out, but perhaps instead of just “boxes”, you should think of sets as “possible boxes”; i.e., even though my grandmother and I aren’t actually standing in a box at the moment, there could, potentially, be a box containing just the two of us, so there’s still a set containing just the two of us. There could also potentially be a box containing just me; there could also potentially be a box containing every United States citizen. Thus, those are a couple other sets I am in. As you see, it’s no problem for me to be in many different sets, since sets are just “possible boxes”. [Of course, we’ll want to consider two “possible boxes” to be the same precisely when they contain the same things])
As for “non-sets”, this appears to be your attempt to handle some sort of plural reference; to have a term to refer directly to all of the objects in a given collection, simultaneously, without going through the intermediary of some “possible box”/abstract container, whatever that would mean. We could contrive up some system of formal rules for dealing with such terms, the details of which would depend on exactly what we want to do with them. But this brings us to the question, what is the use of such language? What can we express with it that we cannot just as naturally express with the language of abstract containers?
Am I the only one who read the thread title as “Why does 0 != 1?” My thought was, of course 0 does not equal 1. How could anyone think it does?
This is still the best answer. Yes, you can have the interpretations with permutations, Gamma functions, empty sets, etc. But in order to make the simple recursive formula n! = n*(n-1)! work for all integers n = 1, 2, 3, …, you must have 0!=1. Similarly, the reason why a[sup]0[/sup]=1 for all a<>0 is to make the formula a[sup]m[/sup]/a[sup]n[/sup] = a[sup]m-n[/sup] work for all positive integers m and n. So you can say we define 0!=1 for convenience. If you’ve ever had to code the factorial function, you’ll realize how convenient this really is.
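And the convenience is visible the moment you actually write the recursion down; a minimal Python sketch:

```python
def factorial(n):
    # The recursion n! = n * (n-1)! only terminates cleanly
    # if the base case is 0! = 1.
    if n == 0:
        return 1
    return n * factorial(n - 1)

print(factorial(0))  # 1
print(factorial(5))  # 120
```

Try picking any other base case for 0! and watch 1! = 1 * 0! come out wrong.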
Lost in my sea of chain-posting, I noted the same misinterpretation (post #31).
Here’s another analogy for sets, which may or may not be any good: think of mathematical objects as webpages (and let’s take the Web to be redundancy-free: if two pages have the same content, then they actually have the same URL, as far as we’re concerned). Sets are like certain special types of webpages; the contents of such pages are just a collection of links. An empty set is like a webpage which doesn’t contain any links. A one-element set is like a webpage that just contains one link; note that the page and the page it links to are generally not the same thing.
Of course, it’s not a priori obvious what kinds of webpages do and don’t exist. Formal systems of set theory give us all kinds of principles to answer such questions (e.g., in standard mathematics, we have such principles as “For any finite collection of webpages, there is one which links to all and only those” and “It’s not possible to ‘link-chase’ forever; you’ll always eventually end up at a page with no links”). But while different systems differ on the principles they adopt, this conceptual framework applies to all of them.
One year for Halloween, I went as “The set containing Chronos”. I made a couple of big curly braces out of wire, and mounted them on either side of myself.
I guess I was noticing that n^0 is essentially a null set multiplication. You don’t multiply n by itself any times. So I was wondering if there might be a more fundamental rule where the null set applied to any function equaled the identity. In the case of addition, the identity is 0. Technically I guess it does work in that multiplication can be considered addition X times and n*0 = 0. It seems then that the number zero leads back to the multiplication identity for powers and the addition identity for multiplication.
Oh. My apologies for not understanding. Yes, you are correct: for any associative binary operation with an identity, there is a canonical way to apply it to a finite list, and, in particular, when applied to the null list, the result will be the identity (a la post #27).
Pedantic technical bits:
If the operation is furthermore commutative, like addition and multiplication, so that the order of its inputs doesn’t matter, then we can speak of applying it to “multisets” (sets that may have repetitions) rather than “lists” or “sequences”. And, if the operation has the property that repeating inputs changes nothing, we could speak of proper “sets” rather than “multisets”. But if it lacks these properties, then, pedantically, such terminology may not be appropriate.
Indeed, if the operation is not associative [like, e.g., subtraction, where 1 - (2 - 3) is different from (1 - 2) - 3], then there is no canonical way to apply it to a finite sequence; one has to distinguish between left- and right-associated application, among others. But if it has a left identity, then we could call this the result of left-associated application to the null sequence, and similarly with “left” replaced with “right”. [Continuing with our example of subtraction, note that it has a right identity of 0 (i.e., x - 0 = x) but no left identity. Thus, the right-associated subtraction of a finite sequence is 0, but we would not speak of its left-associated subtraction]
Of course, the reason we would make this convention is to preserve the property that right-associated application to a sequence starting with x = the binary operation applied to x and (right-associated application to the rest of the sequence).
We do have to be wary, though, of the fact that there might be more than one right identity; the convention is, of course, that much more natural when there is a unique one to go with.
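The right-associated case with subtraction can be spelled out in code, too; a sketch (function names mine):

```python
from functools import reduce

def right_associated(op, right_identity, xs):
    # Folds from the right: op(x1, op(x2, ... op(xn, right_identity)))
    return reduce(lambda acc, x: op(x, acc), reversed(xs), right_identity)

sub = lambda a, b: a - b

# The null sequence gives the right identity, 0:
print(right_associated(sub, 0, []))         # 0
# 1 - (2 - (3 - 0)) = 2:
print(right_associated(sub, 0, [1, 2, 3]))  # 2
```

There’s no left-associated analogue to write here, since subtraction has no left identity to seed the fold with.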
Friend: Hey, I think we should go with matching costumes. What are you thinking of going as? Anything really wacky?
Chronos: Brace yourself.
re. the OP… there is only one way to arrange zero objects.