A distinction that someone once offered to me was to consider having a bank account with no money in it. In this case it is proper to say that the balance of the account is 0. If I don’t have a bank account at all, then the balance would be nothing.
Was thinking about this a bit more on an unproductive afternoon…
Obviously, it’s correct in the limit. But the engineer in me wants faster convergence. To give a concrete example, let’s look at n=3 and d=100. So we have:
100 × 100 × 100 ~ 101 × 102 × 103
That’s 6% off; not great. But it suggests a closer approximation: instead of 100[sup]3[/sup], we take 102[sup]3[/sup] (or rather, (d + (n+1)/2)[sup]n[/sup]), since we expect the center number of the product to be closer to the nth root. That’s only 0.01% off; a nice improvement! Can we go further?
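A throwaway Python check of the numbers above:

exact = 101 * 102 * 103              # (d+1)(d+2)(d+3)
naive = 100**3                       # d^n
centered = 102**3                    # (d + (n+1)/2)^n
print((exact - naive) / exact)       # ~0.0576: about 6% off
print((centered - exact) / exact)    # ~0.000096: about 0.01% off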
Let’s look again–we’re trying to approximate:
(d+1)(d+2)(d+3)…
If we did this an integer number of times, we’d have a polynomial that looked like:
d[sup]n[/sup] + Ad[sup]n-1[/sup] + Bd[sup]n-2[/sup] …
For n=4, that comes to:
d[sup]4[/sup] + 10d[sup]3[/sup] + 35d[sup]2[/sup] …
A is a triangular number (the sum 1+2+3+…+n), and so is equal to n(n+1)/2. B is slightly trickier: it’s the sum of the products of all pairs, 2·1 + 3·2 + 3·1 + 4·3 + 4·2 + 4·1 + … I did not recognize this sequence immediately, but a quick visit to the OEIS identifies it as a Stirling number of the first kind, with this formula (though with n offset by 1):
(1/24)(n+1)n(n+2)(3n+5)
Well, this is looking pretty good already. I can approximate pi! to 0.0008% error with d=100. Without the extra terms, the error is 6.6%. Of course, I’m making the assumption that “non-integer-order polynomials” (is there such a thing?) behave the same way as normal ones, but in practice it seems to work.
Obviously this stuff is somewhat irrelevant in terms of the math, but if you want to write a calculator program it makes a huge difference in accuracy. It would not be difficult to add additional terms if necessary.
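In case anyone wants to play along, here’s a minimal Python sketch of the kind of calculator code I mean. It assumes we compute n! from the identity n! = d!·(d+1)(d+2)…(d+n) / ((n+1)(n+2)…(n+d)), approximating the non-integer-length product in the numerator by the polynomial above; the function name and structure are just for illustration:

import math

def approx_factorial(n, d=100):
    # Approximate the numerator (d+1)(d+2)...(d+n), a "product of n factors"
    # even when n is not an integer, by d^n + A*d^(n-1) + B*d^(n-2).
    A = n * (n + 1) / 2                            # triangular-number coefficient
    B = n * (n - 1) * (n + 1) * (3 * n + 2) / 24   # Stirling-style coefficient, offset applied
    top = d**n + A * d**(n - 1) + B * d**(n - 2)
    # The denominator (n+1)(n+2)...(n+d) has an integer number of factors,
    # so it can be computed directly even for non-integer n.
    bottom = 1.0
    for k in range(1, d + 1):
        bottom *= n + k
    return math.factorial(d) * top / bottom

print(approx_factorial(math.pi))   # ~7.18808
print(math.gamma(math.pi + 1))     # pi!, for comparison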
To spin off a side question from my previous post:
Is there any theory or whatever behind “non-integer polynomials”? Probably shouldn’t even call them polynomials, but that seems like the best analogy. Specifically, I’m talking about cases where I want to generalize some pattern like:
(x+1)[sup]n[/sup]
Or:
(x+1)(x+2)(x+3)…
For non-integer n. We expect the expansion to look something like:
Ax[sup]n[/sup] + Bx[sup]n-1[/sup] + Cx[sup]n-2[/sup] + …
I don’t know what happens “at the end”, but maybe I don’t care: if x is large and the coefficients don’t grow too rapidly, then the importance of the terms dwindles away to nothing (or maybe x is small, and I can start at the end and work backwards). I can just look at some finite number of terms where things look sensible.
Going back to this:
(x+1)[sup]n[/sup]
…the coefficients are obviously the binomial coefficients. The first two terms are easy, being:
x[sup]n[/sup] + nx[sup]n-1[/sup]
For the third term we need the binomial coefficient, which is traditionally defined for the integers, since it counts combinations:
(n choose k) = n!/(k!(n-k)!)
It almost looks like we need to employ our generalized factorial, but we don’t; for n=3.14 and k=2, for instance, we have 3.14!/(2!(3.14-2)!) = 3.14·2.14·1.14!/(2!·1.14!) = 3.14·2.14/2! = 3.3598. So that’s easy enough.
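A quick Python sketch that mechanizes this (the helper name is mine):

from math import factorial

def binom(n, k):
    # (n choose k) = n*(n-1)*...*(n-k+1) / k!
    # n can be any real number; k must be a non-negative integer.
    top = 1.0
    for i in range(k):
        top *= n - i
    return top / factorial(k)

print(binom(3.14, 2))   # 3.14 * 2.14 / 2 = 3.3598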
At any rate, it seems that there are probably many possible patterns that we can generalize into non-integer powers, or even complex powers. Is there a name I can look up? Any other interesting applications?
In considering (x + 1)[sup]n[/sup] for non-integer n, you are stumbling upon the generalized binomial theorem (the accomplishment for which Newton first received acclaim). (x + 1)[sup]n[/sup] is equal to the sum of (n choose i) * x[sup]i[/sup] as i ranges over the natural numbers [with absolute convergence for |x| < 1, or everywhere if the series is finite because n is a natural number] where (n choose i) is defined for natural number i but arbitrary n as (n * (n - 1) * (n - 2) …)/(1 * 2 * 3 * …) with i many factors in both numerator and denominator.
Note that we don’t actually need fractional powers of x here; we’re expanding from the other end, so to speak. You can expand from the x end instead by taking this to be x[sup]n[/sup] * (1 + 1/x)[sup]n[/sup], and then expanding (1 + 1/x)[sup]n[/sup] as above.
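A small Python sketch of both expansions (the function names and the 60-term cutoff are arbitrary illustration choices):

def binom(n, i):
    # generalized binomial coefficient: i factors in numerator and denominator
    out = 1.0
    for j in range(i):
        out *= (n - j) / (j + 1)
    return out

def expand_inner(n, x, terms=60):
    # (1 + x)^n as the sum of (n choose i) x^i; converges for |x| < 1
    return sum(binom(n, i) * x**i for i in range(terms))

def expand_outer(n, x, terms=60):
    # expanding from the x end: (1 + x)^n = x^n * (1 + 1/x)^n; for |x| > 1
    return x**n * expand_inner(n, 1 / x, terms)

print(expand_inner(1.5, 0.5), 1.5**1.5)    # both ~1.8371
print(expand_outer(1.5, 2.0), 3.0**1.5)    # both ~5.1962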
You can do a similar thing for (x + 1)(x + 2)(x + 3)…(x + n), and indeed, thinking about this was part of how Euler came to his account of fractional factorials in the first place. I will write more on that later.
[I owe you a number of posts, at this point. It’s been a busy week (or, rather, I’ve been lazier than I should be, making my obligations busier than I’ve been).]
No rush. I greatly appreciate what you’ve written so far, and I absorb a bit more every time I read them.
The generalized binomial theorem indeed looks like what I stumbled upon (I always enjoy rediscovering things, even if I am a few hundred years late).
I was thinking a bit about why the sequence ends for integers but non-integers keep going. And it seems the answer is in the factorial; the (n-k)! term in the denominator crosses over to the negative integers once k>n. Negative integer factorials are infinite, so in a hand-waving way we can just say that the terms keep going but they have coefficients of 0.
But that gets me thinking further: is the only reason we don’t go backwards also because of the negative factorials? Suppose we’re talking about (x+1)[sup]3[/sup]. We can “speculate” the existence of an x[sup]4[/sup] term, but the coefficient would be (3 choose 4), or 0. Likewise, perhaps there is an x[sup]-1[/sup] term, but its coefficient is (3 choose -1); also 0.
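The hand-wave can even be mechanized: define the coefficient through the Gamma function, and treat a pole in the denominator as a coefficient of 0. A Python sketch (not a rigorous argument):

from math import gamma

def binom(n, k):
    # (n choose k) = Gamma(n+1) / (Gamma(k+1) * Gamma(n-k+1))
    try:
        return gamma(n + 1) / (gamma(k + 1) * gamma(n - k + 1))
    except ValueError:
        # Gamma has a pole at non-positive integers; "dividing by
        # infinity" makes the coefficient 0.
        return 0.0

print(binom(3, 4))    # the speculative x^4 coefficient: 0.0
print(binom(3, -1))   # the speculative x^-1 coefficient: 0.0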
The reason I ask is because maybe we can start in the middle, and then generate terms on both sides. Say, start with (3 choose 1.5)x[sup]1.5[/sup] and extend out in both directions: … + ax[sup]-0.5[/sup] + bx[sup]0.5[/sup] + cx[sup]1.5[/sup] + dx[sup]2.5[/sup] + …
Ok, that doesn’t converge for any non-zero x. But hey, convergence is overrated. Maybe we can say that all of these variants–whether we start at an integer or otherwise–have the same value. Then, we can subtract off the convergent half and say that the rest of it is equal to the difference. Hmm…
Anyway, I’m just letting my mind wander. Perhaps none of this will make sense in the morning. I look forward to your upcoming posts…
Hm, interesting, I’ve never thought about it that way.
I will note one warning, though, for a small special case: what would you like us to take (-1 choose -1) to be?
There are several choices motivated by several patterns, but they, unfortunately, irritatingly, actually begin to clash:
One pattern is that (n choose -1) is always 0, so that (-1 choose -1) = 0, which would allow us to look at the generalized binomial theorem for (1 + x)[sup]-1[/sup] = 1 + x + x[sup]2[/sup] + x[sup]3[/sup] + … properly in this way, since we do not want coefficients for x[sup]-1[/sup], etc.
But another pattern people sometimes have in mind is that (n choose n) is always 1, so that (-1 choose -1) = 1, which would then be a problem for thinking of the lack of backwards extension as because the backwards coefficients go to zero.
Other patterns people think of are that (n choose a) = (n choose (n - a)) AND that (n choose a) + (n choose (a + 1)) = ((n + 1) choose (a + 1)), which would lead us to supposing (-1 choose -1) = (-1 choose 0), and that their sum is (0 choose 0) = 1, so that (-1 choose -1) = 1/2.
Alas, there’s no easy way to preserve all the patterns we want. This difficulty only comes up with interpreting (n choose i) for negative integer n and integer i, but it IS there for those. Anyway, clearly, if we’re to think of the backwards coefficients as still given by binomial coefficients but all zeros, the pattern we’re going to want to keep is that (n choose i) is zero for negative integers i, and damnation to the other patterns.
There is sometimes value in thinking of bidirectionally infinite power series that just about never converge as nonetheless having meaning; e.g., the sum of x[sup]i[/sup] over all integers i can often be thought of as a kind of Dirac delta function, zero almost everywhere but blowing up infinitely at x = 1 to have total integral 1.
I haven’t thought about it for this specific scenario of a series of binomial coefficients in both directions, which could be interesting. Let me muse on that.
That sounds almost like the physics concept of renormalization, but in a purely mathematical context (as opposed to a physical context to which the mathematicians grudgingly admitted mathematics could be fit).
You mean renormalization like 1+1+1+… = -1/2 and other methods of summing divergent series? [ETA sometimes a formal power series is just that and there is no need to try to make it converge in a numerical sense.]
As for binomial coefficients, does it help to consider them as a function of two complex variables?
I should rather say, if you do consider the function x choose y, then there is no limit as x and y tend to -1, nor are familiar identities like (n choose a) = (n choose n-a) always true.
Now this guy (Kronenburg) proposes, for n a negative integer and a an integer,
(n choose a) = (-1)[sup]a[/sup](-n+a-1 choose a) if a ≥ 0,
(-1)[sup]n-a[/sup](-a-1 choose n-a) if a ≤ n,
0 otherwise.
Now that I think about it (and Indistinguishable said it, and the guy basically says as much), this trick does indeed seem to be related to the expansion of δ(1-x) = ∑[sub]a[/sub]x[sup]a[/sup] = ∑[sub]a≥0[/sub]x[sup]a[/sup] + ∑[sub]a>0[/sub]x[sup]-a[/sup], where the first series converges to (1-x)[sup]-1[/sup] when |x|<1, and the second series to 1/(x-1) when |x|>1, and taking derivatives on both sides.
My idea was based on the expansion of (x+1)[sup]n[/sup] working out the same backwards and forwards, despite the terms not overlapping. For example, we can have:
(x+1)[sup]1.5[/sup]
= 1 + 1.5x + 0.375x[sup]2[/sup] - 0.0625x[sup]3[/sup] + 0.0234375x[sup]4[/sup] + …
= x[sup]1.5[/sup] + 1.5x[sup]0.5[/sup] + 0.375x[sup]-0.5[/sup] - 0.0625x[sup]-1.5[/sup] + 0.0234375x[sup]-2.5[/sup] + …
And indeed, the two formulas do seem to be essentially the same. At the least, because the coefficients decrease rapidly, they both converge when x is near 1. And observationally, they appear to be the same function.
I thought that if we can do this two ways, then why not more? That is, start with (say) x[sup]0.25[/sup] and work backwards and forwards. I end up with something like:
… + 0.0136669x[sup]-2.75[/sup] - 0.0331909x[sup]-1.75[/sup] + 0.143827x[sup]-0.75[/sup] + 1.29445x[sup]0.25[/sup] + 1.29445x[sup]1.25[/sup] + 0.143827x[sup]2.25[/sup] - 0.0331909x[sup]3.25[/sup] + 0.0136669x[sup]4.25[/sup] + …
And it actually seems to work. Don’t get me wrong; this is just me playing around with a graphing calculator at this point. I certainly haven’t proven anything. But it does hint at things.
There is one obvious potential result–it implies that this sum is the same for all r (n fixed):
∑[sub]k ∈ Z[/sub] (n choose (k+r))
A little program seems to confirm the result for a handful of numbers.
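Here’s roughly what that little program looks like (my version truncates at |k| ≤ 200, which is plenty for n = 1.5, since the terms fall off like a power of k):

from math import gamma

def binom(n, x):
    # generalized (n choose x) via Gamma; a pole in the denominator gives 0
    try:
        return gamma(n + 1) / (gamma(x + 1) * gamma(n - x + 1))
    except ValueError:
        return 0.0

def shifted_sum(n, r, K=200):
    # sum over k in Z of (n choose (k + r)), truncated to |k| <= K
    return sum(binom(n, k + r) for k in range(-K, K + 1))

for r in (0.0, 0.25, 0.5, 0.8):
    print(r, shifted_sum(1.5, r))   # each ~2.8284, i.e. 2^1.5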
First of all, about Kronenburg’s controversial coefficients, if
(1-x)[sup]-1[/sup] = ∑[sub]k≥0[/sub]x[sup]k[/sup] as a usual geometric series, then repeated differentiation gives, for n a positive integer,
(1-x)[sup]-n[/sup] = ∑[sub]k≥0[/sub](k+n-1 choose n-1)x[sup]k[/sup] .
On the other hand, (1-x)[sup]n[/sup] = ∑[sub]k≥0[/sub](-1)[sup]k[/sup](n choose k)x[sup]k[/sup] for any n. He clearly wants to “renormalize” by allowing n to be a non-positive integer in the first formula; his definition keeps identities such as (x choose y) = (x choose x-y) true for all complex x, y. Is this desirable?
Question for Dr. Strangelove: if |x| < 1, then (1+x)[sup]n[/sup] = ∑[sub]k≥0[/sub](n choose k)x[sup]k[/sup] converges. Not if |x| > 1. The opposite is true for ∑[sub]k≥0[/sub](n choose k)x[sup]n-k[/sup] . So how were you making the sum over all integers converge?
I’m not; the only one that I’m fairly certain converges is this:
∑[sub]k ∈ Z[/sub] (n choose (k+r))
I’m just substituting x=1 into the various expansions and assuming they should all give the same value regardless of the starting point. (n choose (k+r)) quickly approaches 0 as you go to large |k|.
Observationally, there seemed to be a small radius of convergence around x=1, so that even on the divergent side it extends a little past. But upon thinking on it more I suspect that’s wrong. Intuitively, the k! and (n-k)! terms in the denominator probably cancel to a low power of k. So the x[sup]k[/sup] dominates in the limit and it diverges.
No matter. I think I’d be satisfied if I could prove that all the expressions have the same Taylor expansion at x=1.
As we already noted, a simple ratio test shows that sum A (the ascending-powers expansion) converges when |x|<1, and diverges when |x|>1. The opposite is true for sum B (the descending-powers expansion).
What happens when x=1? I claim that your series does not always converge. For example, let r=0, and consider ∑[sub]k≥0[/sub](n choose k). The binomial coefficient (n choose k) will have magnitude like k[sup]-n-1[/sup], so if (the real part of) n is greater than -1 then the sum (an alternating sum) will converge, but for instance (-3/2 choose k) grows without bound.
Newton’s binomial theorem gives ∑[sub]k≥0[/sub](n choose k) x[sup]k[/sup] = (1+x)ⁿ and ∑[sub]k<0[/sub](n choose k) x[sup]k[/sup] = 0, so near x=1 you should have 2ⁿ + O(x-1). Anyway, when x=1 and n is not too negative, so that A and B converge, in general (when r≠0) B ≠ 0 but A+B=2ⁿ still. Does that sound right?
Why is it so necessary to state that 0/0 is undefined?
I mean, centuries ago the square root of -1 was mathematical nonsense. Could n/0 be defined as a new set of numbers, in a similar way to a+bi? It might be a bit of a self-fulfilling prophecy, but it’s a start. Or is it a childish idea?
Scroll back to the first page of the thread: it is undefined because there is no sensible way to define it (0 does not have a multiplicative inverse). So don’t divide by zero.
[on the other hand, it is perfectly sensible to extend the positive numbers to all integers, to extend the integers to all rational numbers, to extend real numbers to form the complex numbers, etc. Those are different situations.]
Complex numbers lead to all sorts of useful, interesting, and logical results. Defining 0/0 does not. Either you say that 0/0 can be any number at all, which is exactly what “indeterminate” means, or you break the definition of division as the inverse of multiplication.
Now, defining any other number divided by 0 as infinity can be useful and interesting in some contexts, and so it is done in those contexts. It’s a trade-off, though: While that gives you some interesting properties, it sacrifices some other useful properties that you used to have, and so it’s not useful to define it that way in all contexts.
This would be a typical conversation with someone who doesn’t get it:
“So, 0 is the identity for addition. 1 is the identity for multiplication.”
“Got it, no problem.”
“The sum of no numbers is 0.”
“Easy, peasy.”
“The product of no numbers is 1.”
“Hold it right there, buster. ‘No numbers’ means zero, like before!”
“No, this is another use of ‘no numbers’.”
“You guys clearly have no idea what you’re talking about.”
“Grrrrr.”
I ran into problems like this teaching Computer Science. People would confuse the empty set, empty string, etc. I’d bring in an empty coffee cup and an empty bag and ask whether these were the same type of thing. They’d agree not, and that the word “empty” in front doesn’t change the type of thing it is. Good.
Then they’d promptly confuse the empty set and the empty string.
It is so weird. They can’t possibly confuse a green car with a green ball, right?
I know you (ftg) know all this; I’m commenting to the crowd.
Zero is the identity for addition. It’s also something outside the set of counting numbers (at least for the most common and IMO useful definition of “counting numbers”).
When you use zero in the sense of 3 + 0 => 3 you’re using the additive identity aspect of zero. When you say you have zero of something, e.g. zero coffee cups, you’ve jumped outside the additive identity concept into something else not of the set of counting numbers. The word remains “zero”; the idea is something much different.
The word “empty” has effectively the same problem. In one interpretation it’s a state of an object. In another interpretation it’s the absence of not only that object, but of any object.
Most folks have never thought about this stuff this deeply, so of course they foul it up badly once they’re forced to confront their fuzzy mental habits. Most of a CS education IMO is learning to identify and root out fuzziness.
In pseudo-C code:

class Example {
    class Foo { ... }
    class Bar { ... }

    void Test() {
        Foo foo = null;   // declared type Foo, but no instance
        Bar bar = null;   // declared type Bar, but no instance
        Display(RuntimeTypeOf(foo));                         // RuntimeTypeOf is a hypothetical reflection operator
        Display(RuntimeTypeOf(bar));
        Display(RuntimeTypeOf(foo) == RuntimeTypeOf(bar));
    }
}
When we execute Test(), what’s the result of the three Display calls? Do they trigger exceptions; report the declared types and FALSE; report a generic empty/null type and TRUE; or something else?
All us programmers (should!!) know what our various actual languages will do. But the point is that there can be a legit debate about what a hypothetical new language *ought* to do in this case.
It’s also true that the empty string isn’t equivalent to null, except in certain brain-damaged languages from the very early days. Conflating those two has launched more bad thinking than I can shake a stick at. The C idiom that “if (<integer expression>)” evaluates as if it were “if (<integer expression> != 0)” is another “helpful” shortcut that enables a lot of lazy thinking later. Categories matter.
Perhaps this might help: If we add nothing together, and then add some numbers, what we have is just the sum of those numbers, right? Because we added nothing earlier. That’s why “the sum of zero numbers” is the additive identity, 0. But now if we multiply nothing together, and then multiply some numbers, what we have is just the product of those numbers, so the “multiplying nothing together” must have given us the multiplicative identity, 1.
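Most standard libraries bake in exactly this convention; in Python, for instance:

import math

print(sum([]))          # the empty sum: 0, the additive identity
print(math.prod([]))    # the empty product: 1, the multiplicative identity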