1 + 2 + 3 + 4 + … (to infinity) = -1/12?

Sure, and also, rather than writing “0.9 + 0.09 + 0.009 + … = 1”, it would be clearer to some to write “limit (9/10 + 9/100 + … + 9/10^n) = 1 where the limit is taken as n goes through 1, 2, 3, 4, …”.

But while the substantial meaning is the same regardless, there is sometimes some useful perspective brought to bear by using the framing “0.9 + 0.09 + 0.009 + … = 1”; its connotations can help draw the mind towards fruitful analogies. And it can be similarly so with the examples of this thread.

If you’re in a context where you are inclined to think of series summation as meaning absolutely convergent summation by default, then you may wish to explicitly mark asymptotics of initial segment partial sums. If you’re in a context where that’s your default notion of summation, then you may wish to explicitly mark Abel summation. And if you are exploring informally, for which there is certainly a substantial role in mathematics, you may wish not to prematurely set rigid boundaries by explicitly marking any such thing.

Which is most appropriate at any given moment depends on what you are doing at that moment. Everything is context-dependent, always, unavoidably. And there is no god-given baseline context.

The proof shown in

is much more convincing than the proof in the OP. Good classical analysis showing that at least one sum to infinity is -1/12.

More interesting is what other such infinite sums there are. Do we have only a strictly limited set of such sums, or is that set infinite as well?

So if one student had one answer, and another student had the other, you would mark both correct?

Generally I would try to word questions to remove any ambiguity. However, were I to fail in this regard, any answer that demonstrated mastery of the concept that I was testing would get full points.

One of the sources I found while looking into the -1/12 issue said that the sum of 1^2 + 2^2 + 3^2 + 4^2 + … can be calculated to equal 0.

Sure. The same “zeta summation” technique noted in this thread gives a finite value to the sum of 1[sup]p[/sup] + 2[sup]p[/sup] + 3[sup]p[/sup] + … for any power p other than -1. And that value will always be what in post #79 was called -B(p + 1, 1)/(p + 1). This will be zero for any positive even p.
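If anyone wants to poke at that claim numerically, here is a minimal sketch (assuming SymPy is available) comparing -B(p + 1, 1)/(p + 1) against the analytically continued zeta value ζ(-p), which is what “zeta summation” assigns to 1[sup]p[/sup] + 2[sup]p[/sup] + 3[sup]p[/sup] + …:

```python
from sympy import bernoulli, zeta

# bernoulli(n, x) is SymPy's n-th Bernoulli polynomial, so bernoulli(p + 1, 1) is B(p + 1, 1).
for p in range(0, 7):
    via_bernoulli = -bernoulli(p + 1, 1) / (p + 1)   # post #79's formula
    print(p, via_bernoulli, zeta(-p), via_bernoulli == zeta(-p))
```

The printed values come out to -1/2, -1/12, 0, 1/120, 0, -1/252, 0, with a zero at every positive even p, as described.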

This example is the sum of natural numbers. Is there an equivalent result for the sum of integers (natural numbers plus zero plus negative whole numbers)?

Sure. Any of the summation methods discussed in this thread so far could naturally be extended to take the summation of f(i) over all integers i to be f(0) + the sum of f(i) over all positive integers i + the sum of f(-i) over all positive integers i.

This will yield that the “zeta-summation” of the series whose n-th element is n, over all integers n, is zero.

For that matter, we would also get zero taking the sum of the series whose n-th element is 1, over all integers n. (The fact that summing this series over only the positive integers yields -1/2 can be viewed as precisely what is necessary to get the sum over the positive integers + the equal sum over the negative integers + the value at index 0 to come out to zero).
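As a tiny bookkeeping check (just plain fractions; the individual values are the ones already established in this thread, not computed from scratch):

```python
from fractions import Fraction

# Series whose n-th element is n, over all integers n:
# value at index 0, plus the zeta-sum over the positives, plus the zeta-sum over the negatives.
print(Fraction(0) + Fraction(-1, 12) + Fraction(1, 12))   # 0

# Series whose n-th element is 1, over all integers n:
# 1 at index 0, plus -1/2 over the positives, plus -1/2 over the negatives.
print(Fraction(1) + Fraction(-1, 2) + Fraction(-1, 2))    # 0
```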

I don’t think it’s been linked to yet in this thread, but there is a followup Numberphile video, “Why -1/12 is a gold nugget,” which is worth watching for anyone who is still bothered by this.

Credit where credit is due: I found the first two Numberphile videos on this topic bothersome, but this third one is excellent. (Credit mainly to Edward Frenkel, I suppose).

Funny. I just watched this one before you responded and thought, hey, this kind of sounds like how I’d imagine Indistinguishable to answer this (given your posts in this thread.) Fascinating subject.

O.K. How about the scaled series?

SUM(3, 6, 9, 12, …)

Intuition says it will be -3/12, but intuition is rarely right when infinity is involved. I’ll guess -1/12 again?

The rules we’ve been discussing for most of this thread DO say that, if a series sums to a particular value, and you scale the series up by a constant factor, the value it sums to also scales up by that same factor.

So, in the same sense that 1 + 2 + 3 + 4 + … = -1/12, we have that 3 + 6 + 9 + 12 + … = -3/12.

That having been said, we need to be a little bit careful: the rules we’ve been discussing for most of this thread DON’T say that shifting the starting index of a series always keeps its sum unchanged.

So, importantly, when we say “1 + 2 + 3 + 4 + … = -1/12”, we mean “The series whose i-th term is i, for positive integer i, has the sum -1/12 (by the methods of this thread)”. If we were interested instead in, say, the series whose i-th term is (i + 1), for integer i >= 0, then this would sum (by the methods of this thread) to 5/12, even though the terms of this series also simply run through the positive integers in order. [And further re-numberings of the starting index would cause further changes to the sum]

Similarly, the series whose i-th term is 3i, for positive integer i, sums to -3/12 = -1/4, but the series whose i-th term is 3(i + 1), for integer i >= 0, sums to 5/4. The choice of starting index matters.

Interesting. I’ve been playing around with this, and I see the integral from -1 to 0 of 3/2 (x(1+x)+1) gives the answer 5/4, as well. I got this by extrapolating a relationship I saw on another website, which shows that if you plot x(1+x)/2 (which is essentially the formula for 1, 1+2, 1+2+3, 1+2+3+4, etc…) you get a graph with a little dip that goes under the x axis from -1 to 0. If you then cross out the equal parts of the positive part of the graph and the negative part of the graph, you’re left with that little section from -1 to 0. If you take the integral of this (to find the area) from -1 to 0, you end up with -1/12. In the same way, if we take this 3(i+1) and knock out the parts of positive and negative infinity that correspond to each other, we’re left with that section from -1 to 0. If we integrate that to find the area, we get 5/4. There is some sort of relationship here, right?

There is a relationship, but I’m not sure about the way you’re expressing it. Where does 3/2 (x(1+x)+1) come from? If that’s meant to be the sum of 3(i + 1) as i goes from 1 to x, it is incorrect; that sum would come out to 3/2 (x(1 + x)) + 3x instead.

[Also, why would it be significant to think of integrating x(1 + x)/2 over the region where it is negative, ignoring the two regions where it is positive? There may be good reason to think of things this way; I just don’t follow that motivation right now]

The observation I would make right now is this: as I began to note in post #79, we have that the sum of i[sup]p[/sup] over the range [a, b) is the integral of B(p, x) as x goes through that range. [Incidentally, for natural number p, B(p, x) is what’s called the p-th Bernoulli polynomial of x]. In general, the x-derivative of B(p, x) is p * B(p - 1, x) [by the analogous property for the derivative of x[sup]p[/sup]], and so, conversely, the x-integral of B(p, x) is B(p + 1, x)/(p + 1). This tells us that the sum of i[sup]p[/sup] over the range [a, b) is (B(p + 1, b) - B(p + 1, a))/(p + 1). For p < -1, we have that B(p + 1, b) goes to 0 as b goes to infinity, telling us that the sum of i[sup]p[/sup] over positive integers i is -B(p + 1, 1)/(p + 1), and indeed, this formula holds for any p other than -1 so long as we use our “zeta summation” [even for p = -1, this formula basically correctly tells us that we should still expect the harmonic series to blow up even with “zeta summation”].
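To see that finite-range identity in action before worrying about the infinite case, here is a small sketch (assuming SymPy, whose bernoulli(n, x) gives the n-th Bernoulli polynomial; the helper name is just illustrative) checking it against direct summation:

```python
from sympy import bernoulli

def range_sum_via_bernoulli(p, a, b):
    # (B(p + 1, b) - B(p + 1, a)) / (p + 1), claimed to equal the sum of i^p over [a, b)
    return (bernoulli(p + 1, b) - bernoulli(p + 1, a)) / (p + 1)

a, b = 1, 6
for p in range(0, 5):
    direct = sum(i**p for i in range(a, b))   # i^p for i = a, ..., b - 1
    print(p, direct, range_sum_via_bernoulli(p, a, b), direct == range_sum_via_bernoulli(p, a, b))
```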

What’s the relationship of this to what you said? Well, you started with the formula for the sum of p-th powers from 1 to x; that is, the sum of p-th powers over the range [1, x + 1); that is, [B(p + 1, x + 1) - B(p + 1, 1)]/(p + 1). Then you integrated this as x went from -1 to 0. This amounts to (the integral of B(p + 1, x) as x goes from 0 to 1)/(p + 1) plus -B(p + 1, 1)/(p + 1). The first term’s numerator is the same as the sum of i[sup]p + 1[/sup] over the range [0, 1), and thus this formula becomes 0[sup]p + 1[/sup]/(p + 1) - B(p + 1, 1)/(p + 1). So long as p > -1, the first term here vanishes, and we’re left with the -B(p + 1, 1)/(p + 1) we are supposed to get.

The above demonstration works for the sum of i[sup]p[/sup] over positive integers i, and therefore, by linearity of all involved operations, for any polynomial P(i) summed over positive integers i; in each such case, there will be some unique polynomial Q such that Q(x) = P(1) + P(2) + … + P(x), and integrating this Q from -1 to 0 will give the zeta-summation of P(i) over all positive integers i.
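In case a concrete version of that recipe is useful, here is a small sketch (assuming SymPy; the helper name is just illustrative) applying it to a few of the series discussed above:

```python
from sympy import symbols, summation, integrate

i, x = symbols('i x')

def zeta_sum_via_q(P):
    """Form Q(x) = P(1) + P(2) + ... + P(x) as a polynomial in x, then integrate
    Q from -1 to 0 to get the zeta-summation of P(i) over positive integers i."""
    Q = summation(P, (i, 1, x))
    return integrate(Q, (x, -1, 0))

print(zeta_sum_via_q(i))       # -1/12   (1 + 2 + 3 + ...)
print(zeta_sum_via_q(3*i))     # -1/4    (3 + 6 + 9 + ...)
print(zeta_sum_via_q(i**2))    # 0       (1 + 4 + 9 + ...)
```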

But that you happened to get 5/4 by integrating 3/2 (x(1+x)+1) from -1 to 0 seems to me a coincidence right now, since I don’t see where 3/2 (x(1+x)+1) came from in the first place, nor where in this calculation you would’ve been taking account of the fact that we are interested in the summation of 3(i + 1) for i >= 0 rather than for i >= 1. [There may be some principled connection or reasoning that I’m just not seeing yet.]

I suppose the appropriate calculation along these lines would be to note that the sum of 3(i + 1) from i = 0 through x is 3/2 (x(1 + x) + 2(x + 1)), and the integral of this from -1 to 0 is 5/4.
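A quick symbolic check of that calculation (a sketch, again assuming SymPy):

```python
from sympy import symbols, summation, integrate, factor

i, x = symbols('i x')

Q = summation(3*(i + 1), (i, 0, x))   # sum of 3(i + 1) from i = 0 through x
print(factor(Q))                      # 3*(x + 1)*(x + 2)/2, i.e. 3/2 (x(1 + x) + 2(x + 1))
print(integrate(Q, (x, -1, 0)))       # 5/4
```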

You lucked out in that 2(x + 1) and 1 have the same integral from -1 to 0, cancelling out your mistaken(*) formula 3/2 (x(1 + x) + 1).

(*): I think; again, I may be misunderstanding what you were doing, in which case, my apologies.

No, you must be correct. I’m certainly not going to bet on myself here. :slight_smile: 3/2 (x(1+x) + 2(x+1)) is what it should be. It was quite late, so I’m not sure how I arrived at my equation. I’m guessing it must have been dumb luck, or it’s a train of logic I simply don’t remember. Regardless, using the correct derivation formula, we do get 5/4 for that integral, and you answered my question in the fourth paragraph, so thanks. :slight_smile:

So, I guess, my question is, is this sort of a valid way of visualizing this type of summation? That is, once we derive the equation for the sum of n terms in our divergent series and plot it, we’re left with the parts where the negative and corresponding positive divergent infinities don’t correspond, and the area under them is equal to the summation? (My math never got past Calc III, so I wouldn’t be surprised if this way of thinking about it has no relation to how real mathematicians think about this. I guess this way of thinking would only work in limited situations.)

It has nothing to do with negative and corresponding positive divergent infinities (after all, x(x + 1)/2 is positive both below -1 and above 0; there’s no negative infinity around, as such). It just has to do with roundabout effects of going from -1 to 0.

To see this, let’s start by supposing you have some integrand g which you wish to think of as having a finite integral T over the range from some starting point Start all the way to positive infinity. If we define f(i) as the integral of g from i to i + 1, then this means T = f(Start) + f(Start + 1) + f(Start + 2) + … .

We could just as well break T down into the integral of g from Start to any intermediate point, plus the integral of g from that intermediate point to infinity; that is, writing g[a, b) to mean the integral of g from a to b, we have that T = g[Start, ∞) = g[Start, x) + g[x, ∞) for any x.

Note that g[x, ∞) = f(x) + f(x + 1) + f(x + 2) + …, ad infinitum. And note that g[Start, Start + n) = f(Start) + f(Start + 1) + …, with n many terms, for any natural number n. [Keep in mind, as the notation indicates, that the summation yielding g[x, ∞) does include f(x), but the summation yielding g[Start, Start + n) does not include f(Start + n)]

Thus, we have a basic relationship between a function giving the sum of g over finite ranges and a function giving the sum of g over infinite ranges: that g[x, ∞) = -g[Start, x) + g[Start, ∞).

That is, once you know the behavior of g[Start, x), you know almost everything about the behavior of g[x, ∞) as well; all that’s missing is knowledge of the additive constant g[Start, ∞).

And how might you determine the value of that constant?

Well, consider now the question of integrating g[x, ∞) = f(x) + f(x + 1) + f(x + 2) + … . If f has an antiderivative F such that the series F(x) + F(x + 1) + F(x + 2) + … is also to be thought of as having a well-defined sum, then this series would be the integral of g[x, ∞). In particular, the integral of g[x, ∞) as x goes from 0 to 1 will be [F(1) + F(2) + F(3) + …] - [F(0) + F(1) + F(2) + …] = -F(0). [There’s nothing special about 0 here; it’s just often a convenient value to look at]

Accordingly, taking the average value of g[Start, ∞) = g[Start, x) + g[x, ∞) as x goes from 0 to 1, we find that g[Start, ∞) = (the average value of g[Start, x) as x goes from 0 to 1) - F(0). Plugging this back in, we find that g[x, ∞) = -g[Start, x) + (the average value of g[Start, x) as x goes from 0 to 1) - F(0).

In particular, looking at x = Start, we see that g[Start, ∞) will be (the average value of g[Start, x) as x goes from 0 to 1) - F(0). [The -g[Start, Start) term disappears, as this is an empty summation].

This is the relationship you were noting before: if f(i) is i, then we can take F(i) to be i[sup]2[/sup]/2, so that F(0) = 0 and is thus ignorable, and we can also take g[1, x) to be h(x - 1) where h(x) = x(x + 1)/2. Then the average value of g[1, x) from 0 to 1 will be the integral of h from -1 to 0, and this will be the value g[1, ∞) as well.
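If it is reassuring, the relationship g[Start, ∞) = (the average value of g[Start, x) as x goes from 0 to 1) - F(0) can also be checked on a series that converges outright, where nothing exotic is going on. Here is a sketch (assuming SymPy) with f(i) = (1/2)[sup]i[/sup] and Start = 1, where the true tail sum is simply 1:

```python
from sympy import symbols, integrate, log, Rational

x = symbols('x')
c = Rational(1, 2)

g_tail = c**x / (1 - c)               # g[x, oo) = f(x) + f(x+1) + ... for f(i) = c^i
g_head = g_tail.subs(x, 1) - g_tail   # g[Start, x) with Start = 1
F0 = 1 / log(c)                       # F(y) = c^y / ln(c) is an antiderivative of c^y, so F(0) = 1/ln(c)

average_head = integrate(g_head, (x, 0, 1))   # average value of g[Start, x) for x in [0, 1)
print((average_head - F0).simplify())         # 1, matching g[1, oo) = (1/2)/(1 - 1/2)
```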

Note that, to make use of this approach, we have to do three things once we’ve pinned down the series we’re interested in calculating as f(Start) + f(Start + 1) + f(Start + 2) + …: we have to determine the general values of f on arbitrary inputs [not just inputs of the form Start + n for natural number n], we then have to find a g whose integral from i to i + 1 is f(i) [even though such g are only determined up to addition of the derivative of any function of period 1], and we have to determine F(0) where F is an antiderivative of f [even though antiderivatives are only defined up to an additive constant].

The trick that makes this workable, despite the apparent non-uniqueness of f, g, and F, is that we actually have some further appropriateness conditions: the g we pick has to be one we can regard as having a well-defined finite integral from Start to ∞, and similarly, the F we pick has to be one for which we can regard F(x) + F(x + 1) + F(x + 2) + … as having a well-defined finite value for general x. These conditions will suitably constrain us to the correct answer for the cases of interest.

In particular, if f can be taken to be a polynomial, there will also be unique polynomials g and F that do the trick for it in the context of our “zeta summation” [F will be the antiderivative taking value 0 at input 0, while g is obtained by suitable use of Bernoulli polynomials]; this should make it straightforward to calculate the zeta summation of any polynomial series, though the concept makes sense even for more general series than those.
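Putting those pieces together for polynomial series, a compact sketch of the whole recipe (assuming SymPy; the function name is just illustrative) might look like this, and it reproduces the starting-index-sensitive values from earlier in the thread:

```python
from sympy import symbols, summation, integrate

i, x = symbols('i x')

def zeta_sum(P, start):
    """Zeta-summation of the series P(start) + P(start + 1) + ... for polynomial P:
    (average of Q(x - 1) for x in [0, 1)) - F(0), where Q(x) = P(start) + ... + P(x)
    and F is the antiderivative of P taking value 0 at 0."""
    Q = summation(P, (i, start, x))
    F = integrate(P.subs(i, x), x)
    return integrate(Q.subs(x, x - 1), (x, 0, 1)) - F.subs(x, 0)

print(zeta_sum(i, 1))        # -1/12  (1 + 2 + 3 + ..., starting index 1)
print(zeta_sum(i + 1, 0))    # 5/12   (the same terms, but re-indexed to start at 0)
```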

Suppose we work from the opposite direction:

Does -1 - 2 - 3 - 4 - … = +1/12?

Suppose we look at this problem as part of the Complex Plane:

and describe a ray… with an endpoint at the Origin 0 and extending out to infinity.

As we sweep the ray around, describing i + 2i + 3i + 4i…, -i - 2i - 3i - 4i…, in fact ALL “a+bi”, would the origin have a perfect circle of r = 1/12 around it?

Or perhaps some irregular shape like a fractal?