Why does the sum of all natural numbers equal -1/12?

It will reach it if you do it for eternity and a day.

While Excel obviously can’t do arithmetic infinitely, it can be used as a tool to gain some insight into how certain simple series or functions behave. If you compute a few hundred iterations of this series in Excel, it becomes clear that the partial sums approach 2n very quickly. So it is reasonable to conjecture that 2n might actually be the limit of this series. Armed with that insight, you can prove relatively easily that 2n is indeed the limit.
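The series under discussion isn’t quoted in this excerpt, so as a stand-in here’s the same spreadsheet-style experiment in Python on the series Σ k/2[sup]k[/sup], whose partial sums settle visibly near 2 after a few hundred terms — the kind of clue a finite calculation gives you before the pencil-and-paper proof:

```python
# Numerically explore the partial sums of sum over k of k / 2^k.
# (Stand-in series; the one discussed upthread isn't quoted here.)
partial = 0.0
for k in range(1, 300):
    partial += k / 2 ** k

# After a few hundred terms the partial sum is indistinguishable
# from 2 at double precision -- a strong hint that 2 is the limit.
assert abs(partial - 2.0) < 1e-12
```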

Well, if we start numbering the days starting tomorrow with natural numbers, and sum them to eternity (assuming that equals infinity), then the number of days will be -1/12th, no? :wink:

Mercifully, I have to go to a meeting so I can let this go, but the scenario he described involved manual calculations (he used Excel; he spoke of “if I add them up”) and is nowhere near a mathematical proof. So, for the scenario described, I still don’t think you can say you will get exactly 2n.

But that’s not what I said. I said that using finite numerical calculations can provide clues. And in this case (but not all cases) the clue leads you in the right direction. Actually proving the sum of the series requires a tool far more sophisticated than Excel: a pencil.

Well, since we’ve established that shifting things one place to the left or right while summing infinite series is always totally legit and not a steaming pile of bullshit, let’s look at this another way.
1 + 2 + 3 + 4 + 5 + …
equals
1 + 1 + 1 + 1 + 1 + …
plus
0 + 1 + 1 + 1 + 1 + …
plus
0 + 0 + 1 + 1 + 1 + …
plus
0 + 0 + 0 + 1 + 1 + …
ad infinitum.

We can discard a zero and shift each of these one to the left once per step down, because as we established earlier moving infinite series to the left or right is totally legit and not a steaming pile of bullshit.

So,
1 + 2 + 3 + 4 + 5 + …
is therefore equal to an infinite number of
1 + 1 + 1 + 1 + 1 …

Next:
1 + 1 + 1 + 1 + 1 … >= 1 - 1 + 1 - 1 + 1 …

Since each term of the first series is greater than or equal to the corresponding term of the second.

The OP’s ‘proof’, using a method that is totally legit and not a steaming pile of bullshit, indicates that

1 - 1 + 1 - 1 + 1 … = 1/2

Therefore
1 + 2 + 3 + 4 + 5 + … = infinity times (something that is >= 1/2)

Half of infinity is infinite, anything greater than or equal to an infinite quantity is infinite, so the sum 1 + 2 + 3 + 4 + 5 + … is infinite.
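The divergence claim is easy to check numerically: the n-th partial sum is n(n + 1)/2, which eventually exceeds any bound you pick. A minimal sketch:

```python
def partial_sum(n):
    # n-th partial sum of 1 + 2 + 3 + ... (closed form)
    return n * (n + 1) // 2

# For any bound, some partial sum exceeds it -- so in the ordinary
# (limit-of-partial-sums) sense the series diverges to infinity.
for bound in (10, 10**6, 10**12):
    n = 1
    while partial_sum(n) <= bound:
        n *= 2
    assert partial_sum(n) > bound
```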

…I don’t even know where to begin.

The book Things to Make and Do in the Fourth Dimension: A Mathematician’s Journey Through Narcissistic Numbers, Optimal Dating Algorithms, at Least Two Kinds of Infinity, and More by Matt Parker has a discussion of this. Parker works hard to put all sorts of counterintuitive mathematical notions into understandable contexts.

He does a better job on 1+2+3+4+… = -1/12 than other writers I’ve read. And also the Riemann Hypothesis. I recommend it for non-professionals who want a sense of what’s really going on in these discussions despite not understanding the advanced math.

The trouble I have with all this is: how far from conventional “normal” addition can the process get and still be called “addition”? Is this a “sum” in anything like the normal sense of the word?

Compare with that near-infinite thread on the value of 0.999… –
Early on in that thread, we made it clear that addition of infinitely many terms cannot be done in a process anything like normal addition. We had to define exactly what a “sum of infinitely many terms” is and how to compute it, and it isn’t just ordinary addition. It’s the limit of a sequence of sums, each of those being a sum of finitely many terms. Everyone who continued to doubt that 0.999… = 1, in the subsequent 100000000000000 posts, basically was having trouble grokking that.

I haven’t even begun to grok this sum = -1/12 thing, but I haven’t tried yet, or read up on the details. But I assume that this must involve yet another new definition of “the sum of an infinite series” that must be quite different from any usual kind of addition.

Arithmetic is not mathematics, except in the most reductive sense. Arithmetic is a special case of much more general treatments of numbers.

1+2+3+4+… = -1/12 is a special case itself, based on the Bernoulli numbers, which solve the general equation 1[sup]m[/sup] + 2[sup]m[/sup] + 3[sup]m[/sup] + 4[sup]m[/sup] + … + n[sup]m[/sup] = ((B+n+1)[sup]m+1[/sup] - B[sup]m+1[/sup])/(m+1)

B is where the equation jumps out of standard algebra. B[sup]m[/sup] does not mean B to the mth power, but the mth value of B in a special table that Bernoulli calculated. It’s the first use, to my knowledge, of a look-up table in math. The table is really weird itself, with every odd-indexed value of B from 3 onward equal to 0. (If you use complex numbers for m, you get the Riemann zeta function, and that’s a gigantic huge deal. Proving that a certain set of inputs is the only place where the zeta function can equal 0 is the grail of modern mathematics.) You can manipulate the Bernoulli formula to get rid of that pesky n term that is infinite, leaving -B[sup]m+1[/sup]/(m+1), and for m = 1 that answer is -1/12. For arithmetic, that answer is indeed meaningless. For mathematics, it’s an insight into a deeper way of looking at infinite series.
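The “B[sup]m[/sup] means the mth table entry” trick can be checked mechanically: expand (B+n+1)[sup]m+1[/sup] by the binomial theorem, then replace each power B[sup]j[/sup] by the jth Bernoulli number. A sketch with the table hardcoded (convention B[sub]1[/sub] = -1/2; the formula matches direct summation for m ≥ 1 — for m = 0 it is off by the 0[sup]0[/sup] term):

```python
from fractions import Fraction
from math import comb

# First few Bernoulli numbers, convention B_1 = -1/2.
# Note every odd-indexed value from 3 on is 0.
B = [Fraction(1), Fraction(-1, 2), Fraction(1, 6), Fraction(0),
     Fraction(-1, 30), Fraction(0)]

def faulhaber(m, n):
    # Umbral evaluation of ((B + n + 1)^(m+1) - B^(m+1)) / (m + 1):
    # binomial-expand, then replace each B^j by the table entry B_j.
    expanded = sum(comb(m + 1, j) * B[j] * (n + 1) ** (m + 1 - j)
                   for j in range(m + 2))
    return (expanded - B[m + 1]) / (m + 1)

# Matches direct summation 1^m + 2^m + ... + n^m for m >= 1
for m in range(1, 5):
    for n in range(1, 12):
        assert faulhaber(m, n) == sum(Fraction(k) ** m for k in range(1, n + 1))
```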

And it’s still addition, just using a formula to speed the calculation. No different from adding the first 100 numbers with the equation n(n+1)/2 and getting 5050.
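That comparison is one line to verify:

```python
# Gauss's shortcut: the sum of the first n naturals equals n(n+1)/2
n = 100
gauss = n * (n + 1) // 2
assert gauss == sum(range(1, n + 1)) == 5050
```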

Mathematicians, I didn’t read every entry but I don’t see Bernoulli’s name. I found this in Parker’s book to be a better introduction than the usual approach through Riemann so I’m passing it along, hopefully correctly.

Well, we’ve hardly mentioned Riemann in this thread either. We’ve mostly phrased things at a more general level than that (the Bernoulli numbers amount to a special case of the values of the Riemann zeta function, which itself is yielded as a particular case of the “zeta-summation” of post #82 [illustrated in detail in #45]). And I’m not sure why you find the look-up table so significant (a look-up table is just the writing down of a previous calculation; it’s not like the Bernoulli numbers are defined by an empirically discovered table. And it certainly wasn’t the first time anyone made use of a mathematical table! For example, Napier lived, compiled his tables of logarithms, and died, all before Jakob Bernoulli was born).

But, sure, it can be useful to think directly in terms of the problem which led Bernoulli to his numbers (finding a general formula for sums of fixed powers of finitely many consecutive integers). Let’s do so:

Suppose we knew how to sum up the infinite series H[sub]p[/sub](n) = n[sup]p[/sup] + (n + 1)[sup]p[/sup] + (n + 2)[sup]p[/sup] + … from arbitrary starting points n for a given power p. (For example, this series is uncontroversially absolutely convergent for any p < -1). Then this would give a formula for finite sums of consecutive p-th powers as well: the finite sum n[sup]p[/sup] + (n + 1)[sup]p[/sup] + … + m[sup]p[/sup] would correspond to the difference H[sub]p[/sub](n) - H[sub]p[/sub](m + 1) [amounting to adding up ALL the p-th powers starting from n, then getting rid of the unwanted ones from m + 1 on].
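In the uncontroversial range p < -1 this difference identity can be checked numerically. A sketch for p = -2, approximating H[sub]p[/sub] by truncating the tail at an arbitrary large cutoff:

```python
# For p = -2 the series H_p(n) = n^p + (n+1)^p + ... converges.
# Approximate it by truncating at cutoff N (arbitrary choice), then
# check that H_p(n) - H_p(m+1) recovers the finite sum n^p + ... + m^p.
p = -2
N = 100_000

def H(start):
    return sum(k ** p for k in range(start, N))

n, m = 3, 10
finite = sum(k ** p for k in range(n, m + 1))
# The truncated tails cancel in the difference, up to float rounding.
assert abs((H(n) - H(m + 1)) - finite) < 1e-9
```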

So finding a general formula for finite sums of fixed powers of consecutive integers reduces to understanding the behavior of H[sub]p[/sub](n).

[Note that, also, we would ordinarily expect the subtracted off H[sub]p[/sub](m + 1) term above to diminish to zero as m increased one by one towards infinity starting from n, since this would be simply taking the sum n[sup]p[/sup] + (n + 1)[sup]p[/sup] + … and peeling away each of its terms one by one. So our formula H[sub]p[/sub](n) - H[sub]p[/sub](m + 1) should reduce to simply H[sub]p[/sub](n) again if we let m go to infinity, just as we would expect for consistency’s sake.]

Great. Let’s further explore what H[sub]p[/sub](n) acts like.

Note that the derivative of H[sub]p[/sub](n) with respect to n is, by the power rule, the sum of p * n[sup]p - 1[/sup] + p * (n + 1)[sup]p - 1[/sup] + … ; that is, it is p * H[sub]p - 1[/sub](n).

Put another way, the function H[sub]p[/sub] is 1/p times an antiderivative of the function H[sub]p - 1[/sub]. Of course, a function has multiple antiderivatives, up to an additive constant. Which antiderivative is H[sub]p[/sub]/p? Well, re-using this same integration rule, we also know that the mean value of H[sub]p[/sub] on the interval [1, 2] is (H[sub]p + 1[/sub](2) - H[sub]p + 1[/sub](1))/(p + 1) = -1[sup]p + 1[/sup]/(p + 1) = -1/(p + 1). This pins down the additive constant.

So now we know how to get H[sub]p[/sub] inductively from H[sub]p - 1[/sub].

[And, as noted before, H[sub]p[/sub] is straightforward for p < -1, where we have absolute convergence. And so inductively, we obtain H[sub]p[/sub] for all greater p as well…

…Except there’s a slight hitch here to smooth out before we’ve figured out H[sub]p[/sub] for higher integer p. First, we can’t get H[sub]-1[/sub] from H[sub]-2[/sub] in this way, because the mean value of H[sub]-1[/sub] from 1 to 2 was supposed to be -1/(-1 + 1), which involves division by zero. We’re going to have to abandon the idea that there’s any such thing as H[sub]-1[/sub]. (Not that it would’ve been that helpful in going further; even if we had an H[sub]-1[/sub], we couldn’t get H[sub]0[/sub] from H[sub]-1[/sub] in this way, because H[sub]0[/sub] is supposed to be 1/0 times the appropriate antiderivative of H[sub]-1[/sub], which involves another division by zero.).

That’s ok, though, because…]

There is also a basic intuition for how H[sub]0[/sub] should act: H[sub]0[/sub](n) - H[sub]0[/sub](n + k) is supposed to be the sum of n[sup]0[/sup] + (n + 1)[sup]0[/sup] + … [with k many terms] = 1 + 1 + … [with k many terms]. For integer k, this clearly comes out to k itself; if we make the choice to consider this true for arbitrary k, then we will have accumulated the following facts:

A) H[sub]p[/sub](n) - H[sub]p[/sub](n + k) = n[sup]p[/sup] + (n + 1)[sup]p[/sup] + …, with k many terms, for integer k
B) H[sub]0[/sub](n) - H[sub]0[/sub](n + k) = k, even if k is fractional
C) The derivative of H[sub]p[/sub] is p * H[sub]p - 1[/sub]

These three rules pin down completely what H[sub]p[/sub] is like for natural number p. From B), we see that H[sub]0[/sub](n) = H[sub]0[/sub](0) - n; thus, H[sub]0[/sub] is a polynomial of degree 1. Combining this with C), we see that for natural numbers p, we must have that H[sub]p[/sub] is a polynomial of degree p + 1. And then by integrating, scaling, and determining additive constants as noted before using C) and A), we find out precisely what these polynomials must be.

As the first few examples, we will have that H[sub]0[/sub](n) = 1/2 - n, H[sub]1[/sub](n) = -1/12 + n/2 - n[sup]2[/sup]/2, H[sub]2[/sub](n) = 0 - n/6 + n[sup]2[/sup]/2 - n[sup]3[/sup]/3, and so on.
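These polynomials can be checked directly against rule A): for any n and m, the difference H[sub]p[/sub](n) - H[sub]p[/sub](m + 1) must reproduce the finite sum of p-th powers. A sketch with exact rational arithmetic:

```python
from fractions import Fraction as F

# The first few H_p polynomials from the derivation above
def H0(n): return F(1, 2) - n
def H1(n): return F(-1, 12) + F(n, 2) - F(n**2, 2)
def H2(n): return -F(n, 6) + F(n**2, 2) - F(n**3, 3)

# Rule A: H_p(n) - H_p(m + 1) equals n^p + (n+1)^p + ... + m^p
for n in range(0, 6):
    for m in range(n, 10):
        assert H0(n) - H0(m + 1) == sum(1 for k in range(n, m + 1))
        assert H1(n) - H1(m + 1) == sum(k for k in range(n, m + 1))
        assert H2(n) - H2(m + 1) == sum(k**2 for k in range(n, m + 1))
```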

And, hey, take a look: our formula for the infinite sum 0[sup]p[/sup] + 1[sup]p[/sup] + 2[sup]p[/sup] + … is H[sub]p[/sub](0), and we find that H[sub]0[/sub](0) = 1/2 [if we hadn’t included that 0[sup]0[/sup] term, we’d have H[sub]0[/sub](1) = -1/2] and H[sub]1[/sub](0) = -1/12.

What does any of this have to do with Bernoulli numbers? What are Bernoulli numbers?

Well, the negated derivative of H[sub]p[/sub] is called the p-th Bernoulli polynomial, and the zeroth-order coefficient of the p-th Bernoulli polynomial is called the p-th Bernoulli number. The Bernoulli numbers are useful because you can determine all the coefficients of the Bernoulli polynomials from them, and the Bernoulli polynomials are useful because they’re just another way of talking about H[sub]p[/sub]: that is, because the integral from n to n + k of the p-th Bernoulli polynomial amounts to the sum of the p-th powers of k many consecutive values starting from n.

But I’ve phrased everything in terms of H[sub]p[/sub] instead because it ties nicely to the generalizations we’ve been interested in so far.

Whoops, one mistake as I moved factors around:

The “1/p” here should have been simply “p”. And that means:

The above was mistaken; there’s no division by zero problem in getting H[sub]0[/sub] from H[sub]-1[/sub], but rather, only in the reverse direction. If we had H[sub]-1[/sub], then we could indeed compute H[sub]0[/sub] as 0 * the appropriate antiderivative of H[sub]-1[/sub]. But H[sub]-1[/sub] blows up too badly, as previously noted (we can’t obtain it from H[sub]-2[/sub] because of the other division by zero problem), and therefore we have to obtain H[sub]0[/sub] by other means (the intuition that it decreases at a constant unit rate).

All I mean by this is that H[sub]p[/sub] directly expresses the sort of infinite summations we are concerned with. Don’t interpret this as carrying any great weight; there’s not actually any major difference between looking at H[sub]p[/sub] (a slight reframing of what’s usually called the Hurwitz zeta function) or looking at the Bernoulli polynomials, or looking at the Bernoulli numbers, or looking at the Riemann zeta function. These things are all minor reparametrizations of each other.

The ‘average’ roll of a fair die is 3.5, but I challenge anyone to roll that specific score. Average over an ever-larger number of trials, however, and you’ll find the mean does approach 3.5.

The same thing happens in this ‘proof’. When the infinite series 1 - 1 + 1 - 1 + 1… is averaged, the value 1/2 appears (if you do it a certain way), but at no point in the process of summation does that value ever occur. The partial sums flip between 1 and 0, infinitely.

Tell somebody with bipolar disorder that they are, on average, quite OK.

Everything you say is true. What of it? Many people also say things such as “1 + 1/2 + 1/4 + 1/8 + … = 2”, even though it must be acknowledged as true that at no point in the ordinary process of summation does the value 2 ever occur. The sum stays below 2, for infinitely long. Yet people still say such things, and consider themselves to have ‘proofs’ of such things, and do so for natural reasons. They are not typically under the delusion that the value 2 occurs at some point in the ordinary process of summation here; rather, they are making natural use of an extraordinary sense of summation. And so it is with all the rest.
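The 1 + 1/2 + 1/4 + … example is easy to make concrete: every partial sum stays strictly below 2, yet the gap to 2 halves at every step. A sketch in exact arithmetic:

```python
from fractions import Fraction

# Partial sums of 1 + 1/2 + 1/4 + ...: each stays strictly below 2,
# yet the remaining gap halves with every term added.
s = Fraction(0)
for k in range(50):
    s += Fraction(1, 2**k)
    assert s < 2

# After 50 terms the gap is exactly 1/2^49
assert 2 - s == Fraction(1, 2**49)
```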

^^^ This.

Think of the Schrödinger’s Cat scenario. Before you look, the cat is NOT half-alive and half-dead; it is in an indeterminate state. Is it alive (+1) or dead (0)? Once you look, the probability wave collapses, and you have either a dead cat or a live one. Since S1 = 1 - 1 + 1 - 1 + … never ends, it is like never looking into the box where the cat is either dead or alive, but never half of either.

Another thing about S1 is that the terms can be grouped to give different answers:

(1 - 1) + (1 - 1) + … = 0
1 - (1 - 1) - (1 - 1) - … = 1
1 - (1 - 1 + 1 - 1 + …) = 1 - S1 = 1 or 0

Of course, if we write

S1 = 1 - 1 + 1 - 1 + …
S1 = 1 - 1 + 1 - …

and add them term by term with the second copy shifted one place to the right, every column after the leading 1 cancels, giving 2S1 = 1, S1 = 1/2

We have three separate answers for the value of this series; clearly, S1 does not converge.

Best post / username combo evah!

1/2 is the Cesàro sum for Grandi’s Series (1 - 1 + 1 - 1 + …).
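Cesàro summation just averages the partial sums: for Grandi’s series the partial sums are 1, 0, 1, 0, …, and the running mean of those tends to 1/2. A minimal sketch:

```python
from fractions import Fraction

def cesaro_mean(N):
    # Average of the first N partial sums of Grandi's series
    # 1 - 1 + 1 - 1 + ..., whose partial sums are 1, 0, 1, 0, ...
    partial, total = 0, Fraction(0)
    for k in range(N):
        partial += (-1) ** k  # terms +1, -1, +1, ...
        total += partial
    return total / N

# The running mean settles onto 1/2 as N grows
assert cesaro_mean(2) == Fraction(1, 2)
assert abs(cesaro_mean(10001) - Fraction(1, 2)) < Fraction(1, 10000)
```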

What’s really important about it is the understanding that everything in math depends on absolutely strict and precise definitions. Otherwise it’s too easy to wind up with results that are not meaningful.

The Cesàro sum can be ridiculed, but that only reflects on a limited understanding of mathematics.