Why does the sum of all natural numbers equal -1/12?

I’m getting in way over my head here (and I should point out I read Physics at a rather well known university*), but there’s a process called ‘renormalisation’ that gets used all the time in quantum field theory, which is very well established, and I have a feeling that that is based on doing some very mathematically dubious things to divergent series. And our current understanding of nearly everything the fundamental particles of nature do, the ‘Standard Model’, is based on QFT.

*Eleven years ago**

**And got a Third in it.

Bad news: in physics it is often useful to rewrite a problem as an infinite series, as tackling each term in the series is often easier than tackling the problem itself, particularly when no exact solution to the problem is known. This is the idea behind perturbation theory, and it pops up everywhere: fluid dynamics, general relativity, and quantum physics. Quantum field theory can often produce divergent series which need to be given finite values in order for the equations they appear in to be useful, and it’s exactly this kind of method that is used.

So yes, it is used in physics. Not only that, it’s used in the parts of theoretical physics that make some of the most accurate and precise predictions in all of physics.

You are restating one of the results that gets drummed into most physics and maths students at university quite early on: if you want to add up a finite series, it doesn’t matter what order you do it in, but if you want to add up an infinite (and only conditionally convergent) series, it absolutely does. There’s even a theorem, the Riemann rearrangement theorem, which states that you can come up with absolutely any number you like if you rearrange appropriately the terms of a conditionally convergent infinite series.
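To make that concrete, here is a minimal sketch in Python (the function name is mine, purely illustrative) of the greedy construction behind the rearrangement theorem, applied to the alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + …, which ordinarily sums to ln 2. Pick any target you like and the same terms, reordered, home in on it:

```python
import math

def rearranged_partial_sum(target, num_terms):
    """Greedily rearrange the alternating harmonic series 1 - 1/2 + 1/3 - ...
    so that its partial sums approach `target` instead of ln(2)."""
    pos = 1  # next odd denominator: positive terms +1/1, +1/3, +1/5, ...
    neg = 2  # next even denominator: negative terms -1/2, -1/4, -1/6, ...
    total = 0.0
    for _ in range(num_terms):
        if total <= target:        # below target: spend a positive term
            total += 1.0 / pos
            pos += 2
        else:                      # above target: spend a negative term
            total -= 1.0 / neg
            neg += 2
    return total

print(math.log(2))                         # 0.6931..., the usual sum
print(rearranged_partial_sum(3.0, 10**6))  # ~3.0, same terms in a new order
```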

Now how you react to this is very much a matter of psychology. You can be one of the people who declares that boffins have their heads in the clouds and haven’t spent enough time “studying at the University of Real Life”, and congratulating yourself for being able to see the elementary mistakes that make everything they do a waste of time.

OR, you can be awed and delighted at the discovery of a whole new world that you didn’t even know existed, and resolve to explore it further by asking intelligent questions and reading further and experimenting with new ideas yourself and sharing them with smart people.

This is correct.

While kaltkalt is missing some of the mathematical concepts needed to understand what is happening with the summation, there is actually a different issue with the expected-value calculation (this is not a summation).

Adding up two copies of the same series should give the same result, so

1-2+3-4+… should be the same series as 1-2+3-4+…

now what is being added is:

1+(-2+1)+(3-2)+(-4+3)+…+(n-(n-1)). The problem is that, for the series to be the same, the last ‘n’ is missing. For you to be adding two of the same series together, you would end up with: 1+(-2+1)+(3-2)+(-4+3)+…+(n-(n-1))+n.

Alternatively, the last figure could be -n+(n-1) and -n respectively. This creates an expected value of 1/2 +/- n (there are actually 4 values and not 2: 0-n, 0+n, 1-n, and 1+n)… If you take the average of them, you get (0+2-2n+2n)/4, which equals 1/2.

One thing to note: this is NOT a summation, it is the expected value of the series. An expected value assumes an end (i.e. a finite series). To show how ludicrous this proof actually is:

The sum of the natural numbers between 1 and n is n(n+1)/2. The sum of all natural numbers is therefore inf(inf+1)/2. If the expected value were actually the sum, then inf(inf+1) = -1/6, or infinity != infinity.

Read about Cesàro Summability.
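For the curious: Cesàro summation assigns to a series the limit of the averages of its partial sums. A minimal sketch in Python (the names are mine, not from any library) showing that Grandi’s series 1 - 1 + 1 - 1 + … is Cesàro-summable to 1/2:

```python
def cesaro_means(terms):
    """Return the running averages of the partial sums of `terms`."""
    means = []
    partial = 0.0
    total_of_partials = 0.0
    for k, t in enumerate(terms, start=1):
        partial += t                          # k-th partial sum
        total_of_partials += partial
        means.append(total_of_partials / k)   # mean of the first k partial sums
    return means

grandi = [(-1) ** n for n in range(10000)]    # 1, -1, 1, -1, ...
print(cesaro_means(grandi)[-1])               # 0.5
```

The partial sums bounce between 1 and 0 forever, but their averages settle down to 1/2.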

Infinite series are tricky. There are a lot of different ways to define the sum of a series. If you are not careful you get really stupid answers. If you are really careful, you get interesting stuff. Naive application of finite arithmetic is a bad idea.

I took the course which covered this material (complex analysis), but I retained very little of it. Roger Penrose gives a brief summary for the layperson in a few chapters of his book, “The Road to Reality”, which is reasonably clear. I’m going to provide the example from his book, the sum

1 + 4 + 16 + 64 + 256 …

because I don’t have a solid enough grasp of the material to consider the sum in question

1 + 2 + 3 + 4 + 5 + …

Let’s consider several sums.

S1 = 1 + 1/4 + 1/16 + 1/64 + 1/256 + …
S2 = 1 + 1 + 1 + 1 + 1 + 1 + …
S3 = 1 + 4 + 16 + 64 + 256 + …

Most of you would say that S1 converges to 4/3, while S2 and S3 are divergent. These are all in fact special cases of the sum

S(x) = 1 + x^2 + x^4 + x^6 + …

where

in S1, x = 1/2
in S2, x = 1
in S3, x = 2.

Instead of writing S as an infinite sum, we can write it as a simple closed-form expression. (Penrose gives this as an exercise to the reader. I’ll solve it here.)

Consider the summation T(n)

T(n) = 1 + n^1 + n^2 + n^3 + …

Multiply by n to give

T(n)*n = n^1 + n^2 + n^3 + n^4 + …

Subtract T(n)*n from T(n) to give

T(n) - T(n)*n = 1

Factor and rearrange

T(n) = 1/(1-n)

Now make the substitution n -> x^2

1 + (x^2)^1 + (x^2)^2 + (x^2)^3 + … = 1/(1-x^2)

1 + x^2 + x^4 + x^6 + … = 1/(1-x^2)

Call this a new sum, S, as a function of x:

S(x) = 1 + x^2 + x^4 + x^6 + … = 1/(1-x^2)
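As a quick sanity check (my own sketch, not from Penrose), you can compare partial sums of S(x) against the closed form for a value of x where the series converges:

```python
def partial_S(x, num_terms):
    """Partial sum of S(x) = 1 + x^2 + x^4 + x^6 + ..."""
    return sum(x ** (2 * k) for k in range(num_terms))

x = 0.5
print(partial_S(x, 50))    # 1.3333..., settling toward the limit
print(1 / (1 - x ** 2))    # 4/3, the closed form: they agree
```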

Let’s return to our three initial series, S1, S2, and S3, and use our new formula, S = 1/(1-x^2), to reconsider them.

For S1, when we plug 1/2 in for x, we get that S1 = 4/3. No problems here.

For S2, when we plug 1 in for x, we get that S2 = 1/0 = ∞ (or S2 is undefined). OK, this agrees with our notion that S2 diverges. (This is a pathological value of x, which someone else can expand on.)

For S3, when we plug in 2 for x, we get that S3 = -1/3, so we have that

1 + 4 + 16 + 64 + 256 + … = -1/3

This is peculiar, since we expect this to diverge as well. Let’s plot the first few terms of the summation S(x) to see if we can get a more intuitive feel for what’s happening. Specifically, let’s plot:

1
1 + x^2
1 + x^2 + x^4
1 + x^2 + x^4 + x^6
1 + x^2 + x^4 + x^6 + x^8

[plot 1: the partial sums 1, 1 + x^2, 1 + x^2 + x^4, …, up to the x^8 term]

As we add more and more terms, our plot becomes a sort of infinitely high bowl bounded at x = -1 and x = 1. So, this plot would suggest that our summation has a value for x such that -1 < x < 1, and is undefined otherwise.

But let’s now plot 1/(1-x^2)

[plot 2: y = 1/(1-x^2)]

y = 1/(1-x^2) is defined for all real x except for x = -1 and x = 1, and those values “outside” the bowl are negative! At x = -1 and x = 1 there are “poles”. So x = 1 is really just one of two little blips that need further mathematical treatment to understand. Again, I can’t really delve more into the topic because I’ve forgotten everything. It seems that there are many people here, though, who can go over this in much greater detail. I do, however, hear controls and systems engineers (who practice a very practical discipline – think feedback, automation, robotics) talk about poles a lot.
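In case the plots above don’t survive the forum software, here’s a short matplotlib sketch (my own, with illustrative variable names) that reproduces both: the partial sums forming the “bowl” on (-1, 1), and the closed form 1/(1-x^2), which is also defined outside it:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2, 2, 1000)

# Partial sums 1, 1 + x^2, ..., 1 + x^2 + ... + x^8: the "bowl"
for num_terms in range(1, 6):
    y = sum(x ** (2 * k) for k in range(num_terms))
    plt.plot(x, y, label=f"{num_terms} term(s)")

# The closed form 1/(1 - x^2), masked near its poles at x = +/-1
closed = 1 / (1 - x ** 2)
closed[np.abs(np.abs(x) - 1) < 0.02] = np.nan
plt.plot(x, closed, "k--", label="1/(1-x^2)")

plt.ylim(-5, 10)
plt.legend()
plt.show()
```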

Exactly. S1 does not converge. The whole point of this discussion is whether there are reasonable ways to assign sums to series that don’t converge.

S1 is actually a somewhat interesting example because it’s one of the few divergent series that can be assigned a sum using methods that are taught to sophomore and junior math majors. The basic idea is that if you look at the sum of 1 + x + x^2 + x^3 + … on the domain where it does converge, the function you get is so smooth that there’s only one way to extend it to the rest of the complex plane (except for the pole at x = 1) that doesn’t break the smoothness. Once you do that, 1 - 1 + 1 - 1 + … = 1/2 is immediate.

As an aside, I’ve always found the difficulty that people have with summing divergent series to be fascinating. Going from defining the sum of finitely many numbers to defining the sum of even a single infinite series is a much bigger conceptual leap than going from assigning sums to a small class of infinite series to a slightly larger class of infinite series. Why do people struggle so much more with the latter?

People are tossing the terms “converge” and “diverge” around without prefixing them with necessary adjectives. A series might converge under one method but diverge under another.

There is no one “right” definition of convergence for an infinite sum, so there is no default definition that can be assumed. (Although conditional convergence seems to be what most people are assuming here.)

Think of it this way:

You’re wondering if there is an answer to the question “What is the sum of 1 + 2 + 3 + 4 + 5 + . . .” You’re told that the answer is -1/12. You insist that the answer is ∞. Why do you think that? ∞ isn’t a number you learn in your arithmetic classes early in school. You’re taught about the natural numbers (1, 2, 3, etc.) and how to add two of them. Eventually you’re taught how to add together more than two of them. At no point in those classes are you asked what the sum of an infinite number of them is. At no point in those classes are you told about this mysterious number ∞ which is the sum of 1 + 2 + 3 + 4 + 5 + . . . That’s something that you learn much later. You’re told at that point that there is this number ∞ that you can consider to be the sum of 1 + 2 + 3 + 4 + 5 + . . . “Really?”, you say. “Where do I put it on the number line?” You’re told that it’s not on any place on the number line that you can see. It’s far to the right of all the numbers you see. In fact, it’s beyond any of the natural numbers. So ∞ seems like more of an arbitrary definition than the natural numbers that you learned about before. It’s a useful definition, but it’s still arbitrary.

Everything you know about numbers is arbitrary. What does 1/2 mean? You will claim that that is obvious. Really? Grab that bag of carrots next to you. Show me two carrots. You pull out two of them and show me. Show me five carrots. You pull out five of them and show me. Show me one-half carrots. You pull out one and break it in half. I measure it and say, “No, that’s approximately .4856 of a carrot.” “But,” you say, “everyone knows what 1/2 means. We learn it in our early arithmetic classes.” Well, you learn it eventually. At first, when you’re learning division, you’re told that 1 divided by 2 equals 0 with a remainder of 1. It’s only later that you’re told that there’s a definition so that 1 divided by 2 equals 1/2, and that number has a meaning. Again, it’s a useful definition, but it’s an arbitrary one.

And the same is true for many other sorts of numbers. Negative numbers are an arbitrary definition. They are useful, but it’s something that has to be defined. You have to be told that you can go to the left on the number line instead of just to the right. Real numbers are an arbitrary definition. You have to be told what the number 3.14159265359… means. It’s a very useful definition, but it’s arbitrary. Complex numbers are an arbitrary definition. Quaternions are an arbitrary definition. There are many ways of extending the numbers that are useful but arbitrary definitions.

Even the natural numbers are an arbitrary definition. “What?”, you say. “Everybody knows what the natural numbers are. We all have to count.” Well, no, not everybody does. There are tribes even today living far from what we think of as civilization that don’t count and don’t have words for numbers in their languages. They get along fine without having to count. They have words for “more” and “less,” but they don’t even have words for “one” and “two.” (I won’t call these societies primitive because I don’t think it’s accurate. They survive just fine. They have a lot of complex ideas in their languages and cultures.) We think of the natural numbers as “natural” because they are learned so early in our society. Most people learn to count these days at around the age of three. There’s nothing natural in that.

The same is true for the idea that 1 + 2 + 3 + 4 + 5 + . . . = -1/12. For certain fields of study, it turns out to be a useful definition. Within those fields, it’s a consistent definition. It’s not useful for anything you may do, but it’s useful within certain parts of physics.

?? I think it’s universally understood that if you say a series converges, you mean the sequence of partial sums converges, and that is not the same as saying a series conditionally converges.

Yes, that’s the definition that I’m familiar with. I wish ftg had tried to clarify or give an example of what he meant by a series converging under one method but diverging under another.

(Series which converge, according to this definition, can be further divided into those which converge absolutely and those which converge conditionally. Colloquially, it’s a matter of whether the series would still converge if you made all its terms positive. More precisely, it’s a matter of whether or not the related series, made up of the sum of the absolute values of the terms of the original series, also converges.)
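To make that parenthetical concrete, here’s a quick numeric sketch (my own): the alternating harmonic series settles down, while the series of its absolute values, the harmonic series, keeps growing like ln N:

```python
import math

N = 10 ** 6
signed = sum((-1) ** (n + 1) / n for n in range(1, N))  # 1 - 1/2 + 1/3 - ...
absolute = sum(1 / n for n in range(1, N))              # 1 + 1/2 + 1/3 + ...

print(signed, math.log(2))    # ~0.693 vs 0.693...: converges (conditionally)
print(absolute, math.log(N))  # ~14.4 vs ~13.8: grows without bound
```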

Then you can get into the distinction between convergent sequences and Cauchy sequences in different metric spaces, but that’s still not exactly different “methods” of convergence.

As discussed in the linked threads, this particular sum usually arises and is dealt with in physics via zeta function regularization. The idea is that the sum \zeta(s) = 1^{-s} + 2^{-s} + 3^{-s} + … converges absolutely for Re(s) > 1. The function \zeta can be (uniquely) extended meromorphically over the complex plane. When you do so, its value at -1 is -1/12, which “means” that 1 + 2 + 3 + … = -1/12. I put “means” in quotes there because mathematicians don’t sit down and say, “Hmm, what should the sum 1 + 2 + 3 + … equal?” As has been discussed at great length above, there are multiple definitions of the sum (or, if you prefer, multiple functions on sequences that act like sums) of a series; under the obvious one, it’s clearly divergent. That’s not the point, though. Typically, this sum will appear in some other calculation where the other divergences cancel or are dealt with in some other clever way, and you can use analytic continuation to get a useful result.
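You can see the analytically continued value directly with a library that implements the continuation, e.g. mpmath (assuming it’s installed; the zeta and nsum calls are real library functions, the rest is my sketch):

```python
from mpmath import zeta, nsum, inf

# Where the series converges, the function and the sum agree:
print(zeta(2))                                # 1.6449... = pi^2/6
print(nsum(lambda n: 1 / n ** 2, [1, inf]))   # same value, summed directly

# At s = -1 the series 1 + 2 + 3 + ... diverges,
# but the meromorphic continuation gives:
print(zeta(-1))                               # -0.0833333... = -1/12
```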

I can’t add anything to this.

If you really just plucked this out of thin air, then you’ll be surprised to learn that the critical dimension of bosonic string theory is indeed determined using this sum (well, actually, I think there is more than one argument you can use for that, not all of which use this summation).

It also crops up in more well-trodden parts of physics, such as determining the Casimir force in quantum field theory. Basically, you end up having to sum over infinitely many oscillator modes, and using the ‘trick’ this thread is about—known as ‘zeta function regularization’ if you want to get fancy about it—you can extract a finite value, which agrees with observation.
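One way to see a Casimir-style calculation numerically is with a different but standard regulator (exponential damping rather than the zeta function itself; this is my own sketch of that textbook argument). Regulate the sum over mode numbers as the sum of n*e^(-eps*n), which works out to 1/eps^2 - 1/12 + O(eps^2). Subtract the divergent 1/eps^2 piece, and the -1/12 is what remains as the regulator is removed:

```python
import math

def regulated_sum(eps):
    """Sum n * exp(-eps * n) over n = 1, 2, 3, ... until terms are negligible."""
    total, n = 0.0, 1
    while n * math.exp(-eps * n) > 1e-18:
        total += n * math.exp(-eps * n)
        n += 1
    return total

for eps in [0.5, 0.1, 0.02]:
    # Closed form: e^eps / (e^eps - 1)^2 = 1/eps^2 - 1/12 + eps^2/240 - ...
    print(eps, regulated_sum(eps) - 1 / eps ** 2)
# The differences approach -1/12 = -0.08333... as eps -> 0.
```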

In the method I’m familiar with, you set up the action of the Virasoro algebra on the set of physical states and show by a straightforward calculation that you can only kill the ghost states if its central charge (which is just the number of dimensions) is exactly 26.

It’s fairly clear that ftg is using “converge”(/“diverge”) to mean “is(/is not) assigned a sum, under whatever summability method (of which there are many beyond the traditional one in terms of arbitrarily close approximation by initial segment partial sums)”.

So, for example, as 1 - 1 + 1 - 1 … = 1/2 with Cesàro summation, ftg would use the phrasing “This converges to 1/2 with Cesàro summation”.

Indeed. Actually, I would say absolutely convergent summations are a very robust and natural notion (that is, the sum of an absolutely convergent collection of unsigned values is simply the supremum of the sum of its finite subcollections, and to sum in the same way a collection of signed values one just adds together the sums of its separately signed components, so long as this does not lead to the ambiguity of adding oppositely signed infinite magnitudes), but what’s particularly odd to me is that people are so happy to accept asymptotics of discretely cut-off initial segment partial sums as the One True Account of summation even for conditionally convergent series, where you already get all the warts of re-arrangement and so on.

Once you’ve moved beyond absolute convergence, there are all kinds of things which are useful in different contexts to consider. But people cling, I think, quite stubbornly to yelling the things that were yelled at them in school, causing a certain unfortunate inertia of perspective.

Yes, that’s a more rigorous way of doing this. FWIW, the (somewhat hand-wavy) argument I had in mind is given by David Tong in his lecture notes.

Can someone talk to me like I’m a 12 year old with brain damage? I can’t even begin to comprehend what posters here are saying. The equation:

1 + 2 + 3 + 4 + … = -1/12

cannot possibly be true. The first number in the equation, 1, is larger than -1/12, and all the other numbers added to 1 only make the total larger still. How can anyone suggest that the sum of an infinite series of numbers, all larger than -1/12, is equal to -1/12?

I would advocate reading the earlier thread, but I understand that it’s long. So as a compromise, I advocate reading only my posts in the earlier thread. :)

The sense in which 1 + 2 + 3 + 4 + … = -1/12 is this:

First, consider X = 1 - 1 + 1 - 1 + … Note that X + (X shifted over by one position) = 1 + 0 + 0 + 0 + … = 1. Thus, in some sense, X + X = 1, and thus, X = 1/2.

Now consider Y = 1 - 2 + 3 - 4 + … . Note that Y + (Y shifted over by one position) = 1 - 1 + 1 - 1 + … = X. Thus, in some sense, Y + Y = X, and thus, Y = X/2 = 1/4.

Finally, consider Z = 1 + 2 + 3 + 4 + … Note that Z - Y = 0 + 4 + 0 + 8 + … = (zeros interleaved with 4 * Z). Thus, in some sense, Z - Y = 4Z, and thus, Z = -Y/3 = -1/12.

In contexts where the above reasoning is applicable to what one wants to call summation, we have that 1 + 2 + 3 + 4 + … = -1/12. In other contexts, we don’t.

Note that I’ve said “in some sense” several times in the above argument. That’s because, while we all know how to add and subtract a finite collection of numbers in the ordinary way, when it comes to adding and subtracting an infinite series of numbers, there are many different ways of interpreting what this should mean. Just knowing how to add finitely many numbers doesn’t automatically tell us what it means to add a whole infinite series of them. And when it comes to summation of infinite series, it turns out there’s not just one nice notion of “summation”; there are many different ones, which are nice for different purposes.

One such notion is “Keep adding things up, one by one, starting from the front, and see if the results get closer and closer to some particular value; if so, that value is the sum”. On that account of what summation means, you clearly won’t get any finite answer for 1 + 2 + 3 + 4 + …; since the terms get arbitrarily large, the partial sums will never settle down to a finite value (and certainly not a negative one like -1/12!). They instead, in a natural sense, should be understood as summing to positive infinity. And there’s nothing wrong with this! It’s a very natural account of summation to consider. It’s just not the only account of summation worth thinking about.

We could instead consider other notions of “summation”, including ones designed precisely so that arguments like the one we made at the beginning (which are very natural arguments to make!) counted as legitimate ways to reason about such “summation”. And then, by definition, we will have that 1 + 2 + 3 + 4 + … = -1/12, on such accounts of “summation”.

If you insist that “Keep adding things up and see if the results get closer and closer to some particular value” is the only account of summation you’re interested in, you’ll object to the argument we gave at the beginning, saying “You’re not allowed to do that kind of shifting over and adding to itself and interspersing and so on reasoning all willy-nilly; look at what nonsense it produces!”.

But it can be made sense of (for a more systematic, formal account of series summation of a sort which validates the above manipulations, see my posts in the previous thread starting from #159), and is even fruitful to make sense of, in certain contexts in mathematics, and there is no need to blind ourselves to this insight.
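For anyone who wants to poke at the first two steps numerically: Abel summation (evaluate the sum of a_n * t^n for t slightly below 1, then let t approach 1) is one systematic notion under which the manipulations for X and Y above are legitimate. A quick sketch of my own, with illustrative names:

```python
def abel_sum(coeffs, t):
    """Evaluate sum of coeffs[n] * t^n at a fixed 0 < t < 1."""
    return sum(c * t ** n for n, c in enumerate(coeffs))

N = 100000
X = [(-1) ** n for n in range(N)]             # 1 - 1 + 1 - 1 + ...
Y = [(-1) ** n * (n + 1) for n in range(N)]   # 1 - 2 + 3 - 4 + ...

for t in [0.9, 0.99, 0.999]:
    print(t, abel_sum(X, t), abel_sum(Y, t))

# As t -> 1 from below, the X column approaches 1/2 and the Y column 1/4.
# Z = 1 + 2 + 3 + ... is NOT Abel summable (its regulated sums blow up as
# t -> 1); pinning down its -1/12 takes a stronger method, such as the zeta
# function regularization discussed earlier in the thread.
```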