1+2+3+4+... = -1/12. In what sense?

Man, I’ve been making a lot of typos lately. Sorry 'bout that.

To give an idea of why analytic continuation is useful, let’s look at a different example of extending the domain of a function. When you first learned about exponentiation, you learned it as iterated multiplication. So, for instance, x[sup]2[/sup] = x*x, and x[sup]3[/sup] = x*x*x, and so on. But now, suppose someone asks you what x[sup]2.5[/sup] is. It seems like a reasonable question to ask, but if we just use the definition of exponentiation as iterated multiplication, there’s no way to make sense of it.

So what we do is figure out which properties of exponentiation we’d like to keep (for instance, the property that x[sup]a[/sup]x[sup]b[/sup] = x[sup]a + b[/sup]), and then we come up with a new definition. “Exponentiation” as defined under the new definition should satisfy the same interesting properties as “exponentiation” under the old definition, and at every point where the definitions overlap, they should give the same value.

For instance, one way to define exponentiation with real numbers is to say that x[sup]a[/sup] = exp(a*ln(x)) (of course, you also have to then define the functions exp() and ln(), but that’s not too hard). This new definition has the same properties as the old one, since exp(a*ln(x)) * exp(b*ln(x)) = exp((a+b)*ln(x)), and it matches up with the old definition, since, for instance, exp(2*ln(x)) = x*x and exp(3*ln(x)) = x*x*x. So the new definition does everything that the old definition did, and then some. That makes it more useful, and so we use it instead of the old one.
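
If it helps to see that agreement concretely, here’s a tiny numerical sketch of the exp/ln definition. Using Python’s math module (and the helper name “power”) is just my own choice of illustration, not anything essential to the point.

[code]
# A minimal sketch of "the new definition agrees with the old one",
# using Python's math module (my choice of tool, not part of the argument).
from math import exp, log

def power(x, a):
    """Define x**a as exp(a * ln(x)) for positive x."""
    return exp(a * log(x))

x = 3.0
print(power(x, 2), x * x)            # both about 9: matches iterated multiplication
print(power(x, 3), x * x * x)        # both about 27
print(power(x, 2.5))                 # and now 3^2.5 makes sense too (about 15.588)
print(power(x, 2) * power(x, 3))     # about 243, i.e. power(x, 5): x^a * x^b = x^(a+b)
[/code]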

Analytic continuation is a similar process, in that it takes an old function, and makes a new function that does everything the old function did, and then some.

Of course, Chronos, your example is kind of amusing given the fact that a function like x[sup]z[/sup] doesn’t have an analytic continuation to the complex plane (unless x = e). But as an analogy of extending the domain of a function (from the integers to the reals, instead of from the reals to the complex numbers), it’s a nice one.

Or x = any other particular positive constant. As a unary function from reals to reals, such a function \r -> x[sup]r[/sup] [by this notation, I mean a function which takes in one real argument r and outputs one real value x[sup]r[/sup]] always has an analytic continuation to the entire complex plane (just use the fact that x[sup]r[/sup] = e[sup]ln(x)*r[/sup] in conjunction with the fact MikeS referenced, that \r -> e[sup]r[/sup] has an analytic continuation to the entire complex plane).

(Incidentally, when you hear people speak about the “amazing and beautiful theorem” that e[sup]iπ[/sup] = -1, you may wonder just what it means to take e to an imaginary power; you may even perhaps wonder if this is more of a definition than a theorem, and thus be confused as to what’s so great about that. Well, now you probably understand just what it’s all about: if you take the real function \r -> e[sup]r[/sup] and take its analytic continuation to the entire complex plane, then the value of the resulting function at iπ is -1).
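
For the curious, this is easy to poke at numerically; a one-liner with Python’s cmath module (again, my choice of illustration) evaluates the continued exponential at iπ:

[code]
# Evaluating the analytically continued exponential at i*pi
# (just a numerical illustration of the claim above).
from cmath import exp, pi

print(exp(1j * pi))   # roughly (-1 + 1.2e-16j), i.e. -1 up to rounding error
[/code]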

Anyway, considering x[sup]z[/sup], what MikeS may have been getting at is essentially that, if we fix the exponent and let the base vary instead, then we can get a function on the nonnegative reals with no (single-valued) analytic continuation to the entire complex plane. For example, consider \r -> r[sup]0.5[/sup]. On the nonnegative reals, this generates “principal square roots”; of course, every nonzero number has two complex square roots, but if they’re real, then we can designate one of them as positive and principal, and the other as negative and nonprincipal.

The problem is, there’s no clean way to extend this positive/negative system to the entire complex plane, a problem which manifests itself in the following phenomenon: consider a circle in the complex plane centered at the origin, with some positive radius c. If you start at the real number c and trace continuously around this circle, at each point assigning a value to the square root function in a natural way in an attempt to extend it analytically, then once you’ve completed one revolution of the circle, you end up back where you started but with the negation of the value you started with (i.e., you’ve found yourself flipped from the principal square root to the nonprincipal square root). As a result, there’s no single-valued analytic continuation of that function to the entire complex plane.

There is, however, in a natural sense a multiple-valued analytic continuation, which always outputs both square roots. That brings us into the whole business of branch cuts, but I’ll leave further explanation of that topic for either another time or another person…
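
If anyone wants to watch that sign flip happen, here’s a rough numerical sketch of the trip around the circle. The unit circle, the step count of 1000, and the “keep the nearer root” branch-tracking rule are all my own choices for the demo:

[code]
# Continue the square root along the unit circle and watch it flip sign
# after one full revolution (a numerical sketch, not a proof).
import cmath

steps = 1000
w = 1 + 0j                               # start at the principal square root of 1
for k in range(1, steps + 1):
    theta = 2 * cmath.pi * k / steps
    z = cmath.exp(1j * theta)            # next point on the unit circle
    r = cmath.sqrt(z)
    # of z's two square roots, keep whichever is nearer the previous value,
    # i.e. continue the function rather than jump between branches
    w = r if abs(r - w) < abs(-r - w) else -r

print(w)   # approximately -1: the loop lands on the *other* square root of 1
[/code]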

Except, isn’t ln() multivalued on the complex plane?

Ah, I thought this might have been the source of MikeS’s confused comment. Yes, of course ln is multivalued in the complex context, but it is single-valued as a function from positive reals to reals, which is all we need in this case. If you let k be the unique real natural log of the positive real b, then the complex function \z -> exp(k * z) is an analytic continuation to the entire complex plane of the real function \r -> b[sup]r[/sup].
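
Spelled out with a concrete base (b = 2 is just an example value of mine, as is the function name), that continuation looks like this:

[code]
# The continuation of r -> b^r described above, for one positive base b.
import cmath, math

b = 2.0
k = math.log(b)               # the unique real natural log of b

def b_to_the(z):
    return cmath.exp(k * z)   # defined (and single-valued) for every complex z

print(b_to_the(3))       # about 8, agreeing with the original real function
print(b_to_the(0.5))     # about 1.414..., i.e. sqrt(2)
print(b_to_the(1j))      # a genuinely complex value: cos(ln 2) + i*sin(ln 2)
[/code]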

Of course. You did specify “positive x”, which implies “real” as well.

I will concur with that. It’s a great read.

You know, this has confused me for years. Thanks for the elucidation!

Indistinguishable, Chronos, Ultrafilter, and others, thanks for your responses to my lists of questions. Very nicely explained! (Indistinguishable, your posts were especially helpful.)

Out of curiosity, has the summing function we’ve been talking about–the one which maps “1 + 1 + 1 + 1 +…” to “-1/2”–found any application in physics or elsewhere? Nothing much rests on the answer to this question for me, I’m just curious to know.

-FrL-

There are a couple things we’ve been talking about here. The first is methods for summing divergent series, which I believe do have applications in physics (I’ll let Chronos explain specifics). The second is analytic continuation, which is very important for complex analysis in general. Analytic continuation is only one of many techniques for summing divergent series–for instance, see the link to Cesàro means that I posted earlier.
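
As a quick taste of the Cesàro approach for anyone who skipped that link, here’s the standard trick applied to 1 - 1 + 1 - 1 + … — average the partial sums. (The cutoff of 10,001 terms is an arbitrary choice of mine.)

[code]
# Cesaro summation of Grandi's series 1 - 1 + 1 - 1 + ...:
# average the partial sums and watch the average settle near 1/2.
N = 10001
partial, s = [], 0
for n in range(1, N + 1):
    s += (-1) ** (n + 1)      # terms 1, -1, 1, -1, ...
    partial.append(s)         # partial sums 1, 0, 1, 0, ...

print(sum(partial) / N)       # about 0.50005, approaching 1/2
[/code]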

I’ve heard that the specific sum 1 + 2 + 3 + … = -1/12 is used somewhere in the String Model, but then, it seems like every mathematical result conceivable is used somewhere in the String Model. I’m afraid I can’t help with specifics, though.

My knowledge in this area is severely lacking, but would I be correct in stating that the area to look at, for applications of these summability techniques to physics, is renormalization?

The clearest explanation I’ve seen is also in Baez’s writings, like the link aktep posted. Here it is.

This thread is ancient, but with the relaxed policy on zombie threads, and my having perhaps gotten better at explaining things over the years, I figure I may as well deliver now the layman’s explanation I should have given three years ago:

To wit, you already understand the double-averaging sense in which the alternating series 1 - 2 + 3 - 4 + 5 - 6 + … can be considered to sum to 1/4. All that’s left is a bit of straightforward arithmetic connecting such non-alternating and alternating series:
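
For anyone who wants to see that “double averaging” in action rather than take it on faith, here’s a quick numerical sketch (the 10,000-term cutoff and the helper name are my own choices):

[code]
# The "double averaging" reading of 1 - 2 + 3 - 4 + ...:
# average the partial sums, then average those averages.
N = 10000

partial, s = [], 0
for n in range(1, N + 1):
    s += (-1) ** (n + 1) * n         # terms 1, -2, 3, -4, ...
    partial.append(s)                # partial sums 1, -1, 2, -2, 3, -3, ...

def running_means(xs):
    out, total = [], 0
    for i, x in enumerate(xs, start=1):
        total += x
        out.append(total / i)
    return out

once = running_means(partial)        # first averaging: hovers around 1/2 and 0
twice = running_means(once)          # second averaging: settles toward 1/4
print(twice[-1])                     # about 0.25
[/code]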

(1[sup]r[/sup] + 2[sup]r[/sup] + 3[sup]r[/sup] + 4[sup]r[/sup] + …) - (1[sup]r[/sup] - 2[sup]r[/sup] + 3[sup]r[/sup] - 4[sup]r[/sup] + …) = 2 * (2[sup]r[/sup] + 4[sup]r[/sup] + 6[sup]r[/sup] + 8[sup]r[/sup] + …) = 2[sup]1 + r[/sup] * (1[sup]r[/sup] + 2[sup]r[/sup] + 3[sup]r[/sup] + 4[sup]r[/sup] + …).

Therefore, (1[sup]r[/sup] + 2[sup]r[/sup] + 3[sup]r[/sup] + 4[sup]r[/sup] + …) = 1/(1 - 2[sup]1 + r[/sup]) * (1[sup]r[/sup] - 2[sup]r[/sup] + 3[sup]r[/sup] - 4[sup]r[/sup] + …). [In jargon terms, this is the connection between the Riemann zeta and Dirichlet eta]. Plugging in r = 1, we have that 1 + 2 + 3 + 4 + … = -1/3 * (1 - 2 + 3 - 4 + …) = -1/3 * 1/4 = -1/12, answering the OP.

(Similarly, plugging in r = 0, we have that 1 + 1 + 1 + 1 + … = -1 * (1 - 1 + 1 - 1 + …) = -1 * 1/2 [using averaging to evaluate the alternating sum] = -1/2, as came up later in the thread as well.)
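
If you’d rather have a machine check the bookkeeping, the relation above is the zeta/eta connection mentioned in the jargon aside, and the mpmath library (my choice of tool here; its altzeta function is the Dirichlet eta) computes both continuations directly:

[code]
# Numerical check of the relation above: (1^r + 2^r + 3^r + ...) "=" zeta(-r),
# and zeta(-r) = altzeta(-r) / (1 - 2**(1 + r)), where altzeta is Dirichlet eta.
from mpmath import zeta, altzeta

for r in (1, 0):
    direct  = zeta(-r)
    via_eta = altzeta(-r) / (1 - 2**(1 + r))
    print(r, direct, via_eta)   # r = 1 gives -1/12 both ways; r = 0 gives -1/2
[/code]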

:confused:

OK, I hope you smart guys won’t mind addressing a complete mathematical ignoramus here, but my elementary-school level of arithmetical understanding tells me quite clearly that 1+2+3+4+…=∞ (that’s infinity, in case a browser can’t handle the symbol properly), and certainly can’t be anything negative or fractional.

Are the numerals, or the + sign, or the = sign or maybe the “…” being used in some special, non-standard sense here? Or is my browser not displaying some vital characters, or what the hell else is going on? (Or were they lying to me in elementary school?)

Did you read the rest of the thread? That’s the same question the OP asked and which we answered.

Yes, and I did not find any answer that I understood, or that even seemed to be addressing the question.

Let me put it another way.
Why is 1+2+3+4… not straightforwardly and unequivocally equal to infinity? How can it come to anything negative or fractional (or, come to that, non-infinite)?

How is anything beyond elementary-school level arithmetic even relevant to the “equation” quoted by the OP?

The only reasonable answer I can imagine to these questions would involve saying that the symbols either mean something different from usual, or that some symbol has been omitted, but nobody said anything like that. (Well, Grey did, but he got slapped down smartish.)

1+2+3+4 (without the dots) equals 10, right? Or were they lying to me at elementary school about that too?

Then you didn’t look hard enough. (Grey’s post was fine, incidentally; I think the posts which “slapped him down” were either taking him too literally or misunderstanding what he was saying). The point being:

It is… on one account of what it means to take such an infinite summation.

On other accounts of what such an infinite summation could mean, it can equal other things. And we can consider those as well; so long as we’re clear in our discussions about what we mean, the more interesting phenomena to study, the better.

In elementary school, you perhaps learnt to add finite lists of numbers. When in elementary school did you ever deal with adding a whole infinite series of numbers?

What do you think the sum of an infinite series of numbers should mean, exactly? Do you have a clear idea? Is there room for you to consider more than one different interpretation of such a notion?

Yes, the infinite summation in this context is being interpreted in a manner which is somewhat different from the usual interpretation you are taking. (As mentioned in all of the above posts in this thread…) Not so different that there’s no good reason to still think of it as summation, mind you; just different enough that it doesn’t have all the same properties you might expect (e.g., this notion allows for a sum of infinitely many positive numbers to be negative).

No, they weren’t lying. That’s on the money… (on some account of what those terms mean, just as with everything else; it just happens to be the case that we are much more rarely motivated to consider different interpretations of this finite sum than we are of the OP’s example of an infinite sum)

Look, apart from Grey’s posts and the immediate, dismissive responses to them, the discussion very quickly became totally impenetrable to a non-mathematician like me. The link posted by akep starts off by talking about k-coloring and k-pointing of sets. Chronos starts talking about things being “smooth on the complex plane” and “the Riemann zeta function.” From there, things get worse (and mostly, so far as I can follow at all, seem to be about correcting or refining details of Chronos’ answer). I don’t understand this jargon, and I don’t particularly want or expect people on these boards to be able to explain it to me. What I want to know is why any of this technical stuff is even relevant to what appears to be a very simple, clear and straightforward mathematical expression given in the OP. I don’t have much hope of understanding the actual math, but I do not see why that should preclude me from understanding what is (presumably, unless you are all just talking bollocks) the underlying conceptual issue of why there should be more than one sense for the expression in question.

To be fair to you, your recent post, which seems to have revived the zombie, does not hide behind technicalities and jargon. But . . .

Apart from the jargony aside, there is nothing conceptually beyond me in that. Being lousy at math, I don’t actually follow it, but that is my problem, not yours (and no doubt I could follow it if I really made a big effort). However, to me, it looks for all the world like one of those “joke” proofs I was shown in high school math, where you prove something like 1=2 by using a lot of unnecessary steps to obfuscate the fact that the proof contains some error, such as a division by zero. I will take your word for it that there is no such elementary or deliberate error in play here, but the question remains, why go through this complicated dance when the real answer is obviously infinity? I mean, you could put in a bunch of extra steps between 1+1 and the result 2, but why would you?

Or, to echo Frylock, upthread, why is the fact that this way you get the result -1/12 rather than infinity, not taken to be evidence that there is an error (no doubt something much more subtle and deep than dividing by zero) in this alleged proof?

OK, I think you are saying that on these other accounts, the “…” (or, possibly, the +) is being used to mean something different from what I take it to mean (and which I am fairly confident most people would take it to mean). Is there any chance of an explanation, though, of why someone would want to put a different interpretation on it (beyond handwaving like “since folks like analytic functions” or “the concept of the new notion of sum has some family resemblance with the conventional concept of sums”)? I mean, why would it not be better to just use a different symbol if you mean something different? Mathematicians like things to be precise and unambiguous, don’t they?

Well, I would not claim that the concept of infinity is entirely clear to me (for one thing, I know that there are supposed to be different versions of infinity, some more infinite than others, and I accept but don’t more than very vaguely understand that), but I don’t think that anything like that is relevant here (it is certainly clear enough to me that no type of infinity is equal to minus a twelfth). That difficulty aside, yes, the notion of the sum of an infinite series seems pretty clear and unambiguous to me. Maybe it contains obscurities I have not fathomed, but it sure does not look like it.

Yes, there is room in my head for that (luckily, it is not all filled up with knowledge of math :)). That is why I am here. I am not trying to be snarky. I really want to understand. The challenge is for you guys who think you understand it to explain the basic conceptual point of why there should be any room for different interpretations (not the mathematical details of what those interpretations might be, I don’t expect that) in plain English.

njtt, look at the ways that an infinite sum is different from a finite sum. When we say that 1 + 2 + 3 + 4 = 10, we’re saying anybody with a knowledge of ordinary arithmetic can solve the problem in a finite amount of time. Furthermore, because of the laws of arithmetic, the summation can be done in various ways. We can add it like this:

1 + (2 + (3 + 4))

or like this:

((1 + 2) + 3) + 4

or several other ways.

We can also add it in other orders like this:

4 + 3 + 2 + 1

or this:

2 + 1 + 3 + 4

In other words, finite addition is both associative and commutative.

But what does it even mean for there to be an answer to this infinite addition?

1 + 1/2 + 1/4 + 1/8 + 1/16 + . . .

You can’t say that it’s obvious just from the usual definition of finite addition. By the usual definition of infinite addition, that sum is 2. But what does that mean? It means that if you add the numbers in the given order, the sum will get closer and closer to 2. You have to add them in the given order though. Suppose that you were to say, "Well, I don’t want to add in the given order. I want to add in my own order. I want to add it like this:

1 + 1/4 + 1/16 + 1/64 + . . . + 1/2 + 1/8 + 1/32 + 1/128 + . . ."

If you were to add it that way, you could say that the sum doesn’t get closer and closer to 2. The sum, as far as you can take it, gets closer and closer to 1 and 1/3. That’s because you never get around to adding the second part of the sum. So you have to have a different definition for addition when you do infinite addition.
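
A small computation makes that order-dependence vivid (the 40-term cutoff is arbitrary, and the variable names are mine):

[code]
# Partial sums of 1 + 1/2 + 1/4 + ... in the given order approach 2,
# but if you insist on doing all of 1 + 1/4 + 1/16 + ... "first",
# you only ever approach 1 and 1/3, and never reach the remaining terms.
N = 40
given_order = sum(1 / 2**k for k in range(N))   # 1 + 1/2 + 1/4 + ...
front_only  = sum(1 / 4**k for k in range(N))   # 1 + 1/4 + 1/16 + ...

print(given_order)   # about 2.0
print(front_only)    # about 1.3333..., with 1/2 + 1/8 + 1/32 + ... still waiting
[/code]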

Notice that we could say that the infinite sum:

1 + 1/2 + 1/4 + 1/8 + 1/16 + . . .

could be split up into the two infinite sums:

1 + 1/4 + 1/16 + 1/64 + 1/256 + . . .

and

1/2 + 1/8 + 1/32 + 1/128 + . . .

The first sum adds up to 1 and 1/3 and the second sum adds up to 2/3, so the two infinite additions give the same answer when added together as when split apart.

Now look at this sum:

1 + 2 + 3 + 4 + 5 + . . .

You can’t use the same rule as before to define this infinite sum. You don’t get closer and closer to anything. Now, it’s true that according to the standard definition in this case, the sum of this is ∞, since the sum increases without bound. But that’s a new definition. Nothing in the standard definition of finite addition or infinite addition (when the sum converges toward a finite number) tells you this.

Now look at this sum:

1 + (-2) + 3 + (-4) + 5 + (-6) + . . .

You can’t use the rules of finite addition to come up with this sum. You can’t use the rule of what the partial sums are converging toward, because they aren’t converging toward a single number. You can’t simply say that the sum is ∞ or -∞ either, because the partial sums aren’t heading steadily upward or steadily downward: they bounce back and forth, going 1, -1, 2, -2, 3, -3, and so on, never settling on anything. You have to come up with a new definition for the sum. One way is to say that no sum exists. That’s a definition though, not something that’s clear from previous definitions.

So when you ask how this can be true:

1 + 2 + 3 + 4 + 5 + . . . = (-1/12)

The answer is that it’s a new definition for the meaning of infinite sums. You want to know how this can contradict the sum that you know of:

1 + 2 + 3 + 4 + 5 + . . . = ∞.

It’s because that equation uses the older, more familiar definition of infinite sums, while the -1/12 version uses a newer one. It’s possible to use different definitions in different parts of mathematics. It’s like asking what the answer to this question is:

11 + 11 = ?

The answer can be 22 or it can be 110 (or it can be many other things). It depends on which number base you’re using. In decimal numbers, the answer is 22. In binary numbers, the answer is 110. If you’re going to understand mathematics, you have to understand that the definitions are different in different parts of the field.
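
(For what it’s worth, you can watch the same digits give different answers by telling a computer which base to read them in; the snippet below is just my own illustration.)

[code]
# "11 + 11" read in base 10 versus base 2.
a, b = "11", "11"
print(int(a, 10) + int(b, 10))        # 22
print(bin(int(a, 2) + int(b, 2)))     # 0b110, i.e. 110 in binary
[/code]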