1+2+3+4+....infinity = -1/12?

This segment on Numberphile:

http://www.numberphile.com/videos/analytical_continuation1.html

explains how an infinite summation results in a most counter-intuitive answer.

Is the logic sound? Or is it simply the mathematical equivalent of stating “I’m my own Grampa”?

For example, I don’t see how they pull S1 and S2 out of thin air.

Some previous threads:

I’m not saying that it’s not correct, but that video doesn’t convince me. There were too many places where he did things with no explanation.

I strongly encourage you to read the previous threads; they do a good job of explaining it. (The more recent thread was just updated a few days ago. Is it really appropriate to start a new one right now?)

There’s a second video where they did a more in-depth proof.

I’ll look at the other threads as well as that second video. This one may end up closed.

The second of the above threads was resurrected and discussed at some length, just within the last few days. Check that out; it goes into quite some detail.

-1/12 is obviously less than zero. How can you add up a series of numbers, each of which is greater than zero, and arrive at a sum which is less than zero?

Isn’t there a rule of logic that says that if your premise leads you to an impossible conclusion then you’ve proven your premise is incorrect?

Common sense often goes out the window when you are dealing with infinities. Read the other threads for more details.

Read the other threads, especially the second one. Then ask if you have further questions.

In math you have to define your terms very carefully. Sometimes things that appear nonsensical are in fact true.

It all makes sense within the proper context. That context is specialized to deal with infinite summations and is no more constrained to “common sense” or “obviousness” than relativity or quantum mechanics.

That other thread was active this very week. Why go through it all again when it’s right there?

My point is that maybe these results are evidence that the rules for dealing with infinities are wrong.

No. Infinities don’t work like ordinary finite arithmetic, and you shouldn’t expect them to. The rules work just fine.

Look at the two sides of this.

On the one side there is you, who doesn’t know the mathematics and hasn’t even bothered to go look at the evidence.

On the other side there are all the mathematicians in the world who for hundreds of years have agreed upon this, supplied proofs of its truth, and used the general case in gazillions of instances.

We’ve had similar gazillions of threads on math and science in which people essentially take the first side. I’ve never understood how that’s possible. I don’t understand it in this case either.

There are many different things that can be meant by the term “sum”. You use several of them on a regular basis, and probably aren’t even aware that you’re using several different concepts. Under some concepts of “sum”, there’s no such thing as a sum of an infinite series at all. Under other concepts, there’s such a thing as a sum for some infinite series (such as 1 + 1/2 + 1/4 + 1/8 + 1/16 + …), but not for other infinite series (such as 1 + 2 + 3 + 4 + 5 + …). Under yet other concepts of “sum”, there is also a sum for that series, and that sum is -1/12.
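A minimal sketch of the first two of those notions (my own illustration, not from the thread): under the ordinary limit-of-partial-sums definition, 1 + 1/2 + 1/4 + … converges to 2, while 1 + 2 + 3 + … has no limit at all.

```python
def partial_sums(term, n):
    """Return the first n partial sums of the series whose k-th term is term(k)."""
    total, out = 0.0, []
    for k in range(n):
        total += term(k)
        out.append(total)
    return out

geometric = partial_sums(lambda k: 0.5 ** k, 30)  # 1 + 1/2 + 1/4 + ...
naturals = partial_sums(lambda k: k + 1, 30)      # 1 + 2 + 3 + ...

print(geometric[-1])  # ~2.0: the partial sums settle down
print(naturals[-1])   # 465.0 (= 30*31/2), and growing without bound
```

Under this definition the second series simply has no sum; the -1/12 only appears once you move to one of the broader notions of “sum” described below in the thread.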

I’d be interested in learning about all those different definitions. Any way you could give a quick - ahem - summation? (Serious query, not just an excuse for the word play.)

Here’s what Indistinguishable says in the old thread in post #82:


Here, let me illustrate a taxonomy of summation methods relevant to the discussion, using the geometric series 1 + b + b^2 + b^3 + … as a guiding example:

  1. Absolute convergence: These are the nicest sums there are; a series is absolutely convergent to S if, no matter what order you add up its terms in (even things like “First take half of the third term, then the first term, then the other half of the third term, then the second term, …”), you approach S in the limit as you go on. Absolutely convergent summation is invariant under re-ordering, invariant under shifting, and will never turn a series of positives into a negative. It basically has every nice property you can think of. The geometric series will be absolutely convergent just in case |b| < 1. Every summation method below extends absolute convergence.

  2. Traditional summation: This is what you have pounded in your head at school; to take the traditional sum of a series, one imposes a cutoff point before which every term is brought in with full strength and after which every term is brought in at zero strength. This produces an absolutely convergent approximation, and the traditional sum is the limit of these absolutely convergent approximations as the cutoff point is moved so each term approaches full strength. Traditional summation is not invariant under re-ordering, but is invariant under shifting, and will never turn a series of positives into a negative. In terms of geometric series, traditional summation adds nothing new to absolute convergence.

  3. Abel summation: This rectifies the discrete cutoff problems of traditional summation; to take the Abel sum of a series, one brings in its terms with exponentially decaying strength (actually, the exponentiality doesn’t matter, and we would get the same results using any sufficiently smooth decay function, but I’ll leave that discussion for later…). This often produces an absolutely convergent approximation, and the Abel sum is the limit of these absolutely convergent approximations as the decay rate is lessened so each term approaches full strength. Abel summation is not invariant under re-ordering, but is invariant under shifting, and will never turn a series of positives into a negative. In terms of geometric series, Abel summation adds summability in the case where |b| = 1 but b is not 1. Abel summation extends traditional summation, and every summation method below extends Abel summation.

  4. “Extra-Abel” summation: It may be that the approximations used in Abel summation are absolutely convergent for quick decay rates, but not for slow decay rates, so that one can’t take the limit as the decay rate approaches zero. But the function giving the value of the approximations in terms of the decay rate may be smoothly extendible with finite values all the way from its behavior at quick decay rates through slower decay rates up to a value at no decay, giving us what I’ll call the “Extra-Abel” sum. Extra-Abel summation is not invariant under re-ordering, is invariant under shifting, and will never turn a positive into a negative. In terms of geometric series, Extra-Abel summation adds summability in the case where |b| > 1 but b is not > 1.

  5. “Overflow” summation: It may be that the smooth extension used in Extra-Abel summation starts to blow up to infinity at some decay rate before zero, in which case, the Extra-Abel summation as I am using the term will not be defined. But it may be that the blow up is only because our smooth function is the ratio of two other smooth functions, and these other smooth functions extend all the way to a well-defined ratio at the decay rate of zero (in jargon, we can switch from using “analytic” extension to “meromorphic” extension). The value so obtained will be the overflow sum. Overflow summation obviously extends Extra-Abel summation. Overflow summation is not invariant under re-ordering, is invariant under shifting, and may turn positives into a negative. In terms of geometric series, overflow summation adds summability in the case where |b| > 1, unreservedly.

  6. “Zeta” summation: Going back to Abel summation, it may be that the approximations used in Abel summation are absolutely convergent for all nonzero decay rates, but that as the decay rate approaches zero, these approximations blow up towards infinity. However, the function giving the value of the approximations in terms of the (logarithmic) decay rate may still have a finite degree zero term at zero decay, giving us the Zeta sum. Zeta summation is not invariant under re-ordering, is not invariant under shifting, and may turn positives into a negative. In terms of geometric series, zeta summation adds (to Abel summation) summability in the case where b = 1.

  7. SDMB summation: We can combine both the ideas of overflow summation and zeta summation, allowing ourselves to extend the approximation function of Abel summation to a value at zero decay using both meromorphic extension and degree zero term extraction. The resulting summation method consistently, systematically, rigorously handles everything we’ve discussed in these threads. This summation method is not invariant under re-ordering, not invariant under shifting, and may turn positives into a negative. In terms of geometric series, we will have summability for all b.
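Two of the methods in that taxonomy can be checked numerically. The sketch below is my own illustration (not part of the quoted post): part (a) takes the Abel sum of Grandi’s series 1 − 1 + 1 − 1 + …, and part (b) shows the zeta-style idea for 1 + 2 + 3 + …, where the damped sum blows up like 1/t² but the constant (“degree zero”) term left behind is −1/12.

```python
import math

# (a) Abel summation of 1 - 1 + 1 - 1 + ...: damp each term by r**n,
#     sum the now absolutely convergent series, and let r -> 1 from below.
def abel_sum(term, r, n_terms=100_000):
    return sum(term(n) * r ** n for n in range(n_terms))

for r in (0.9, 0.99, 0.999):
    print(r, abel_sum(lambda n: (-1) ** n, r))  # approaches 0.5

# (b) Zeta-style damping of 1 + 2 + 3 + ...: sum n * exp(-n*t).
#     As t -> 0 this blows up like 1/t**2, but after subtracting that
#     divergence the leftover constant term is -1/12.
def damped_minus_divergence(t, n_terms=20_000):
    s = sum(n * math.exp(-n * t) for n in range(1, n_terms))
    return s - 1.0 / t ** 2

for t in (0.1, 0.05, 0.01):
    print(t, damped_minus_divergence(t))  # approaches -1/12 ~ -0.0833
```

Part (b) is exactly the “finite degree zero term at zero decay” described for Zeta summation: the −1/12 is what remains once the genuinely divergent 1/t² piece is set aside.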


Read that thread:

There is no point in us rewriting everything we already said there.

I’ve read the other threads and seen the videos and I’ve done my best to wrap my head around the idea.

I only have one question: does this value for the sum connect in any way to the physical world, whereby using it makes predictions that are verified by experiment, while using a different value would not?

I know it’s been mentioned in the context of string theory (which at this point can only be theoretical) and quantum mechanics. The latter is far more interesting to me because of the huge amount of experimental data we have. But I don’t think I’ve seen a specific explanation of how exactly using this sum fits in. (Actually, I probably wouldn’t be able to follow the explanation anyway. If the expert answer is simply “yes”, I’ll take it.)

All I can say is to take a look at the Wikipedia entry:

and this essay:

http://www.nottingham.ac.uk/~ppzap4/response.html

This equation is an example of analytic continuation:

Do a search on a question like “How is analytic continuation used in physics?” and you’ll find many references to its usefulness in various fields.
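A toy sketch of the idea (my own example, not from the linked references): the geometric series Σ b^n only converges for |b| < 1, where it equals 1/(1 − b). Analytic continuation keeps the formula 1/(1 − b) and uses it outside that disc, e.g. at b = 2, where the partial sums themselves diverge. Assigning −1/12 to 1 + 2 + 3 + … works the same way, via the continuation of the Riemann zeta function to s = −1.

```python
def partial_geometric(b, n):
    """Partial sum 1 + b + b**2 + ... + b**(n-1)."""
    return sum(b ** k for k in range(n))

print(partial_geometric(0.5, 50))  # ~2.0, matches 1/(1 - 0.5): inside the disc
print(partial_geometric(2, 20))    # 1048575 = 2**20 - 1, diverging
print(1 / (1 - 2))                 # -1.0: the analytically continued value at b = 2
```

The partial sums and the continued function agree wherever both make sense; outside the region of convergence, only the continued function survives, and that is the value these summation methods report.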

OK, I’ve tried. I read as much as I could from those links, admittedly with my eyes glazing over most of the time. In the first Wikipedia link there’s a short section on “physics” which I paid particular attention to.

The second link, the response from the Numberphile guy:

And then he gives 4 links, 2 of which I attempted to read (2 didn’t open), but as promised the math was indeed “deep and complicated” and far beyond my ability to grasp.

And it’s generally the same deal with the wiki page on analytic continuation. I get some hints of insight into how this is useful in math and physics, and I’m not in any way attempting to deny that, nor am I being cynical. But is there really no simple yes/no answer to the question I posed? Because that’s what I was searching for in all those links and past threads, and couldn’t find.