1+2+3+4+... = -1/12. In what sense?

What seems to be going on is
[ul]
[li]G(s) is defined for values of s in A[/li]
[li]G(s) is undefined for values of s in B[/li]
[li]F(s) = G(s) for values of s in A[/li]
[li]Therefore, we’ll state that G(s) == F(s) for values of s in B[/li]
[/ul]
It seems weird and uncalled for. Even if it has practical applications, I think the terminology is bad. G(s) is by definition undefined for s in B. Why define it to some value? Since F(s) is defined for s in B, just stick with F(s). Don’t go and retroactively state that G(s) is defined to some absurd value for s in B. Just work with F(s).

To put it another way, I think all we can say is
[ol]
[li]sum(1/n^s, n=1,…,infinity) has a defined value for s > 1[/li]
[li]Zeta(s) = sum(1/n^s, n=1,…,infinity) for s > 1[/li]
[li]Zeta(-1) = -1/12[/li]
[/ol]
Concluding from the above that sum(n, n=1,…,infinity) = -1/12 is ridiculous in my opinion. For the practical applications of the Zeta function, just say Zeta(-1) = -1/12. No need to involve the infinite sum.
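And indeed Zeta(-1) = -1/12 can be computed without ever touching the divergent sum. A sketch in Python (the function name zeta_hasse is mine) using Hasse's globally convergent series for the zeta function; for negative integer s the inner finite differences eventually vanish, so a handful of terms gives the exact rational value:

```python
from fractions import Fraction
from math import comb

def zeta_hasse(s, terms=12):
    """Hasse's globally convergent series for the Riemann zeta function:
    zeta(s) = 1/(s-1) * sum_{n>=0} 1/(n+1) * sum_{k=0}^{n} (-1)^k C(n,k) (k+1)^(1-s).
    For negative integer s, the inner alternating sums are finite differences
    of a polynomial and vanish for large n, so the result is exact."""
    total = Fraction(0)
    for n in range(terms):
        inner = sum((-1) ** k * comb(n, k) * Fraction(k + 1) ** (1 - s)
                    for k in range(n + 1))
        total += inner / (n + 1)
    return total / (s - 1)

print(zeta_hasse(-1))  # -1/12 -- with no divergent summation in sight
```

No step here involves adding up 1 + 2 + 3 + 4 + …; the -1/12 belongs to the continued function, which is exactly the point.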

Consider X = 1 - 1 + 1 -1 +1 -1 …

In one “intuitive manipulation” of X, we get
X = (1-1) + (1-1) + (1-1) … = 0

In another “intuitive manipulation” of X, we get
X = 1 - (1-1) - (1-1) - (1-1) - … = 1

So, does that mean that by my “intuitive manipulations” of infinite summations I have just proven that 1 = 0?

Because, if your “intuitive manipulations” of 1+2+3+4+… show that it is equal to -1/12, then my “intuitive manipulations” 1-1+1-1… show that 1=0.

Basically, neither of those summations are defined (in some circles, 1+2+3+4+… is thought to be defined to the value infinity) and playing around with them with simple “intuitive manipulations”, trying to force them to have a defined finite value, only results in ridiculous results.
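A quick numeric illustration of why the "intuitive manipulations" can't be trusted here (a Python sketch, not a proof): the partial sums of 1 - 1 + 1 - 1 + … never settle on any single value, so there is nothing for either grouping to be "the" answer to.

```python
# Partial sums of Grandi's series 1 - 1 + 1 - 1 + ...
partials, s = [], 0
for n in range(8):
    s += (-1) ** n       # the terms 1, -1, 1, -1, ...
    partials.append(s)
print(partials)  # [1, 0, 1, 0, 1, 0, 1, 0] -- forever oscillating, no limit
```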

Good answer, bonzer. Thanks.

The whole 1+2+3+…=-1/12 thing isn’t really the point though. Of course it’s “really” a divergent sum, but the idea of doing an analytic continuation of the series is nevertheless an interesting and useful one.

Sure. This is the most explicit and clearest way to speak. There would never be any confusion if we always spoke this way in full.

Except… we violate this principle and go ahead with the conflation all the time.

We start out saying “x[sup]n[/sup] is the product of n many copies of x”. Ok, this is well defined for positive integers n, and undefined on other exponents. Then we think “Hm, well, a nice and smooth generalization of this preserving the key properties would be to let x[sup]0[/sup] = 1 and x[sup]-n[/sup] = 1/x[sup]n[/sup]”. Ok, now it’s defined for integers n. Then we say “Hell, let’s let x[sup]a/b[/sup] be the bth root of x[sup]a[/sup]”. Now it’s defined for rational exponents. Then we say “You know, we could quite conveniently allow x[sup]r[/sup] for arbitrary real r to be the limit of x[sup]q[/sup] as rationals q approach r”. Ah, we have arbitrary real exponents now. “But wait… how about x[sup]z[/sup] = exp(ln(x) * z)? This works for complex exponents as well!”. Now, exponentiation with a complex exponent is a tremendously different beast from our starting idea of repeated multiplication, which was very much undefined outside of natural number exponents. Yet, we still write it the same way and even often think of it as the same thing.
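Each rung of that ladder can be checked to agree with the previous one on the overlap; a small Python sketch, assuming nothing beyond the standard library:

```python
import math, cmath

x = 2.0
# natural-number exponent: repeated multiplication
assert x ** 3 == x * x * x
# rational exponent: the b-th root of x^a
assert abs(x ** (3 / 2) - math.sqrt(x ** 3)) < 1e-12
# complex exponent via exp(ln(x) * z): agrees with the above where both apply...
assert abs(cmath.exp(cmath.log(x) * 3) - 8) < 1e-9
# ...yet also assigns a value where "repeated multiplication" means nothing:
print(cmath.exp(cmath.log(x) * 1j))  # 2**i, a complex number on the unit circle
```

Each generalization returns the old values on the old domain, which is why we get away with reusing the notation.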

Or, as another example, we start out saying “cos(x) is the ratio between the length of the adjacent leg to an angle x and the hypotenuse in a right triangle”. Ok, this is well defined for x in [0, 90 degrees), and undefined otherwise. Then we think “Hm, well, a nice generalization of this would be to say that cos(x) is the first co-ordinate of the vector resulting from rotating <1, 0> by the angle x in the direction towards <0, 1>.”. Alright, now this is defined for non-negative real x. Then we say “Hm, it would be a convenient and smooth generalization to consider negative x as meaning rotation in the other direction”. Ok, now cos(x) is defined for arbitrary real x. Then we say “Hm, you know, cos(x) = the average of exp(ix) and exp(-ix). Maybe we should adopt that as our definition”. Now we have cos(x) defined for arbitrary complex numbers x. Of course, this new generalized account of cosine is far from our original account, which was undefined on all these new values. Yet we still refer to it by the same name.

In this particular case, I do agree that there is more possibility for confusion than value in going around saying “1 + 2 + 3 + 4 + … = -1/12” without further clarification. (The manipulations shown are the ones which provide the sense in which this is true, but, of course, it is only a very narrow sense tailored to those manipulations; I as well acknowledged and provided similar manipulations giving other values for such sums, and even gave what I think would be a clearer way of stating these results). Then again, I also think there is a fair amount of possibility for confusion in even saying “1 + 1/2 + 1/4 + 1/8 + … = 1” without further clarification, yet no one bats an eye at this. And even more confusion lurks in making standard statements like “e[sup]iπ[/sup] = -1” or “The set of even integers is the same size as the set of all integers” or “Four-dimensional rotation has six degrees of freedom” without being explicit about what one means by those words. I think in general, it would be great to err more on the side of being explicit about how ordinary language is being used in a non-ordinary way in mathematics. But that isn’t, never has been, and can’t be the practice entirely; there will always be cases where we, in our conventional shorthand, do not explicitly distinguish between a restricted definition of some concept and a more broadly applicable generalization of the same.

Sure, on an account of the mathematics of infinite summation which respects both these manipulations. This is essentially the Eilenberg-Mazur swindle, and it does indeed have its uses.

But there are of course lots of contexts in which you would not want to be able to draw a conclusion. Well, alright, in those contexts the relevant accounts of infinite summation are different ones which do not respect these manipulations. Different concepts for different purposes. But they all exist and they all have some sense. You rightly point out the dangers in conflating concepts which, though similarly motivated, are distinct, but this does not mean we have to pretend that, of the myriad of natural interpretations of any given preformal concept, there is a single one to be considered legitimately worthy of the name in any and all contexts.

The delta-epsilon based definition of the meaning of an infinite summation is certainly an important one, but in cases of potential confusion, it requires just as much explicit clarification as any other account of infinite summation. It is not solely entitled to the throne. One could just as well claim that 1 - 1/2 + 1/3 - 1/4 + … = ln(2) is a ridiculous result, on a more restricted account of the meaning of infinite summation than the standard one [e.g., allowing for only absolutely convergent sums], or that 1 - 1 + 1 - 1 + … = 1/2 is an indisputably correct result, on a less restricted account than the standard one [e.g., Cesaro summation]. But none of these positions are of course correct; the only correct position is that these equations each express a truth of some kind, the exact sense of which can and perhaps should be given more explicitly in each case.
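For concreteness, a sketch of Cesaro summation in Python (the helper name cesaro_mean is mine): it averages the partial sums, and for Grandi's series those averages really do approach 1/2.

```python
from fractions import Fraction

def cesaro_mean(terms):
    """The Cesaro (C,1) mean: the average of the partial sums."""
    partials, s = [], Fraction(0)
    for t in terms:
        s += t
        partials.append(s)
    return sum(partials) / len(partials)

# Grandi's series 1 - 1 + 1 - 1 + ...: partial sums oscillate between 1 and 0,
# so their average closes in on 1/2.
grandi = [Fraction((-1) ** n) for n in range(1001)]
print(cesaro_mean(grandi))  # 501/1001, approaching 1/2
```

On this less restricted account, "1 - 1 + 1 - 1 + … = 1/2" is a theorem, which is exactly the sense meant above.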

No. That most certainly was not my reasoning! I do not need to have concepts like “limit” or “summation” or, indeed, “series,” in order to know that 1+2+3+4…=∞. Where I come from that is all high school math, not elementary school math (actually, primary school maths, but I figure here I ought to try to write in American). All I need is the basic, intuitive primary school concept of infinity: that it is the number that you would get to if you went on counting up forever (which, of course, you can’t), and that there is no number bigger than infinity, so if you keep adding to it, it does not actually get any bigger. Counting is the same as doing 1+1+1+1+… and stating the result at each step. Obviously if doing 1+1+1+1+… forever would get you to infinity, the biggest possible number, then 1+2+3+4+… forever would get you there too (my elementary school-age mind can, I think, just about grasp the fact that, despite appearances, it will not actually get you to infinity any faster, because forever is, you know, forever).
Seriously, the problem here is oversophisticated thinking, burying what is really a simple issue in a mass of technicalities (though Indistinguishable has done a brilliant job of stripping them away and explaining the issue in elementary terms). I think we have an answer now (and it is pretty much what I, and I think Colophon too, originally expected): in “1+2+3+4…=-1/12” the symbols “…” and “+” are being used to mean something subtly different from what they usually mean, and jointly define a function that is different from the function they define in their usual (at least for laypeople) senses.

It is certainly very remarkable that such an apparently subtle difference in the meanings of those symbols gives rise, in this instance (i.e., when applied to the set of the natural numbers), to such a radical difference in the results you get when applying the two different functions that they jointly define. Especially since (if I am understanding aright), when applied to many or most infinite sets of numbers, the two functions produce identical results. However, the appearance of paradox is the result of definitional slippage. There are two different functions being talked about. One is summing (as an elementary school student might understand it) and the other really isn’t, and should not really be called such (as the mathematicians in the thread now appear to be conceding).

So your reasoning was that since 2 = 1+1, 3 = 1+1+1, and so on, then 1+2+3… = 1+1+1…, and to figure out what 1+1+1… is you just count forever, and counting forever (as we all know from gradeschool) gets you infinity.

But notice that Indistinguishable’s derivation of -1/12 as the answer relied on nothing other than a gradeschool understanding of arithmetic as well.

They both seem to rely on the same notion of addition, but give very different answers.

I don’t think that what’s happening is that he’s using a different understanding of addition than “the obvious one”. You’re both using “the obvious one” and you’re coming up with different answers. What’s happening is the “obvious” notion of addition simply turns out to be inadequate to infinite series.

I don’t think that is true. His proof relied upon the premise that 1 - 2 + 3 - 4 + 5 - 6 + … sums to 1/4.

Now, I don’t find that immediately objectionable, because, in contrast to the original example, I do not have any real grade school level intuitions at all about what, if anything 1 - 2 + 3 - 4 + 5 - 6 + … might sum to. So, my grade school self might be ready to accept that my betters know what they are doing here, and accept the premise that this series sums to 1/4. However, in doing so, I rather think I will have allowed the equivocation on “sum” to be smuggled into the argument.

In fact, 1 - 2 + 3 - 4 + 5 - 6 + … does not add up to 1/4 in the same sense that 1 + 2 + 3 + 4 … adds up to infinity. In that latter sense, it looks to me as though 1 - 2 + 3 - 4 + 5 - 6 + … probably does not add up to anything determinate at all (but if it did, it would be an integer, not a fraction, since you never get fractional results from merely adding and subtracting integers, even if you keep on doing it for ever).
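That intuition is easy to check numerically: every partial sum of 1 - 2 + 3 - 4 + … is an integer, and the sequence bounces outward rather than homing in on 1/4 (a quick Python sketch):

```python
# Partial sums of 1 - 2 + 3 - 4 + 5 - 6 + ...
s, partials = 0, []
for n in range(1, 11):
    s += n if n % 2 else -n    # the terms +1, -2, +3, -4, ...
    partials.append(s)
print(partials)  # [1, -1, 2, -2, 3, -3, 4, -4, 5, -5]
```

In the elementary, partial-sum sense, nothing here is heading toward 1/4; the 1/4 belongs to a different account of summation entirely.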

The thing is that different intuitive properties of addition come into conflict when considering infinite sums, thus necessitating some revision of the concept. For instance, it’s certainly very intuitive that summation should be associative, i.e. that if you count things onto a heap, it doesn’t matter if you count first four, then three, then two or first three, then two, then four, to put it very grade school. And it’s also very intuitive that when you keep on adding more and more, the heap eventually gets bigger than every number you could name, and that certainly sounds like infinity.

However, as has been pointed out already in this thread, if you combine both (grade-school) intuitions, you get inconsistent results. So, something’s gotta give! And you could well insist that in this case, well, so much for associativity; that’s indeed what’s usually done. However, if one is really fond of associativity, one might well point to this notion of summation as being one that’s different from the original one, and demand – with equal right! – to instead revoke the intuition that the sum ‘goes to infinity’. There’s no fundamental difference between the two possibilities – neither is a priori ‘more right’ than the other. The choice depends on what else you want your notion of summation to accomplish for you; and then, there may well be contexts in which it’s appropriate for 1 + 2 + 3 + … to equal -1/12, just as there are contexts in which it’s appropriate for it to simply diverge.

There are also dangers in saying that something “equals infinity”. For instance, let’s say that we say that 1 + 2 + 3 + 4 + … = infinity. Well, then, what’s 2 + 3 + 4 + 5 + …? I guess that equals infinity, too. But then that means that 1 + infinity = infinity, and using that grade-school math, we can just subtract off infinity from both sides: 1 + infinity - infinity = infinity - infinity. And again from grade-school math, we know that anything minus itself is zero, so we get 1 + 0 = 0. Obviously, introducing infinity as a number breaks grade-school math.
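Floating-point arithmetic, which does include an infinity, behaves in exactly this spirit (Python, following the IEEE 754 rules): adding a finite number to infinity changes nothing, and infinity minus infinity is deliberately left undefined.

```python
import math

inf = float('inf')
print(1 + inf == inf)          # True: 1 + infinity is still infinity
print(inf - inf)               # nan -- "not a number"; the subtraction is undefined
print(math.isnan(inf - inf))   # True
```

The designers of floating point resolved the paradox above the same way mathematicians do: by refusing to assign inf - inf any value at all.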

I think grade-school math people would have a much easier time accepting the explanation “infinity is not a number like the others, so we can’t subtract it from itself like we can with finite numbers” than accepting “1+2+3+4 + … = -1/12”.

There is a huge difference between this example and the infinite sum example in the OP.

In the case of exponents, x^n = x*x*…*x (n times), and it is defined only for positive integer n. For other n, x^n is not really defined. So, we can make up a generalization that defines it over the reals, and if we do it properly, it “works”, that is, the value of x^n agrees when n is an integer, and when n is not an integer, the value is (a) a value that is between x^k and x^m where k and m are the nearest integers and (b) the value does not contradict basic math facts.

In the case of infinite sums, when we have A = 1+2+3+4+… = -1/12, that value contradicts basic math facts like

  1. If you add positive numbers, you can never get a negative number, no matter how many you add.

  2. Since B = 1+1/2+1/4+… = 2
    and since each term in A is >= each term in B,
    we should have A >= B

    But if we say that A = -1/12, and B=2, then A < B
    That is, we have an infinite sum whose terms are term-by-term larger than another infinite sum, and yet the former sum is less than the latter sum.

    Why would anyone accept a generalization to infinite sums where this holds?
    I could think of more issues with this generalization, but I’ll stop at (1) and (2) for now.

In general, generalizations of ideas/functions/theorems/etc are great and have many practical applications.

But this particular generalization fails on so many levels, it amazes me that mathematicians really consider this a generalization of infinite sums.

One “basic math fact” is that a positive number raised to any power is positive. Yet we go ahead and say that e[sup]iπ[/sup] = -1. The “basic math fact” held true of our original account of exponentiation but does not hold true of our generalized exponentiation. That’s alright. Depending on what it is that we are intending to use “exponentiation” for, we can decide if we want to keep the basic math fact or if we want to use this generalized exponentiation (or perhaps even something else entirely).

One “basic math fact” is that the cosine of any number is always between -1 and 1. Yet we go ahead and say that cos(i) is (e + 1/e)/2, approximately 1.54. The “basic math fact” held true of our original account of cosine, but does not hold true of our generalized cosine. That’s alright. Depending on what it is that we are intending to use “cosine” for, we can decide if we want to keep the basic math fact or if we want to use this generalized cosine (or perhaps even something else entirely).
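Both of those claims are easy to check with the standard generalized definitions, since Python's cmath module implements exactly the exp- and series-based extensions discussed above:

```python
import cmath, math

print(cmath.exp(1j * math.pi))    # approximately -1 (the imaginary part is rounding noise)
print(cmath.cos(1j))              # approximately 1.5430, a "cosine" bigger than 1
print((math.e + 1 / math.e) / 2)  # 1.5430... -- the promised (e + 1/e)/2
```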

As it happens, I don’t think mathematicians are generally so loose as to say “1 + 2 + 3 + 4 + … = -1/12” just like that, without further clarification. But if, for some reason, it were to become conventional to put the result that way, leaving the precise definition of such a summation implicit, this would be no great tragedy. “Why would anyone accept a generalization to infinite sums where this holds?” They would do so in contexts where they found the benefits of the new properties to outweigh the disadvantages of the new properties (i.e., where they considered, for whatever reason, more relevant the rules which lead to 1 + 2 + 3 + 4 … = -1/12 than the rules which would otherwise entail its negation).

Consider: one basic math fact is that adding rationals produces rationals. Yet 1 + 1/4 + 1/9 + 1/16 + … = π[sup]2[/sup]/6. Is this ridiculous? Well, for some purposes, one would not want to say this; one would want an account of summation on which adding rationals could only come out to rationals. But for some purposes, one does want to say precisely this, and that’s alright too.
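The partial sums of rationals really do creep up on that irrational value; a quick Python check:

```python
import math

# Partial sum of 1 + 1/4 + 1/9 + 1/16 + ... (every term rational)
s = sum(1 / n ** 2 for n in range(1, 100_001))
print(s)                 # 1.64492... -- the remaining tail is about 1/100000
print(math.pi ** 2 / 6)  # 1.64493... -- an irrational limit for a sum of rationals
```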

These are good counter-examples, but they do have one important difference with the “1 + 2 + 3 + 4 + … = -1/12” case.

In your examples, the range of the function was thought to be something, and the generalized function simply has a different range (in addition to a different domain).

In the “1 + 2 + 3 + 4 + … = -1/12” case, basic properties of math are in dispute, not simply the range of a function. For example, if we have

A = sum(x[k], k = 1…inf)
B = sum(y[k], k = 1…inf)

and x[k] >= y[k] for all k, then A >= B

But for
A = 1 + 2 + 3 + 4 + … = -1/12
B = 1 + 1/2 + 1/4 + 1/8 + … = 2

we have x[k] >= y[k] for all k, and yet A < B.

I don’t know why we would want to give up this property of sums.

OK, Zeta(-1) = -1/12. That’s fine. Use it in any theorem or application you want. Just don’t say that this means that 1+2+3+4+…=-1/12.

One might just as well object “Why should I give up the basic math facts of commutativity of arbitrary sums?”. They will not want to consider 1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6 + 1/7 - 1/8 + … to come out to ln(2) and 1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + 1/9 + 1/11 - 1/6 + … to come out to ln(2 * sqrt(2)). Yet these are indeed precisely what we standardly say about these series, using the language of infinite summation to describe a concept which does not have such commutativity in general but which does have other often appealing and useful properties. You apparently draw the line for using summation language at preservation of inequalities; others could draw the line at commutativity and associativity. It’s just a subjective question of what strikes you as close enough to the archetype to allow use of the same name and what strikes you as having departed too far.
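Those two rearrangement values are easy to verify numerically; a Python sketch, with the block structure of the rearrangement (two positive odd-denominator terms, then one negative even-denominator term) taken straight from the series above:

```python
import math

N = 200_000

# Usual order: 1 - 1/2 + 1/3 - 1/4 + ...
usual = sum((-1) ** (n + 1) / n for n in range(1, N + 1))

# Rearranged: 1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + 1/9 + 1/11 - 1/6 + ...
rearranged, odd, even = 0.0, 1, 2
for _ in range(N // 3):
    rearranged += 1 / odd + 1 / (odd + 2) - 1 / even
    odd += 4
    even += 2

print(usual, math.log(2))                      # both ~0.6931
print(rearranged, math.log(2 * math.sqrt(2)))  # both ~1.0397
```

Same terms, different order, different limit: commutativity of infinite sums was given up long ago without anyone insisting the word "sum" be retired.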

Whatever: I agree that the clearest thing to say is “Look: If we follow these manipulations [insert description of manipulations here], the expression ‘1 + 2 + 3 + 4 + …’ can be manipulated into ‘-1/12’. Any interpretation of these expressions which validates those manipulations will make the two equal (for example, but not limited to, interpreting the series by the analytic continuation of the zeta function via its relation to the eta function, in turn taken as Abel summable)”. The clearest thing is always to be fully explicit; to never give a theorem without the rules of its proof, for it is the latter which give the former its meaning. This applies to every statement in mathematics, though.

Inevitably, people will speak in shorthand and jargon. Sometimes that shorthand is more confusing than helpful; alas. In fact, I even agree with you that it is more confusing than helpful in this particular case! You won’t catch me ever saying “1 + 2 + 3 + 4 + … = -1/12” outside of this thread, at least not without carefully making explicit the sense in which I mean it. But this is a linguistic or pedagogical issue; not truly an issue of mathematics.

Sure, but if you say “infinity isn’t a number”, then you’re left without any answer for 1+2+3+4+… And if there isn’t any existing answer, then you can hardly say that an answer one comes up with using a new method is inconsistent with it.

Really, it’s just the “extending the domain of a function” thing again: The concept of a series can be regarded as a function from sequences of numbers to an individual number. So, for instance, series(1+ 1/2 + 1/4 + 1/8 + …) = 2, and series(1 + 2 + 3 + 4 + …) is undefined, since 1 + 2 + 3 + 4 + … is outside of the domain of the series() function. So we re-define the series function to extend its domain, and we happen to do so in such a way that series(1 + 2 + 3 + 4 + …) = -1/12.

It is no worse than zero. Treating zero as a normal number also breaks grade school math - so you ban dividing by zero. That does not mean it is not a useful number, even a necessary one, in other respects. Why is infinity any worse? Certain operations with it are not permitted, but it is still meaningful and useful in other contexts. Does the fact that you must not divide by zero in any way impugn the fact that 1-1=0?

I think this is a key point. Similarly, the “value” of 0^0 depends on whether you’re seeking the limit of 0^x, x^0 or x^x.
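A one-line-per-case numeric sketch of that point (Python, plugging in a small x rather than taking a genuine limit):

```python
x = 1e-8
print(0 ** x)   # 0.0 -- along 0^x the limit is 0
print(x ** 0)   # 1.0 -- along x^0 the limit is 1
print(x ** x)   # 0.9999998... -- along x^x the limit is 1
```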

I will now demonstrate that the positive integers sum instead to -1/10 ! This is probably just a silly mathematical fallacy, like 10 = 12, but it still might be interesting to hear mathematicians’ comments.

Let
A = 1 + 2 + 3 + 4 + 5 + …
so
2A = 2 + 4 + 6 + 8 + 10 + …
3A = 3 + 6 + 9 + 12 + 15 + …
6A = 6 + 12 + 18 + 24 + 30 + …
12A = 5+7 + 11+13 + 17+19 + …
Combining these, we get
2A + 3A + 12A + 1 = A + 6A
which reduces algebraically to
A = -1/10

There are variations of the above, and many (not all) variations also lead to A = -1/10. For example,
12A = 5+7 + 10+14 + 17+19 + 22+26 + …
24A = 11+13 + 23+25 + 35+37 + …
Combine to get
3A + 4A + 12A + 24A + 3 = A + 12A
(where the 12A on the left is the form just given, and the 12A on the right is the one from earlier). This yields
43A + 3 = 13A
or again, A = -1/10.

I do not claim that -1/10, instead of -1/12, is the “correct” answer. It just seems curious that this other derivation exists.
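The bookkeeping in the first identity is internally consistent, which is part of what makes it curious: 2A + 3A + 12A + 1 = A + 6A really does list every positive integer the same number of times on each side (the evens, the multiples of 3, and the numbers coprime to 6 on the left; everything once plus the multiples of 6 again on the right). A Python check up to a cutoff (variable names mine):

```python
from collections import Counter

N = 10_000
lhs, rhs = Counter(), Counter()

lhs.update(range(2, N + 1, 2))          # 2A: the even numbers
lhs.update(range(3, N + 1, 3))          # 3A: the multiples of 3
lhs.update(n for n in range(1, N + 1)
           if n % 2 and n % 3)          # the "+1" plus the 12A grouping 5+7, 11+13, ...

rhs.update(range(1, N + 1))             # A: every positive integer
rhs.update(range(6, N + 1, 6))          # 6A: the multiples of 6

print(lhs == rhs)  # True -- the two sides match term by term, yet A = -1/10 "follows"
```

The fallacy is not in the counting but in treating regrouped divergent sums as interchangeable, which is exactly the equivocation discussed upthread.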

Zero is special, but it’s still way more numbery than infinity. Er, if “numbery” were a word. (I could say “numerical”, but I feel like it means more than I want it to here.)

At least with zero, if I have some expression that equals zero, and I have another expression that equals zero, then the two expressions equal each other. This is not necessarily so for two expressions that “equal” infinity.

Anyhow, if we say 1 + 2 + 3 + … = infinity, essentially what we[sup]*[/sup] mean is that if you pick some really big number, and then you add up the numbers in that series, eventually the sum of the series will surpass that really big number, and in fact this is true for any number you could have picked, no matter how big. That’s what “equals infinity” really means. Whereas if we say 1 + 2 + (-3) = 0 what we really mean is “you can add up all those numbers and you get zero”. Which is quite a different thing.

[sup]*[/sup]: here “we” means “anyone who’s at least taken a semester of real analysis”, I guess.
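That quantifier-flavored reading ("for any bound you pick, the partial sums eventually exceed it") is mechanical enough to sketch in code (function name mine; this checks one bound at a time, not all of them at once):

```python
def first_index_past(term, bound, max_n=10 ** 6):
    """Return the first n at which the partial sum term(1) + ... + term(n) exceeds bound."""
    s = 0
    for n in range(1, max_n + 1):
        s += term(n)
        if s > bound:
            return n
    return None  # bound not exceeded within max_n terms -- no verdict

# 1 + 2 + 3 + ... passes a billion after fewer than 45,000 terms:
print(first_index_past(lambda n: n, 10 ** 9))  # 44721
```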