An important point, here: It’s not true to say that in the standard assumptions of mathematics, 1 + 2 + 3 + 4 + … = infinity. In the standard assumptions of mathematics, 1 + 2 + 3 + 4 + … isn’t defined at all; it has no meaningful answer. You can extend standard mathematics by saying that there’s a number called “infinity”, and defining it in such a way that that sum gives you infinity, but that’s not the only possible way of extending mathematics, and it’s not even the most useful way (introducing “infinity” as a number ends up breaking a lot of things about mathematics that “ought” to be true).
Well, what does “one plus two plus three, etc.” mean to you? What does the addition of infinitely many terms mean to you, and should we still call that addition, when it is so different in many ways from the more familiar addition of just two things at a time? [Please, do actually answer this question; it’s not merely rhetorical, and I think having a layperson spell out their response to this would be instructive]
This sort of generalization process arises over and over in mathematics. Consider subtraction; we claim “eight minus five is three” because if you make five posts in the morning, then three posts at night, you’ve made eight posts in total. Now what’s “five minus eight”? On one account of what this might mean, the answer is “there’s no such thing”; if I make eight posts in the morning, there’s no quantity of posts I could make at night to bring the day’s posting total to five. But on another, by now quite familiar, account of what this might be discussing, we say the answer is “negative three”. This is a generalization, and the notion of subtraction with this generalization has many properties different from subtraction without it. Some might say we shouldn’t even call this new notion subtraction, and should call it something else more clearly distinct from the old notion to which subtraction properly refers. But the new generalization also has so many archetypal features in common with “proper subtraction” that there’s a really strong motivation to also call it a kind of subtraction; thinking of it that way can be quite helpful and even fruitful, so long as one is always able, at the end of the day, to remember the differences between “proper” and “generalized” subtraction, and to keep track of which one is meant at any given moment.
And so on with everything else in math. We take old ideas, abstract away some properties of particular interest, ask ourselves in what ways we can generalize or modify the concept and still have similar useful properties, and do so. Sometimes we are more and sometimes we are less inclined to refer to the generalization/modification as the same sort of thing as the original concept; this is merely a subjective aesthetic choice. But either way, we have a new nice idea to play with, regardless of exactly how we choose to frame it. [Though the subjective framing can, of course, have consequences for our intuitions, as presentational choices generally do]
Aw, a zombie of my very own!
I’ve read through this thread, and my BRAAAAAAAAAAAIIIIINNNNSSS hurt but I think I understand a little better now…
I’m interested in knowing (in a general way) what applications this has had. Because I remember Dopers “proving” to a guy that it was nonsensical in CCC or CSR.
Oh, I dunno. I mean, I agree with the thrust of this, but I dunno about putting it exactly this way. There’s some question about what we mean by “the standard”.
Certainly, if I wrote “1 + 2 + 3 + 4 + … = infinity” in a paper submitted to a mathematical journal, no one would bat an eye. They would all understand what I meant by this and agree. That would be so even if what I meant by it did not involve committing myself to treating the “infinity” on the right-hand side as referring to the same sort of thing as the “1”, “2”, etc. on the left-hand side. The linguistic standard would appear to validate “1 + 2 + 3 + 4 + … = infinity”.
But as you essentially note, it is just a linguistic convention (I think in general, “convention” is a word which better captures the flavor of what’s going on here than “assumption”; it’s not as though we could be erroneous in our conventions. The worst that could happen is that we find we are motivated to adopt new ones); there may have been a time when (or could be a place where) no one would write it this way, standardly writing instead something like “The sequence of sums 1, 1 + 2, 1 + 2 + 3, 1 + 2 + 3 + 4, …, eventually surpasses and stays forevermore above any particular finite upper bound”. What is meant by this is precisely the same as what is now standardly meant by “1 + 2 + 3 + 4 + … = infinity”; it’s just expressed with a different presentation, which may or may not be more helpful for some purposes.
And so on, of course, with “1 + 2 + 3 + 4 + … = -1/12”; some would choose to present this differently, reserving infinite sum notation for one very particular concept which linguistic convention happened to standardize upon at some point, but everyone agrees with the idea it expresses, regardless of how they would present it.
Did you catch the reasoning which kicked off the whole resurrection? Apart from summoning the undead to scavenge upon the flesh of the living, it shouldn’t hurt the brain too much, and I’m curious to what extent it helped answer the question in your OP.
To wit:
(1 + 2 + 3 + 4 + ...)
- (1 - 2 + 3 - 4 + ...)
= 0 + 4 + 0 + 8 + ...
= 4 + 8 + 12 + 16 + ...
= 4 * (1 + 2 + 3 + 4 + ...)
Therefore, subtracting (1 + 2 + 3 + 4 + …) from the end and beginning, and then dividing both sides by -3, we find that (1 + 2 + 3 + 4 + …) = -1/3 * (1 - 2 + 3 - 4 + …), at least on an account of infinite summation that allows this intuitive arithmetic (which may have to differ from the standard account to do so, but which is clearly motivated by the desire to allow these basic arithmetic manipulations). Does that part make sense, at least?
The rest is just showing (1 - 2 + 3 - 4 + …) to go to 1/4, in whatever sense. At the time of the OP, this made sense to you, but it’s perhaps been a long time since then. One way of seeing this is by considering the sequence of partial sums: 0, 1, -1, 2, -2, … . If one takes the successive averages, this becomes 0, 1/2, 0, 1/2, 0, … . And if one takes the averages of those, they will of course approach 1/4; thus, in this sense, (1 - 2 + 3 - 4 + …) goes to 1/4. [Another way of seeing this: 1 + x + x[sup]2[/sup] + x[sup]3[/sup] + … = 1/(1 - x), in the manner demonstrated previously in the thread. Squaring both sides and expanding out the multiplication on the left, we get that 1 + 2x + 3x[sup]2[/sup] + 4x[sup]3[/sup] + … = 1/(1 - x)[sup]2[/sup]. Plugging in x = -1 gives the desired sum]
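For anyone who’d like to watch this averaging trick actually happen, here’s a small Python sketch (the variable names are just for illustration): it builds the partial sums of 1 - 2 + 3 - 4 + …, takes their running averages, then takes running averages of those, which settle toward 1/4.

```python
# Partial sums of 1 - 2 + 3 - 4 + ..., then running averages, then
# running averages of those, which drift toward 1/4 as described above.
from itertools import accumulate

N = 2001
terms = [(-1) ** k * (k + 1) for k in range(N)]   # 1, -2, 3, -4, ...
partial = list(accumulate(terms))                 # 1, -1, 2, -2, ...
avg1 = [s / (i + 1) for i, s in enumerate(accumulate(partial))]
avg2 = [s / (i + 1) for i, s in enumerate(accumulate(avg1))]
print(avg2[-1])  # close to 0.25
```

The first round of averages keeps oscillating between roughly 0 and 1/2; only the second round of averaging homes in on a single value.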
The one I had in mind is the Surreal Numbers, which were used in analyzing games. There are other ways to analyze the same thing without using surreals, but such is true of many branches of mathematics. See also Nonstandard Analysis.
And yes, it’s been shown many times that the equivalency must hold for the real numbers. I would expect anyone trying to understand the intricacies of this question to know exactly what they can and cannot do in these alternate systems.
I think Indistinguishable’s post (#62) is a better description of how new ‘definitions’ are made, especially those like this summation. It’s more of an extension or generalization of the existing rules than recasting a basic convention.
It occurs to me that we can get (1 - 2 + 3 - 4 + …) to go to 1/4 purely by the same kinds of basic arithmetic as well: 1 - 2 + 3 - 4 + … = 1 - (2 - 3 + 4 - 5 + …) = 1 - (1 - 2 + 3 - 4 + …) - (1 - 1 + 1 - 1 + …) = (1 - 1 + 1 - 1 + …) - (1 - 2 + 3 - 4 + …). Thus, (1 - 2 + 3 - 4 + …) = 1/2 * (1 - 1 + 1 - 1 + …). So the problem reduces to showing that 1 - 1 + 1 - 1 + … = 1/2, which is easy: (1 - 1 + 1 - 1 + …) = 1 - (1 - 1 + 1 - 1 + …).
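A quick numerical companion to that last step, 1 - 1 + 1 - 1 + … = 1/2: the partial sums of Grandi’s series bounce between 1 and 0 forever, but a single round of the averaging described earlier already settles at 1/2. A minimal Python sketch:

```python
# Partial sums of Grandi's series 1 - 1 + 1 - 1 + ... never settle,
# but their running averages head straight for 1/2.
from itertools import accumulate

N = 1000
terms = [(-1) ** k for k in range(N)]    # 1, -1, 1, -1, ...
partial = list(accumulate(terms))        # 1, 0, 1, 0, ...
avgs = [s / (i + 1) for i, s in enumerate(accumulate(partial))]
print(avgs[-1])  # close to 0.5
```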
So, basically, if we choose to allow all the kinds of basic arithmetic which intuitively “make sense” for manipulating infinite series, we can obtain all the counterintuitive results such as 1 + 2 + 3 + 4 + … = -1/12. Whether these intuitive manipulations are formally acceptable on some particular precise account of what infinite summation means depends on, well, the exact details of such an account. But they certainly provide some motivation for seeking accounts of infinite summation which validate them, and give us reason to consider their results as genuine kinds of summation, even in the face of conflict with other accounts of infinite summation which do not allow such arithmetic manipulations and which invalidate these results. We can’t have all the properties we might like simultaneously in one account of infinite summation, so we study various different accounts with different nice properties.
Yes, the arithmetic of it makes sense, and your post here neatly distills that arithmetic down and makes it even more understandable.
The only step in your first resurrection post that made me think “huh?” was this:
(1[sup]r[/sup] + 2[sup]r[/sup] + 3[sup]r[/sup] + 4[sup]r[/sup] + …) - (1[sup]r[/sup] - 2[sup]r[/sup] + 3[sup]r[/sup] - 4[sup]r[/sup] + …) = 2 * (2[sup]r[/sup] + 4[sup]r[/sup] + 6[sup]r[/sup] + 8[sup]r[/sup] + …) = 2[sup]1 + r[/sup] * (1[sup]r[/sup] + 2[sup]r[/sup] + 3[sup]r[/sup] + 4[sup]r[/sup] + …)
but that’s just because my maths is a little rusty and I couldn’t see why (2 * n[sup]r[/sup]) can be written as (2[sup]1 + r[/sup] * (n/2)[sup]r[/sup]). Plugging in the numbers, I can see that it is the same though.
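For what it’s worth, the identity is just 2[sup]1 + r[/sup] * (n/2)[sup]r[/sup] = 2 * 2[sup]r[/sup] * n[sup]r[/sup] / 2[sup]r[/sup] = 2 * n[sup]r[/sup]. A throwaway Python spot-check over a few arbitrary values:

```python
# Spot-checking 2 * n**r == 2**(1 + r) * (n / 2)**r for assorted n and r.
# (Algebraically: 2**(1 + r) * (n / 2)**r = 2 * 2**r * n**r / 2**r = 2 * n**r.)
for n in (1.0, 2.0, 3.0, 7.0):
    for r in (-2.0, -0.5, 1.0, 2.5):
        assert abs(2 * n ** r - 2 ** (1 + r) * (n / 2) ** r) < 1e-9
print("identity checks out")
```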
What makes this obvious to you?
Does the reasoning go like this?
“To take the sum of an infinite series, if it has a limit, find the limit. That’s the sum. That’s the sum because it’s the value the series approaches but never gets bigger than. So also the sum of a finite series is the value the finite series approaches but never gets bigger than.”
But if that’s the reasoning, it doesn’t make it obvious that the sum of 1 + 2 + 3… is infinity. For it’s not obvious that the series approaches infinity. What does it mean for one value to approach another? It means the difference between them gets smaller and smaller. But the difference between 1 and infinity, 1 + 2 and infinity, 1 + 2 + 3 and infinity, and so on, isn’t getting smaller and smaller. The series isn’t obviously “approaching” anything.
So if that was your reasoning (and I think it is how most laypeople probably would reason, including myself) then we see it doesn’t make it obvious that the sum of the series should be infinity. But was that not your reasoning?
In general, by taking the sum of an infinite series, we are doing something that the gradeschool notion of “summation” doesn’t provide guidance for.
Question for the thread: Do physicists use some particular definition of infinite summation? Different ones in different contexts? None at all? Er what?
I haven’t re-read the thread, but do I recall correctly that we’ve seen that you could perform a different set of apparently intuitive operations and arrive at a completely different answer than -1/12? (I mean other than infinity.)
Indeed, couldn’t there be an infinite-summation function that follows no apparently intuitive arithmetic at all and yet is perfectly consistent?
Or is there something that makes -1/12 uniquely “right” for some reason?
Oh, and since folks have asked about practical applications, there are systems of representing numbers used in computers where -1 is represented by a maximum-length string of 1s, consistent with 1 + 2 + 4 + 8 + 16 + … = -1. I’m pretty sure that the folks who designed those systems did it that way not to be consistent with Ramanujan analytic continuations, but because such a system of representing numbers makes it easier to perform some operations.
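To make that connection concrete, here’s a tiny Python sketch of the representation in question (two’s complement), with `width` standing in for an assumed register size: summing the powers of two 1 + 2 + 4 + … + 2^(width-1) fills every bit with a 1, and that all-ones bit pattern is exactly how -1 is stored.

```python
# Two's-complement arithmetic works modulo 2**width: summing the powers
# of two up to 2**(width - 1) fills every bit, giving the pattern for -1.
width = 8
total = sum(2 ** k for k in range(width))   # 0b11111111 = 255

def to_signed(value, width):
    """Interpret an unsigned bit pattern as a two's-complement integer."""
    return value - 2 ** width if value >= 2 ** (width - 1) else value

print(to_signed(total % 2 ** width, width))  # -1
```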
And I see my question had already been answered prior to the zombification. People pointed to renormalization in physics. (Though I seem to recall some physicists sometimes think that renormalization is some kind of cheating.)
It is.
Well, in as much as anything is ever equal to infinity. Saying it equals infinity is really a kind of shorthand for saying “If we take 1 + 2 + 3 + … + n and keep increasing the value of n, the value of the sum keeps increasing too, such that it will eventually be bigger than any real number.”
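That reading can even be made mechanical. A small Python sketch (the function name is just illustrative): for any bound M, it finds where the partial sums pass M, and since the partial sums only ever increase, once past they stay past.

```python
# For any bound M, the partial sums of 1 + 2 + 3 + ... eventually exceed M
# and, being increasing, never come back down.
def first_partial_sum_past(M):
    n, s = 0, 0
    while s <= M:
        n += 1
        s += n
    return n, s

print(first_partial_sum_past(1000))  # (45, 1035)
```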
People in the thread aren’t really saying that 1 + 2 + 3 + … = -1/12. Not in the literal way that you’re interpreting it. They’re saying: Look, we have some function, and for certain inputs it gives us a finite value. For other inputs, it doesn’t. For one of those inputs, it gives us 1 + 2 + 3 + …, which is of course infinite. However, there’s some other function that matches up perfectly with our first function wherever the first is defined, and has some other special properties too. And for the same input where our first function gives 1 + 2 + 3 + …, this new function gives -1/12.
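The “other function” being gestured at here is the analytic continuation of the Riemann zeta function. As one hedged illustration (this is not how the continuation is derived, just one globally convergent formula for it, due to Hasse), the following pure-Python sketch evaluates it exactly at non-positive integers; for integer s ≤ 0 the inner finite differences vanish after finitely many terms, so exact rational arithmetic suffices.

```python
# Hasse's globally convergent series for the Riemann zeta function,
# evaluated at non-positive integers with exact rational arithmetic.
from fractions import Fraction
from math import comb

def zeta_hasse(s_int, terms=30):
    """For integer s <= 0, (k + 1)**(-s) is a polynomial in k, so the
    inner finite differences are zero beyond its degree and the sum is exact."""
    total = Fraction(0)
    for n in range(terms):
        inner = sum(Fraction((-1) ** k * comb(n, k) * (k + 1) ** (-s_int))
                    for k in range(n + 1))
        total += inner / Fraction(2 ** (n + 1))
    return total / (1 - Fraction(2) ** (1 - s_int))

print(zeta_hasse(-1))  # -1/12
```

Under the exponent convention used earlier in the thread, `zeta_hasse(-1)` corresponds to 1 + 2 + 3 + … and returns -1/12, while `zeta_hasse(0)` corresponds to 1 + 1 + 1 + … and returns -1/2.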
I hope that phrased in that way you can see how this is not such a crazy thing to say.
In my experience, renormalization in physics is usually done by assuming there’s some hidden cutoff that acts as an upper limit of integration, not by assigning values to the conventionally-divergent integrals. The real trick is in rearranging the calculations in such a way that the value of the upper limit doesn’t matter, since we don’t know what it is.
Oh, absolutely, I’m afraid… E.g., 1 + 2 + 3 + 4 + … = 1 + (2 + 3 + 4 + 5 + …) = 1 + (1 + 1 + 1 + 1 + …) + (1 + 2 + 3 + 4 + …) = 1 + 1 + (1 + 1 + 1 + 1 + …) + (1 + 2 + 3 + 4 + …) = [using what we just established] 1 + (1 + 2 + 3 + 4 + …). And as we all know, 1 + 2 + 3 + 4 + … = -1/12. Therefore, uh, 1 + 2 + 3 + 4 + … = 1 + -1/12 = 11/12. Oh dear.
Alas, the sense in which divergent Zeta series (1[sup]s[/sup] + 2[sup]s[/sup] + 3[sup]s[/sup] + …) add up to their purported values is much, much more context-specific and fragile than the sense in which the corresponding non-convergent alternating Eta series (1[sup]s[/sup] - 2[sup]s[/sup] + 3[sup]s[/sup] - …) add up to their purported values. [As another example, the sense in which 1 + 1 + 1 + 1 + … = -1/2 is much, much, much more context-specific and fragile than 1 - 1 + 1 - 1 + … = 1/2. For example, right from the very supposition that 1 + 2 + 3 + 4 + … has a finite value, we might be led to conclude that 1 + 1 + 1 + 1 + … = (1 + 2 + 3 + 4 + …) - (0 + 1 + 2 + 3 + …) = 0]
I suppose that depends on what one means by a “perfectly consistent infinite-summation function”.
It’s uniquely right to precisely the tautological extent that any account respecting (directly or indirectly) the specific reasoning given above will produce it. Alas, this isn’t very much uniqueness. But sometimes, in some particular contexts, it happens to be precisely the relevant uniqueness.
In a way, it might be clearest not to say “1 + 2 + 3 + 4 + … = -1/12”, but instead, “1[sup]s[/sup] + 2[sup]s[/sup] + 3[sup]s[/sup] + 4[sup]s[/sup] + … = -1/12 when s = 1”. Just as one might say “(x[sup]2[/sup] + 3x - 4)/(x - 1) = 5 when x = 1”, but would refrain from saying “0/0 = 5”.
Of course, the clearest thing of all is to never claim anything without giving the full reasoning behind it, making explicit precisely the bounds of how one can use the result [and thus what is being meant by it]. The appropriate level of compromise between this and the other extreme is a context-dependent choice.
Well, that’s one way (and the way that makes the most physical sense to me), but there are others, right?
Dimensional regularization always reminded me a bit of analytic continuation, at least vaguely.
Zeta function regularization, which I only just found out about via Wikipedia, sounds even more closely related.
I may be blurring the distinction between regularization and renormalization here… to be honest that’s always confused me a bit. Regularization is what you’re doing to your integrals when you’re renormalizing? Or something.
I haven’t read the book in question, but presumably Ramanujan did write it in that way. As pointed out, he wasn’t a classically trained mathematician, and I also suspect that he enjoyed being a bit cryptic. Certainly his mentor G. H. Hardy thought there was something almost mystical about him.
Dimensional regularisation does depend on analytic continuation and zeta function regularisation is exactly the best known case where this result arises in physics. And there are lots of regularisation procedures, all with different advantages and disadvantages.
To me regularisation and renormalisation are best thought of as distinct issues. At least in most approaches, you impose a regularisation procedure and then renormalise. However, there are cases where renormalisation is required even though nothing is infinite and so regularisation doesn’t come into it.
Granted, in practice the technicalities usually become so complicated that the boundary between the two procedures is blurred - the details of how you renormalise depend on the regularisation - but I find trying to be clear about the distinction between them almost invariably helpful when thinking about such matters.
And if we’re discussing Hardy, Ramanujan and divergent series, it is worth remembering that Hardy went on to write one of the great monographs on summing divergent series.