There are many ways to define infinite sums, but some ways are more interesting or useful than others. Under all of the sorts of definitions that are most interesting and useful, 1+2+4+8+… either has no meaningful value at all, or its value is -1.
Sure, you can do that also. It’s no surprise that 2T - T = T.
The fact that you can calculate in one way doesn’t mean it’s a problem that you can also calculate in another way, unless the answers you get actually conflict (the fact that you can compute 1 + 3 + 5 as (1 + 3) + 5 = 4 + 5 = 9 doesn’t mean there’s some problem that you can instead compute 1 + 3 + 5 as 1 + (3 + 5) = 1 + 8 = 9). In this case, there’s no conflict: you can have 2T - T = -1 and 2T - T = T at the same time [and indeed will, whenever you are already inclined to say that T = -1].
Now, as for whether 1 + 2 + 4 + 8 + … “actually” equals -1: It does on some definitions of infinite summation and not on others. It depends on the interpretation you are using, same as the question as to whether kings can actually capture by jumping backwards (yes in checkers, no in chess). Same as the question as to whether you can have between 3 and 4 of something (yes for heights measured in meters, no for counts of sheep). These things aren’t handed down from above; they’re choices we make, for how to express ourselves, and we can investigate and understand many different, related but not equivalent, notions of infinite summation.
There is certainly a natural and useful notion of infinite summation on which 1 + 2 + 4 + 8 + … comes out to +∞. And on this account of what the infinite summation is being used to mean, the problem with the above reasoning will be that 2T - T = ∞ - ∞ is an indeterminate form (the same way 0/0 is an indeterminate form); that difficulties arise in trying to cancel out additions of ∞ by then subtracting ∞ (the same way as difficulties familiarly arise in trying to cancel out multiplication by 0 by then dividing by 0); and that the equation T * 2 = T - 1 doesn’t only have the solution T = -1 but also has the solution T = ∞, which cancelling a T term from both sides erroneously ignores (same as how, in suitable familiar contexts, the equation T ^ 2 = T / 7 doesn’t only have the solution T = 1/7 but also has the solution T = 0, which cancelling a T factor from both sides erroneously ignores).
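(If you want to see that cancellation analogy concretely: here’s a quick Python check with SymPy, assuming you have it installed, that just solves the T ^ 2 = T / 7 equation directly. Solving keeps both roots; cancelling a T by hand would silently throw one away.)

[code]
import sympy as sp

T = sp.symbols('T')
# Solving T^2 = T/7 directly keeps both roots; cancelling a factor of T would lose the 0.
print(sp.solve(sp.Eq(T**2, T / 7), T))   # [0, 1/7]
[/code]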
That’s one interpretation available to us, and one which you probably find more intuitive; it tracks what you will more often be using infinite summation to represent.
But it’s not the only interpretation available to us! And there are, in mathematics, many contexts where it is useful to think about notions of infinite summation (and I call them “notions of infinite summation” because they act in many ways very similarly to all the other things we more familiarly call “summation”, even if some of their properties are stranger; this is how language works, in generalizations being given names suggestive of their analogies) such that 1 + 2 + 4 + 8 + … does come out to -1.
One such context is the 2-adic numbers. Another such context is Abel summation. Both of these I’ve written about before on the boards, as have others here and elsewhere, so I will avoid recapping them for now; I will note only that they are both very important concepts in mathematics, interesting and fruitful areas to learn about if math is the sort of thing you like.
To be clear, in this context, the reason the attempted cancellation of ∞ - ∞ is problematic is that ∞ can be added to different things to produce ∞: ∞ + 5 = ∞ + 8 = ∞ + whatever = ∞, so ∞ - ∞ would have to equal both 5, and 8, and whatever; hence the “indeterminacy” of ∞ - ∞. (Again, same as with handling 0 multiplicatively: the attempted cancellation of 0 / 0 is problematic because 0 can be multiplied by different things to produce 0: 0 * 5 = 0 * 8 = 0 * whatever = 0, so 0 / 0 would have to equal both 5, and 8, and whatever; hence, the “indeterminacy” of 0 / 0.)
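(As a loose illustration, and only as an analogy rather than a definition: IEEE floating-point arithmetic, as exposed in Python, makes the same call, happily absorbing finite additions into infinity but refusing to assign ∞ - ∞ any particular value.)

[code]
import math

inf = math.inf
print(inf + 5 == inf + 8)   # True: adding different finite numbers to infinity gives the same infinity
print(inf - inf)            # nan: no single answer works, hence the indeterminacy
[/code]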
The two most important rules for divergent summation are linearity and stability (there is another rule called regularity, which just says that our method should give the same answer as regular summation for convergent series, but that’s kinda obvious).
Linearity is the property that if you multiply a series by some constant factor, then the sum is multiplied by the same factor. So if 1+2+4+… = s, then 2+4+8+… = 2s. More mathematically, if we have some summation method S, then S(ax[sub]1[/sub] + ax[sub]2[/sub] + ax[sub]3[/sub] + …) = a*S(x[sub]1[/sub] + x[sub]2[/sub] + x[sub]3[/sub] + …). Furthermore, you can always add two series term by term, so S(x[sub]1[/sub] + x[sub]2[/sub] + x[sub]3[/sub] + …) + S(y[sub]1[/sub] + y[sub]2[/sub] + y[sub]3[/sub] + …) = S((x[sub]1[/sub]+y[sub]1[/sub]) + (x[sub]2[/sub]+y[sub]2[/sub]) + (x[sub]3[/sub]+y[sub]3[/sub]) + …). Almost all interesting summation methods have this property.
Stability is the property that if you add an element to the front of a series, then the sum changes by that amount. So, S(1+2+4+…) = S(0+1+2+4+…), and S(37+1+2+4+…) = 37 + S(1+2+4+…). Not all divergent sums have this property! But some do, and 1+2+4+… is one of them.
However you apply these two rules to 1+2+4+…, you’ll always find the sum is -1, without contradiction (you have to follow the other rules of math, of course, like not dividing by 0). We don’t have to say that the sum is defined, but if it is defined, then it will be -1.
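To make that concrete, here’s a minimal Python sketch (using SymPy, assuming it’s installed) that just encodes the two rules as an equation and lets the solver confirm that -1 is the only finite value consistent with them; it’s a restatement of the argument, not an independent proof:

[code]
import sympy as sp

S = sp.symbols('S')
# Stability: S = 1 + (2 + 4 + 8 + ...).  Linearity: 2 + 4 + 8 + ... = 2*S.
# Together: S = 1 + 2*S, whose only finite solution is -1.
print(sp.solve(sp.Eq(S, 1 + 2 * S), S))   # [-1]
[/code]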
Got schooled here and didn’t even notice it. Point taken. Matter of fact, I just re-read this whole thread and realized I’m pretty much talking out my ass. My apologies to the other posters herein.
Another way of thinking about this series is that the sum of the first n terms, for finite n, uncontroversially yields 2[sup]n[/sup] - 1, and that we are, let us say, interested in the question: what does this formula approach as n approaches infinity?
Well, one familiar thing to say is that 2[sup]∞[/sup] - 1 = ∞ - 1 = ∞. And in that sense, the series should be considered as having a positively infinite sum. Easy peasy.
Another choice that does get made in some contexts, surprisingly enough, is to say 2[sup]∞[/sup] = 0. Why on earth would we ever say that? Well, in many contexts, it wouldn’t be appropriate, but in some, we might reason as follows: look at the sequence 2[sup]0[/sup], 2[sup]1[/sup], 2[sup]2[/sup], etc., in binary. This becomes 1, 10, 100, 1000, 10000, etc. Each particular place value of digit (or bit, perhaps I should say) eventually becomes and stays forever after zero. All the digits individually approach 0 in the limit, and thus we might want to say the limiting value is indeed …00000 = 0. This is the kind of reasoning which leads us when working in the “2-adics” (i.e., binary numbers whose digits can go on forever to the left, instead of to the right; believe it or not, these are important and useful for analyzing various phenomena, but I’ll have to save the demonstration of their “usefulness” for later) to say that this infinite series sums to 2[sup]∞[/sup] - 1 = 0 - 1 = -1.
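If you want to watch those digits stabilize, here’s a quick Python check of the last eight binary digits of 2[sup]n[/sup] (eight is an arbitrary cutoff, chosen just for illustration):

[code]
# The last 8 binary digits of 2^n are all zero once n >= 8; the same holds for any
# fixed number of digits, which is the sense in which 2^n "approaches 0" 2-adically.
for n in (4, 8, 12, 16):
    print(n, format(2**n % 2**8, '08b'))
[/code]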
In the same way, you might conclude in the 10-adics (i.e., decimal numbers whose digits can go on forever to the left, instead of to the right) that 1 + 10 + 100 + 1000 + … = …111111.0 = -1/9. Why? Because the sum of the first n terms is (10[sup]n[/sup] - 1)/9, and, again, 10[sup]n[/sup] has its digits approach …0000 = 0 as n approaches infinity. Put another way, if you carry out the calculation 9 * …111111 + 1 you get …99999 + 1 = …000000 (via a stream of carries continuing infinitely down to the left) = 0, so it is reasonable, in this context, to take …111111 to be -1/9.
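Easy enough to check by truncating to finitely many digits; here’s a little Python sketch working modulo 10[sup]20[/sup] (the cutoff of 20 digits is arbitrary, just for illustration):

[code]
MOD = 10**20                           # keep only the last 20 decimal digits
ones = sum(10**k for k in range(20))   # ...111111, truncated to 20 digits
print(9 * ones)                        # twenty 9s
print((9 * ones + 1) % MOD)            # 0: the carries wipe everything out, so ...111 behaves like -1/9
[/code]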
(I said I wouldn’t rehash such things, but, eh, couldn’t resist. I’ll maybe talk about other interpretations such as Abel summation later. But these are all merely particular individual interpretations or summation methods; more generally, any interpretational context that allows such manipulations as in the OP (as noted by Dr. Strangelove, the invoked properties are linearity of summation and stability for this series, plus, as noted above, the ability to determinately subtractively cancel its value from itself (as would be the case if it were assigned a finite value, but not if it were assigned ∞)) must assign it the value -1. (This is basically a tautological statement; it’s just saying “Look, the proof in the OP is a proof… in whatever contexts legitimize its steps”. But tautologies are true.))
It should also be mentioned that this particular series has a very simple and common application: Effectively, this series is equivalent to the binary number …111, where the ones extend infinitely to the left. Well, computers use binary numbers. And while no computer can actually deal with an infinite number of 1s, you can have a number that’s “as many 1s as you can fit in”. And what happens if you take such a number, and then add 1 to it? It rolls over and you get 0, just what you would expect if 1+2+4+8+… = -1. In fact, negative numbers are quite often expressed in exactly this way in computers, precisely because it means you can use the same techniques for adding negative numbers that you use for positives.
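Here’s a quick Python sketch of exactly that rollover, using a hypothetical 32-bit register width just for illustration:

[code]
BITS = 32
all_ones = sum(2**k for k in range(BITS))   # 1 + 2 + 4 + ... + 2^31: a register full of 1s
print((all_ones + 1) % 2**BITS)             # 0: adding 1 rolls the register over
print(all_ones - 2**BITS)                   # -1: the two's-complement reading of all-ones
[/code]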
EDIT: Wrote this before I saw Indistinguishable’s most recent. But I still covered some ground that he didn’t, so whatever.
Summing divergent series is more a game than a serious pursuit. I don’t know if Euler invented it, but he certainly did a lot of it. Now, aside from the computation in the OP, there are at least two ways of getting to this result. The first is to observe that
1 + r + r^2 + r^3 + … = 1/(1-r). Of course, this depends on |r| < 1, but if you sub in r = 2, you get -1. There is actually a serious purpose to this, called “analytic continuation”: applied to the above series, the function 1/(1-r) has a singularity at r = 1 but is otherwise well-defined, so it extends the sum beyond the region |r| < 1 where the series converges.
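Just to spell out that substitution, a tiny SymPy sketch (assuming SymPy is available), evaluating the closed form on both sides of the singularity:

[code]
import sympy as sp

r = sp.symbols('r')
closed_form = 1 / (1 - r)                        # equals 1 + r + r^2 + ... whenever |r| < 1
print(closed_form.subs(r, sp.Rational(1, 2)))    # 2, matching 1 + 1/2 + 1/4 + ...
print(closed_form.subs(r, 2))                    # -1: the same formula evaluated at r = 2
[/code]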
The second way to get this result is to work in a special kind of number system in which 2 is small and higher powers of 2 are smaller still. These are called the 2-adic numbers. In that system, the series actually converges, and it is not hard to see that it converges to -1. These numbers are useful in various places.
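One way to see that convergence concretely (in Python, say): “small” in the 2-adic sense means divisible by a high power of 2, and each partial sum differs from -1 by exactly 2^n, so the partial sums get 2-adically as close to -1 as you like.

[code]
for n in range(1, 11):
    partial = 2**n - 1                       # 1 + 2 + 4 + ... + 2^(n-1)
    print(n, (partial - (-1)) % 2**n == 0)   # True every time: partial ≡ -1 (mod 2^n)
[/code]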
Just because I can, HAKMEM Item 154:
(The page I linked to has translations for some of the hacks from PDP-10 assembly into C, which is a gross enough perversion to fit right in in this thread.)
The universe is also a one’s complement machine, but you have to consider all the digits, extending both left and right: let X = …1111.1111… (in binary), and multiply it by 2. The result is X again, so 2X = X, and thus X is zero, and thus a bitpattern of all 1s is zero (“negative zero”, if you like), and thus we are, in that sense, using one’s complement.
Mind you, everything in there could just as well, and perhaps more simply, be cast in terms of looking at increasing powers of 2 and not their sum (i.e., the result of starting with 1 and repeatedly doubling):
If the result remains positive till turning zero, you are on a sign-magnitude machine.
If the result turns negative and then to zero, you are on a twos-complement machine.
If the result turns negative and then to 1, you are on a ones-complement machine.
If the result loops back but not to the initial 1, your machine isn’t binary – the pattern should tell you the base.
Etc.
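For instance, here’s a rough Python simulation of the twos-complement case, using a hypothetical 8-bit register just to keep the trace short (a real machine check would, of course, run on the hardware itself):

[code]
BITS = 8
x, trace = 1, []
for _ in range(BITS):
    # read the raw bit pattern as a signed two's-complement value
    signed = x - 2**BITS if x >= 2**(BITS - 1) else x
    trace.append(signed)
    x = (2 * x) % 2**BITS    # double, letting the register wrap around
print(trace + [x])           # [1, 2, 4, ..., 64, -128, 0]: "negative and then to zero"
[/code]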
Thank you, Dr. Strangelove. I’ve been somewhat confused about summing divergent sequences, with Wikipedia more hindrance than help. You’ve synopsized the key criteria succinctly.
For what it’s worth, not all methods for summing divergent sequences necessarily satisfy those properties in general, particularly not stability (as Dr. Strangelove already noted; I’ll give an example below). They’re nice properties that we sometimes are interested in; that we sometimes ask for. But we can do what we want; there are no rules handed down from above.
The techniques (described in other threads) which tell us 1 + 2 + 3 + 4 + … = -1/12, etc., for example, do not treat that series “stably” (if you did treat that series stably, assign it a value which allowed for the usual sorts of subtractive cancellations, and interpret summation linearly, you could conclude 1 = 0, by first subtracting a delayed copy from itself to get 1 + 1 + 1 + 1 + … = 0, and then subtracting a delayed copy of that from itself to get to 1 = 0).
Linearity is more of a bedrock; it’s rarer to feel inclined toward a non-linear summation method (though it happens; e.g., using the Shanks transformation). But still, the only truly inviolable criteria are: do you (or anyone) find it interesting to think about, and do you (or anyone) find it analogous enough to other things called “summation” to think of it as such? Everything else is gravy.
I want to say: the worst misunderstanding about math, the one that sets in early from the way it is taught, is that math is about fixed, hard rules, a particular system of them received from authority, with every question accordingly having precisely one correct answer.