Because infinity is not a number, and you shouldn’t expect it to act like a number.
jtgain, you seem to be asserting that the sum 1 + 2 + 3 + 4 + . . . is something absolutely everyone knows. Indeed, you seem to be asserting that it’s something that absolutely everyone knows by pure intuition without having to be taught it. But the fact that it equals ∞ is something that is usually only learned (generally in late adolescence) by people who take mathematics courses up to algebra at least.
What you think of as utterly obvious mathematical facts are culturally bound. There are some cultures in which there are few (and perhaps no) words for numbers, because they don’t even find it necessary to count things. There are cultures with counting numbers in which people don’t find it necessary to add them. Algebra is only about a thousand years old. A clearly defined concept of infinity is only about four hundred years old. It was only at that point that it was decided that having the sum 1 + 2 + 3 + 4 + . . . equal ∞ became a useful definition.
It was only a hundred or so years later that it was pointed out that it was useful to say that 1 + 2 + 3 + 4 + . . . = -1/12 for certain subjects in mathematics (and later in physics). Yes, the statement that 1 + 2 + 3 + 4 + . . . = ∞ still holds in other areas of mathematics. Indeed, the statement holds for virtually any area except the limited ones we have been talking about. Rather than say that the numbers mean something different or the idea of addition means something different, it would be best to say that the idea of infinite summation means something different in those few fields of mathematics and physics. Really, there’s no point in people with no interest in those few fields worrying about this different definition of infinite summation. It turns out to be useful to solve certain problems in certain areas of physics, but for most other purposes it’s not worth thinking about.
Yes! Finally, you stumbled upon the correct answer!
Mathematics is not arithmetic. You learned arithmetic in school. Arithmetic is often actually antithetical to understanding mathematics. This is one of those cases.
After you unlearn arithmetic and start learning mathematics, you can join in with the rest of the mathematicians and speak their language.
My understanding (meager as it is) is this: there are different ways to add up a series of numbers that result in different sums. These different ways are equally valid in their own contexts to the “normal” addition and summation that you and I use.
Here’s an analogy that might help illuminate the point. Suppose a man owns a small farm. And on his farm he owns one cow, two pigs, and three chickens (e-i-e-i-o). The market value of a cow is $200, a pig is $120, and a chicken is $20.
So if you asked the farmer how much livestock he had, he could tell you he had six animals or he could tell you he had five hundred dollars worth of livestock. Saying one cow, two pigs, and three chickens adds up to six animals is the normal way of adding. Saying one cow, two pigs, and three chickens adds up to five hundred dollars isn’t normal addition but it’s a valid summation of the series.
And these alternative summations can be useful in situations where normal summations wouldn’t be. Suppose the farmer’s neighbour has a tractor he wants to buy and the tractor is worth five hundred dollars. The farmer doesn’t have any money so he offers to trade his livestock for the tractor. The farmer can’t simply offer to trade six animals for the tractor: six chickens would be worth less than the tractor and six cows would be worth more than the tractor. So it’s not clear whether six animals is equal to one tractor. But if you sum up the livestock by their market value you arrive at five hundred dollars, and that sum is what the tractor is worth.
Consider the alternating harmonic series:
1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6 + …
We learn in calculus that this series is conditionally convergent and it sums to ln(2) ~ 0.693147181.
We can add up the first 100 terms (in Excel or some such) to get the partial sum 0.688172179 which, while not quite confirming this, at least gives us some reason to believe it is true.
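For anyone who wants to reproduce those numbers without firing up Excel, here is a minimal Python sketch of the same partial sums (the function name is my own):

```python
import math

def partial_sum(n):
    """Sum of the first n terms of 1 - 1/2 + 1/3 - 1/4 + ..."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

print(partial_sum(100))   # 0.68817..., matching the spreadsheet value
print(math.log(2))        # 0.69314..., the limit calculus promises
```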
Now consider the series that consists of exactly the terms above, rearranged in the following manner: take the first positive term, then the first two negative terms, then the second positive term followed by the next two negative terms, and so on. It starts out like:
1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + …
I can group terms as follows:
(1 - 1/2) - 1/4 + (1/3 - 1/6) - 1/8 + …
And resolve parentheses:
1/2 - 1/4 + 1/6 - 1/8 + …
And factor out 1/2:
1/2 (1 - 1/2 + 1/3 - 1/4 + …) = 1/2 (ln(2))
This shows that the rearranged series, which contains exactly the same terms as the alternating harmonic series, should produce a sum that is half as large.
That seems crazy, but if I generate the first 100 terms of the rearranged series and sum them, I get 0.357739777, which agrees with my algebra above.
I could easily check both series’ partial sums for as many terms as I liked, and the results would confirm that the second series seems to be converging to half the sum of the first.
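Here is the same check in Python (the helper name is my own); the rearrangement takes one positive term, then the next two negative terms, in blocks:

```python
import math

def rearranged_terms(n):
    """First n terms of the rearrangement: one positive term of the
    alternating harmonic series, then the next two negative terms."""
    terms = []
    k = 1
    while len(terms) < n:
        terms += [1 / (2 * k - 1), -1 / (4 * k - 2), -1 / (4 * k)]
        k += 1
    return terms[:n]

print(sum(rearranged_terms(100)))   # 0.35773..., matching the post
print(math.log(2) / 2)              # 0.34657..., the claimed limit
```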
So when infinity is involved, we can indeed use two different methods (here the order in which we add terms), that produce different results that are both correct.
Not only that, but with a conditionally-convergent series, it’s possible to re-order the terms to get literally any result you want.
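That claim (the Riemann rearrangement theorem) even has a simple greedy construction: add positive terms until you overshoot the target, then negative terms until you undershoot it, and repeat. A sketch in Python, with my own function name and term count:

```python
from itertools import count

def rearrange_to(target, n_terms=200000):
    """Greedily reorder the alternating harmonic series so that its
    partial sums home in on an arbitrary target value."""
    pos = (1.0 / k for k in count(1, 2))    # positive terms: 1, 1/3, 1/5, ...
    neg = (-1.0 / k for k in count(2, 2))   # negative terms: -1/2, -1/4, ...
    total = 0.0
    for _ in range(n_terms):
        # Below the target? Spend a positive term. Above it? A negative one.
        total += next(pos) if total <= target else next(neg)
    return total

print(rearrange_to(3.14159))   # the partial sums creep toward 3.14159
```

Since the positive terms alone diverge (and likewise the negative ones), the greedy process never runs out of terms on either side, and the overshoots shrink to zero.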
Q = 1 + 1 + 1 + 1 + 1 + 1 ....
subtract Q from each side (and shift!)
Q - Q = 1 + 1 + 1 + 1 + 1 + 1 ...
      -     1 + 1 + 1 + 1 + 1 ...
        -------------------------
    0 = 1 + 0 + 0 + 0 + 0 + 0 ...
0 = 1
Someone please show me where the logic I used is inconsistent with the logic used in the video.
Take a look at the video on 1-1+1-1+… that the OP’s video refers/links to. It starts with the exact thing you’re talking about: that 1-1+1-1+… can be 0 or 1, depending on how you evaluate it.
We are talking about two different things. 1-1+1-1+1 is not in my equation. And it doesn’t have to be 0 = 1; I can make 0 = 2 or 0 = 987 or 743 = 1,000,000 using the logic set forth in those videos.
That depends on how you’re interpreting “Q - Q.” If you’re doing the subtraction term-by-term, it is indeed 1-1 + 1-1 + 1-1 + …
If, on the other hand, you’re assuming Q has a numerical value, and you’re subtracting that value from itself to get 0, you’re dealing with an “infinity minus infinity” kind of situation, which is indeterminate: it can have different values (or lack thereof) depending on the details of what’s going on (how you interpret the terms, what “rules” you play by, etc.).
I don’t care to hitch my wagon to Numberphile’s videos, but I will say that there is a logic to these sums which can be made perfectly rigorous (e.g., the “zeta-summation” technique quoted above). You may be extrapolating the “logic” behind these sums as more permissive than need be. In “zeta-summation”, we do not have that arbitrary series’ sums are invariant under shifting; rather, it is only under certain conditions (mainly, Abel-summability) that this invariance is guaranteed.
Taking Q(x) = 1 + x + x[sup]2[/sup] + …, your demonstration shows that, even in the limit as x approaches 1, Q(x) - xQ(x) = 1, while Q(x) - Q(x) = 0. Which is indeed true, since Q(x) = 1/(1 - x). We find that these two quantities, which we would naively expect to become the same as x approaches 1, in fact stay apart. Just far enough for us to then note that expanding Q(x) as a power series in log(x) gives a degree 0 term of 1/2, while expanding xQ(x) as a power series in log(x) gives a degree 0 term of -1/2 [and expanding x[sup]2[/sup]Q(x) as a power series in log(x) gives a degree 0 term of -3/2, etc.], which we might in some mood take as giving us the sum 1 + 1 + 1 + … (on different choices of starting index). So it is.
One can attach a perfectly rigorous, consistent logic to these sums if one cares to think about it instead of dismissing it out of hand. It can be fun and interesting to do so.
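To see that bookkeeping numerically: for any x with |x| < 1 the geometric series gives Q(x) = 1/(1 - x), and the two differences above really do stay apart all the way to the limit. A quick sketch (variable names my own):

```python
# Q(x) = 1 + x + x^2 + ... = 1/(1 - x) for |x| < 1
def Q(x):
    return 1 / (1 - x)

for x in (0.9, 0.99, 0.999):
    print(x, Q(x) - x * Q(x), Q(x) - Q(x))
# Q(x) - x*Q(x) stays pinned at 1 as x -> 1, while Q(x) - Q(x) stays at 0:
# the "shifted" subtraction and the "unshifted" one answer different questions.
```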
But it seems to me that is the question. WHY did we learn in calculus that it sums to ln(2)? Why didn’t we learn that it sums to (1/2) ln(2)?
That depends on what you mean by “correct.” The rearrangement is no longer the same series. You’ve gotten two different results that are both correct, but they’re correct answers to different questions.
Perhaps “correct” was not the right term to use.
The purpose of my post was to illustrate that things we learned in Mrs. Haugen’s class in elementary school no longer hold true with infinity in the mix. In particular, I meant to give a fairly “concrete” and “verifiable” example that the order in which we add an infinite number of terms can give different results in some circumstances.
When I taught calc 2, I did indeed show that reordering the terms gave a different result to underline the “conditional” part of “conditionally convergent”.
But Q does have a numerical value: 0.
n + n + n + n + n … = 0, where n equals any integer.
Proof:
Z = 1 + 2 + 3 + 4 + 5 + 6 ...
(Z also equals -1/12, by the video's proof)
subtract Z from each side (shifty!)
Z - Z = 1 + 2 + 3 + 4 + 5 + 6 + 7 ...
      -             1 + 2 + 3 + 4 ...
        -----------------------------
-1/12 - (-1/12) = 1 + 2 + 3 + 3 + 3 + 3 + 3 ...
0 = 3 + 3 + 3 + 3 + 3 + 3 + 3 ... (grouping the leading 1 + 2 into another 3)
works for any integer
Oh, this got lost in the shuffle:
Here are a few of them:
a: “sum” is an operation which takes as arguments two integers and returns an integer. For instance, sum_a(2,3) = 5
b: “sum” is an operation which takes as arguments two rational numbers and returns a rational number. For instance, sum_b(1/3, 1/4) = 7/12
c: “sum” is an operation which takes as arguments two real numbers, and returns a real number. For instance, sum_c(pi,e) = 5.859874…
d: “sum” is an operation which takes as arguments two complex numbers, and returns a complex number. For instance, sum_d((1.5,1.7),(3.2,5.4)) = (4.7,7.1)
OK, so far, you may be objecting “But those are all the same thing! It’s all just sums on complex numbers, and the others are just special cases of that!”. But the real numbers are not actually a subset of the complex numbers, nor are the integers a subset of the reals. We like to pretend that they are, and in fact the integers are very closely equivalent to a subset of the reals, so that we can usually get away with pretending this… but when it really matters, like in computer programming, we find that this fails. The number “1” is different from the number “1/1”, which is also different from the number (1.0,0.0). They’re treated in different ways by some functions, and even represented differently inside the computer: “1” is a single number internally, while “1/1” and “(1.0,0.0)” are both constructed from a pair of numbers. And we treat that pair of numbers very differently in both cases: sum(1/3,1/4) is not 2/7. So I’d have to create a separate “sum” function for each of those kinds of sums.
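Python happens to make this point concrete: its built-in int, the Fraction type, and the built-in complex type are exactly the sort of distinct kinds of objects described above, each with its own addition. For instance:

```python
from fractions import Fraction

print(Fraction(1, 3) + Fraction(1, 4))        # 7/12 -- not 2/7
print(complex(1.5, 1.7) + complex(3.2, 5.4))  # approximately (4.7+7.1j)
print(type(1) is type(Fraction(1, 1)))        # False: 1 and 1/1 are different objects
```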
But that’s only a start. We also have variants of a-d with other numbers of arguments:
e: “sum” is an operation which takes as arguments any finite set of integers and returns an integer. For instance, sum_e({2,3,6}) = 11, and sum_e({}) = 0
f: “sum” is an operation which takes as arguments any finite set of rational numbers and returns a rational number
g: “sum” is an operation which takes as arguments any finite set of real numbers and returns a real number
h: “sum” is an operation which takes as arguments any finite set of complex numbers and returns a complex number
Now, because the first four versions of “sum” are all commutative and associative, it’s not too hard to figure out what’s meant by these new multi-argument sums, but that’s only because “sum” (in any form) has those properties: You’d be baffled if I asked you for difference(2,3,5) or quotient(4.3,6.2,7.9).
OK, so that covers finite sums. What about infinite sums? We can define those, too:
i: “sum” is an operation that takes as arguments an infinite sequence of rational numbers, and either returns a real number, or fails to return anything. For instance, sum_i(1/2,1/4,1/8,1/16,1/32…) = 1.0, and sum_i(1/2,1/3,1/4,1/5,1/6…) fails to return.
j: “sum” is an operation that takes as arguments an infinite sequence of real numbers, and either returns a real number, or fails to return anything.
I could also define complex equivalents, but you get the point. We start to notice a few odd things here, though: First, all of our previous definitions of “sum” returned a value of the same type as our arguments, but here we have a sum that can take rational numbers as arguments, but returns a real number. Second, we don’t have any extension of integer summation, here: Even though we can sum an arbitrary finite number of integers, we can’t sum an infinite number of them. Third, I had to specify that the argument was a sequence of numbers, not just a set: Order matters, even though it’s a crucial property of “simple” sums that they’re commutative and associative. And fourth, while some sequences of rational or real numbers will give us a valid value returned, some others will just plain fail. These sorts of sums are, all told, very different from all the other concepts of summation we have.
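The “rationals in, real out” oddity of sum_i can be seen with a concrete series: every partial sum of 1/n[sup]2[/sup] is an exact rational number, yet the limit, pi[sup]2[/sup]/6, is irrational. A sketch using Python’s exact-rational arithmetic:

```python
from fractions import Fraction
import math

# Every partial sum here is an exact rational number...
partial = sum(Fraction(1, n * n) for n in range(1, 501))
print(float(partial))      # 1.6429..., creeping toward the irrational limit
print(math.pi ** 2 / 6)    # 1.64493...
```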
Given all that, is it any surprise to find yet one more form of summation, which is also different from the other sorts of summation? It’s not even as different as the i and j forms are: It fails less often, and you can do it with integers or things that look like them as well as with nonwhole numbers.
Monocracy, I feel like you are ignoring Indistinguishable’s posts that explain that there is a rigorous way to logically, consistently handle these divergent sums.
Your claims do not explode this theory, but, when looked at under the lens of this rigor, actually demonstrate its logical consistency.
I think it’s a bit disingenuous to imply that the limitations of computer architecture mean that R is not a proper subset of C. Of course the computing notion of sums is a whole different ball of wax than the mathematical notion of sums. Computers really aren’t that good at computing. But they’re fast. 
Anyone ready to place bets yet on how many posts this thread will end up with? (Hint: -1/12 probably isn’t the answer.)