Is infinity a destination or a journey?

Can you explain this with a little more detail (particularly the italicised portion). With pictures if possible (jk).

Sure. Suppose we have a summation method S. If S is stable, then:
S(a0, a1, a2, …) = a0 + S(a1, a2, a3, …)

Obviously this is true for all convergent series: 1 + 1/2 + 1/4 + … = 1 + (1/2 + 1/4 + 1/8 + …) = 2. However, it’s not necessarily true for divergent series, so you have to be careful.
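For the convergent case, this is easy to check numerically. Here's a quick sketch (an illustration, not a proof) that for 1 + 1/2 + 1/4 + …, peeling off the first term and summing the rest gives the same answer:

```python
# Numerical illustration of stability for a convergent series:
#   S(a0, a1, a2, ...) = a0 + S(a1, a2, a3, ...)
# Both sides should come out to 2 for the series 1 + 1/2 + 1/4 + ...

def partial_sum(terms, n=60):
    """Approximate a convergent series by a long partial sum."""
    return sum(terms(k) for k in range(n))

geometric = lambda k: 0.5 ** k        # a_k = (1/2)^k
tail      = lambda k: 0.5 ** (k + 1)  # the same series shifted left by one

full = partial_sum(geometric)              # ~ 2.0
shifted = geometric(0) + partial_sum(tail) # a0 + S(tail), also ~ 2.0

print(full, shifted)  # both very close to 2
```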

1+2+4+… is stable. It’s also linear, which means we can take out a constant factor from all the terms. Therefore:
S(1, 2, 4, …) = 1 + S(2, 4, 8, …) = 1 + 2*S(1, 2, 4, …)

We can just plug in an algebraic variable and get s=1+2s, which gives s=-1.

Now, I don’t know enough to prove when a series is stable. But in some cases, it’s easy to see when it’s not, like with the 1+1+1+… example. You’ll very quickly run into a contradiction when doing any kind of shifting manipulation. Since 1+2+4+… is stable, though, you can add and subtract terms to your heart’s content (well, any finite number of terms).

You can still do term-by-term addition with unstable sequences, but it’s hard to get anywhere without shifting. So we’re allowed to say:
A = 1+2+3+… = -1/12
B = 1+1+1+… = -1/2
A+B = 2+3+4+… = -7/12

…but A+B is unstable, so turning that back into something we can relate to A (by adding a 1 at the beginning) is against the rules. It looks weird but doesn’t cause any contradictions.

If it helps, this business of re-defining mathematical operations is done all the time, and you’ve encountered it multiple times in your education. For instance, consider exponentiation. When you first learned about exponentiation, you learned it as x^n meaning x multiplied by itself n times. So, for instance, x^2 is x*x, and x^5 is x*x*x*x*x.

But eventually, you start seeing things like x^2.5. What does that mean? Multiply x by itself two… and a half? times? That’s clearly nonsense. But does that mean that we reject the notion of a fractional exponent? No, we just re-define what we mean by exponentiation.

How do we know how to re-define it? Well, by this time we’ve already learned several useful properties of exponentiation, like x^(m+n) = (x^m)*(x^n). And we’d like it if all of those useful properties continued to work for our new definition. Plus, of course, when we do plug in an integer exponent, we’d like to get the same answer as we did with the old definition. It turns out, in fact, that if our new exponentiation function is to have those two properties, there’s only one possible way to extend the definition. This is so useful that we just adopt the new definition, and call that “exponentiation”, and don’t worry about the fact that we ever defined it in a different way.
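One standard way that extension is realized (for positive x) is via the exponential and logarithm: define x^r = exp(r * ln(x)). Here's a small sketch showing it agrees with repeated multiplication for integer exponents and preserves the x^(m+n) = (x^m)*(x^n) law:

```python
# The "extend by preserving properties" idea in code: for positive x,
# define x**r as exp(r * ln(x)). This agrees with repeated multiplication
# when r is an integer, and satisfies x^(m+n) == x^m * x^n for all real m, n.

import math

def power(x, r):
    """Exponentiation for positive x and arbitrary real r."""
    return math.exp(r * math.log(x))

print(power(3.0, 5))    # matches 3*3*3*3*3 = 243 (up to rounding)
print(power(3.0, 2.5))  # the extended definition: ~15.588
print(power(2.0, 1.5 + 2.5), power(2.0, 1.5) * power(2.0, 2.5))  # law holds
```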

These summation methods are the same sort of thing. We had one notion of a sum (which has in fact already been re-defined many times already by this point, if you were paying attention), based on convergence. But using that definition, we find that there are some sequences that we just can’t sum. We can, however, come up with a new sort of summation, that gives the same results that the old sort did (whenever the old sort worked at all), and which also lets us sum other things, all while maintaining many of the useful properties of sums.

Thanks, that was very clear. But where did this sum come from? I’m supposing you made an error searching for a good example on the fly (and thanks again), but maybe I’m missing something?

-1/2 is correct, but understanding why requires some heavy-duty math (mostly beyond me). There is a function, the Riemann Zeta function, that goes like this:
R(s) = 1/1[sup]s[/sup] + 1/2[sup]s[/sup] + 1/3[sup]s[/sup] + …

Clearly, if you plug in s=0, you get 1+1+1+… But you can’t do that directly, or you get infinity. So there’s a technique called analytic continuation that allows you to patch things up to give a sensible answer for nearby points. That technique gives an answer of -1/2 for R(0).
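You can even compute these continued values with ordinary arithmetic, using one of the globally convergent rewritings of the zeta function (this one is due to Hasse). The sketch below, using only the standard library, evaluates it at s=0 and s=-1, where the original series 1/1^s + 1/2^s + … diverges:

```python
# Hasse's globally convergent series for the zeta function:
#   zeta(s) = 1/(1 - 2^(1-s)) * sum_{n>=0} 2^-(n+1) *
#             sum_{k=0}^{n} (-1)^k * C(n,k) * (k+1)^(-s)
# It converges for every s != 1, so it can be evaluated where the
# Dirichlet series 1/1^s + 1/2^s + ... diverges.

from math import comb

def zeta(s, n_max=40):
    total = 0.0
    for n in range(n_max):
        inner = sum((-1)**k * comb(n, k) * (k + 1)**(-s) for k in range(n + 1))
        total += inner / 2**(n + 1)
    return total / (1 - 2**(1 - s))

print(zeta(0))   # -0.5: the value assigned to 1 + 1 + 1 + ...
print(zeta(-1))  # -0.0833... = -1/12, assigned to 1 + 2 + 3 + ...
```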

Ah, okies.

I saw way up:

1-1+1-1… = 1/2 (I thought this was the series you “meant” to propose)

so now we also have:

1+1+1+1+… = -1/2
And I can’t add them (giving 2+0+2+0+… = 0, i.e. 2(1+1+1+1+…) = 0, etc.) because 1+1+1+1+… isn’t stable?

Which only proves my point: such constructions are not well-formed, because they include these kinds of ambiguities. My construction is just as valid as yours. Even other constructions are possible, with more and more absurd results.

Believe me, I don’t! I was participating in the game of showing the contradictions that ensue if you make the mistake of thinking of them as literal sums.

One might even claim that it’s the most fundamental thing that happens in math. Even something like negative numbers is pretty unintuitive. How can you have negative three apples? You can’t, but it turns out to be a useful concept. The objects you get from adding negative numbers to your number system work the same way as before, but handle a bunch more situations that you couldn’t previously, like being in apple debt.

So there was a progression from the natural numbers to the whole numbers to the integers to the rationals to the irrationals to complex numbers, and each time they worked basically the same as before, but with greater power. The same thing is going on here, where we can assign a number to some series and we can (partly) sling them around like the original series.

One can go even farther; for instance, matrices behave much like ordinary numbers. We can multiply them, add them, scale them, and for the most part they work the same way (with some exceptions: for instance, AB does not equal BA). We have to keep the exceptions in mind but overall the abstraction is very useful.

And if you’re willing to give up even more properties, you can treat things like “operations on a Rubik’s cube” a lot like a number. Multiplying two of them is like composing the operations. You can invert an operation, and there is an identity element (like 1 is the identity for multiplication), and so on. Again, you have to know where the limits lie, but being able to use basic algebra on this stuff is extremely useful.

Exactly so! The “sum” can’t be made explicit. It can be re-phrased in lots of ways, each with a different value. What we’re really doing is subtracting infinity from itself, which is an undefined operation.

(Ditto for dividing infinity by itself.)

Right. Well, sorta. You can add them, but you can’t then deduce that (2+0+2+…) = 2*(1+1+1+…).

In fact, this is where my own limit lies. I think you need a property greater than stability to make that transformation. You haven’t just stripped off a few elements from the beginning–you’ve ripped out an infinite number of elements, with infinite indices. That’s not generally allowed.

In fact, any kind of skipping or element grouping gets you into trouble quickly. For instance:
1 - 1 + 1 - 1 + … =
(1 - 1) + (1 - 1) + … = 0?
1 + (-1 + 1) + (-1 + 1) + … = 1?

So I think that’s just not allowed for divergent series… or maybe there’s a property that allows it under very specific circumstances. Stability just allows you to add/subtract a finite number of elements to the beginning of your list.
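One way to see what the two groupings are doing: they're each looking at a different subsequence of the partial sums. A quick sketch:

```python
# The partial sums of 1 - 1 + 1 - 1 + ... alternate between 1 and 0.
# The grouping (1-1)+(1-1)+... samples only the even-position partial sums
# (all 0), while 1+(-1+1)+(-1+1)+... samples only the odd ones (all 1).

terms = [(-1) ** k for k in range(10)]  # 1, -1, 1, -1, ...

partial = []
total = 0
for t in terms:
    total += t
    partial.append(total)

print(partial)        # [1, 0, 1, 0, ...]
print(partial[1::2])  # the "= 0" grouping sees only these
print(partial[0::2])  # the "= 1" grouping sees only these
```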

There are only contradictions if you break the rules… and you broke the rules. It’s no different than if you had “proven” that the real numbers are inconsistent because you divided by 0 and showed 1=2 as a result.

Whether you think of the sums as real or not is irrelevant–I suggested it only so you can get over the hurdle of thinking that to have a sum, you must somehow add all the numbers together. That’s the thing you need to give up. It’s not like you were able to do that anyway, even with convergent sums (which are generally defined as a limit of the partial sums as the count approaches infinity).

Chronos’ example is a great one. When you take x[sup]2.5[/sup], you aren’t literally multiplying a number by itself 2.5 times, even though that’s the intuitive way it works. Mathematicians figured out how to generalize the operation so that it gives the same answer as the intuitive version, but it works on a bunch of new cases as well. You give up some intuition, but who cares? You have gained a lot of new abilities and lost nothing.

Um… Yes and no. I added 1 to every term of an infinite series…and that isn’t usual. There’s no standard mathematical way of doing that.

I also subtracted infinity from infinity, and that’s also not defined.

But that was my whole point: if you do this, you can get any answer you want. And that is the contradiction. Any time anyone subtracts infinity from infinity, consistency goes out the window.

The joy of the contradiction was adding an infinite number of 1’s to a pseudo-sum, and getting a result smaller than the original.

Yes, there is–I described it already. But the other thing you made use of was the stability property, which isn’t valid for 1+1+1+… So you broke the rules.

If you like, a challenge:
Take any geometric series you want–that is, ones of the form 1+x+x^2+…–with the exception of x=1 (x=1 corresponds to 1/(1-1)=1/0, which is obviously not allowed). 1-1+1-1+…, 1+2+4+…, 1-2+4-8+…, and so on are all allowed, and sum to 1/(1-x).

Now, multiply, add, subtract, and shift these around at will. These series have linear and stable sums, so you’re allowed to do this. The only thing you’re not allowed to do is insert/delete an infinite number of elements, so 1+2+4+8+… = 1+0+2+4+0+8+… is disallowed.

Using these rules, find a contradiction. That is, find a case where performing the same manipulation on the sequences and the sums gives you a contradictory result, like 1=2.

If you’re right that you can get any answer you want, then this should not be hard.
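For reference, here's a small sketch of the claim underlying the challenge: for |x| < 1 the geometric series literally converges to 1/(1-x), and for other x ≠ 1 the same formula is the value the summation method assigns:

```python
# For |x| < 1 the geometric series 1 + x + x^2 + ... converges to 1/(1-x);
# for other x != 1, 1/(1-x) is the value the (stable, linear) method assigns.

def geometric_value(x):
    """Value assigned to 1 + x + x^2 + ... for any x != 1."""
    return 1 / (1 - x)

# Convergent case: partial sums approach the formula's value.
x = 0.5
approx = sum(x ** k for k in range(60))
print(approx, geometric_value(x))  # both ~2.0

# Divergent cases: the formula still gives a definite number.
print(geometric_value(-1))  # 0.5   : 1 - 1 + 1 - 1 + ...
print(geometric_value(2))   # -1.0  : 1 + 2 + 4 + ...
print(geometric_value(-2))  # ~0.333: 1 - 2 + 4 - 8 + ...
```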

I should point out that 1+2+3+… is not a stable series. Easy to see why: if it were, you could just take (1+2+3+…) - (0+1+2+…) = 1+1+1+… But we already know that isn’t stable.

The reason is that The Great Unwashed pulled a bit of sleight-of-hand in post #21. Specifically, this bit:
Which gives S - 1/4 = 0 + 4 + 0 + 8 + 0 + 12…
equivalent to S - 1/4 = 4(1 + 2 + 3 + 4…)

He deleted an infinite number of zeroes there, which isn’t really allowed. It turns out that in this case, it’s legit, but there are formal reasons behind this particular transformation. It doesn’t work in general, and even here it has the effect of combining two stable series into an unstable one. So although 1+2+3+…=-1/12 is correct, using that result in further manipulations requiring stability will get you in trouble.

I think you are deliberately being unpleasant. I have specified that I can get any answer I want by subtracting infinity from infinity.

If that can’t happen in your proposed challenge, then it does not involve subtracting infinity from infinity.

That’s all I’ve said. You seem to have invented some different opinion for me, but that’s just straw-man folly, and does not bear any resemblance to anything I have actually said.

If you’re willing to teach, I would like to know what “stable” means, as I have never encountered that property in my math education.

Take a look at post #42, and let me know if you have further questions. It’s basically the ability to shift your sequence left and right, as we’ve been doing above. So because 1+2+4+… is stable, we’re allowed to do this:
1+2+4+8+… = -1
0+1+2+4+… = -1 (right-shift by one term, by adding a zero in front)
subtract and get:
1+1+2+4+… = 0
1+(-1) = 0

(and we see it all worked out, since 0=0)

I said above that this is “obviously” true for convergent sequences, but in reality it’s not that obvious. It is true that, for instance, 1+1/2+1/4+… = 1+(1/2+1/4+1/8+…) = 1 + 1/2(1+1/2+1/4+…) = 1+1 = 2, etc., but why are we allowed to shift around an infinite number of terms? There’s lots of stuff that works when you do it a finite number of times, but breaks when you do it an infinite number of times, like 0+0+0+… or 1*1*1*…

For convergent series, stability may be called the “shift rule” for obvious reasons. Again, the idea is that you’re allowed to chop off (or add) a finite number of elements from the beginning, and the sum still works if you account for them properly. You can find a proof of the shift rule for convergent series on page 31 here. So it turns out that it is always legal for convergent series, but establishing its legitimacy takes at least some care. You can’t just assume it to be true without a proof.

No, it isn’t, because you can actually reach Ghana.

It’s hard to get a clear idea of what you’re saying, because no one is proposing subtracting infinity from infinity. The proposal is subtracting finite values from finite values. We are calling these values the “sum” of divergent series, but that is not to imply they are somehow the same as infinity just because the partial sums tend toward infinity.

That said–yes, you are able to subtract a divergent sequence from a divergent sequence and get a sensible, specific answer. There are rules one must follow for it to work out, but if you follow the rules you will get consistent results. Does this mean, by your standard, that I am “subtracting infinity from infinity”?

Let me try to simplify the challenge. I give you this sum:
1+2+4+… = -1

Here is what you are allowed to do (all operations must happen to both sides of the equation, of course):

  • Add (or subtract) any finite number of terms to the beginning of the sequence
  • Multiply each term of the sequence by a constant factor
  • Add or subtract two sequences term-by-term (of course, subtraction is just addition once you’ve multiplied by -1)
  • Any time you see the left-hand side of the equation above, you may replace it with the right (and vice versa). Likewise with any derived sequences.

So just playing around, we start with:
1+2+4+… = -1
Add some terms:
4+2+1+2+4+… = 4+2-1
Multiply by 2:
8+4+2+4+8+… = 8+4-2
Create a new sequence from the original:
0+1+2+4+8+… = 0-1
Subtract the last two:
8+3+0+0+0+… = (8+4-2)-(0-1)
Simplify:
11 = 11

So despite my manipulations it all still worked out. Note that in the above, I did subtract one divergent sequence from another. It’s your call whether to say that’s subtracting infinity from infinity. Can you find some other set of manipulations–using only the rules I laid out–that gets you, as you said, “any answer I want”?