First, the proof from that previous thread is fallacious, so any corollary of it will be as well. Second, the only thing that made that proof remotely believable was the use of an infinite series, but you are using a finite series. You correctly show that the final terms of T and nT are different, but then ignore that difference in the subtraction step.
If k is a finite number, your third line is incorrect, because T does not include the term n^(k+1) but 1+nT does.
If you are intending for these to be infinite series, you’re assuming your series actually have a sum, which may or may not be the case. For series that actually do converge, your result is correct. What you have is a geometric series—see here for a simple introduction or Wikipedia for more detail.
Greg, I’m guessing you know much more about these things than I do, but upon further googling, the wiki on this series says that Euler summed it out to the same answer, but in the spirit of overcoming ignorance, I’d appreciate hearing where I’m misreading.
Rain, quite right, so let’s say n>1
Thudlow, I did intend this to be an infinite series, yes. Is the problem that I specified the +n^k and +n^(k+1)? I see in the wiki I linked to above that the series is listed as 1+y+y^2+y^3+… so maybe my problem was just my misunderstanding of how to write the series.
So is this a step closer?
For n>1
T = n^0 + n^1 + n^2 + n^3 + …
nT = n^1 + n^2 + n^3 + …
T = 1 + nT
T - nT = 1
(1 - n)T = 1
T = 1/(1 - n)
Cad, I think writing the series correctly alleviates the problem you found.
Thank you for your patience, guys! Overcoming ignorance and all that.
The manipulations you’re performing aren’t necessarily valid on infinite series, at least not those that don’t converge.
The standard way of handling this is to work with a finite series, up to the first k terms, and then take the limit as k approaches infinity. See here or the Wikipedia article I linked earlier for how it would be dealt with in a standard Calculus textbook.
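That finite-sum-then-limit recipe is easy to check numerically. Here’s a small Python sketch of my own (not from any particular textbook) comparing a brute-force partial sum against the standard closed form for the first k+1 terms:

```python
# Closed form for the finite geometric sum 1 + n + n^2 + ... + n^k,
# valid whenever n != 1:
#   sum_{i=0}^{k} n^i = (1 - n^(k+1)) / (1 - n)
def geom_sum(n, k):
    return (1 - n**(k + 1)) / (1 - n)

n, k = 3, 10
brute = sum(n**i for i in range(k + 1))  # add the terms one by one
print(brute, geom_sum(n, k))  # 88573 88573.0
```

For |n| < 1 the n^(k+1) term dies off as k grows, which is exactly why the limit as k approaches infinity exists there and equals 1/(1 - n); for |n| > 1 that term blows up instead, and the limit fails.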
LOL! So not only is it wrong, but I don’t know enough about math to figure out why it’s wrong! Thank you guys for taking the time; I’ll let it go for now and revisit it when I know more about series.
I wouldn’t exactly say that it’s wrong, just that it’s based on assumptions that aren’t always warranted, so that your result isn’t universally valid.
Playing fast and loose with infinite series without worrying too much about convergence puts you in good company (e.g. the great Euler), but that was what people did back in the lawless, free-wheeling, Wild-West early days of Calculus.
As mentioned by others, the series you are talking about diverges in most cases, leaving it without a sum as it is most commonly defined. However, that in itself is not necessarily a good reason to stop looking at how these sums work out, as many in this thread have claimed.
There is, of course, the famous equation (well, famous as far as equations go) of 1 + 2 + 3 + 4 + . . . = − 1/12. Now, in the traditional sense, the sum of all the natural numbers increases without bound, and so it diverges. Nonetheless, you can manipulate the equation in meaningful ways. For example, as the Wikipedia article notes, 1 + 2 + 3 + 4 + . . . = − 1/12 has applications to various areas of physics.
A special form of your equation is the case where n = 2, that is 1 + 2 + 4 + 8 + . . ., which as your formula shows does sum to − 1. As far as your general formula goes, I do believe that it is correct, at least under certain conditions, but I admit I am not certain.
It works for n–>1, if you treat it as a limit problem on the finite sum T = (1 - n^(k+1))/(1 - n). Since the limit is of the 0/0 form, use L’Hospital’s rule to differentiate the numerator and the denominator; then you have:
(Lim n–>1) T = (k+1)n^k / 1 = k+1
It’s trivial to see that this is the sum of the first k+1 terms of the series above when n=1 (each term is 1, and there are k+1 of them).
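A quick numerical sanity check of that limit (my own sketch; the choice k = 7 is arbitrary):

```python
# (1 - n^(k+1)) / (1 - n) should approach k + 1 as n -> 1.
k = 7
for n in (1.1, 1.01, 1.001):
    print(n, (1 - n**(k + 1)) / (1 - n))  # values close in on 8.0
```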
Any question about Ramanujan summation of divergent series is going to get a bunch of “no, you’re wrong, it diverges” answers. You are probably better off asking it on a pure mathematics forum.
In the sense in which 1 + 2 + 4 + 8 + 16 + … = -1, it is indeed the case that, more generally, we have 1 + n + n^2 + n^3 + n^4 + … = 1/(1 - n). This can be interpreted in the “n-adics” for integer n, or at any rate as a kind of generalized Abel summation (what I called “overflow summation” in this post). [FWIW, this kind of summation is much, much “tamer” than the kind of summation needed to sum 1 + 2 + 3 + 4 + 5 + … = -1/12.]
I would say everyone who has called this “wrong” simpliciter is overzealous in doing so. It’s true that this series doesn’t converge, in the sense of the initial discretely cut-off partial finite sums of an infinite series getting closer and closer to some particular value, unless |n| < 1. But so what? Such limits of partial sums are only one sense, and not the God-given only sense, in which we can interpret the meaning of an infinite series. Just as such limits are proposed as usefully being considered “the sum” for some purposes because their behavior is analogous to that of more traditional sums in more familiar contexts, so as well is it the case that other approaches to infinite series valuation, such as the aforementioned “overflow summation”, can be usefully considered “the sum” for some purposes because their behavior is analogous to that of more traditional sums in more familiar contexts.
(For that matter, in the n-adic topology, where the values we work with are base n decimals extending infinitely far to the left instead of to the right and the notion of convergence is given by convergence of such digits, the partial sums of this series actually DO get closer and closer to a limiting value which acts as 1/(1 - n). But I’ll leave that be for now.)
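To make the n-adic claim concrete for n = 2: each partial sum of 1 + 2 + 4 + 8 + … equals 2^k - 1, which matches -1 in its last k binary digits, and that agreement in ever more digits is what 2-adic convergence means. A tiny Python check (my own illustration):

```python
# Partial sums of 1 + 2 + 4 + ... agree with -1 modulo growing powers of 2,
# which is exactly 2-adic convergence to -1.
for k in (4, 8, 16):
    partial = sum(2**i for i in range(k))  # = 2^k - 1
    print(partial % 2**k == (-1) % 2**k)  # True every time
```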
Frankly, I hate it when a layperson stumbles upon something genuine and interesting and lovely in math all on their own, as happened in the OP, and everyone rushes to tell them how they are wrong and their discovery worthless because it doesn’t conform to the perspective (often, simply the formal language) the scolders have erroneously taken from their overly rigid textbooks to be the only useful way to talk or think.
Or, rather, I love the first part and hate the second part.
Is this because of the initial n^0 term? That n^0 should be taken as meaning 1 unconditionally, as the nature of the reasoning in the proof indicates. [FWIW, yes, 0^0 can be thought of as indeterminate in contexts where the exponent is thought of as a continuous quantity, so that one feels a compulsion for 0^0 to be close to 0^e for arbitrarily tiny e, but if the exponent is treated as a discrete quantity, it is almost always correct to think of 0^0 as unambiguously 1]
Of course, cutting off the infinite series at some large but finite point won’t generally give you a value anywhere close to 1/(1 - n), but you already knew that from the 1 + 2 + 4 + 8 + 16 + … = -1 example.
Incidentally, I’d say this works not only “for n > 1”, but more generally “for n not equal to 1”; in particular, this works just as well with fractional n. [It also sort of works for n = 1 in that 1 + 1 + 1 + 1 + … and 1/0 might both be considered as equally representing a particular kind of blowing up…]
Of note, it IS the case that, for fractional |n| < 1, cutting off the infinite series at a large but finite point will give you a value close to 1/(1 - n). For example, if we take n to be 0.1, then T = 1 + 0.1 + 0.01 + 0.001 + …, and cutting it off at some finite point gives us 1.111… with finitely many 1s, which will be very close to 1/(1 - 0.1) = 10/9 = 1.111… with infinitely many 1s.
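The n = 0.1 case is easy to verify numerically (a small sketch of my own; 20 terms is an arbitrary cut-off):

```python
# Cutting off T = 1 + 0.1 + 0.01 + ... after 20 terms lands
# essentially on 1/(1 - 0.1) = 10/9 = 1.111...
n = 0.1
partial = sum(n**i for i in range(20))
print(partial, 1 / (1 - n))  # both ~1.1111111111
```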
Indeed; the people scoffing generally recognize at least a dozen different notions of summation as “valid”, and yet balk at this one. If “sum” had one and only one meaning, you wouldn’t be able to get anywhere past the sum of two (and exactly two) natural numbers.