It has nothing to do with negative and corresponding positive divergent infinities (after all, x(x + 1)/2 is positive both below -1 and above 0; there’s no negative infinity around, as such). It just has to do with roundabout effects of going from -1 to 0.
To see this, let’s start by supposing you have some integrand g which you wish to think of as having a finite integral T over the range from some starting point Start all the way to positive infinity. If we define f(i) as the integral of g from i to i + 1, then this means T = f(Start) + f(Start + 1) + f(Start + 2) + … .
We could just as well break T down into the integral of g from Start to any intermediate point, plus the integral of g from that intermediate point to infinity; that is, writing g[a, b) to mean the integral of g from a to b, we have that T = g[Start, ∞) = g[Start, x) + g[x, ∞) for any x.
Note that g[x, ∞) = f(x) + f(x + 1) + f(x + 2) + …, ad infinitum. And note that g[Start, Start + n) = f(Start) + f(Start + 1) + …, with n many terms, for any natural number n. [Keep in mind, as the notation indicates, that the summation yielding g[x, ∞) does include f(x), but the summation yielding g[Start, Start + n) does not include f(Start + n)]
Thus, we have a basic relationship between a function giving the sum of g over finite ranges and a function giving the sum of g over infinite ranges: that g[x, ∞) = -g[Start, x) + g[Start, ∞).
That is, once you know the behavior of g[Start, x), you know almost everything about the behavior of g[x, ∞) as well; all that’s missing is knowledge of the additive constant g[Start, ∞).
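As a quick sanity check of that relationship, here it is with a genuinely convergent integrand, g(t) = e[sup]-t[/sup] (just an illustrative Python sketch; the function names are mine):

```python
import math

# With g(t) = e^(-t) and Start = 0, everything converges classically:
# g[a, b) = e^(-a) - e^(-b), and g[a, ∞) = e^(-a).
def g_finite(a, b):   # g[a, b)
    return math.exp(-a) - math.exp(-b)

def g_tail(a):        # g[a, ∞)
    return math.exp(-a)

# Check g[x, ∞) = -g[Start, x) + g[Start, ∞) at a few points:
Start = 0.0
for x in [0.5, 1.0, 2.7]:
    lhs = g_tail(x)
    rhs = -g_finite(Start, x) + g_tail(Start)
    assert abs(lhs - rhs) < 1e-12
print("relationship holds")
```

The point of the construction, of course, is that the same bookkeeping can be run even when g[Start, ∞) does not converge classically.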
And how might you determine the value of that constant?
Well, consider now the question of integrating g[x, ∞) = f(x) + f(x + 1) + f(x + 2) + … with respect to x. If f has an antiderivative F such that the series F(x) + F(x + 1) + F(x + 2) + … is also to be thought of as having a well-defined sum, then this series would be an antiderivative of g[x, ∞). In particular, the integral of g[x, ∞) as x goes from 0 to 1 will be [F(1) + F(2) + F(3) + …] - [F(0) + F(1) + F(2) + …] = -F(0). [There’s nothing special about 0 here; it’s just often a convenient value to look at]
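That -F(0) identity can also be checked numerically on a convergent stand-in (a Python sketch; the constant c and the midpoint-rule step count are mine):

```python
import math

# Take g(t) = e^(-t), so g[x, ∞) = e^(-x). Then f(i) = g[i, i + 1)
# = e^(-i) * (1 - 1/e), and F(x) = -e^(-x) * (1 - 1/e) is an
# antiderivative of f for which F(x) + F(x+1) + ... converges
# (it sums to -e^(-x)).
c = 1 - math.exp(-1)
def F(x):
    return -math.exp(-x) * c

# Integrate g[x, ∞) = e^(-x) over [0, 1] by the midpoint rule:
n = 100000
integral = sum(math.exp(-(k + 0.5) / n) for k in range(n)) / n

# This should agree with -F(0) = 1 - 1/e:
print(abs(integral - (-F(0))))  # small
```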
Accordingly, taking the average value of g[Start, ∞) = g[Start, x) + g[x, ∞) as x goes from 0 to 1, we find that g[Start, ∞) = (the average value of g[Start, x) as x goes from 0 to 1) - F(0). Plugging this back in, we find that g[x, ∞) = -g[Start, x) + (the average value of g[Start, x) as x goes from 0 to 1) - F(0).
In particular, looking at x = Start, we see that g[Start, ∞) will be (the average value of g[Start, x) as x goes from 0 to 1) - F(0). [The -g[Start, Start) term disappears, as this is an empty summation].
This is the relationship you were noting before: if f(i) is i, then we can take F(i) to be i[sup]2[/sup]/2, so that F(0) = 0 and is thus ignorable, and we can also take g[1, x) to be h(x - 1) where h(x) = x(x + 1)/2. Then the average value of g[1, x) from 0 to 1 will be the integral of h from -1 to 0, and this will be the value g[1, ∞) as well.
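To spell out that last bit of arithmetic (a Python sketch with exact rationals; the name H is just my label for an antiderivative of h):

```python
from fractions import Fraction

# H is an antiderivative of h(x) = x(x + 1)/2, namely x^3/6 + x^2/4.
def H(x):
    x = Fraction(x)
    return x**3 / 6 + x**2 / 4

# The average value of g[1, x) as x goes from 0 to 1 is the integral
# of h from -1 to 0, i.e. H(0) - H(-1); since F(0) = 0, this is also
# the value assigned to g[1, ∞), i.e. to 1 + 2 + 3 + ....
avg = H(0) - H(-1)
print(avg)  # -1/12
```

And so the famous 1 + 2 + 3 + … “=” -1/12 falls out.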
Note that, once we’ve pinned down the series we’re interested in calculating as f(Start) + f(Start + 1) + f(Start + 2) + …, making use of this approach requires three things: we have to determine the general values of f on arbitrary inputs [not just inputs of the form Start + n for natural number n], we then have to find a g whose integral from i to i + 1 is f(i) [even though such g are only determined up to addition of the derivative of any function of period 1], and we have to determine F(0) where F is an antiderivative of f [even though antiderivatives are only defined up to an additive constant].
The trick that makes this workable, despite the apparent non-uniqueness of f, g, and F, is that we actually have some further appropriateness conditions: the g we pick has to be one we can regard as having a well-defined finite integral from Start to ∞, and similarly, the F we pick has to be one for which we can regard F(x) + F(x + 1) + F(x + 2) + … as having a well-defined finite value for general x. These conditions will suitably constrain us to the correct answer for the cases of interest.
In particular, if f can be taken to be a polynomial, there will also be unique polynomials g and F that do the trick for it in the context of our “zeta summation” [F will be the antiderivative taking value 0 at input 0, while g is obtained by suitable use of Bernoulli polynomials]; this should make it straightforward to calculate the zeta summation of any polynomial series, though the concept makes sense even for more general series than those.
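For instance, one can grind this out for f(i) = i[sup]2[/sup] (a Python sketch with exact rationals; `poly_integral` is a hypothetical helper of mine, and h(n) = n(n + 1)(2n + 1)/6 is just the usual sum-of-squares formula, playing the same role h(n) = n(n + 1)/2 did above):

```python
from fractions import Fraction

# Exact integral of a polynomial (coefficients listed lowest degree
# first) from a to b.
def poly_integral(coeffs, a, b):
    total = Fraction(0)
    for k, c in enumerate(coeffs):
        term = Fraction(c) / (k + 1)
        total += term * (Fraction(b)**(k + 1) - Fraction(a)**(k + 1))
    return total

# f(i) = i^2, Start = 1, so g[1, x) = h(x - 1) where
# h(n) = n(n + 1)(2n + 1)/6 = n/6 + n^2/2 + n^3/3.
# F(i) = i^3/3 is the antiderivative with F(0) = 0.
h = [0, Fraction(1, 6), Fraction(1, 2), Fraction(1, 3)]

# Zeta summation of 1^2 + 2^2 + 3^2 + ... :
# (average of g[1, x) for x in [0, 1]) - F(0) = integral of h from -1 to 0.
print(poly_integral(h, -1, 0))  # 0
```

Reassuringly, this comes out to 0, agreeing with ζ(-2) = 0.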