Just stepping in to link to the last time this got discussed here (which has a link to an earlier thread as well).
Returning to summing infinite series, one nice result which somehow hasn’t been mentioned yet (though it is immediate from Binet’s formula) is that the sum of the Fibonacci series is -1:
0 + 1 + 1 + 2 + 3 + ...
+
1 + 1 + 2 + 3 + 5 + ...
-----------------------
1 + 2 + 3 + 5 + 8 + ...
Thus, letting F be the Fibonacci series, we have that F + F = F - 1, and so F = -1.
This result is very robust, in the same way as 1 + 2 + 4 + 8 + … = -1 and 1 - 2 + 3 - 4 + … = 1/4 are (and pretty much every series in this thread except the most contentious 1 + 1 + 1 + … and 1 + 2 + 3 + 4 + …). There was some concern earlier about the inconsistencies of applying basic grade school arithmetic to infinite series, but we can lay out some very simple consistent rules to derive these robust results:
First, some notation: when I write 1 + 5 + 3 + 4 + …, it may not be clear what series I mean, exactly; perhaps its first term is 1 and its second term is 5, or perhaps its first term is 1 + 5 and its second term is 3, for example. So, instead, let’s represent series as though they were giant polynomials: 1 + 5X + 3X[sup]2[/sup] + 4X[sup]3[/sup] + …, for example, would unambiguously have first term 1 and second term 5, while (1 + 5) + 3X + 4X[sup]2[/sup] + … would unambiguously have first term 1 + 5 and second term 3. In general, the coefficient of X[sup]n[/sup] represents the nth term of the series, and the sum of the series is the value of this nominal polynomial at 1. As an added bonus, this notation makes it obvious not only what it means to add series termwise, but also what it should mean to multiply series.
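To see that multiplication concretely, here is a minimal Python sketch (purely illustrative; the function name is made up): multiplying two series-as-polynomials is just convolving their coefficient lists.

[code]
def series_mul(p, q):
    """Cauchy product of two series given as coefficient lists,
    lowest degree first: out[n] = sum of p[i]*q[j] with i + j = n."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (1 - X - X^2) times the Fibonacci series 0 + X + X^2 + 2X^3 + 3X^4 + ...
# collapses to just X; the trailing nonzero entries below are only edge
# effects of truncating the infinite series to a finite prefix.
print(series_mul([1, -1, -1], [0, 1, 1, 2, 3, 5, 8]))
# [0, 1, 0, 0, 0, 0, 0, -13, -8]
[/code]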
Ok, now we want a partial function Sum() which assigns sums to some (not necessarily all) series. Let’s assume the following minimal rules:
A) Sum does just the ordinary thing on finite series
B) Sum(P + Q) = Sum(P) + Sum(Q), whenever the right hand side is defined
C) Sum(P * Q) = Sum(P) * Sum(Q), whenever the right hand side is defined
D) Sum(P) = Sum(P * Q)/Sum(Q), whenever the right hand side is defined
From these rules, you can’t derive any contradiction. In fact, they’re so minimal that, in themselves, they don’t even assign values to many series which converge in the standard sense (specifically, they assign values to precisely those series which can be specified by linear recurrence relations; equivalently, those polynomials representing rational functions). However, they are enough to derive all the “robust” results in this thread; for example, the argument for the Fibonacci result above can be rephrased into (X[sup]2[/sup] + X - 1)F = -X, from which applying A) and D) yields that Sum(F) = Sum(-X)/Sum(X[sup]2[/sup] + X - 1) = -1/1 = -1.
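For the curious, here is a minimal Python sketch of how rules A) and D) play out mechanically (illustrative only; robust_sum is a made-up name): a series specified by a linear recurrence has a rational generating function P(X)/Q(X), and the rules then force its sum to be P(1)/Q(1) whenever Q(1) is nonzero.

[code]
from fractions import Fraction

def robust_sum(p, q):
    """Sum of the series with generating function P(X)/Q(X).
    p and q are coefficient lists, lowest degree first; by rules
    A) and D), the sum is P(1)/Q(1), provided Q(1) is nonzero."""
    p1 = sum(Fraction(c) for c in p)   # P evaluated at X = 1
    q1 = sum(Fraction(c) for c in q)   # Q evaluated at X = 1
    if q1 == 0:
        raise ValueError("sum not assigned: Q(1) = 0")
    return p1 / q1

print(robust_sum([0, 1], [1, -1, -1]))  # Fibonacci: X/(1 - X - X^2) -> -1
print(robust_sum([1], [1, -2]))         # 1 + 2X + 4X^2 + ...      -> -1
print(robust_sum([1], [1, 2, 1]))       # 1 - 2X + 3X^2 - ...      -> 1/4
[/code]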
You won’t, however, be able to use this to derive values for Sum(1 + X + X[sup]2[/sup] + …) or Sum(1 + 2X + 3X[sup]2[/sup] + …). The reasoning which gave those values used an additional principle, that a series can be interleaved with zeros without changing its sum (which can be rephrased in this notation as that Sum(P(X)) = Sum(P(X[sup]2[/sup]))). Unfortunately, naively adding that to the above will indeed produce an inconsistent system. (For instance, the interleaving principle lets one derive Sum(1 + 2X + 3X[sup]2[/sup] + …) = -1/12, but rule C applied to the identity (1 - X)[sup]2[/sup] * (1 + 2X + 3X[sup]2[/sup] + …) = 1 then forces 1 = Sum((1 - X)[sup]2[/sup]) * Sum(1 + 2X + 3X[sup]2[/sup] + …) = 0 * (-1/12) = 0.)
In my neck of the woods, if F is the sum of the Fibonacci series, then F = infinity, and therefore your equation above (F + F = F - 1) becomes
infinity + infinity = infinity - 1
which leads to
infinity = infinity
(since infinity + infinity = infinity and infinity - 1 = infinity)
which leads to nothing new.
Declaring F to be some number and then doing “basic math” on it should not pass muster. Aren’t there lots of tricks like this that treat infinity or zero as a regular number and end up proving anything you want, like 1 = 0? (e.g. by dividing by some x where x turns out to be zero).
Of course, you guys seem to be professional mathematicians and it’s ludicrous for non-mathematicians to go up against what the mathematics field seems to have accepted, but I still dislike your manipulations and conclusions above.
I just think the generalization of the “sum” function mentioned in this thread is not an appealing one. The other generalizations that have been mentioned here (the complex numbers, exponentiation, cosines, etc.) all have practical applications (in engineering, in designing communications systems, cell phones, etc). Is there any, or can you envision any, practical application that uses the “fact” that 1 + 2 + 3 + 4 + … = -1/12?
Of course, a theoretical mathematician does not care about practical applications of math theorems, but what I’m trying to show is that some generalizations make sense: reality agrees with them, we can design gadgets that use them, and those gadgets work in the real world. Some generalizations are just so out there and so antithetical to reality that they may be nice toys for mathematicians to play with, but are useless for everyone else.
Someone mentioned the Banach–Tarski paradox in one of the linked threads. That is another example where the field of math has accepted a theorem that has nothing to do with reality.
I guess this leads to a larger question of what is the purpose of mathematics, which is beyond the scope of this thread.
My quick searches show that the theory of divergent series is useful in quantum mechanics and fluid dynamics, although I can’t comment on either of those in any detail. There are also applications within other areas of mathematics, such as in computing integrals. But that’s not what I find interesting here.
What I do find interesting is that this thread is a very interesting illustration of how people have learned to think at various points in their mathematical education. Recall the first time you encountered addition: you were probably told that you had some number of apples, and that you received some additional number, and were asked how many apples you now had. In this context, addition is easy to understand because it corresponds to your physical intuition. It’s pretty easy to extend that intuition to adding positive rational numbers, and then real numbers as well.
It’s very hard to envision what negative numbers mean in a context like this, so we introduce the number line. Adding a positive number corresponds to moving right, and adding a negative number corresponds to moving left. Again, the physical intuition is clear, and so addition is easy to understand in this context. We introduce a second dimension to get complex numbers, and now we have moves up and down as well as left and right, so we can still apply our physical intuitions to the addition of complex numbers.
So, it seems clear why no one has balked at assigning sums to convergent series: we have a good physical intuition for what it means to get closer and closer to something. It has to be stretched a little bit to say that, for instance, 1 + 1/2 + 1/4 + 1/8 + … = 2 because the partial sums never actually reach their limit, but most people don’t seem to struggle with this too much.
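For instance, a trivial Python loop (purely illustrative) shows the partial sums of 1 + 1/2 + 1/4 + 1/8 + … creeping up on 2 without ever reaching it:

[code]
total = 0.0
for k in range(12):
    total += 1 / 2**k    # add 1, then 1/2, then 1/4, ...
    print(total)         # 1.0, 1.5, 1.75, ..., 1.99951...
[/code]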
And that’s a large part of why people aren’t getting each other in this thread. There’s really no way of visualizing on a number line what it means to say that 1 + 2 + 3 + 4 + … = -1/12, and so those who are still relying on physical intuition to think about addition simply can’t conceive of what that would mean.
Now, let’s talk a little bit about how an undergrad math education works. In a class like linear algebra, which is generally the first proof-based class that people take, we start out by looking at ordered lists of real numbers, and the natural arithmetic operations on them. We then lay out a list of the essential properties of the relationship between these things and the arithmetic we’ve defined, and say that anything that satisfies those rules is a vector space. The very next thing we do is to show how other things like polynomials of bounded degree satisfy those rules, and are vector spaces as well.
This is what I and pretty much every other mathematician in this thread would agree is the essence of modern math: abstraction. We look at a bunch of things that are in some way similar, define what those similarities are, and determine what must be true of anything that is similar to the things we’ve already seen in the way that they are similar to each other. Different mathematicians will come up with different notions of similarity, and there will be competition, but eventually something will prove to be the most useful, and so that wins out.
That’s the angle that we’re coming at here. There are certain properties that something has to have in order to be called a method of summation, and anything with those properties is a method of summation. It just so happens that none of those properties imply that the sum of infinitely many positive integers is positive or an integer, but we’re willing to accept that because this list of properties generates a useful theory. (For those who are curious, the properties I’m referring to may be found here.)
Well, don’t get me wrong; of course we mathematicians also are interested in the notion of sum for which 1 + 1 + 2 + 3 + 5 + … = infinity. No one’s saying that’s not also a useful concept. We’re interested in all these different concepts. And we’re interested in the differences between these different concepts, and the similarities between these different concepts. There’s no conflict in any of this; of course there are different abstract patterns of behavior. This is true not just of math, but of all the systems we might like to model with math; of course not everything behaves the same way. So we build up a stable of knowledge about all kinds of different abstract systems, and when the time comes to model a phenomenon of interest, we select from among the tools at our disposal for the one most relevant; no need to keep just one and throw away the rest.
If you like, we can use different words. 1 blus 1 blus 2 blus 3 blus 5 blus … = infinity, and 1 quus 1 quus 2 quus 3 quus 5 quus… = -1. And the interesting thing is that both blus and quus are quite similar in many ways to the plus of ordinary finitary addition (1 plus 2 = 3). There are significant similarities and differences between all three of these. And that’s interesting material for study. And there’s no confusion if we use different words for all of them.
But, hey, why should we use the same terminology for what we conventionally call addition of natural numbers, addition of integers, addition of rational numbers, and addition of vectors? These are all again rather different concepts. The interesting thing is that there are similarities between all of these in addition to the significant differences between them. But we could, being very particular, use different words for all of them (and even further different words when we want to make abstract observations that apply equally well to more than one of these concepts (by virtue of their similarities)). And that would be fine, and there would be no confusion.
No confusion but a hell of a lot of obfuscation. Because we’re only human. It helps our intuition to speak of similar things in similar ways, when we want to draw out the similarities. And so we do, hopefully only in contexts where the conflation causes no confusion, where everyone knows how terms are being used and thus what particular meaning they are being given. Granted, that is not always the case; people do get confused, as evidenced in this very thread, and we should do a better job of preventing that.
I explicitly outlined in the last post a set of “basic math” rules for manipulating infinite series which are provably consistent; there is no danger that you will prove 1 = 0 from those rules, even though they do allow the derivation of finite totals for many series whose partial sums increase without finite bound.
Well, let’s take for example the rules I outlined in my last post: are these always the rules relevant to what one is doing? No, that of course depends on what one is doing. But certainly, they are a quite natural system of rules which can be useful in many cases. When I draw different conclusions from them than you would like me to, I’m not actually contradicting the statements you are more accustomed to; I’m just talking about something else. Just as sometimes it is reasonable to claim that a 360 degree turn is the same as a 720 degree turn and that x + y is necessarily at least as large as x, and sometimes it is reasonable to think of 360 degree turns and 720 degree turns as different, and to allow for x + y to be less than x. None of these positions contradict each other; they just talk about different things (when I’m thinking about how many objects are in a collection, I’ll probably want x + y always to be at least as large as x; when I’m thinking about net distances travelled via certain movements, I probably want x + y to be able to cancel out to a smaller result than x).
Heh, what an oddly specific choice of applications; are you by any chance a cellphone engineer?
As for applications, there’s nothing preventing them; the results are true whether you use the word “quus” or the word “plus” or the symbol “+” to state them. It makes no difference to the applicability in modelling other phenomena satisfying the same rules. As was mentioned above, the specific result 1 russ 2 russ 3 russ 4 russ … = -1/12 is used in string theory; I don’t know anything about physics myself, but Wikipedia says such zeta function regularization is extensively used in modelling the Casimir effect. As I believe was mentioned above by Chronos, the robust result 1 quus 2 quus 4 quus 8 quus … = -1 corresponds to the manner in which computers use two’s complement arithmetic to represent signed integers and the phenomenon of “overflow”.
(Of course, computer arithmetic is something we design, rather than something we go out and find, so one may object that this doesn’t count, that we should have to find an application in modelling pre-existent physical phenomena. But that would be an odd objection: being useful in designing things is still being useful. It’s not as though such applications are entirely sterile.)
And 1 quus 1 quus 2 quus 3 quus 5 quus 8 quus … = -1? Well, this is actually quite similar to the last one; you can also relate this to overflow in computer arithmetic, using a certain natural system (Zeckendorf-style base-Fibonacci representation) in which no number’s representation needs two consecutive 1s.
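To make the two’s complement correspondence concrete, here’s a minimal sketch (mine, not from the thread), using 8-bit arithmetic: summing 1 + 2 + 4 + … + 128 fills every bit with a 1, and the all-ones pattern is exactly how two’s complement writes -1.

[code]
def as_signed(pattern, bits=8):
    """Read an unsigned bit pattern as a two's complement integer."""
    pattern &= (1 << bits) - 1
    if pattern >= 1 << (bits - 1):   # high bit set: negative number
        return pattern - (1 << bits)
    return pattern

total = sum(2**k for k in range(8))  # 1 + 2 + 4 + ... + 128 = 0b11111111
print(as_signed(total))              # -1, matching 1 + 2 + 4 + 8 + ... = -1
[/code]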
Of course, we can give physics-flavored accounts of these too (just via the fact that calculus itself is physics flavored); all the robust results are examples of reasonable uses of Taylor series outside their radii of convergence. Perhaps I’ll write out an illustrative story along these lines later.
The only reason you say reality agrees with some observations and not others is because you’ve picked a very particular way of joining mathematical results up with claims about the physical universe. But there’s more than one way to interpret the former as about the latter. The physical universe can’t contradict math, nor, for that matter, make math true; at bottom, every mathematical claim is “If you allow these rules, then you can use them to get such and such”. Physics can no more impinge on the truth of such claims than it can impinge on the legality of a diagonal move by a rook in chess.
Going back to the example from before, does “reality agree with” the claim that x + y is always at least as large as x? Well, it depends on what we’re talking about. Certainly, adopting new children does not cause my family to become smaller. On the other hand, it’s possible for one movement to take me far away from home and another movement to bring me back close to home. If I interpret the claim in a certain way as about the sizes of families under mergers, it’s true, and if I interpret the claim in a certain way as about the net distance travelled under various motions, it’s false. But there’s no reality of the matter as to whether + refers to the former or the latter; these are just different concepts, with different albeit in many ways similar abstract behavior.
Well, certainly, feel free to open a new thread.
I understand the reason behind you guys taking this route, but it seems (to me) that you are giving up way too much (in terms of properties of sums) in order to get the useful theory you speak of.
In addition to the properties you mention above (sum of infinitely many positive integers is not guaranteed to be either positive or an integer (!)), there is also the one I mention in post #95, i.e. you are giving up:
If
A = sum(x[k], k = 1...inf)
B = sum(y[k], k = 1...inf)
and x[k] >= y[k] for all k,
then A >= B.
Term by term, each term of the series for A can be larger than the corresponding term of the series for B, and yet, under divergent series summation, A can come out smaller than B.
It seems bizarre that you would want a generalization of the “sum” function that has this property. I guess I would need to take a few math courses where the theory of divergent series is put to use, so I can better appreciate the rationale behind it.
I will close with two things:
[ol]
[li]There seem to be several divergent series summation methods, and they are not all consistent, so
[LIST=a]
[li]It’s a bit misleading to say that the sum of some divergent series is X. It would be more accurate to say that the Cesàro summation or the Abel summation or the Lindelöf summation, or whatever other summation method is used, of the series is X. This is especially true of course for divergent series which produce different results under different divergent series summation methods.[/li]
[li]I don’t think the other abstractions or generalizations mentioned in this thread suffer from this plurality of methods. That is, if I’m not mistaken, there is only one accepted method of extending exponentiation to the real numbers or to the complex numbers. So, in that case, if someone says x^pi = y, it is accurate in the sense that yes, there is one accepted way of calculating x^pi, and for this x, the value is y. But, in the case of sum(some divergent series) = y, it’s not really accurate to say that, since there isn’t just one accepted way of calculating sum(some divergent series).[/li]
[/LIST][/li]
[li]N. H. Abel himself said it best: “The divergent series are the invention of the devil, and it is a shame to base on them any demonstration whatsoever.”[/li]
[/ol]
One more thing: One example where I can see your point, that some useful abstractions and generalizations often drop properties we feel are important, is the concept of groups.
That is, with the exception of Abelian groups, it is not necessary that ab = ba, and this may be jarring to some people the first time they encounter it, since they are used to commutativity in the real numbers.
But of course group theory is very useful, even for groups that don’t have commutativity, and people learn to accept the loss of a property they once felt was important or “made sense”.
So, it is similarly jarring for people who encounter 1+2+3+4+…=-1/12 for the first time. Maybe people just need time to digest it.
I guess one difference in the above two examples is that the elements a and b are said to belong to some abstract “group”, and so we are more willing to forgo intuitive properties like commutativity. But 1+2+3+4+…=-1/12 (and other divergent series) apply to things we know from everyday life and basic math, like numbers, +, and =, and so it’s a bit tougher to accept.
[quote=“Polerius, post:127, topic:403329”]
I will close with two things:
[ol]
[li]There seem to be several divergent series summation methods, and they are not all consistent, so
[LIST=a]
[li]It’s a bit misleading to say that the sum of some divergent series is X. It would be more accurate to say that the Cesàro summation or the Abel summation or the Lindelöf summation, or whatever other summation method is used, of the series is X. This is especially true of course for divergent series which produce different results under different divergent series summation methods.[/li]
[/LIST][/li]
[/ol]
[/quote]
This is true. But it’s as true of “standard” summation as all the other summation methods; none are somehow inherently the method we should intend by default. Ultimately, the question of what kind of language is misleading and what kind of elision of details is acceptable depends on the audience and the context. This is part of the point I was getting at about using different words above.
Oh, they absolutely do suffer from ambiguities, property-losses in generalization, and all the rest of it as summation does. For example, even on the standard account of complex exponentiation, (1 + i)[sup]i[/sup] can equally well be considered to have magnitude approximately 0.46 or 244.15 or 0.00085 (among infinitely many other possibilities).
And the standard account of complex exponentiation is just one concept with some nice properties; if we wanted to study another concept with similar but somewhat different properties, by all means, we are free to.
And you would have us give up some of the properties I mention in #122. But the thing is, we’re not actually really giving anything up. We’re saying “Look; this one concept has these properties and not those, and this other concept has those properties and not these. Both are interesting and both are a lot like all these other concepts. Let’s study them all!”
You are very fond of this order property. That’s fine. Study the things that preserve it. But don’t let that blind you to the fact that other abstract systems can behave differently.
Once upon a time, x + y >= x was paramount. “What kind of nonsense would it be, by the process of adding more, to cause a quantity to become less?” None could ever deny this principle. But nowadays, we hold that there are many interesting and useful systems where it is not always true (in the integers, for example), even though there are also many interesting and useful systems where it is always true (in the natural numbers, for example). And we study all of those systems. That’s just the same sort of thing that’s going on here.
The thing that’s hard to accept, but very illuminating, is that everything works the same way as you described the concept of “group” working: there is no single fixed “numbers, +, and =”. It’s just like abstract groups. We can set out some rules for how we want “numbers”, “+”, and “=” to behave (which is to say, rules whose consequences we are interested in studying) and then look at all the structures that satisfy those rules. There won’t necessarily be just one, just like there are many different groups all satisfying the group rules. To claim something holds of all such structures is to derive it from the rules. The applications to “real-world” phenomena arise because some real-world phenomena follow the same rules/are instances of said structure as well.
People have this idea that there is a fixed account of what “number” means, a fixed hierarchy of natural numbers inside integers inside rationals inside reals inside complex numbers, all with the same basic compatible operations, and all ultimately living together. It is hardly so, and this is one of the most misleading things about the way mathematics is taught and discussed. There are many different systems of “numbers”, with different operations and properties, to be used for different purposes.
Of course, even if that system of representing numbers on a computer is man-made, it’s made that way for a reason, and it’s used because it simplifies a lot of things (for instance, it lets you use the same routines for adding numbers together regardless of whether the numbers are positive and negative, without having to include tests like “if (x < 0) { …”).
It’s just like in abstract math: We study particular sets of rules because those particular sets of rules have interesting properties. And sometimes, you’re interested in different properties than at other times, so you use different rules then.
Can you explain this? I get the 0.46 value. How do you arrive at the other values?
In the x + y >= x case, it should be very straightforward to explain to a lay person with some basic math understanding why x + y can be less than x: all they need to do is consider that x and y represent money, and that when you add up your assets, if x is the amount of money you have and y is the amount of money you owe, your net worth is less than x.
How would you explain to someone that at every time step I give George and Paul some money, and in each of those time steps I give more money to George than to Paul, and yet, if I, George, and Paul live forever and this process continues forever, George will end up with less money than Paul? (Or, if the time steps happen at an increasing rate such that the whole thing ends by 1pm, then by 1pm George will have less money than Paul, even though during each time instant before 1pm George had more money than Paul.)
Not that the existence of an explanation to a lay person is an indication of the usefulness of an abstract math concept, but I just wanted to point out that the negative numbers in the example you gave aren’t that hard to grasp for someone who was never exposed to them.
(I wonder what a lay explanation of the imaginary numbers would look like)
It’s really, really easy and it’s a shame that we traditionally shroud them in other, less clear explanations first. They’re just rotation. See here.
The rotational component of 1 + i is a rotation by 1/8 of a complete revolution. That’s equivalent to 1 + 1/8 of a revolution, 2 + 1/8 of a revolution, -1 + 1/8 of a revolution, and the like as well.
The magnitude of x[sup]i[/sup] is exp(-the angle of x’s rotational component in radians) = exp(-2π * the number of revolutions in x’s rotational component). Plugging in 1/8 of a revolution gives 0.46, but plugging in 1 + 1/8 and -1 + 1/8 gives the alternatives I provided.
In other words, (1 + i) = exp(ln(2)/2 + 2πi(1/8 + n)), and thus (1 + i)[sup]i[/sup] = exp(ln(2)i/2 - 2π(1/8 + n)), for any integer n. The simplest representation of 1 + i as a potential base of complex exponentiation takes n to be 0, but this isn’t by any means unique.
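Those three magnitudes are easy to check numerically with Python’s standard cmath module (a quick illustrative sketch; n indexes the branch of the logarithm, as above):

[code]
import cmath

principal_log = cmath.log(1 + 1j)      # ln(sqrt(2)) + i*pi/4
for n in (0, -1, 1):
    branch = principal_log + 2j * cmath.pi * n  # other values of log(1 + i)
    value = cmath.exp(1j * branch)               # (1 + i)^i on that branch
    print(n, abs(value))
# 0   0.4559...
# -1  244.15...
# 1   0.000851...
[/code]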

[quote]
How would you explain to someone that at every time step I give George and Paul some money, and in each of those time steps I give more money to George than to Paul, and yet, if I, George, and Paul live forever and this process continues forever, George will end up with less money than Paul? (Or, if the time steps happen at an increasing rate such that the whole thing ends by 1pm, then by 1pm George will have less money than Paul, even though during each time instant before 1pm George had more money than Paul.)
[/quote]
I wouldn’t. I wouldn’t use quus to model exchange of money at an increasing rate. Just as I wouldn’t use addition of three-dimensional vectors to model exchange of money. These concepts have their uses elsewhere.
For what it’s worth, I don’t think the “Think of money!” example is a very pedagogically satisfying one for making concrete the concept of negative numbers to those who aren’t familiar with them; after all, money is a game whose rules we’ve made up. One way I might do so would be to, in fact, introduce complex numbers first (though by a friendlier and more ordinary name like “scale-and-rotators”), and then take the negative numbers to be those whose rotational component was half a complete revolution (i.e., turning to face the opposite direction).
[quote=“Indistinguishable, post:62, topic:403329”]
Well, what does “one plus two plus three, etc.” mean to you? What does the addition of infinitely many terms mean to you, and should we still call that addition, when it is so different in many ways from the more familiar addition of just two things at a time? [Please, do actually answer this question; it’s not merely rhetorical, and I think having a layperson spell out their response to this would be instructive] …
[/quote]
My layman’s response:
Since it is physically impossible to add infinitely many terms, any sum is meaningless. It is fine to call it addition – impossible addition.
Ah, 2007! Life was sweet; Fannie Mae stock was a popular investment; Americans were interested in higher math.
But now, I must ask the Mods to quickly close and hide this thread.
Otherwise 1+2+3+4+… = -1/12 will soon appear on YouTube in proofs that the Federal Reserve’s quest to print an infinite amount of fiat money will soon end in disaster … or at least a twelfth of a disaster.
And BTW: If you guys could convince us that
1+2+3+4+… = -1/12
why could you never quite manage
.99999… = 1 ?
The latter sure seems more plausible to me.
Your two statements use different values of “us”.

[quote]
My layman’s response:
Since it is physically impossible to add infinitely many terms, any sum is meaningless. It is fine to call it addition – impossible addition.
[/quote]
Question: why should a layman’s response be of any value in a technical discussion?
It’s not, in this case. In mathematics, infinite sums are a commonplace and numerous technical solutions exist for the appropriate contexts. Not understanding this simply cuts you out of all math and certainly of all math discussions.
I know this thread is old, but these issues keep reappearing. I just read a new book that is probably the best ever for explaining the concepts behind advanced math like the Riemann Zeta function and much more besides.
Visions of Infinity: The Great Mathematical Problems, by Ian Stewart. He walks the reader through a history of math’s great problems, each chapter building on the others, to show how interrelated math is and how insights from a seemingly distant branch of math can be used to answer long-standing problems.
It is not light reading. Even though it’s written for a lay reader with some, but only early-college-level, knowledge of math, I stopped trying to follow closely halfway through and let the concepts and his connections wash over me. But I’ve read entire popular books on subjects like the zeta function, and he did more in a chapter to help me understand it than any of those books did. Same with the recent proof of Fermat’s Last Theorem. He’s very good.
Getting back to the original question (er, five years after the fact, apparently), I don’t think I can do a better job of explaining the issue than the previous posters without going into complex analysis, but here’s another shot at it. Consider the factorial function n! = n*(n-1)*…*1. We can define it inductively by setting n! = n * (n - 1)! for n > 0 and 0! = 1. That’s great for integers, but suppose I want to make some sense out of, say, (1/2)!. (“Some sense” is vague here. The motivation for considering 1 + 2 + … = -1/12 comes from renormalization in quantum field theory; as a simple series, it clearly diverges.) One way of doing so is to use the gamma function Γ(z), which fortuitously satisfies Γ(z+1) = zΓ(z) and Γ(1) = 1. That means that Γ(n+1) = n! for all integers n >= 0, and we can take z! = Γ(z+1) for arbitrary z. (This definition is a bit arbitrary, but there’s no reason why it shouldn’t be.)
For z with positive real part, we can define Γ(z) by the integral on that wikipedia page, and integration by parts gives us the Γ(z+1) = zΓ(z) formula above. Unfortunately, the integral diverges for other z. Now we need two facts from complex analysis: differentiable functions are analytic, and analytic continuation is unique. (A function f is analytic at a point p if it has a convergent, well-defined Taylor series near p. Without getting into the details of analytic continuation, its uniqueness means that if we have two open (connected) domains U ⊂ V, then any analytic functions f, g with f = g on U also have f = g on V. In other words, there’s at most one way of extending f from U across all of V.)
We can thus use the relation Γ(z+1) = zΓ(z) for Re(z) > 0 to define Γ for Re(z) > -1, Re(z) > -2, etc., eventually getting a definition of Γ that works for the entire complex plane, except for poles at z = 0, -1, -2, … (To see why, put z = 0 in the relation above.) Using that, we can say, in some vague sense, that for example (-1/2)! = Γ(1/2) = √π.
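Conveniently, Python’s standard library exposes exactly this continued Γ for real arguments, so these values can be checked directly (a small illustration; the fact helper is made up):

[code]
import math

def fact(z):
    """z! for arbitrary real z (not a negative integer), via Gamma."""
    return math.gamma(z + 1)

print(fact(5))             # 120.0, agreeing with the usual 5!
print(fact(0.5))           # 0.8862... = sqrt(pi)/2
print(fact(-0.5))          # 1.7724... = Gamma(1/2) = sqrt(pi)
print(math.sqrt(math.pi))  # 1.7724..., for comparison
[/code]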
Why would we want to bother trying to make sense of z! for arbitrary z? You’ve probably seen a bunch of relations in combinatorics, analysis, etc. that involve factorials. In a lot of cases, these formulas still make sense and still hold if you extend them from integers to complex numbers in general. In some cases, because you’re now dealing with full functions rather than just a series of integers, you can use analysis and other stronger tools to solve your problems.
One example related to renormalization is dimensional regularization. Every single integral ever in quantum field theory diverges, and there’s a kind of black magic called renormalization involved in pretending that they converge and getting useful results out of them. The trick there is to replace 4-dimensional space (3 spatial dimensions + 1 time dimension) with d-dimensional space for arbitrary d, then take the limit as d -> 4. Of course, exactly what d-dimensional space means for d not an integer is unclear, and making sense of these integrals is tricky; but what it usually means in practice is that you compute the integral for integer d, replace terms like d! with Γ(d+1), and expand the result in a power series in 4-d.
Returning to the original question, the treatment for that sum is pretty much the same, just replacing Γ with the Riemann zeta function ζ. It’s also used for renormalization, in fact.
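A quick illustrative check of that value, assuming the third-party mpmath library (whose zeta implements the analytic continuation):

[code]
from mpmath import zeta

# zeta(s) = 1^-s + 2^-s + 3^-s + ... for Re(s) > 1, continued to s = -1,
# which is the regularized value of 1 + 2 + 3 + 4 + ...
print(zeta(-1))   # -0.0833333333333333 = -1/12
[/code]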
[quote]
Question: why should a layman’s response be of any value in a technical discussion?
[/quote]
Exactly. This is something that requires a technical explanation. There’s nothing inherently magical about that explanation; anyone can read through the relevant textbooks, wikipedia pages, etc. and follow it with sufficient work. But as with most bits of theoretical math, there just is no simple, intuitive, layman’s explanation at all. (I’ve run into people who claim that everything can be explained to a five-year-old, and that if you can’t do so, then you don’t understand it yourself. I’ve never run into any other mathematicians who claim that.)