If one properly defines “posts” there will be exactly -1/12.
The computer architecture is a consequence of the architecture of the mathematics the computer is modeling.
No, I’m with friedo on this one. Considered as mathematical objects (as opposed to computer objects), the way numbers are represented inside the computer isn’t relevant to what those numbers actually are.
Yeah, I’m having trouble coming up with a definition of R that is not satisfied by the subset of C with imaginary part equal to zero.
There certainly is a subset of the complex numbers which is isomorphic, in terms of its finitary additive structure, to the real numbers. Whether that isomorphism is actual identity or something else is the sort of question I am inclined to consider largely mathematically meaningless.
Regardless, there are plenty of additive structures in mathematics which do not fit neatly into some linear inclusion tower; even if you were inclined to think of integers, rationals, and reals as all just special kinds of complex numbers, there are also such things as integers modulo n, p-adic numbers for various p, vectors within spaces of various dimensions, strings under concatenation, subsets under inclusive OR, abelian groups up to isomorphism under direct sum, oriented knots under connected sum, and on and on we go, none of these being natural to view as living together with complex numbers as subtypes of some universal datatype on which summation is defined once and for all, never to be generalized.
Well, supposing you define the ordinals as transitive hereditarily well-founded sets of sets of sets… (with appropriate notion of ordering, addition, and multiplication), and the natural numbers as finite ordinals, and the ordered pair (a, b) as the set of sets {{a}, {a, b}}, and the integers as the equivalence classes of the ordered pairs of naturals modulo the relation (a, b) = (c, d) iff a + d = b + c (with appropriate notion of ordering, addition, and multiplication), and the rationals as the equivalence classes of the ordered pairs of an integer and a nonzero integer modulo the relation (a, b) = (c, d) iff a * d = b * c (with appropriate notion of…), and the reals as ordered pairs of inhabited sets of rationals (L, R) such that L consists of precisely the lower bounds to R and R consists of precisely the upper bounds to L, and the complex numbers as ordered pairs of reals…
Well, then, you will find that your definition of the reals is as particular sets of sets of sets…, that your definition of the complex numbers is also as particular sets of sets of sets…, and that your definitions ensure that no real number is also a complex number. The complex numbers with zero second component would be distinct from, although isomorphic to, the real numbers.
But surely no one would be so misguided as to think this way…
Are you sure? Because some of those spaces of “numbers” so defined do have some overlap: For instance, the ordinal 1 is equal to the ordered pair (0,0).
Which also means that you have to take care to specify the types of the input to your + operator, as with the computer.
Yes, I made sure: On the definitions of that post, a real number is an ordered pair of countably infinite sets, while a complex number is an ordered pair of 2-element sets.
I call BS on the demonstration.
Even the 1 - 1 + 1 - 1 + 1 - 1 + … example is more of a simplification than a sum; that value, 1/2, is never an actual result.
The sum of the natural numbers, as many have said, never reaches -1/12; it cannot reach that number in any sense. If you have to invoke Riemann or Cesàro for a series that can be handled with a high-school formula, then it’s a high-level parlor trick, a fancy 1 = 0.
What is the high school formula that solves the sum of the natural numbers? I know of no high school formula that defines infinity.
I read what Indistinguishable posted, but had a hard time following some of it. I haven’t done limits since Calculus in high school and was hoping someone could explain it in a simple way like in the video. Obviously, mathematicians are a lot more knowledgable than me on this topic, so I must be wrong somewhere.
(But c’mon, admit it, this is just an early April Fools’ prank by mathematicians, right!?)
Keep in mind that the whole point of this demonstration is that, unlike finite arithmetic, infinite sums admit many different techniques that in some cases agree and in others give wildly different answers. Which of these answers is “correct” is context-dependent. Obviously the -1/12 result is unintuitive and makes no sense from the perspective of finite arithmetic. But that’s just the result of applying one summation method to the series of natural numbers. And as pointed out above, these types of sums are used in physics (although that stuff is way beyond my knowledge of the subject, which is pretty limited to early undergrad-level stuff from a billion years ago).
Following up on that, let me make one thing clear if it hasn’t yet been made clear in this thread: there’s nothing wrong with saying “1 + 2 + 3 + 4 + … sums up to positive infinity”. That’s a perfectly fine, intuitive, ubiquitously useful notion of summation to consider. But it’s not the only notion of summation we can consider, and my only goal has been to illustrate some of the other notions of summation available to think about.
I was dismayed when I saw in the Numberphile video that the poor cameraman who is inclined to think of the sum as infinite is portrayed as in foolish error; he is not in error. He is just talking about a different sense of summation than the one on which 1 + 2 + 3 + 4 + … = -1/12.
And similarly, Monocracy, you are not in error if you note that there is an obvious sense in which 1 + 2 + 3 + 4 + … is positively infinite. That is correct. There just happen to also be other senses of summation which we can think about; the only error would be to deny the exploration of these other notions of summation, or the resemblances connecting them to the whole web of notions already thought of as deserving the name “summation”.
Amen.
∑a[sub]i[/sub] (i = 1 to n) = (n/2)(a[sub]1[/sub] + a[sub]n[/sub])
Without even having to define infinity, it is evident that as n gets higher, the sum gets bigger.
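The high-school formula above is easy to sanity-check in code (a trivial sketch in Python; the cutoff of n = 100 is arbitrary):

```python
def arith_sum(n):
    # High-school formula for 1 + 2 + ... + n: (n/2)(first + last) = n(n+1)/2
    return n * (n + 1) // 2

# Matches the brute-force sum, and clearly grows without bound as n grows
assert arith_sum(100) == sum(range(1, 101))
print(arith_sum(100))   # 5050
```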
**Indistinguishable** has a very good explanation that covers the “obvious” answer, i.e. infinite, and the more technical, less obvious possible answers.
Indistinguishable’s more technical answers also explain why the use of “evident” in your statement is as wrong as the notion that one does not always have to define everything in math. High school algebra does not “define” infinite sums; it assumes the “obvious.”
That’s the point that so many people in this thread have been trying to make. I think you have reached that point, but your first sentence allows room for doubt.
According to the Prince of Mathematics, Carl Friedrich Gauss, the summation from 1 to n is
n(n+1)/2
So, if
n(n+1)/2 = -1/12
does n = some new fundamental constant?
You’re asking what ∞/2 is. That’s not really a meaningful expression.
Enola Straight, here’s a generalization of the result you mention:
Consider the function D(n, p, x) defined recursively by the base case D(0, p, x) = x[sup]p[/sup] and the recurrence relation D(n + 1, p, x) = D(n, p, x + 1) - D(n, p, x). [Note that, if p is a natural number, then D(n, p, x) is a polynomial function of x of degree p - n, becoming constantly zero once n > p]
Next, define B(p, x) as the alternating sum D(0, p, x)/1 - D(1, p, x)/2 + D(2, p, x)/3 - D(3, p, x)/4 + … [Again, note that if p is a natural number, then B(p, x) is a polynomial function of x of degree p, the terms in this series becoming constantly zero after the first p + 1 of them]
Then we have that the sum of x[sup]p[/sup] as x ranges through the interval [a, b) is (B(p + 1, b) - B(p + 1, a))/(p + 1).
This gives us a general formula for sums of p-th powers, for any exponent p other than -1.
In particular, when p < -1 so that the infinite series 1[sup]p[/sup] + 2[sup]p[/sup] + 3[sup]p[/sup] + … actually converges in the standard sense, we find that B(p + 1, x) goes to zero as x goes to ∞, so that this sum comes out to (B(p + 1, ∞) - B(p + 1, 1))/(p + 1) = -B(p + 1, 1)/(p + 1).
If we were to continue using this same logic for p = 1, despite the lack of convergence in the standard sense, then we would conclude that the sum 1 + 2 + 3 + 4 + … = -B(2, 1)/2.
As it happens, B(2, x) = x[sup]2[/sup] - x + 1/6 [apart from the constant term, this is the observation fancifully, though likely apocryphally, attributed to Gauss as a schoolchild]. Thus, -B(2, 1)/2 = -1/12, just as expected.
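The whole construction is small enough to check by machine. Here is a minimal sketch in Python using exact fractions; the function names D and B follow the definitions in the posts above, and the sum in B stops at n = p since later terms vanish for natural p:

```python
from fractions import Fraction

def D(n, p, x):
    """Iterated forward difference: D(0,p,x) = x**p,
    D(n+1,p,x) = D(n,p,x+1) - D(n,p,x)."""
    if n == 0:
        return x ** p
    return D(n - 1, p, x + 1) - D(n - 1, p, x)

def B(p, x):
    """Alternating sum D(0,p,x)/1 - D(1,p,x)/2 + D(2,p,x)/3 - ...;
    for natural p the terms vanish once n > p."""
    return sum(Fraction((-1) ** n, n + 1) * D(n, p, x) for n in range(p + 1))

# B(2, x) = x^2 - x + 1/6, so -B(2, 1)/2 gives the famous value
print(-B(2, 1) / 2)               # -1/12

# Sanity check of the finite-sum formula: sum of x^1 over [1, 5) is 1+2+3+4
print((B(2, 5) - B(2, 1)) / 2)    # 10
```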
Rather than writing
1 - 2 + 3 - 4 + 5 - 6 + 7 - … = 1/4
with no context, it would be clearer to write
limit ( r - 2r^2 + 3r^3 - 4r^4 + 5r^5 - 6r^6 + 7r^7 - …) = 1/4
where the limit is taken as r = .9, .99, .999, .9999, .99999, …
The individual sums are all convergent here; that they are asymptotic to 1/4 seems exciting enough without resorting to any divergent summation.
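That limit is easy to watch numerically (a sketch in Python; the cutoff of 100,000 terms and the particular r values are arbitrary choices, and each truncated series here genuinely converges since |r| < 1):

```python
def damped_sum(r, terms=100_000):
    """Truncation of r - 2r^2 + 3r^3 - 4r^4 + ..., convergent for |r| < 1."""
    return sum((-1) ** (n + 1) * n * r ** n for n in range(1, terms + 1))

for r in (0.9, 0.99, 0.999):
    print(r, damped_sum(r))   # the values approach 1/4 as r -> 1
```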
And those who don’t like 1+2+3+4+… = -1/12 can take comfort from the fact that some Nobel Prize winners weren’t fully happy about it either:
[QUOTE=Richard Feynman]
The shell game that we play … is technically called ‘renormalization’. But no matter how clever the word, it is still what I would call a dippy process! Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. It’s surprising that the theory still hasn’t been proved self-consistent one way or the other by now; I suspect that renormalization is not mathematically legitimate.
[/QUOTE]