In case it isn’t clear, I of course do want to call -1 a number. But every reason to consider -1 a number is also a reason to consider ∞ a number, and conversely, every objection raised to ∞ as a number could just as well be raised to -1 as a number.
So is ∞. In fact, -1 is part of many different useful numerical systems, and so is ∞. Most notably, ∞ is part of both the affinely and projectively extended number systems, and likewise appears among the transfinite cardinal numbers.
You can, for instance, have an infinite slope because slope is the ratio of rise to run. Without infinities, slope calculations (like composing billiards shots or plotting space junk trajectories) would be a pain in the ass.
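Incidentally, ordinary floating-point arithmetic already accommodates this. A minimal Python sketch (the slope helper is just an illustrative name of my own, not any standard function):

```python
import math

# Slope as rise over run, using the affinely extended reals:
# a vertical segment gets slope +inf or -inf instead of being an error case.
def slope(rise: float, run: float) -> float:
    if run == 0.0:
        # Vertical line: infinite slope, signed by the direction of the rise.
        return math.copysign(math.inf, rise)
    return rise / run

print(slope(3.0, 4.0))   # 0.75
print(slope(2.0, 0.0))   # inf
print(slope(-2.0, 0.0))  # -inf
```

(The projectively extended reals would instead identify +∞ and -∞ as a single slope for vertical lines, which is arguably the more natural choice for billiards.)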
Right. Or at least, most of the time, this is indeed how we use the term “area” (though sometimes it is not; cf. discussions of integration as related to area, where area on one side of an axis is counted with opposite sign from area on the other), and similarly for “length” and so on.
Well, that would be precisely the context where we would be inclined to count area as negative…
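To make the signed-area point concrete, here is a quick numerical check in Python: over a full period of sine, the part above the axis (area 2) cancels the part below (area 2), so the signed total is about 0 while the unsigned total is about 4:

```python
import math

# Midpoint Riemann sums for sin(x) over [0, 2*pi].
n = 100_000
dx = 2 * math.pi / n
signed = sum(math.sin((i + 0.5) * dx) * dx for i in range(n))
unsigned = sum(abs(math.sin((i + 0.5) * dx)) * dx for i in range(n))
print(signed)    # approximately 0
print(unsigned)  # approximately 4
```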
I don’t understand the point of this statement. What’s wrong with considering a line just a particular degenerate case of a rectangle? That’s common enough in mathematics.
Here is an example of the sort of thing people will insist you must say when dealing with ∞:
"You said here that the integral from 0 to ∞ of 2[sup]-x[/sup] dx is equal to the difference in evaluating the antiderivative -2[sup]-x[/sup]/ln(2) at x = ∞ and at x = 0, and then plugged in -2[sup]-∞[/sup]/ln(2) - -2[sup]0[/sup]/ln(2) = (1 - 1/2[sup]∞[/sup])/ln(2) = (1 - 1/∞)/ln(2) = (1 - 0)/ln(2) = 1/ln(2).
But this is not correct! ∞ is not a number, so this reasoning is erroneous! You must never think of ∞ as a number, only as a limit.
Instead, the correct answer is this:
The integral from 0 to ∞ of 2[sup]-x[/sup] dx really means the limit of the integral from 0 to r of 2[sup]-x[/sup] dx as r grows unboundedly large. This integral is equal to the difference in evaluating the antiderivative -2[sup]-x[/sup]/ln(2) at x = r and at x = 0, which comes out to -2[sup]-r[/sup]/ln(2) - -2[sup]0[/sup]/ln(2) = (1 - 1/2[sup]r[/sup])/ln(2). In the limit as r grows unboundedly large, 2[sup]r[/sup] also grows unboundedly large, and thus 1/2[sup]r[/sup] approaches a limit of zero, and so this overall approaches a limit of (1 - 0)/ln(2) = 1/ln(2). Thus, we see that the actual answer is… 1/ln(2)."
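For what it's worth, the two phrasings compute the same thing, and you can watch the convergence numerically; a quick Python check:

```python
import math

# The partial result (1 - 2**-r)/ln(2) for growing r, versus the
# "plug in ∞" answer 1/ln(2).
target = 1 / math.log(2)
for r in (1, 10, 100):
    partial = (1 - 2.0 ** -r) / math.log(2)
    print(r, partial)
print("target:", target)  # 1/ln(2), about 1.4427
```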
Here is the sort of thing the ancients would do in solving a problem like x^2 + 10x + 5 = 6x + 10:
“We may first subtract 6x from both sides to get x^2 + 4x + 5 = 10. We may then subtract 5 from both sides to get x^2 + 4x = 5. At this point, we can no longer subtract any further from both sides. Now, recall we have a number of different quadratic formulae, depending on which side of the = sign the terms of different degrees land. In this case, we will have to use the quadratic formula for the constant term being on one side and the other two terms being on the other side. The general solution to this particular kind of quadratic equation, x^2 + bx = c, is given by x = (sqrt(b^2 + 4c) - b)/2, so we will get that the unique actual solution is x = 1. Had we moved the bx to the other side (giving the case x^2 = bx + c), we would land instead in a different case of quadratic, with solution given by x = (sqrt(b^2 + 4c) + b)/2, yielding instead x = 5, and so we might say in some sense there is a phantom ‘opposite-solution’ of x = 5 to our problem, though this does not correspond to any full-on actual solution.”
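The arithmetic in that passage checks out; here is the example run through both case-formulas in Python (b = 4, c = 5 as above):

```python
import math

b, c = 4, 5  # from x^2 + 4x = 5

# Case "x^2 + bx = c": x = (sqrt(b^2 + 4c) - b)/2
x1 = (math.sqrt(b * b + 4 * c) - b) / 2
# Case "x^2 = bx + c" (the bx moved to the other side): x = (sqrt(b^2 + 4c) + b)/2
x2 = (math.sqrt(b * b + 4 * c) + b) / 2

print(x1)  # 1.0, the actual solution: 1 + 4 = 5
print(x2)  # 5.0, the phantom opposite-solution: 25 = 20 + 5
```

In modern signed terms, that phantom x = 5 of the mirrored case is just the root x = -5 of x^2 + 4x - 5 = 0.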
I’m not making this up merely as an illustrative story; this is actually how al-Khwarizmi discussed quadratic equations in the monumentally influential “Liber Algebræ…” from which we get the term “algebra” (and “algorithm”, for that matter): six different cases of quadratic equations handled separately, based on which side of the equation each quantity landed when all were kept positive, with only positive solutions directly considered.
Negatives weren’t construed as numbers; they arose simply in comparisons, as a relation between the two positive sides of an equation (either side capable of being larger than the other), rather than as two signs of number which could always be brought over to one side.
Eventually, however, we saw the value in reifying all the rules of reasoning about arbitrarily oriented differences between quantities on different sides into rules of arithmetic for signed quantities. It was the exact same reasoning as we ever carried out circuitously, but now viewed and described less awkwardly. All that blather about differences and sides of equations and so on WAS the rules of arithmetic for signed quantities, not some reason to reject signed quantities from numberhood.
And similarly, going back to the ∞ example above: all the blather about limits and asymptotes and so on IS the rules of arithmetic for ∞s, not some reason to reject ∞ from numberhood. The student who originally phrased their reasoning without explicitly saying “limit” all over the place didn’t make any mistakes; they have internalized such limit-based reasoning into their calculations with the number ∞, just as the student who uses the arithmetic of signed quantities isn’t thereby making any mistakes, but has simply internalized appropriate rules into calculations that previously would be spelt out more cumbersomely.
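This internalization is so thoroughly standard, in fact, that it is baked into ordinary floating-point arithmetic; Python's floats implement a chunk of the affinely extended rules directly:

```python
import math

inf = math.inf
print(inf + 1 == inf)          # True: ∞ + 1 = ∞
print(1 / inf)                 # 0.0: 1/∞ = 0
print(2.0 ** -inf)             # 0.0: 2^-∞ = 0, the step from the integral example
print(math.isnan(inf - inf))   # True: ∞ - ∞ stays indeterminate, as it should
```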