Cantor infinities and zeros

Georg Cantor established that there are many levels of infinity (e.g. his diagonal proof), showing that there are in some absolute sense more real numbers than integers.

So should we consider that there is more than one value of ‘zero’ as a reciprocal?

There are hyperreal numbers, where the reciprocal of an infinite number is an infinitesimal, not zero.

Don’t click on that link if you’re high.

If we’re limiting ourselves to the real numbers and/or integers, infinity is not a number, and so we can’t just divide by it. We can take the limit of 1/x for x going to infinity, in which case the result is strictly zero (and it doesn’t matter if it’s the infinity of the reals or integers).

As ricksummon mentioned, there are various extensions to the reals that allow for a non-zero reciprocal of an infinite number. But then, the number is not zero; it’s an infinitesimal. There is still only exactly one zero, and dividing by it is undefined.

There is also the real projective line, which does allow dividing by zero, but does not have infinitesimals. It just has a single infinity that’s the result of any \frac{x}{0} for x \neq 0. You also lose some nice properties of the reals.

Getting back to the hyperreals: yes, we can treat the infinity of the integers and the infinity of the reals as having distinct inverses, and applying the “standard part” function (the function which discards the infinitesimal portion) gives a proper 0 in both those cases. But again, there’s still just one 0.

If I might make a request: due to various things I never made it past half a semester of trigonometry (though I like to think I could pass now), could somebody please explain this thread in small words?

One way to think of “infinity” is that if you divide 1 by infinity you get 0. But if there is more than one infinity, is there more than one 0? And the answer appears to be “not really”: there are systems that treat the results of dividing by different infinities as different, but those results are also not 0.

I admit I might have misunderstood though. I’ve made it well past a semester of trigonometry, but they are very big words.

It is a category error to confuse the cardinal infinities, the ordinal infinities, and the non-standard infinities. Here is a model for non-standard reals. It is not the best model for calculus, for reasons I will not explain, but it is a good model for thinking about what non-standard numbers might be. This will be for positive reals; you can just add 0 and negative numbers afterwards.

Say that a sequence a_0,a_1,a_2,... converges if it either converges in the usual sense or if for every integer N there is an integer n such that a_i>N whenever i>n; in the latter case we say it is an infinite sequence. Say that two sequences are equal if they are eventually the same. That is, \{a_i\}=\{b_i\} if there is an integer n such that a_i=b_i whenever i>n.

An ordinary real number is just a constant sequence. An infinitesimal is a sequence that converges to 0, and the inverse of an infinitesimal is an infinite number, as described above. You add, conditionally subtract (a smaller from a larger), multiply, and divide termwise. There will be many infinitesimals and just as many infinite numbers. For example, the sequence 1,1/2,1/3,1/4,... is much larger than 1,1/2,1/4,1/8,...; in fact it is infinitely larger, since their quotient 1,1,4/3,8/4,16/5,32/6,... is infinite.

Subtraction is a bit tricky since one sequence may start out with a few large terms and then get smaller. In that case, just put in anything at all for those terms since a finite number of terms don’t matter.

A real construction of non-standard numbers would use all sequences modulo an ultrafilter which is too complicated to explain here.
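The termwise arithmetic described above is easy to check numerically. Here is a quick sketch of mine (not from the thread) comparing the two infinitesimal sequences 1/n and 1/2^{n-1}, computed exactly with Python’s fractions module; their termwise quotient grows without bound, which is what makes one “infinitely larger” than the other in this model:

```python
from fractions import Fraction

# a_n = 1/n and b_n = 1/2^(n-1), the two infinitesimal sequences from the post,
# computed exactly as rational numbers.
a = [Fraction(1, n) for n in range(1, 8)]
b = [Fraction(1, 2 ** (n - 1)) for n in range(1, 8)]

# Their termwise quotient a_n / b_n = 2^(n-1) / n grows without bound,
# so the sequence a is "infinitely larger" than b in this model.
quotient = [x / y for x, y in zip(a, b)]
print([str(q) for q in quotient])  # ['1', '1', '4/3', '2', '16/5', '16/3', '64/7']
```

The quotients 1, 1, 4/3, 2, 16/5, 16/3, 64/7, ... exceed any integer eventually, so the quotient is an infinite sequence in the sense defined above.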

I can’t give it a full explanation right now, but let me start with some of the basics.

Math has the notion of a number system, which is basically the name we give to a certain set of numbers and the rules we’re allowed to operate on them with.

The integers are one number system. These are the numbers -3, 0, 5, 1000000, etc. Note that when you add or subtract two integers, you always get another integer: 1000000+5=1000005. Same if you multiply two integers. But if you divide some numbers, you might get something else. 1 divided by 3 isn’t an integer, so we say the result is undefined within the integers.

The “real numbers” are another system. Don’t read too much into the word “real”; it’s just a name. The real numbers include all of the fractions, like 1/3, as well as numbers like pi. It’s a really “big” system.

There are all kinds of other number systems. Mathematicians are inventive! There’s one that sits between the integers and the reals, called the “rational numbers”. Not rational meaning logical, but meaning “ratios”. So 1/3 is a ratio between 1 and 3, and is thus a rational. But pi is not a rational, because there’s no fraction that makes pi. You can get close, like 22/7 or 355/113, but never exactly. You need the real numbers to be exact.
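To see just how close (but never exact) those classic fractions get, here’s a quick numeric check of mine, not part of the original post:

```python
import math

# The classic rational approximations to pi mentioned above.
# 22/7 is good to about 3 decimal places; 355/113 to about 6.
for p, q in [(22, 7), (355, 113)]:
    approx = p / q
    print(f"{p}/{q} = {approx:.10f}, error = {abs(approx - math.pi):.2e}")
```

Both errors are small but nonzero, which is the point: no ratio of integers lands on pi exactly.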

In all of the normal number systems, dividing by 0 is not allowed. It’s just undefined, because it produces a result not in the system. Maybe we say it’s infinity… but infinity isn’t a number in the integers or the reals. So the result doesn’t count and we just say it’s “undefined”.

But again, mathematicians are inventive. And some have invented number systems that contain infinity. Some of them allow dividing by zero. Others allow dividing by infinity for a result that’s not quite zero, but smaller than any real number. 0.0001? Smaller than that. 0.0000001? Smaller than that, too. Add in a million extra zeroes, and it’s still smaller. We call those infinitesimals.

It can be hard to answer questions like this because you can always try adopting a new rule, and seeing what happens. You can allow dividing by 0, but then some other things might break. Like, are 1/0 and 2/0 different? If so, then you have two infinities… or infinity infinities, since then you also have 3/0 and so on. You always lose something when you bend your previous rules.

Here are a couple resources to give some background about what the OP is talking about with respect to Cantor and levels of infinity. (The first, in particular, is a very good introduction to these ideas, while the second has articles discussing various individual subtopics.)

I myself am not entirely sure I understand the OP’s question; but I think @naita got the gist of it:

But Cantor’s “more than one infinity” are really more than one size of infinite sets—more than one infinite cardinal number. And as far as I know, we don’t really divide by cardinal numbers, so it seemed to me that the question involved a sort of category error. Then I saw in the very next post that @Hari_Seldon (who is one of the most mathematically knowledgeable of Dopers) said


Thanks, everybody.

I have always been deeply suspicious of infinitesimals. Sure, it’s part of the math game to create new ‘objects’ with operator rules and then reason about them.

But infinitesimals were supposed to put calculus on a firm logical footing, as I understand it. Let’s go back to calc 101, where we find the derivative of x^2. We add a small dx (usually written as a delta, but we’re in plain text here).

So the new value is (x^2 + 2xdx + dx^2) and the difference is (2xdx + dx^2). So the ‘slope’ is (2xdx + dx^2) / dx, or (2x + dx).

Then of course we take the limit as dx goes to 0. But what happens to that extra dx? At what point does it become an ‘infinitesimal’?

There’s a lot of set theory machinery thrown around in this area, but I think I recognize sleight-of-handwaving when I see it.
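To make the worry concrete, the difference quotient from the post can be watched numerically as dx shrinks; the “extra dx” in 2x + dx visibly fades away. A small sketch of mine (Python, with hypothetical names):

```python
def slope(x, dx):
    """Difference quotient for f(x) = x**2: ((x + dx)**2 - x**2) / dx = 2x + dx."""
    return ((x + dx) ** 2 - x ** 2) / dx

x = 3.0
for dx in [0.1, 0.01, 0.001]:
    # Approximately 6.1, 6.01, 6.001 (up to float noise): 2x plus the extra dx
    print(dx, slope(x, dx))
```

The limit definition makes “the extra dx fades away” precise without ever needing dx to be an infinitesimal.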

Who said calculus was not on a firm logical footing? Though it may not have been completely so by modern standards when the idea of infinitesimal quantities was first floated in ancient times.

Which textbook are you looking at, and how does it define the derivative?

In any case, if dx is infinitesimal then it is not necessary to take another limit because 2x + dx is already infinitesimally close to a real number, which will be the value of your derivative.

If by “machinery” (or sleight-of-hand :slight_smile:) you mean things like Dedekind cuts, ultrafilters, formal power series, and algebra, then that is the opposite of sleight of hand; it is the logical footing you are looking for.

Your understanding is incorrect. Infinitesimals were used in the early days of calculus (i.e. the days of Newton and Leibniz), before it was put on a firm logical footing.

To put things on a firm logical footing, later generations (Cauchy, Weierstrass, etc.) came along and reformulated things in terms of limits.

Alternatively, infinitesimals can be defined and used in a logically sound way. This is “nonstandard analysis,” as developed by Abraham Robinson in the 1960s.

If you’re talking about the way calculus is taught nowadays, as opposed to the days of Newton and Leibniz: it doesn’t “become an infinitesimal.” It simply becomes zero, in a way that can be made precise and logical based on the way limits are defined.

That’s an interesting point. Did Newton or Leibniz actually use the concept of infinitesimals? I’ll have to go and research that, and the Robinson ideas too.
It does seem that early calculus theory didn’t use the concept of limits the way it is taught today.

I am just repeating what others have said, but originally calculus was done with something like infinitesimals, and did not have a firm logical basis. This was supplied by Cauchy and others with their theory of limits. For the derivative of x^2 for example, this was
\lim_{h\to0}\frac{(x+h)^2-x^2}h=\lim_{h\to0}2x+h=2x, no chicanery.
If you prefer to write dx for h go right ahead.

Now what Abraham Robinson did was to define a rigorous theory of extended real numbers in which every finite real number was the sum of an infinitesimal and an “ordinary” real number (which concept was well-defined) and he defined the derivative of a function f as the ordinary part of \frac{f(x+h)-f(x)}h for an infinitesimal h provided that ordinary part didn’t depend on h. Obviously for the x^2 function it doesn’t. Incidentally, he also defined f to be continuous if for all infinitesimal h, f(x+h)-f(x) is infinitesimal.
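Spelling out the x^2 case (my own unpacking of Robinson’s definition above): with h infinitesimal,
\frac{(x+h)^2-x^2}h=\frac{2xh+h^2}h=2x+h,
and the ordinary part of 2x+h is 2x, whichever infinitesimal h was chosen; that ordinary part is the derivative.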

I must say that in my el cheapo example of a kind of infinitesimals, I erred in the equality relation. It should have been the reflexive, symmetric, transitive closure of the relation I defined, which is to say that truncating a sequence by lopping off a finite number of terms doesn’t change it as an extended real number.

I’ve always felt the opposite. Infinitesimals always felt like the right way of doing things. They were clearly intuitive since both Newton and Leibniz happened upon them. But even aside from that they feel very natural to me.

The epsilon-delta limit approach, on the other hand, just feels like a hacky workaround. It’s just a human way of solving a problem that we don’t have a more direct approach for.

The universe needs derivatives; they show up all the time in physical law. Surely the universe has nothing to do with limits. It needs an answer for the velocity of a particle at some place and isn’t going to compute it as the limit of position differences.

No, clearly whatever number system the universe runs on allows it to determine these things directly. There is extra knowledge attached to each number somehow, and the infinitesimals would be one place to put it.

Eh. Applying the hyperreals to calculus really only requires two ingredients, a difficult one and an easy one. The difficult one is the transfer principle. I can’t begin to explain the proof, but in short, it says that every statement (suitably expressed) that can be made about the reals can also be made about the hyperreals. In effect, the hyperreals are a fully-fledged number system that we can work with exactly as we had been with the reals.

The easy one is the standard-part function, which strips away the infinitesimal stuff and gives just the real-number part. That’s how you extract your answer after doing your work in the hyperreals.

It’s all pretty straightforward. And you never have to accept the strange premise that the limit of some process is really the same thing as the process evaluated at that value.

I’ve suspected that the delta-epsilon definition for limits would be easier to understand if students started by studying limits of infinite sequences instead.

To say that the limit of a sequence is, say, 7 (i.e. that it converges to 7) means that there’s some point in the sequence where all the terms after that point differ from 7 by less than 0.01. And there’s some point in the sequence (not necessarily the same point) where all the terms past that point are within 0.001 of 7. And this statement remains true if you replace 0.01 or 0.001 with any small but positive number (any \epsilon > 0).

This strikes me as being relatively easy to understand intuitively and relatively easy to formulate mathematically—and then not too tough to adapt to other kinds of limits, like those of functions of real numbers.
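The definition can even be checked mechanically for a concrete sequence. A toy example of mine (all names hypothetical): take a_n = 7 + 1/n, which converges to 7, and for each ε find an index past which every term is within ε of the limit:

```python
# For a_n = 7 + 1/n, |a_n - 7| = 1/n, so the terms are within eps of the
# limit 7 exactly once 1/n < eps.
def index_for(eps):
    """Smallest n such that |a_i - 7| < eps for all i >= n, where a_i = 7 + 1/i."""
    n = 1
    while 1 / n >= eps:
        n += 1
    return n

print(index_for(0.01))   # 101
print(index_for(0.001))  # 1001
```

Whatever small ε > 0 you pick, such an index exists, and that is the whole content of “the sequence converges to 7.”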

If you are getting at quantities that are “really, really small”, then I am not sure why you say that one approach is a hacky workaround but the other is not; these seem like ultimately equivalent ways of describing the same thing.

Nevertheless, I am not sure about hyperreal numbers, but it seems you cannot get too far in geometry without at least considering the possibility of extending your coordinates with things like “dual numbers” a+b\varepsilon where \varepsilon^2=0, especially if you want/need to work more algebraically and avoid explicit limits.
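As an illustration of how dual numbers let you avoid explicit limits, here is a minimal sketch of mine (not from the thread): represent a+b\varepsilon as a pair and impose \varepsilon^2=0 in the multiplication rule; the ε coefficient of f(x+\varepsilon) then comes out as exactly f'(x).

```python
class Dual:
    """A dual number a + b*eps, where eps**2 = 0."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + b1*a2)*eps, since eps**2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

# Differentiate f(x) = x**2 at x = 3 by evaluating at 3 + eps:
y = Dual(3.0, 1.0) * Dual(3.0, 1.0)
print(y.a, y.b)  # 9.0 6.0 -- the eps coefficient is the derivative 2x
```

No limit is taken anywhere: the ε² = 0 rule does algebraically what the limit does analytically.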

Sure, I agree that there are probably better ways to teach limits. And I obviously don’t think there’s anything fundamentally wrong with limits. It’s just a question of naturalness. Limits address the questions of calculus by treating them as approximations which we can make arbitrarily good. Infinitesimals put extra information at each number.

I don’t disagree. After all, they give the same answers to the same problems. It’s only a question of the “feel”. You can think of the hyperreals as providing an “infinite zoom” that allows you to see what’s happening at a scale guaranteed to be smaller than any real number can represent. For limits, it’s always a finite zoom; one that’s never quite good enough (except in the limit).