A method for counting real numbers?

I wish I’d thought of that.

You have matched the integers to the terminating decimals. Since they are all rational numbers, you have matched the integers to a subset of the rational numbers. Since there is no problem matching the integers to the entire set of rational numbers, that is no feat. Incidentally, you don’t need an ordered list to apply Cantor’s diagonal argument.

Actually, what the OP seems to have done is match digit strings extending leftward from a decimal point (potentially without termination) with digit strings extending rightward from a decimal point (potentially without termination). And, of course, they have done this; indeed, these match up perfectly by mirroring, just as the OP says…
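The mirroring in question can be sketched in Python (the `mirror` helper is invented here for illustration, not anything from the thread): each natural number's digits, read right to left, give a terminating decimal in [0, 1).

```python
def mirror(n: int) -> str:
    """Map a natural number to a terminating decimal in [0, 1)
    by mirroring its digits across the decimal point."""
    return "0." + str(n)[::-1]

# 123 mirrors to 0.321; trailing zeros become leading zeros:
print(mirror(123))   # 0.321
print(mirror(1500))  # 0.0051
```

Note that the image of this map is exactly the terminating decimals: no natural number mirrors to, say, 0.333…, which is where the OP's construction falls short of covering all the reals.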

The entire confusion is in the OP misunderstanding “integer” as including infinite decimal strings carrying on to the left, potentially without termination (see their comment “Doesn’t the integers include infinite numbers? If so, then why isn’t 333… an integer?”). Everything else was fine; they just didn’t know what “integer” meant.

(So, their argument is a perfectly fine one that the “10-adic naturals” map surjectively onto the unit interval…)

Yes. That would mean that the reals are “countable”, and thus have cardinality ℵ₀ (aleph-null)

Another, perhaps more fundamental, problem with what the OP was doing is that he started off by describing a mapping from the integers to the reals. But what he really should have been trying to do is describe a mapping from the reals to the integers.

Also, for what it’s worth, I think it’s a shame Cantor’s argument is so often presented in the context of “real numbers”, which are messy because of the 0.9999… = 1.00000 thing, instead of just directly in terms of digit-sequences, or better yet, functions.

Or even better yet:

The content of the assertion Cantor proved is the same as in the following:

Suppose you have some rule by which, from any two “whatsits”, you can determine a result. (This can all be quite arbitrary, but for the sake of evocative language, let’s refer to this result as the “description” the first whatsit would give of the second whatsit). Suppose furthermore that there’s some method by which, from any description, one can produce a distinct “opposite” description.*

One way of describing whatsits is to always describe them oppositely from how they would describe themselves.

We might ask whether there’s any whatsit that behaves like this. The answer is automatically no, because such a whatsit would have to describe itself oppositely from how it would describe itself.

That is the mathematical substance of Cantor’s diagonal argument and I doubt anyone has grave difficulty wrapping their head around it.

If you like, you can then rephrase this to say “See, this means there are more potential behaviors than there are whatsits, because there’s no way to come up with a rule by which every potential behavior is carried out by at least one whatsit. There are too many potential behaviors and too few whatsits for the latter to cover all of the former, no matter how you try it.”

But I think that rephrasing, entrenched though it is, just engenders more confusion than illumination about what is going on.


*: For example, we could take descriptions to be numbers, the opposite of n to be n + 1 (there’s no requirement that double-opposition cancel out), whatsits to be words, and the number determined by two words to be the length of the first word plus the squared length of the second word. In that case, we’d say “cat” describes “dog” as 12, “cat” describes “house” as 28, “house” describes “cat” as 14, “moon” describes “moon” as 20, and so on.
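The footnote’s example can be made concrete in a short Python sketch (the names `describe`, `opposite`, and `anti` are invented here for illustration): the “anti-diagonal” behavior describes each word oppositely from how it describes itself, and no word can realize that behavior.

```python
def describe(w1: str, w2: str) -> int:
    """The arbitrary rule from the footnote: length of the first
    word plus the squared length of the second."""
    return len(w1) + len(w2) ** 2

def opposite(n: int) -> int:
    """The footnote's 'opposite': n + 1 (always distinct from n;
    double-opposition need not cancel out)."""
    return n + 1

def anti(w: str) -> int:
    """Describe w oppositely from how w would describe itself."""
    return opposite(describe(w, w))

# The footnote's sample values check out:
assert describe("cat", "dog") == 12
assert describe("cat", "house") == 28
assert describe("house", "cat") == 14
assert describe("moon", "moon") == 20

# No word behaves like anti: describe(w, w) would have to equal
# anti(w) = describe(w, w) + 1, which is impossible.
assert all(describe(w, w) != anti(w) for w in ["cat", "dog", "house", "moon"])
```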

No, there’s nothing wrong with that. In fact, the most generalizable presentation of Cantor’s argument is that there’s no surjection from X to D^X [given a function from D to itself with no fixed point]; there are many (non-classical) contexts, however, in which there is an injection from D^X to X.
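For finite sets, the “no surjection from X to D^X” claim can be checked directly. Here is a minimal Python sketch (the names `diagonalize`, `g`, and `flip` are illustrative inventions): given any candidate map g and a fixed-point-free flip on D, the diagonal construction produces a function that g misses.

```python
from itertools import product

# For finite X and D, the functions X -> D (tuples of D-values)
# strictly outnumber X whenever |D| >= 2 and |X| >= 1.
X = [0, 1, 2]
D = [0, 1]
assert len(list(product(D, repeat=len(X)))) == 8  # 2**3 > |X| = 3

def diagonalize(g, flip):
    """Given g: X -> (X -> D) and a fixed-point-free flip on D,
    build a function X -> D that differs from every g(x) at x."""
    return {x: flip(g(x)[x]) for x in X}

# A candidate surjection attempt and the diagonal that escapes it:
g = {0: {0: 0, 1: 0, 2: 0},
     1: {0: 1, 1: 1, 2: 1},
     2: {0: 0, 1: 1, 2: 0}}
flip = lambda d: 1 - d
h = diagonalize(lambda x: g[x], flip)
assert h == {0: 1, 1: 0, 2: 1}
assert all(h != g[x] for x in X)  # h is not in the image of g
```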

eh, ignore this post

That assumes that there are no errors in the proofs that counting real numbers, or trisecting an angle, are impossible.

Granting that mathematicians are 99.999% sure that Cantor’s proof is valid, and 99.99999999999% sure that the trisecting proof is valid, can we ever be 100% sure that we haven’t missed something?

Well, perhaps in some philosophical sense no one can be 100% certain of anything, but what of it? Isn’t “As certain as anyone can ever be of anything” certain enough for Thudlow to have license to say what he said? Surely you can’t expect everyone to explicitly qualify every statement they make with “To the best of my knowledge, barring some potential error in my reasoning”.

I’d rather strike that last line and replace it with “Surely you can’t expect everyone to refrain from ever expressing certainty about anything?”

No, I don’t expect that. But that is quite different from saying that any attempt to prove X is doomed to failure, which is what I was addressing.

Any scientist worth his salt knows that all theories are provisional, and that even Newton was displaced when it turned out that the seemingly safe assumption that time flows at the same rate for everyone turned out to be false.

I realize that a mathematical system built from axioms and definitions is more tractable than a theory describing real-world physical phenomena, but I think the basic principle still holds.

In my own pathetic attempts to master mathematics, I often fall afoul of unwarranted assumptions. Just the other day, I was reading something that seemed to me to be a valid proof, but in the next paragraph, the book said something like, “of course, while plausible, this is not a proof, because it assumes that every finite set of integers has a least member,” which is something that I would not have questioned.

I assume that was obvious to a real mathematician, but I’m just wondering whether more sophisticated mathematicians can be similarly tripped up by more subtle assumptions, and whether very subtle errors can resist the attempts of all mathematicians to find them for centuries, just as the proof of Fermat’s Last Theorem took centuries to discover.

Every finite set of integers does have a least member… are you sure that’s what the book was questioning? [Or perhaps, for whatever reason, in that particular context in the book, the author was interested in proving something without this assumption (for example, perhaps their goal was to prove that “Every finite set of integers has a least member” itself follows from some other plausible facts, rather than more trivially showing that it circularly follows from itself)]

This doesn’t matter too much: there are simple bijections between the open interval (0, 1) and (−∞, ∞). For instance, x′ = tan(π(x − 0.5)). The hard (i.e., impossible) part is going from the integers to [0,1], or indeed any other interval of reals.
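One standard bijection between the open interval (0, 1) and the real line uses the tangent; a minimal Python sketch (function names are illustrative):

```python
import math

def to_line(x: float) -> float:
    """A standard bijection from the open interval (0, 1)
    onto the whole real line."""
    return math.tan(math.pi * (x - 0.5))

def to_unit(y: float) -> float:
    """Its inverse, mapping the real line back into (0, 1)."""
    return math.atan(y) / math.pi + 0.5

# Round-tripping a few values (up to floating-point error):
for x in (0.1, 0.25, 0.5, 0.9):
    assert abs(to_unit(to_line(x)) - x) < 1e-12
```

The endpoints 0 and 1 are excluded; handling the closed interval [0, 1] requires a non-continuous bijection, which is why the open interval is the convenient choice here.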

The point brocks’ book might have been making is that a proof isn’t really complete without also including (at least by reference) a proof of every theorem used in the proof. I’m not sure how difficult the proof is for every finite set of integers having a least member, but maybe the book just didn’t want to digress into that.

Well, the question has no well-defined answer, because it depends on what system one intends to prove this within. There’s no such thing as proof simpliciter; only proof in this system and proof in that system. If one takes it as axiomatic that the integers are totally ordered, one will have a much easier time than if one takes the putative ordering to be defined only recursively by a <= a and a <= b implies a <= b + 1, for example.

Though, I did happen to realize one other objection which might be raised: I was wrong to state that every finite set of integers has a least member. Rather, every inhabited finite set of integers has a least member; the empty set, of course, has none.
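As a trivial illustration of the inhabited-set caveat, Python’s built-in `min` behaves the same way: it finds the least member of any nonempty finite set of integers, but refuses the empty set.

```python
# A nonempty finite set of integers always has a least member:
assert min({3, 1, 2}) == 1

# The empty set has none, and min raises rather than answer:
try:
    min(set())
    raised = False
except ValueError:
    raised = True
assert raised
```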

My misconception is a set of infinite numbers doesn’t include infinitely long numbers. How counter-intuitive. Then again, a number’s value is always greater than its number of digits (except 1 and 2), so the value of an infinitely long number is … greater than infinity? Thanks everyone for your replies.

For what it’s worth, which may help dissolve the apparent counter-intuitiveness: The integers aren’t “a set of infinite numbers”; they’re an infinite set of finite numbers. That is, every particular integer is finite, but there are infinitely many integers. [Just like every fraction between 0 and 1 is finite, but there are infinitely many fractions between 0 and 1]

[Also, I think most people would say 2’s value is larger than its number of digits, but pointing this out is just pedantry on my part… :)]

Oh, how did 2 sneak in? I shouldn’t post from a car.

No, I got it wrong, to my utter lack of surprise. The correct statement should have been, “We assumed that every nonempty subset of N contains a least element.”

N is the set of natural numbers, i.e. {1, 2, 3, …}. I confess that the two statements seem equally obvious to me, which is probably why I misremembered it.

If you google the quoted statement verbatim (including the quotes), you will get one hit from Google Books, which will show you the context, if you’re interested.