So it’s not even really known whether the continuum has the cardinality 2^aleph-0? I thought that part was definitively established.
I’m afraid ricksummon is correct and ultrafilter is not; even without choice, ZF can prove that the cardinality of the reals equals that of the powerset of N. First, note that the powerset of N injects into the reals (represent a subset of N as a sequence of "0"s and "2"s to obtain the ternary representation of some real in [0, 1]); second, note that the reals inject into the powerset of N = the powerset of the rationals (represent a real as the set of rationals less than it). Then, you apply the Cantor-Schroeder-Bernstein theorem to conclude that the two are in bijection.
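If it helps to see those two injections concretely, here’s a quick Python sketch, with finite truncations standing in for the infinite objects (the function names and cutoffs are mine, purely for illustration):

```python
from fractions import Fraction

def subset_to_real(s, precision=30):
    """Inject a subset of N into [0, 1]: ternary digit i+1 is 2 if i is in
    the set, else 0. This lands in the Cantor set, so the map is injective.
    Truncated at `precision` digits for illustration."""
    return sum(Fraction(2, 3 ** (i + 1)) for i in range(precision) if i in s)

def real_to_rational_set(x, rationals):
    """Inject a real into the powerset of the rationals: send x to the set
    of rationals less than it. Here we only test a finite sample of
    rationals, again just for illustration."""
    return {q for q in rationals if q < x}

evens = set(range(0, 30, 2))
print(float(subset_to_real(evens)))   # just under 3/4, i.e. 0.202020... in base 3
print(real_to_rational_set(Fraction(1, 2), [Fraction(n, 4) for n in range(5)]))
# {Fraction(0, 1), Fraction(1, 4)}: the rationals in the sample below 1/2
```

Cantor-Schroeder-Bernstein then turns the two injections into an honest bijection.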
C-countable is equivalent to B-countable (in classical logic, though not intuitionistically), but I put it out there to see what you were taking as the direct definition of countability.
I don’t differ with you here; after all, I take the Interp function to be partial.
Where we differ is in your claim that your argument for the existence of an undefinable binary sequence is non-constructive. But the argument is entirely constructive (in the informal sense of only proving existence via explicit constructions; in the formal sense of “constructive logic” as intuitionistic logic, the argument can be made to go through just fine as well, but some care must be taken), as I tried to explain in my last post on the matter, showing how each of the steps of the argument is constructive. And this is troubling, because being a constructive argument, it establishes the existence of an undefinable binary sequence by defining one, which is quite paradoxical.
As long as no one tips off-- oh shit, it’s the mods! Cheese it!
By “a map witnessing the countability of X”, I mean an injection from X into the natural numbers (I was trying to be somewhat agnostic as to what countability meant, but now there’s no need).
So, writing out what appears to be Chronos’s argument a bit more explicitly, and showing the constructivity, taking “countable” to mean “B-countable”:
How do we know that the English strings are countable? We explicitly define a bijection between English strings and a subset of N [Let’s call this function ASCIICode (as an evocative name, even if we don’t use actual ASCII)].
How do we know that the definable binary sequences are countable? The Interp function gives us a surjection from those English strings which define binary sequences to the definable binary sequences; this may not be a bijection per se, as more than one English string can define the same sequence. But, by restricting attention to “earliest” definitions, this becomes a bijection between the definable binary sequences and some subset of the English strings [Let’s call this function EarliestDefinition]. Combined with the above, we have constructed an explicitly defined bijection between definable binary sequences and some subset of N [Given by the composition ASCIICode o EarliestDefinition].
How do we know that no countable set contains every binary sequence? The diagonalization argument explicitly defines, from any bijection between a set of binary sequences and a subset of N, a specific binary sequence not in that set [let’s call this higher-order function FlippedDiagonal]. Combined with the above paragraph, we have constructed an explicitly defined binary sequence which is not definable in English [specifically, FlippedDiagonal(ASCIICode o EarliestDefinition)].
But, our argument having been carried out in English, this is paradoxical!
If we unwind the definitions to see what exactly FlippedDiagonal(ASCIICode o EarliestDefinition) is, we see that it is “The binary sequence whose n-th bit is as follows: if n is the code of an English string which is the earliest definition of some binary sequence, then take the opposite of the n-th bit of that sequence. Otherwise, just take a default value of, say, 0.”. It is this specific sequence which the argument proves undefinable.
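Just to make the shape of that construction as concrete as possible, here’s a toy Python model. Of course, the real Interp for unrestricted English is exactly what’s at issue and is not something we can actually implement, so a tiny hard-coded “toy language” stands in for it, and a list index stands in for ASCIICode; all the names here are mine:

```python
def toy_interp(s):
    """Stand-in for Interp: maps a few strings to binary sequences (each a
    function from N to {0, 1}); returns None for strings that don't define
    a sequence in this toy language."""
    toy_language = {
        "all zeros": lambda n: 0,
        "all ones": lambda n: 1,
        "alternating": lambda n: n % 2,
    }
    return toy_language.get(s)

def earliest_defined_sequence(code, strings):
    """If `code` indexes a string which is the earliest definition of some
    sequence among `strings`, return that sequence; otherwise None."""
    if code >= len(strings):
        return None
    seq = toy_interp(strings[code])
    if seq is None:
        return None
    for earlier in strings[:code]:
        t = toy_interp(earlier)
        # compare on the first few bits, purely for illustration
        if t is not None and all(t(n) == seq(n) for n in range(8)):
            return None    # an earlier string already defines this sequence
    return seq

def flipped_diagonal(strings):
    """The sequence the argument proves undefinable (in the toy language):
    bit n flips the n-th bit of the sequence whose earliest definition has
    code n, and defaults to 0 otherwise."""
    def bit(n):
        seq = earliest_defined_sequence(n, strings)
        return 1 - seq(n) if seq is not None else 0
    return bit

strings = ["cheese it", "all zeros", "alternating", "all ones", "all zeros"]
d = flipped_diagonal(strings)
print([d(n) for n in range(5)])   # [0, 1, 1, 0, 0]: differs from each toy-definable sequence
```

In the toy setting there’s no paradox, of course, because the toy language can’t talk about flipped_diagonal; the trouble above comes precisely from English being able to.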
I still don’t like that Interp function, since I’m very uncomfortable with a function with an unknowable domain. Is “The smallest integer greater than the largest pair of twin primes” a valid argument for Interp, for instance?
Nor do I need to use the Interp function, or anything else with any connection whatsoever to the English language, to prove that a subset of a countable set is countable.
If you’re very uncomfortable with a function with an “unknowable” domain, then what does it mean to you to say that {English strings which define binary sequences} or {Earliest definitions} are countable? It can’t mean that some kind of function from them to N exists, or that some kind of function from some unknowable subset of N to them exists. So what does it mean?
Also, the binary relation “The string x defines the sequence y” is the same thing as the Interp partial function; calling it a partial function is just noting that no string defines more than one sequence. But no matter. It’s not really the Interp function that concerns us if we’re going with B-countability rather than C-countability; it’s only an indirect way of getting at the EarliestDefinition function (which sends definable binary sequences to their, uh, earliest definitions) which is more directly useful.
Of course you don’t need anything in connection with the English language to prove that a subset of a countable set is countable; but this proof is still an explicitly definable construction. What is the proof that any subset of a countable set is countable [where “countable” means “injects into N”]? Presumably, it’s this:
(1) If A is a subset of B, and f is an injection from B into N, then the restriction of f to A is an injection from A into N. Thus, we have shown that A is countable.
You can also show that 2^N is uncountable without any connection to the English language. Presumably, the proof is this:
(2) If C is a subset of 2^N, and g is an injection from C to N, then we can define a binary sequence Diag(C, g) whose k-th digit is the opposite of the k-th digit of the binary sequence which maps to k under g, if one does, and 0 otherwise. Clearly, this sequence is not contained in C. Thus, we have shown that C is not all of 2^N.
Finally, you’re also going to need that any set in 1-1 correspondence with a countable set is itself countable. Presumably, the proof is this:
(3) If g is an injection from A to N, and h is a 1-1 correspondence from C to A, then the composition g o h is an injection from C to N. Thus, we have shown that C is countable.
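For concreteness, here’s how proof (2)’s Diag construction might look as a small Python sketch, with binary sequences modelled as functions from N to {0, 1}, C given as a finite list, and g given as a dict on C’s indices (all of which are my own illustrative simplifications):

```python
def diag(C, g):
    """Proof (2)'s construction: given a list C of binary sequences (each a
    function from N to {0, 1}) and an injection g from C's indices into N,
    return a sequence not contained in C."""
    inverse = {g[i]: i for i in range(len(C))}   # which member of C maps to k, if any
    def bit(k):
        if k in inverse:
            return 1 - C[inverse[k]](k)   # flip the k-th digit of the sequence mapping to k
        return 0                          # default when nothing maps to k
    return bit

C = [lambda n: 0, lambda n: n % 2]        # "all zeros" and "alternating"
g = {0: 5, 1: 1}                          # some injection into N
d = diag(C, g)
print([d(k) for k in range(6)])           # [0, 0, 0, 0, 0, 1]: differs from C[0] at 5, from C[1] at 1
```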
You can do all that without talking about the English language or anything else. Absolutely. But… (let me split this post up)
[Er, replace the "g"s in (3)'s proof with "G"s, to keep them distinct from those in (2) [not that it technically matters, but it might be easier to follow]]
Note what combining (1), (2), and (3) gives us. It tells us that any collection of binary sequences in 1-1 correspondence with a subset of a countable set is missing a binary sequence. Exactly what we want, right? But note that this combined proof works via a combined explicit construction: if C is a subset of 2^N, and h is a 1-1 correspondence from C to A, and A is a subset of B, and f is an injection from B to N, then Diag(C, f o h) is a binary sequence not contained in C.
Once you apply these proofs to the particular case of English-definability, you’re just substituting particular things in for A, B, C, f, and h. Specifically, {Definable binary sequences} for C, EarliestDefinition for h, {Earliest definitions} for A, {English strings} for B, and ASCIICode for f. Thus yielding:
Since {Definable binary sequences} is a subset of 2^N, and EarliestDefinition is a 1-1 correspondence from {Definable binary sequences} to {Earliest definitions}, and {Earliest definitions} is a subset of {English strings}, and ASCIICode is an injection from {English strings} to N, Diag({Definable binary sequences}, ASCIICode o EarliestDefinition) is a binary sequence not contained in {Definable binary sequences}.
Your proof of the existence of an undefinable binary sequence is purely a substitution instance of (1) + (2) + (3); this substitution instance specifically establishes that the binary sequence Diag({Definable binary sequences}, ASCIICode o EarliestDefinition) is undefinable. Yet, at the same time, we have defined it. And thus, the paradoxical troubles…
OK, then, how would you resolve the paradox?
Oh, you overestimate me :). But I can at least tell you what others have proposed:
The standard (Tarskian) response is to bite the bullet and say “x defines/refers to y” is, despite all appearances, actually not English-definable; or, more sophisticatedly, to stratify into a series of increasingly more expressive languages: English_0, English_1, …, English_Omega, English_{Omega+1}, …, English_{Omega^2}, …, each containing “defines/refers to” predicates only for the previous languages in the series. But the former seems flat-out wrong, and the latter, if nothing else, multiplies entities to an offputting degree without forceful justification.
The Kripkean response is to bite a different bullet, admitting that all definable terms do have fixed points and even dragging everything into an order-theoretic framework and imposing suitable conditions to guarantee this (e.g., “References take values in complete lattices and definable functions must be order-preserving” or “References take values in Scott-domains and definable functions on these are continuous”, just as is usually done in denotational semantics of programming languages); most importantly, logic is reformulated accordingly (e.g., one approach uses a three-valued logic with truth values True, False, and Ungrounded, in which Ungrounded serves as the required fixed point of negation). I’m a little on the fence about this; I go back and forth. I think there is something interesting here, and I have great sympathy for the approach of “The problem should shape the logic chosen to analyze it, and not the other way around”, and for keeping in mind the analogies with programming languages when analyzing the paradoxes of natural language, but, at the same time, the expressive restrictions this framework adopts are a bit hard to motivate, and it does sometimes seem (as does the second Tarskian approach) as perhaps basically just math-wanking at a level far removed from application to the actual purposes which motivate the study of natural language semantics. Highly formalized angels-on-pins, so to speak…
Which brings me to my own inchoate views: well, they are in constant flux. But I’ll ramble them down for a bit. I think the key is to ask ourselves, “What is the concept ‘x refers to y’ supposed to do for us? What activity is this language-game a part of, and in what manner?”. To figure out the meaning of “refers to”, we must look at the use. When we do this, I think we are led to give up/seriously modify the idea that “x refers to y”, “y is a definable binary sequence”, etc., are actually fruitfully analyzed as concrete predicates; they are just an abstracted way of speaking about the dispositional behavior of human actors, behavior whose rules are generally fuzzy, and often not actually determined until applied, and even then subject to revision, etc., etc.

One analogy here, for me, is with soritical predicates, or “vague predicates” more generally; if I were asked “Is a three-foot tall elevation a mountain?”, my unhesitating answer would be “No”. If I were asked “Is a 20,000-foot tall elevation a mountain?”, my unhesitating answer would be “Yes”. If it were then said to me “Aha! So there must, by irrefutable mathematical logic, be a cut-off, a particular foot which makes the change from not-a-mountain to mountain”, I could only say “No, not really; you are being blinded by a system of logic not appropriate to this analysis. Mountainhood, which is to say, my disposition to call things mountains, isn’t well-modelled by the kinds of predicates you are thinking of. There isn’t always a concrete fact of the matter; there are ‘grey-area’ heights (and which these are is a grey-area as well), on which, depending on how and when you ask, and random factors besides, I might take one position or another, I might quibble, I might debate, I might laugh and say ‘I dunno. Why do you ask?’, or I might just give this speech. Some English-definable properties of numbers are fruitfully modelled for some purposes by functions from N to 2; but, as goes the line of inquiry you are pursuing, mountainhood isn’t one of them”.
“x defines y”, “y is definable”, etc., are like “x is a mountain”. For some purposes, it is fruitful to model the semantics of limited parts of English via traditional mathematical functions and predicates as we know and love them. When we start asking questions about definability from unrestricted English, the kinds of questions we’ve been asking, then such modelling stops being so fruitful. That we cannot consistently construct functions which have the properties we ask of them should be no great surprise or disappointment, once our semantical project has slipped away from having such functions track human behavior (in its observedly non-paradoxical glory) in any particularly direct sense.
Yeah, alright. See title. But at least now I can’t be accused of pussying out on the challenge :). At any rate, regardless of whether I can resolve the paradox or not, I think it’s important to note its presence rather than gloss over it.
Well, the infinite expansion of reals is indeed an easy way to go, but not the way you’re thinking.
First, use base 2 not base 10 and stick to the reals in between 0 and 1. Take a binary number like .10011010… . The mapping is to the set {1,4,5,7…}. I.e., if the ith bit after the binary point is a 1, then i is in the set.
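In code, that mapping is just reading off the positions of the 1 bits; here’s a rough Python sketch, necessarily truncated to finitely many bits, with made-up names:

```python
def real_to_subset(x, bits=16):
    """Map a real in [0, 1) to the set of positions (counting from 1) at
    which its binary expansion has a 1, looking only at the first `bits`
    digits, since we can't inspect infinitely many on a computer."""
    s = set()
    for i in range(1, bits + 1):
        x *= 2                 # shift the next binary digit in front of the point
        if x >= 1:
            s.add(i)
            x -= 1
    return s

print(real_to_subset(0.6015625))   # {1, 4, 5, 7}: 0.6015625 is .1001101 in binary
```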
But there is one glitch in this. Numbers with an infinite trailing sequence of 1’s muck things up. Fortunately, there are only a countable number of those, so they can be “squeezed in” by modifying things slightly. This is also an issue in Cantor’s (Remember Cantor? Wasn’t there a thread about him recently?) proofs of cardinalities.
If your goal is not constructing a bijection, but just showing equivalent size, then think of it this way: The mapping from reals to subsets shows that there are no more reals than subsets. (Assuming you know that the cardinality of reals in [0,1) is the same as the cardinality of reals.) The reverse mapping shows that there are as many subsets as reals plus a countable leftover. Continuum+countable is continuum.
ultrafilter’s response deals with an entirely different issue that I am not qualified to comment on.
Or, rather than worrying about “squeezing in” anything, you can just use base 3, and just the digits 0 and 2… no two of these map to the same real number (these sequences describe, and are in bijection with, the Cantor set).
ultrafilter’s response was simply mistaken; perhaps he had misread ricksummon’s post.
Good, because I was with you up until that point. But if you’re going to spout rash assertions like “23 ≠ 0” without proof, you’ll lose your credibility on this board.
Now I’ve personally verified that n ≠ 0 for n=1 through 6, and I suppose it’s plausible that the pattern continues all the way up to 23 — but hey, what if you’re wrong?
I misremembered a bit about CH. One of these days I’ll realize that I don’t remember this stuff, but until then you can continue to correct me.
Amusing, but there are in fact fields where 23 does equal 0. For instance, the rationals (or reals) modulo 23. Now, the reals modulo 23 might not be a field that gets used very often, and it’s certainly not one of the default fields you’d assume if it weren’t specified, but it’s still perfectly valid.
Don’t worry; I’ve made my fair share of mistakes on the board myself. In fact, if you see a post of mine without a “Last edited by” message, it’s safe to assume it contains at least one thoroughly incorrect statement.
Speaking of corrections,
As I understand the terms, neither the rationals nor the reals modulo 23 would be fields (since 11.5 and 2 would both be non-zero, yet their product would be zero). But, you have the right idea; the integers modulo 23 do form a field where 0 = 23.
OK, I was thinking of the integers mod 23, too, but I wasn’t sure if it had multiplicative inverses. I suppose it would have to, though, 23 being prime and all.
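A quick sanity check in Python, in case anyone wants to see the inverses actually show up (Fermat’s little theorem does the work, precisely because 23 is prime):

```python
p = 23
for a in range(1, p):
    inv = pow(a, p - 2, p)        # Fermat's little theorem: a^(p-2) = a^(-1) mod p for prime p
    assert (a * inv) % p == 1     # every nonzero element has a multiplicative inverse
print((2 * 12) % p)               # 1 -- e.g. 12 is the inverse of 2 in the integers mod 23
```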
Sorry to bump this ancient thread, but I’m having a tough time understanding Cantor’s diagonal argument.
What if there was a list like this? Assuming there were ellipses at the end of all the numbers.
The digit-sequence necessarily missing from the list isn’t “What you get by going down the diagonal”. It’s “What you get by going down the diagonal and changing each digit”.
For example, if I choose to change digits by increasing them by 2 [wrapping around if necessary], in your example, the missing sequence would be 0.311702…
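If it helps, here’s a tiny Python sketch of that “go down the diagonal and bump each digit by 2” recipe; the list of expansions below is made up, not the one from the quoted post:

```python
def missing_digit_sequence(rows):
    """Given a list of decimal-digit strings (the listed expansions), build a
    string that differs from the n-th row at position n, by adding 2 to the
    n-th digit of the n-th row and wrapping around mod 10."""
    return "".join(str((int(rows[n][n]) + 2) % 10) for n in range(len(rows)))

rows = ["141592", "718281", "414213", "302775", "236067", "645751"]
print("0." + missing_digit_sequence(rows))   # 0.336983, which differs from row n in its n-th digit
```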