Help me untangle this logical paradox involving mental conception

It’s the same thing. A simple system that produces an output that approximates something more complicated is data compression.

My conception of a ball is far simpler (in terms of information) than the actuality of the ball. However, the predictions of my simple model do a decent job of anticipating the future behavior of the real ball.

Fine by me; we can rephrase the OP’s sentence as “the mind can implicitly conceive something it can’t explicitly conceive,” or some such. It can indirectly refer to such a concept, describing it in all sorts of general ways, without ever going into specific detail.

That’s what happens when you take LSD/Shrooms and listen to reggae… The first time I partook I strummed a G chord for 2 hours basking in the yellow glow surrounding me and imagining native aborigines hearing this for the first time… or something like that…

Only because we can wrap them in a lossy abstraction.

“x is 3434543230091234” contains far more bits of information than “x is random”, but “x is random” is a useful compression for many purposes.

On the other hand, if it’s important to make a distinction between the specific values of TWO random numbers, say “x is 3434543230091234” and “y is 94662200120”, then the drawbacks of the lossy compression of both to “x is random” and “y is random” become immediately apparent.
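The drawback described above can be sketched in a few lines of Python (a toy illustration only; the function names are made up for this example): once two distinct numbers are both "compressed" to the label “random”, nothing distinguishable survives for a decompressor to recover.

```python
# Toy sketch of lossy "compression" of specific numbers down to a label.
# All names here are hypothetical, invented for this illustration.

def lossy_compress(value: int) -> str:
    # Throw away the specific digits; keep only the coarse description.
    return "random"

def decompress(label: str) -> str:
    # All we can reconstruct is the label itself -- the digits are gone.
    return f"some {label} number"

x, y = 3434543230091234, 94662200120
cx, cy = lossy_compress(x), lossy_compress(y)

# The compressed forms are identical, so the x/y distinction is lost.
print(cx == cy)        # True
print(decompress(cx))  # some random number
```

The point of the sketch: the compression is useful exactly until the moment you need the information it discarded.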

Dig it. But, isn’t the OP saying either:

*The mind can’t implicitly conceive of something it can’t explicitly conceive of (which isn’t the case).*

OR

The mind can’t explicitly conceive of something it can’t implicitly conceive of (which is the case).

In the first, you can imply, imagine, get the gist of, or hypothesize about all sorts of stuff you could never fully understand, explain, define, theorize about, or even know to be possible at all.

And in the second, how could you even begin to understand, define, etc., anything you can’t even vaguely imagine or conceive of in the first place?

I believe so. I believe he’s saying the second one: he can, as you say, “imply, imagine, get the gist of, or hypothesize about all sorts of stuff you could never fully understand, explain, define,” and so on.

I assumed he meant a loose and lossy kind of compression.

It’s essentially the Church-Turing thesis (or Deutsch’s extension), so it’s not provable – it’s always possible that some model of computation, physical or otherwise, exists that is strictly more powerful than Turing machine equivalents, but nothing of the sort is known (and I’d argue that known physics makes it very unlikely for such a thing to be physically realizable).

As for formal languages, there’s a broad correspondence to computation: for any formal language, you can find an automaton that produces all the language’s theorems, and vice versa. So I don’t think there’s a truly deep distinction here.
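The language–automaton correspondence mentioned above can be made concrete with a toy example (a sketch only; the function names are invented for this post): a generator that enumerates every sentence of the formal language { aⁿbⁿ : n ≥ 1 }, and a recognizer that decides membership in it.

```python
from itertools import count

# Sketch: a "machine" (here just a Python generator) that enumerates
# every sentence of the toy formal language { a^n b^n : n >= 1 }.
def enumerate_language():
    for n in count(1):
        yield "a" * n + "b" * n

# The converse direction: a recognizer deciding membership in the language.
def recognize(s: str) -> bool:
    n = len(s) // 2
    return len(s) % 2 == 0 and n >= 1 and s == "a" * n + "b" * n

gen = enumerate_language()
print([next(gen) for _ in range(3)])  # ['ab', 'aabb', 'aaabbb']
print(recognize("aabb"), recognize("aba"))  # True False
```

Enumerator and recognizer here are two faces of the same language, which is the broad correspondence the post is gesturing at.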

In any complete system, every true statement is a deducible theorem, so it’s not the case that there must necessarily be concepts the language cannot describe.

No, any consistent naming scheme (using finite length strings of symbols) can at most name as many things as there are natural numbers (since it is equivalent to enumerating these things). Since there are more real numbers than naturals, for some (actually, almost all), there will be no name under this naming scheme.
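The counting argument above can be sketched directly (hypothetical helper name, toy two-symbol alphabet): listing every finite string in shortlex order pairs each possible name with a natural number, so any such naming scheme names at most countably many things.

```python
from itertools import count, product

# Sketch: enumerate every finite string over a finite alphabet in
# shortlex order. Each name gets paired with a natural number as it is
# emitted, so there are only countably many possible names -- which is
# why almost all real numbers must go unnamed under any such scheme.
def all_names(alphabet="ab"):
    for length in count(1):
        for symbols in product(alphabet, repeat=length):
            yield "".join(symbols)

names = all_names()
print([next(names) for _ in range(6)])  # ['a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Nothing changes with a bigger alphabet: the enumeration just gets wider per length, and it still only ever reaches countably many strings.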

Well, but randomness is an essential part of a random number, and if you compress it lossily, then you lose that part, because anything you decompress from this would not be a random number. So whatever you ‘conceived of’ in that way wouldn’t actually be a random number.

Implies to me that absolute disorder is difficult and/or impossible, one is tempted to say “unnatural”. But is it a feature inherent in mathematics, or inherent in nature?

You aren’t thinking about it in any literal sense. You are thinking about the possibility of a child being born. You aren’t thinking about the child that will be born. You may be thinking about what the child may be like. But you aren’t thinking about the child. You may be thinking about a possible child. But you aren’t thinking about the child.

Seriously, dude, you have to be careful in your semantics here. That is the crux of this discussion and there is no point in having it otherwise. If you argue that there is nothing we can’t conceive of based on examples like the above, you are using loose definitions indeed.

It’s the reference of my thought. It’s what determines the truth condition of any thought I have with it as a subject. Etc. It’s hard for me to see how I could not be thinking about it.

But maybe it’s a purely linguistic dispute. What about this claim:

There is nothing of which you can’t think about the possibility?

Is that true or false on your view?

We disagree about something. There is absolutely no call for either of us to talk about who is being more careful. Nothing in the conversation has given any reason for that kind of speculation. You’re being pointlessly insulting here.

Fair enough. I really wasn’t taking him to be talking about “compression” in any technical sense at all. I took him to be loosely talking about handles. Maybe I was being too charitable.

How do you define “think about the possibility”? If the possibility is “Weierstrass’s Theorem”, is it enough for me to think about “approximation” or “Taylor series” (getting sort of close), in order to be “thinking” of the “possibility”? The only unambiguous definition we could make would be that “thinking about the possibility of X” requires us to have knowledge of X itself. For example, if X is a possible theorem, we would need to be able to state the theorem, though it may not be proven. Using this definition, your statement is false.

I think you are focusing on the cases where your statement is true, such as those in which the “possibility” ranges over some set of known objects. In the case of the next person to be born on earth, this set is the set of variations of the human genome, and all of the infinite possible anatomies and so on that this represents. Therefore you can think of the possibility of X, where X is some element of that set. The problem is that in other cases the set may not be known. In those cases, the only way to “think about the possibility” is to construct a set of known objects and assume that X is a member. For example, I don’t believe it is possible to think about the next alien born in Andromeda, because, while we may have knowledge of a subset of the relevant set, we can never be confident that when we “think about the possibility” the set of which X is a member is complete. This is just another way of saying that while we have pretty good imaginations, we can never be 100% confident that we don’t have a blind spot, short of narrowing the set by thinking about the very thing you claim to be able to not think about while still thinking about its possibility.

Whereas I would have said the following:

Suppose I think to myself “The next alien born in Andromeda will have three eyes.”

If there are aliens in Andromeda, then I have thought about one of them.

If there are no aliens in Andromeda, then I have thought about mere possibilia, or mere fictions. (Not sure right now whether that amounts to the same thing.)

What’s actually happening in the real world outside my thoughts can have an effect on what I’m thinking about.

I think you think, on the other hand, that what you’re thinking about is determined purely by what’s going on in your head.

There’s a famous distinction (and debate) between “externalist” and “internalist” views of meaning which we may be running against here. Are you familiar with the “twin earth” thought experiment?

I thought you were arguing:

Above you are arguing that you can think about an alien in Andromeda. We do not know whether there are aliens in Andromeda, or what their characteristics might be. Yet you suggest that by generating a fiction, and perhaps getting lucky if it happens to correspond to reality, you are able to “think about an alien in Andromeda”. I contend that this is stretching definitions well beyond their sensible use. It’s like me saying “I’m going to think about the fifth prime number,” but, not knowing math, guessing a number between 1 and 100 to think about. I may get lucky and guess the number 11, or I may not. If you asked me what number I was thinking about, 99 times out of 100 my response would be in contradiction with my declaration that I was thinking about the fifth prime. Therefore the validity of the declaration is not only random, but usually false, and therefore contains very little information.

Yeah, although I don’t really see the relevance here. I don’t think (yet) that is the source of our disagreement.

Of course there are things we can’t think about. We just don’t know what they are, since we can’t think of them.

I’m arguing that there’s nothing you can’t think about. So I’m committed to the view that you can think about the next alien born in Andromeda, whether it exists (or will exist if you prefer) or not. That seems unproblematic to me. If it exists, you’re thinking about something real. If it doesn’t exist, you’re thinking about an unreal possibility. But whether it’s real or unreal, you’re thinking about it.

I have to admit I’m lost–I don’t know what you disagree with here, or why.

Regarding the fifth-prime example: You are thinking about the fifth prime, it’s just that you’ve misidentified it. The sentence “the fifth prime is 2” is false, but it is for all that no less a sentence about the fifth prime. Indeed if it weren’t about the fifth prime, I don’t know how we’d establish that it was false.

It is a sentence about the fifth prime in name only. Informationally, it has nothing to do with the fifth prime. You seem to believe that if we were to write a computer program to output “I am thinking about the fifth prime”, it really is thinking about the fifth prime, when in fact it is simply printing characters to the screen. Yes, it is a sentence about the fifth prime, interpreted by you, who understands what “fifth prime” means. But the person who made the declaration is emphatically not thinking about the fifth prime.

Is a person who makes the declaration “I am thinking about xarbqrablernlnlryby” (where “xarbqrablernlnlryby” is just a random string of letters), really thinking about “xarbqrablernlnlryby”? Yes, perhaps in Andromeda “xarbqrablernlnlryby” means something, but here on earth, informationally it is simply a random string of letters.

Here is a way I hope to pick apart precisely where our views diverge. Let’s just go ahead and choose a specific thing (not a collection of things, but one single thing), that you think you can think about, and I think you cannot think about. The only trick is that, in me specifying the thing, I transfer information to you that would allow you to think about it, whereas previously my contention would be that you could not have been able to think about it. So we have to play a game here in which you, in good faith I am confident, agree on the thing and then pretend from then on that I did not tell you about the thing. Here is a proposed candidate: please think about the thing I am holding in my hand right now. (The answer is: a yellow piece of paper with ten consecutive digits of the number sqrt(3) written on it.)

Now, you believe that there is no thing that you cannot think about. I contend that you couldn’t have thought about the thing in my hand, because you did not know what thing to think about. Yes, in some vague sense, you can think about the set of possible things I might hold in my hand. You can think about pens, paper clips, or the collection of possibilities as a whole. But where do you draw the line? Does thinking about a blue piece of paper with 5 consecutive digits of sqrt(2) count? Does briefly noting that the set of possibilities includes colored pieces of paper and numbers written on them count? There is some serious definitional ambiguity if you wish to maintain that you can think about my proposed thing, having only the information that it is something in my hand.

If we use the unambiguous definition, that is, that thinking about a thing means thinking about the thing itself rather than about a collection of possible things of which the thing may be a member, then you cannot think about the claimed thing without guessing and getting very lucky. This implies that your claim “I can think about the thing in your hand” conveys no information. This can be tested and confirmed experimentally through repeated trials: ask you what thing you are thinking about, and show that your answer is statistically uncorrelated with what I am actually holding in my hand.

I’ve said nothing to imply that.

Well, I’ve offered my argument that he is, and your objection is that my view implies something I don’t agree that it implies. I guess the onus is on you to explain why you think the implication holds. If you were right–if some element of my argument really does imply that running a simple program constitutes thinking about things represented by its output–then I would need to revise either my view or my argument. But you’ve not yet offered me a reason to think the implication holds.

If it’s a random string of letters, meaningless even to them, then they haven’t communicated anything, so there’s no reason for me to think they’re thinking about anything but, roughly, themselves, thinking, and random strings.