The bar I set is that words should pick out things in the world. I don’t really see how I could set it lower.
No computation can produce that, so no; but that’s a different argument.
First of all, thanks for actually addressing the topic of this thread.
However, this isn’t right: all that’s needed is that there are objects that are distinguishable from one another (all indistinguishable objects can be lumped together). And again, the metaphysics of the situation doesn’t matter: it’s perfectly clear that the things we’re talking about when we’re talking about cats are objects in the right sense. Whether they’re, at some deep-down level, just bundles of relations, or vibrating strings, or processes, or properties of spacetime geometry, can’t possibly matter to whether we can talk about them, so all of that has to be irrelevant.
What we’re permuting here is simply how the labels we give to objects attach to them. So the effect of the permutation is just that Alice is now called ‘Charlie’. (There’s an awkwardness attached to that, in that I can’t point to Alice directly, and thus have to fix one particular way of referring to her; but this doesn’t mean that there’s some ‘Aliceness’ that only that particular object has. You can think of it somewhat like gauge-fixing: what I’m saying is independent of that fixing, but to say it, I have to use one.)
No. It’s just that what you previously called ‘Alice’, i.e. the cat Alice, is now called ‘Charlie’. The cat is the same in all respects; the symbol ‘Alice’ just picks out a different thing in the world now.
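To make this concrete, here’s a minimal Python sketch (the objects and labels are placeholders of my own choosing): the objects stay exactly as they are; only the symbol-to-object mapping changes.

```python
# The world: three objects, entirely unaffected by what we call them.
world = {"obj_1": "black cat", "obj_2": "tabby cat", "obj_3": "dog"}

# Intended labelling: 'Alice' picks out obj_1, and so on.
intended = {"Alice": "obj_1", "Bob": "obj_2", "Charlie": "obj_3"}

# Permuted labelling: same objects, same world; the symbols just reattach,
# so the object formerly labelled 'Alice' is now labelled 'Charlie'.
permuted = {"Charlie": "obj_1", "Alice": "obj_2", "Bob": "obj_3"}

# Nothing about obj_1 has changed; the symbol 'Alice' simply picks out
# a different thing under the second labelling.
print(world[intended["Alice"]])  # black cat
print(world[permuted["Alice"]])  # tabby cat
```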
That doesn’t change anything; the labels ‘Alice’, ‘Bob’, ‘Charlie’ are just as arbitrary as ‘Object_001’. You proposed to use (trivial, i.e. one-place, one-element) relations to denote something like ‘…is Alice’, i.e.:
R_{1} = \{\langle Object_{001} \rangle \}
R_{2} = \{\langle Object_{002} \rangle \}
R_{3} = \{\langle Object_{003} \rangle \}
With the relation ‘…is a cat’ then being:
R_{4} = \{\langle Object_{001} \rangle, \langle Object_{002} \rangle \}
Say, under your ‘intended’ interpretation, Object_{002} refers to Bob (i.e. a concrete thing in the world), R_2 (yay for inline \LaTeX!) picks out ‘…is Bob’, and R_4 picks out ‘…is a cat’. Then you do the permutation, such that Object_{002} now refers to Charlie, who isn’t a cat. Your utterance ‘Bob is a cat’ will then be true just as before, since the extension of ‘…is a cat’ gets permuted right along with the reference; but what it picks out is now Charlie, who isn’t a cat.
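Here’s a small Python sketch of that situation (everything in it is illustrative): the formal truth of the utterance survives the permutation, but what it’s about doesn’t.

```python
# The world: Alice and Bob are cats, Charlie isn't.
is_cat_in_world = {"Alice": True, "Bob": True, "Charlie": False}

# Intended reference for the terms:
intended_ref = {"Object_001": "Alice", "Object_002": "Bob", "Object_003": "Charlie"}

# Permuted reference: Object_002 now refers to Charlie.
permuted_ref = {"Object_001": "Alice", "Object_002": "Charlie", "Object_003": "Bob"}

# The relation R_4, '...is a cat', as a set of 1-tuples of terms:
R4 = {("Object_001",), ("Object_002",)}

# 'Bob is a cat' is the claim that <Object_002> is in R_4; its truth
# doesn't depend on the reference assignment at all...
print(("Object_002",) in R4)  # True under both interpretations

# ...but which worldly thing it's about does: under the intended reading
# it's about Bob (a cat), under the permuted one about Charlie (not a cat).
print(is_cat_in_world[intended_ref["Object_002"]])  # True (Bob)
print(is_cat_in_world[permuted_ref["Object_002"]])  # False (Charlie)
```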
I think what you’re misunderstanding is this: the problem isn’t that there are different ways to pick out objects in the world, and different ways to predicate things of them; it’s that there isn’t one among them that’s the way ChatGPT uses. So it’s not the case that ChatGPT might be using a different mapping of symbols to objects; it’s that we can equally well think of it as using any one of the possibilities, and hence, that it’s not talking about any one thing in particular.
The cyclic group of order 3 can be thought of as the set S_3 = \{1, 2, 3\} together with a relation that gives us the group operation, R_{\circ} = \{\langle 1, 1, 1\rangle, \langle 1,2,2\rangle, \langle 1,3,3\rangle, \ldots, \langle 3, 3, 2\rangle\}. When we talk about the group operation, we want to talk about that relation. But, under a permutation, we generally won’t anymore. Suppose we use the permutation h(1) = 3, h(2) = 2, h(3) = 1. When you say ‘1 \circ 1 = 1’, what you mean is that \langle 1,1,1 \rangle \in R_\circ. But under the permutation, that becomes \langle 3,3,3 \rangle \in h(R_\circ): equally true, but it doesn’t talk about the group operation anymore (indeed, \langle 3,3,3 \rangle \notin R_\circ, since 3 \circ 3 = 2).
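For the doubtful, this can be checked mechanically; a quick Python sketch, with an encoding of my own choosing (the group table via addition mod 3, set up so that 1 is the identity):

```python
from itertools import product

# C_3 on the carrier {1, 2, 3}: x ∘ y is encoded as the triple <x, y, x∘y>.
S3 = {1, 2, 3}
R_op = {(x, y, ((x - 1) + (y - 1)) % 3 + 1) for x, y in product(S3, S3)}

# The permutation h(1) = 3, h(2) = 2, h(3) = 1, lifted to triples.
h = {1: 3, 2: 2, 3: 1}
h_R_op = {(h[x], h[y], h[z]) for (x, y, z) in R_op}

print((1, 1, 1) in R_op)    # True: '1 ∘ 1 = 1' under the intended reading
print((3, 3, 3) in h_R_op)  # True: the same utterance under the permuted reading
print((3, 3, 3) in R_op)    # False: 3 ∘ 3 = 2, so this isn't the group operation
print(h_R_op == R_op)       # False: a genuinely different relation on the same set
```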
Now, you’re right in saying that we could just ‘re-label’ things, undoing the permutation; but for that, there would need to be a fact of the matter regarding whether ChatGPT means the one or the other. And that’s what the argument shows not to be the case: we can equally well take ChatGPT to be saying that \langle 1,1,1 \rangle \in R_\circ or that \langle 3,3,3 \rangle \in h(R_\circ); but these are different things. One talks about the group multiplication, the other doesn’t; and of course, there are many more equivalent things that ChatGPT could ‘mean’. Thus, when ChatGPT says something like ‘1 \circ 1 = 1’, it could be talking about the group multiplication, or about some other structure on the set \{1,2,3\}, with equal justification.
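That the justification really is equal can itself be checked: the permutation preserves the truth value of every such statement, so nothing ChatGPT says could settle which relation it means. A short sketch (repeating the definitions from above so it runs on its own):

```python
from itertools import product

# Same structures as before: C_3 on {1, 2, 3} and its image under h.
S3 = {1, 2, 3}
R_op = {(x, y, ((x - 1) + (y - 1)) % 3 + 1) for x, y in product(S3, S3)}
h = {1: 3, 2: 2, 3: 1}
h_R_op = {(h[x], h[y], h[z]) for (x, y, z) in R_op}

# <x,y,z> ∈ R_op iff <h(x),h(y),h(z)> ∈ h(R_op): every utterance comes out
# true (or false) under both readings alike.
print(all(((x, y, z) in R_op) == ((h[x], h[y], h[z]) in h_R_op)
          for x, y, z in product(S3, repeat=3)))  # True
```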
In other words, C_3 is a specific structure, given by \langle S_3, R_\circ\rangle. But as we’ve just seen, alongside that structure there is a plenitude of different ones, one being \langle S_3, h(R_\circ) \rangle. When we talk about C_3, we want to talk about the former; but all of ChatGPT’s utterances can equally well be interpreted as talking about the latter, or about any other such possibility. Hence, while we’re talking about a definite C_3, there is no fact of the matter regarding what ChatGPT is talking about.
Likewise with something like ‘1 is the identity element’. Under the permutation, not only does it no longer talk about the element that was previously called ‘1’, it also talks about an entirely different property than ‘…is the identity element’. So we can’t take this utterance to be about any definite element of the group, even though it clearly ought to be, since there is a definite identity element (whatever it may be called).
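Again, this can be made concrete in Python (same encoding as before; the identity_of helper is just my illustration):

```python
from itertools import product

S3 = {1, 2, 3}
R_op = {(x, y, ((x - 1) + (y - 1)) % 3 + 1) for x, y in product(S3, S3)}
h = {1: 3, 2: 2, 3: 1}
h_R_op = {(h[x], h[y], h[z]) for (x, y, z) in R_op}

def identity_of(rel, carrier):
    """The element e such that <e, x, x> and <x, e, x> are in rel for all x."""
    return next(e for e in carrier
                if all((e, x, x) in rel and (x, e, x) in rel for x in carrier))

print(identity_of(R_op, S3))    # 1: the intended reading of '1 is the identity'
print(identity_of(h_R_op, S3))  # 3: under the permuted reading, '1' denotes
                                # h(1) = 3, the identity of h(R_op); the utterance
                                # stays true, but it's about a different element
                                # and a different property.
```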
Similarly, when we’re talking about prime numbers, we take ourselves to be talking about a specific property of numbers (whatever numbers may be). But when ChatGPT talks about prime numbers, there is no one property that we could single out as the one it’s talking about: we could take it to be talking about the same concept of ‘prime number’ we’re using, but it may equally well be taken to be talking about something entirely different. Consequently, it can’t be said to be talking about anything definite at all.
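The same game can be played here; a minimal Python sketch over a truncated domain (the particular swap of 1 and 2 is an arbitrary choice):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Intended extension of '...is prime' on an initial segment of the naturals:
domain = range(20)
primes = {n for n in domain if is_prime(n)}

# A permutation of the domain that just swaps 1 and 2:
h = {n: n for n in domain}
h[1], h[2] = 2, 1

# The permuted extension: not the primes, but an equally available referent.
h_primes = {h[n] for n in primes}
print(sorted(primes))    # [2, 3, 5, 7, 11, 13, 17, 19]
print(sorted(h_primes))  # [1, 3, 5, 7, 11, 13, 17, 19]

# Under the permuted reading, the numeral 'n' denotes h(n), so 'n is prime',
# i.e. h(n) ∈ h_primes, has exactly the same truth value as before:
print(all((n in primes) == (h[n] in h_primes) for n in domain))  # True
```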