The more I think about it, the weirder the ‘systems’ reply seems to me—consider, for instance, a sentence of twelve words, with each word given to a different person. No matter how hard each person concentrated on their word, it seems highly implausible to me that there would be conscious awareness of the whole sentence anywhere—indeed, I would put such a suspicion on about the same level as speculations about group minds, telepathy, and so on.
But it seems to me that a proponent of the systems reply ought to hold that something very much like that happens: if twelve people each implement a part of a suitably parallelized ‘understanding’ program, then a ‘group mind’ awareness of the sentence would emerge, even without any of the people understanding their part. Each person would, for instance, hold one particular Chinese symbol in their attention, and somehow this would produce an awareness and understanding of the sentence. If, based on the meaning of that sentence, one of the persons could then be made to execute a particular action—say, point to an apple tree—which seems to me to be a consequence of understanding, then we’d have something very much like telepathy, or a ‘hive mind’, indeed.
Of course, there has to be at least some information exchange between the persons—but since none of them individually comes to understand Chinese by collectively implementing the program (I think everyone agrees on that), the information they exchange can’t be semantic, that is, it can’t be related to the meaning of the Chinese sentences they receive and produce. So they seem to be able to create a group mind, with an understanding greater than the sum of their individual understandings, merely by, for instance, talking to one another. So then, is there a group mind associated with every instance of social interaction? Does the US have a mind exceeding the combination of the minds that make it up? And how does this differ from telepathy? It seems to be at least in the same spirit: mental content shared between two or more brains, regardless of their physical connection (which must be present, but transfers no semantic information, no mental content).
Additionally, going back to the primitive ‘lookup table’ implementation, I think we’ve overlooked a bit of weirdness there, too, at least if we wish to hold on to the idea that a lookup table doesn’t produce conscious experience. For suppose I were to implement some other program using a lookup table—any program run for a finite time can, in principle, be so implemented, since there are only finitely many possibilities to consider. So let’s say I’m playing a computer game, implemented such that my inputs—key presses, mouse clicks and movements—simply call up the appropriate reaction—sounds and images displayed on the screen—via an enormous lookup table. My experience of playing the game would be no different from the case in which the responses to my actions are computed dynamically; in fact, nothing would differ between the two cases.
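To make that point concrete, here is a minimal Python sketch of my own (the function and input names are made up for illustration, not taken from anything above): for any finite set of inputs, a precomputed table is input-output indistinguishable from the computation it was built from.

```python
def respond_dynamically(key_press: str) -> str:
    """Compute the game's reaction to an input on the fly."""
    return f"frame_for_{key_press}"

# "Drawing up the lookup table": precompute every possible reaction once.
FINITE_INPUTS = ["up", "down", "left", "right", "fire"]
LOOKUP_TABLE = {k: respond_dynamically(k) for k in FINITE_INPUTS}

def respond_via_table(key_press: str) -> str:
    """Merely fetch the canned reaction; nothing is computed at play time."""
    return LOOKUP_TABLE[key_press]

# From the player's side, the two implementations are indistinguishable.
assert all(respond_dynamically(k) == respond_via_table(k) for k in FINITE_INPUTS)
```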
But not so if the lookup table is supposed to implement a program that ‘understands’ what I type and responds appropriately. Again, on my end, I wouldn’t notice any difference—for any finite stretch of time, I can carry on a conversation with the lookup table that is isomorphic to one carried out with a program that actually performs some computation to produce appropriate responses. But while in the latter case, as most people in this thread seem to argue, there is genuine understanding, there is none in the former, or perhaps only the ‘warmed-up’ understanding of whoever or whatever drew up the lookup table in the first place. But this is then a very real difference between the computational implementation of a mind and the computational implementation of a computer game, or of anything else: the lookup-table and dynamical-computation cases are equivalent in the latter, but not in the former. (If someone wishes to insist that they are equivalent, because even in the former case I’m interacting with the ‘warmed-up’ understanding of some original programmer, simply imagine the—vanishingly unlikely—case in which the computer only makes random replies, which happen to perfectly match a conversation in the one case, and the reactions of the computer game in the other; the difference seems very clear there.)
Now if this is right, then the computation of a mind must be different from any other sort of computation. That would be an interesting result in itself, but I wonder whether it can be made coherent at all: after all, one can essentially view one and the same system as implementing different computations, and thus construct a mapping between the computation of the conversation and the computation of the computer game (such a mapping exists whenever both state spaces have the same cardinality). But then, under this mapping, I can view my computer implementing the game as implementing the conversation—and note that I’m not doing anything to it physically. Now let’s set up two computers, both of which implement the game, one via the lookup table and the other via dynamical computation. What happens on the two computers is equivalent. But I can also look at the setup via my mapping, and view one computer as implementing the conversation via a lookup table and the other as implementing it via dynamical computation—meaning that the two processes are no longer equivalent: one gives rise to understanding and the other doesn’t. Yet nothing has changed physically, so the equivalence between the two computations does not seem to supervene on the physical, which strikes me as a rather strange result.
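To illustrate what I mean by viewing one system as implementing different computations, here is another toy Python sketch of my own (all the state labels are invented): the very same sequence of machine states can be read, via different bijections, as either the game computation or the conversation computation, without anything about the machine itself changing.

```python
# The states the machine actually runs through (the "physical" trace).
physical_trace = ["s0", "s1", "s2", "s3"]

# One reading: interpret the states as states of the game.
as_game = {"s0": "title_screen", "s1": "jump", "s2": "land", "s3": "game_over"}

# Another reading: a bijection onto conversation states, which is possible
# whenever the two state spaces have the same cardinality.
as_conversation = {"s0": "greeting", "s1": "question", "s2": "answer", "s3": "farewell"}

# Nothing physical differs between the two readings; only the mapping does.
print([as_game[s] for s in physical_trace])          # the "game" computation
print([as_conversation[s] for s in physical_trace])  # the "conversation" computation
```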