What is Consciousness and can it be preserved?

Between the computations. In the case of playing the game, both the lookup table method and the dynamic computation lead to the same result, i.e. the same game being played. But for the conversation program, they don’t: in the lookup table case, there is no genuine understanding, while in the dynamic computation case, there is. Essentially, that’s because the game only has a syntactic level, so everything that can be done with a dynamic computation can also be done via a lookup table; the conversation program, by contrast, has a semantic level which, according to most of the responses in the thread, the lookup table fails to exhibit but a ‘proper’ computation does.

Ok, that’s what I thought you were saying.

The difference between dynamic and lookup in both cases is not visible externally, but is visible internally. Furthermore, just because game X’s internals don’t exhibit understanding doesn’t mean game Y’s internals can’t. We don’t really know for sure one way or the other, but I think it’s safe to say we aren’t positive that it can’t happen.

It seems this line of reasoning adds up to “human brain has understanding and other computation that we have created so far doesn’t”. But it doesn’t seem like you can extrapolate any further than that.

Are the inputs deterministic or probabilistic? The rules that evolve are a function of the inputs.

Only if you define the system as not understanding.

Our brains only manipulate chemical and electrical impulses. Do we really have the concept of things, or do we simulate having the concept of things? Yes, Searle’s constrained system does not have the concept of things - but Searle’s constrained system will never fool anyone into thinking it understands or is conscious.

The person does not do all the work in the room - the person who wrote the rules does most of it. That’s like saying the computer does the work and ignoring the programmer.

All stored program computers are adaptive. Mostly at a much simpler level than an AI. In order to reduce download time, I have a system that gets the list of files on a remote computer, figures out which are new, and then writes an ftp script to download those files. My intern was stuck on how to solve that problem, but those of us who have written compilers understand this stuff better than most users. In my case the script I execute is strongly dependent on the input data, and cannot be practically determined in advance.
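
Roughly the sort of thing described might look like this minimal sketch (the helper name, the ftp script commands shown, and the omission of login details are all simplifying assumptions):

```python
import os

# Sketch: compare a remote file listing against a local directory and emit an
# ftp batch script that fetches only the new files. The generated script
# depends entirely on the input data, so it cannot be written in advance.
def build_ftp_script(remote_files, local_dir, host, script_path="fetch_new.ftp"):
    local_files = set(os.listdir(local_dir))
    new_files = [name for name in remote_files if name not in local_files]

    lines = [f"open {host}", "binary"]            # credentials/prompting omitted
    lines += [f"get {name}" for name in new_files]
    lines.append("bye")

    with open(script_path, "w") as fh:
        fh.write("\n".join(lines) + "\n")
    return new_files                              # the files the script will fetch
```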

First, a systems view in no way assumes there is parallel computation. I was thinking of a program and the cpu as the system. The effectiveness of parallel computation depends strongly on the amount of interaction required by the pieces. The brain is a parallel computer, in a sense, since different functions are distributed across the brain. Understanding a sentence is a terrible application of parallelism - just about as bad as pregnancy. Parsing in general is not something that parallelizes well.
For complex things, there is organizational understanding that goes beyond individual understanding. No one in a microprocessor design team understands everything - but the team as a whole does understand it, in that it can create and implement the processor. And answer questions about it. There are specialists, and lots of communication.

Of course it is theoretically possible to simulate any finite sequence of outputs given a set of inputs with a table. I can easily write a tic-tac-toe program which does exactly this. Heck, you could write a lookup-table-based chess program, if you neglect the fact that the table size is larger than the number of atoms in the universe. And to convince someone that the room understands Chinese, its table would have to be even bigger than that - since you could ask the room to play chess. So it is as good as impossible.
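
As a concrete illustration of the lookup idea, a table-driven tic-tac-toe player could be sketched like this (only two entries are shown and the moves are merely illustrative; a real table would enumerate every reachable position):

```python
# A pure lookup-table tic-tac-toe player: each reachable board, written as a
# 9-character string of 'X', 'O' and '-', maps directly to the square to play.
# No game logic runs at lookup time; the "intelligence" sits entirely in the table.
MOVE_TABLE = {
    "---------": 4,    # empty board: take the centre
    "----X----": 0,    # opponent took the centre: take a corner
    # ... one entry per reachable position in a complete table ...
}

def lookup_move(board):
    """Return the square (0-8) to play for this board, by table lookup only."""
    return MOVE_TABLE[board]

print(lookup_move("----X----"))   # -> 0
```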

You seem to be assuming a simplistic idea of a computer game. I know of some games that randomly construct new rooms etc. as you enter them. The game must examine the structure of the room it has just created (a ‘thought’, so to speak) to respond appropriately.
While you can wonder how the lookup table simulation would work, you might as well use magic, because they are just as likely. You’d get a combinatorial explosion. Think about how it would answer questions like: what did you say five sentences ago, or six, seven, eight, nine? Trivial for a system which records its answers and can read them, not so simple for a table. So I don’t buy at all that a lookup table can implement a dynamic computation. Let’s say we have one computing pi, and we have it in an infinite loop which keeps going. Can you implement that by a table?
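
To make that asymmetry concrete, here is a minimal sketch of a responder that simply records its own output; the class name, the prompt convention, and the canned replies are all invented for illustration:

```python
import re

# A dynamic program answers "what did you say N sentences ago?" trivially by
# reading back its own recorded output; a lookup table would instead need an
# entry for every possible conversation history.
class RecordingChatter:
    def __init__(self):
        self.history = []                          # everything said so far

    def reply(self, prompt):
        if prompt.lower().startswith("what did you say"):
            match = re.search(r"\d+", prompt)      # e.g. "what did you say 5 sentences ago?"
            n = int(match.group()) if match else 1
            answer = self.history[-n] if 0 < n <= len(self.history) else "nothing yet"
        else:
            answer = f"You said: {prompt}"         # stand-in for real response logic
        self.history.append(answer)
        return answer

bot = RecordingChatter()
bot.reply("hello")
print(bot.reply("what did you say 1 sentence ago?"))   # -> "You said: hello"
```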

I think most would hold that there’s nothing going on ‘internally’ in the case of the game, and thus, there’s no difference between lookup table and computation. Consider the case of multiplying 2 and 3: whether it’s done by looking it up, or by calculating, that doesn’t have any influence on the result. But when holding a conversation, it seems as if it does.
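
A toy illustration of that 2 times 3 point: both routes give 6, and nothing about the answer reveals which method produced it (the table below stands for entries written down in advance):

```python
# A precomputed times table standing in for the "lookup" route. Pretend the
# entries were written down in advance rather than generated here.
TIMES_TABLE = {(a, b): a * b for a in range(10) for b in range(10)}

def multiply_by_lookup(a, b):
    return TIMES_TABLE[(a, b)]          # no arithmetic performed at call time

def multiply_by_calculation(a, b):
    total = 0
    for _ in range(b):                  # repeated addition: a genuinely dynamic computation
        total += a
    return total

# Externally indistinguishable results:
assert multiply_by_lookup(2, 3) == multiply_by_calculation(2, 3) == 6
```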

Not sure I’m seeing the relevance; I’m simply claiming that a non-adaptive architecture can compute everything an adaptive one can. Do you disagree?

I’m perfectly happy to grant the system understanding. The question is, however, if I internalized the system, where is the understanding then? Because it seems obvious to me that I won’t understand a lick of Chinese.

That’s very likely, but it’s not known for certain, and lots of people think differently (see my examples above).

What do you mean by ‘Searle’s constrained system’?

So that’s the argument that you need consciousness to produce consciousness. But then how does consciousness arise?

Or am I misunderstanding you? In the case of the internalized room, would you hold that the person having internalized it possesses genuine understanding of Chinese?

But there are certainly universal computers that are not adaptive—like for instance the rule 110 CA; it applies a fixed, simple rule again and again, and can nevertheless compute anything that can be computed at all. So if consciousness can be computed, then the rule 110 CA should be capable of giving rise to consciousness. Do you disagree?
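
For reference, rule 110 is simple enough to sketch in a few lines: the rule is a fixed 8-entry table that never changes, and all the variety comes from the initial row of cells it is given.

```python
# Rule 110: one fixed, simple local rule applied to every cell at every step.
RULE = 110   # the 8-bit truth table for neighbourhoods (left, centre, right), encoded as an integer

def step(cells):
    """One synchronous update of a row of 0/1 cells (cells beyond the edges count as 0)."""
    padded = [0] + cells + [0]
    return [(RULE >> (4 * padded[i - 1] + 2 * padded[i] + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

row = [0] * 30 + [1]                     # initial state: a single live cell
for _ in range(10):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```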

But that doesn’t matter. Have a hundred billion persons simulate one neuron each, if you like; or cut a brain in twelve parts, and have each person simulate one. If you’re proposing the systems reply, you’re saying that a collection of things—person, cards, rulebook, scrap paper—gives rise to conscious understanding not present in just one of them (i.e. the person). The twelve people example simply underscores this.

But do you think there’s a conscious image of the microprocessor that exceeds just the images of all the people combined, i.e. would you actually hold that there is a conscious awareness of the sentence if all 12 people think of their word?

As long as there’s only finitely many inputs and outputs, then of course you can implement it with a table (in theory, if not in practice).

I don’t see how 12 people is the same thing as 12 neurons. Can an individual person in your example express something without consulting a number of the other 11?

What do you mean by non-adaptive architecture? Does it or does it not include modifiable stored programs?

You will to the observer. And you will when the rules get incorporated more deeply - just like when we learn to drive, and the skill moves from our conscious to our subconscious minds. Lord Buckley even had a bit about this.

Aha, it is about the soul. Nothing in your examples disproves this, and anyone who claims that there is something beyond chemistry and electricity needs to show what it is.

Anything below a general purpose computer implemented by the room.

In the sense of a system that we program. We might well be able to develop a system with the proper feedback which evolves consciousness, but that is not the point of this exercise. As for whether the person understands Chinese, see above.

If you look more closely, you will find a definition of computable which does not include meta-language. People can “solve” the halting problem, but not within the constraints given for a specific system.

What I was saying is that parallel computation in no way gives any more power than serial computation, and so is irrelevant. And not even very useful. You seemed to be saying that you can fix the complexity problem by a parallel computation - and that is not always true.

Is the organization conscious? Perhaps in some sense it is - it has a memory, it examines its own actions, it responds to stimuli, and it even reprograms itself based on results. But it certainly understands - more than any single person does. If the organization were just a bunch of people not talking to each other, then no.
If the 12 people split up the learning of Chinese characters so that each understood a subset, and then worked together to process input, wouldn’t you say then that they understood Chinese as a group?

Not if the table computes pi. In any case, the argument is that strong AI is impossible because the Chinese Room creates an absurdity. That is kind of bogus if the room itself, the initial conditions, is an absurdity.

No, they can (and must) communicate, but the communication itself cannot carry semantic information, as none of them has any understanding of what they’re working on.

I’ve given the example of the CA 110: the rules are fixed, only the initial state varies.

Well, but that’s not what matters, of course. And I’m very skeptical of somehow just ‘coming to understand’ by familiarity—how could I, no matter how much I have internalized the rules, ever come to realize that 苹果 means apple? This seems utterly impossible to me.

But internalizing a skill is something very different from understanding a language. There’s nothing that ‘push right pedal to accelerate’ means other than ‘push right pedal to accelerate’; there’s nothing but the rules to get to know.

No reason to assume anything beyond chemistry or electricity; it might simply be the case that the mind, in order to function, needs hypercomputation.

The only thing ever in any thought experiment implemented by the room was a general purpose computer.

I’m not sure how that relates to what I said? Can you implement a program giving rise to consciousness (or understanding of Chinese) on the rule 110 CA, or can’t you? If not, then in what sense is consciousness/understanding computable?

No, I wasn’t aiming to imply that the parallelization solves anything, merely that you could, if you so choose, implement the program in such a way. And as I said, I find it hard to believe that twelve separate experiences somehow ‘unify’ into a single awareness.

Understanding in the sense of producing valid Chinese utterances in response to Chinese prompts—yes. Understanding in the sense that there exists some entity for which it is a certain way to appreciate the content of the sentences—to know what the sentence is about—I don’t think so.

No, the argument needs merely conceivability, not physical possibility; whether it’s absurd does not depend on whether it’s ever built. And as I said, the table can compute pi to any finite accuracy; obviously, you can’t just ‘let it run forever’, but neither can you with any ordinary computer.
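
A small sketch of the pi point, with an invented, hand-written table of pre-rounded values: up to its finite reach the table matches a genuine computation (here the Nilakantha series, chosen merely as one example), and only an unbounded run of digits would outstrip it.

```python
# A finite, precomputed table of pi rounded to k decimal places (values written
# down by hand for this sketch); it "computes" pi only up to its finite reach.
PI_TABLE = {1: "3.1", 2: "3.14", 3: "3.142", 4: "3.1416", 5: "3.14159"}

def pi_by_lookup(digits):
    return PI_TABLE[digits]             # fails for any precision beyond the table

def pi_by_series(terms=100_000):
    """Nilakantha series: pi = 3 + 4/(2*3*4) - 4/(4*5*6) + 4/(6*7*8) - ..."""
    total, sign = 3.0, 1
    for n in range(1, terms + 1):
        total += sign * 4 / ((2 * n) * (2 * n + 1) * (2 * n + 2))
        sign = -sign
    return total

print(pi_by_lookup(5), round(pi_by_series(), 5))   # both print 3.14159
```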

Communication of semantic information does not imply understanding. That an apple is red, round, and good to eat is semantic, and certainly can be communicated - and has been - in contexts where we would not say there is real understanding, for instance in story “understanding” programs which can answer questions based on a story.

But its original state can effectively be a program, and rewrite itself, so I’d say this can be adaptive. However, if it does not have any input, it isn’t really adaptive, just something that traverses a very large number of global states.

I wonder if humans could understand language with only symbolic clues. How would you come to understand language if locked in a room with only a bunch of books with no illustrations? If you had some semantic context (through illustrations) it would be relatively easy. We know what an apple is through having seen one, or by having a base vocabulary which allows us to describe one. But how would you understand “red” and “round” using symbols only? I already said that semantic background is necessary for understanding; this is yet another example.

If you’ve seen someone learn a language, you can tell when they change from translation, which is akin to manipulating symbols only, to thinking in the language, which is understanding it.

Far from clear that these are possible, and further still that we have this capability.

If you say so. Then, the reason the room does not really understand Chinese is that a component of the room does not understand Chinese. The room itself contains the semantic knowledge necessary to even pretend it understands Chinese. (I’m assuming you are not saying it is a general purpose computer constrained to map input symbols to output symbols.)

We have two cases. It is a general purpose computer, with the full capabilities of a computer as generally understood. We postulate that the computer seems to understand Chinese, and passes a Turing test for Chinese. Despite this it is claimed that the computer does not really understand Chinese, because a component (the person, taking the role of the CPU) does not understand it. But this does not differ at all from the case where, instead of a person, we have a processing unit which can perform the person’s limited function. So, if a system appears to understand, does it not truly understand if its components do not understand?

The second case is something less than a general purpose computer. It cannot modify its stored program, but it can move through a very large number of states. Now, say some of its inputs are instructions on how to build and simulate a Turing Machine or a rule 110 CA. In this case it might be able to “understand” language, but the premise that it is not a general purpose computer is violated. Thus, this weaker-than-gp-computer case cannot pass the Turing Test, cannot be mistaken for a system which understands Chinese, and we see that the premise that it can leads to a contradiction.

Now the first case does not prove that the understanding in the premise is truly possible, only that it does not lead to a contradiction.

But merely producing the words ‘red’, ‘round’ and ‘good to eat’ does not entail having a concept of an apple; it does not entail knowing the meaning of the word apple. Semantics is not just the connections between different bits of syntactic information, but what ‘apple’ refers to.

In what sense does the initial state rewrite itself? Certainly, it’s changed during the evolution of the CA, into different successive states, but as soon as you know the initial state and the rule, you know the complete CA evolution.

This is generally known as the ‘symbol grounding problem’, and what you’re saying later is related to the notion of ‘semantic externalism’: in order to truly know the meaning of the word ‘apple’, we need to have interacted, in some form, with an actual apple. There’s a reply to the Chinese room working along these lines, arguing that the Chinese room could not develop true understanding, but that a robot, acting in the world, could, in the same way we do.

The obvious problem is that whatever senses we use to probe the world, we only get information through them—thus, what the robot receives from the world is just a stream of data, which is again just so many more symbols; so this is not, after all, different from the Chinese room itself.

Of course, but in learning a language, one typically starts from a language one already knows, or has a way to ground the symbols through direct acquaintance.

Oh, I think it’s highly unlikely that it’s the case, but that was just to underscore that it’s at least compatible with physical explanation that Searle’s argument could be right, establishing that there’s no strong AI (for instance, there are explicit solutions in general relativity, such as the Malament-Hogarth spacetime, in which hypercomputation is possible).

Yes, but that’s just again the systems reply. And again, we can consider the system to be internalized.

I’m not sure I understand. If it can simulate a Turing machine, how is it not a general purpose computer?