What is Consciousness and can it be preserved?

What does it mean to understand Chinese? The man, to an external observer, does. He can respond to appropriate Chinese input with appropriate Chinese output. Now it is claimed that internally he does not understand Chinese. But that just shrinks the man from the earlier version of the argument down to some sort of ability within the man, and shrinks the system into the man’s brain. There is no difference in concept. It is just as if you implemented your computer version of the Chinese-understanding program in hardware or firmware, so there is no “program” to be found. Really the same situation.
Now, we abandoned the idea that you can get semantics purely from syntax a long time ago, and there are all sorts of syntactically correct but semantically nonsensical examples. So if that is what he was trying to demonstrate, no one will argue. It probably was an issue back then. I’ve written lots of parsers in my life, all for languages trivial compared to these examples, but even there the syntax/semantics distinction is clear in the code and even in the parser generator.
I’m not sure I get the point of the chess example. When Turing wrote, someone using a program to play chess, implemented by whatever means, might have been mistaken for someone who knows how to play. I think few people make that mistake today, since most of us have used programs to extend our capabilities. I’m terrible at integration, and when I use Mathematica to solve an integral for me I do not suddenly think I have learned how to integrate. Today we distinguish between knowing how to do something and knowing how to use a program which does it.

What do they mean by “physical system?” To get a patent on what is essentially an algorithm, you describe its implementation on a computer as a physical system, since you can patent hardware that implements something. But there is a big difference between a purely physical system that has the capability of running any program and the specific instance of it containing instructions. Things have gotten even more complicated, since hardware design languages look like software, and FPGAs can be programmed in hardware terms to implement a function. I’ll have to find time to read these things, but I have this sneaking suspicion that both sides might misunderstand computation at a fundamental level.
There is really no difference between hardware and software. Any program can theoretically be executed by hand with only paper and pencil. Similarly, any program can be implemented in hardware. Even the implementation of an instruction set can be all hardware, a combination of simple hardware and microcode, or all software, in a simulator. I’m interested in whether they grasp this.
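To make that concrete, here is a minimal sketch (Python, with made-up opcodes, nothing like a real ISA) of the “all software” end of that spectrum: a toy instruction set that exists only as an interpreter. The same set of instructions could just as well be wired into gates or implemented in microcode.

```python
# Minimal sketch of an "all software" instruction-set implementation:
# a toy CPU whose entire instruction set is interpreted in Python.
# The opcodes (LOAD, ADD, JNZ, HALT) are invented for illustration;
# a real ISA or microcoded machine differs in detail, not in kind.

def run(program, registers=None):
    regs = dict(registers or {})
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":          # LOAD reg, constant
            regs[args[0]] = args[1]
        elif op == "ADD":         # ADD dst, src  (dst += src)
            regs[args[0]] += regs[args[1]]
        elif op == "JNZ":         # JNZ reg, target  (jump if reg != 0)
            if regs[args[0]] != 0:
                pc = args[1]
                continue
        elif op == "HALT":
            break
        pc += 1
    return regs

# 3 + 4, computed by the "software CPU"
print(run([("LOAD", "a", 3), ("LOAD", "b", 4), ("ADD", "a", "b"), ("HALT",)]))
# {'a': 7, 'b': 4}
```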

Not at all - complexity in itself does not produce understanding. The understanding comes from someone with understanding who lends it to the program, or from someone who understands understanding at a meta-level and can program the computer to learn Chinese in a way similar to how humans learn Chinese. These programs may be complex, but that is just a side effect of the problem. Complexity itself does not create understanding.

It is not clear what “you” means. Would behavioral changes coming from outside of the brain result in another “you” just like the brain in the jar would be another “you?” Any simulation of a brain would have to include the same type of inputs the normal brain gets, and not just the obvious audio and video ones.

Cite that isn’t another YouTube video of somebody talking?

That is how we humans communicate; what is wrong with talking? People pay thousands of dollars to listen to lectures in college. I think it is the subject you can’t handle.

Understanding Chinese just means grasping what a sentence in Chinese refers to in the real world, or what its equivalent is in a language that is already understood. It’s the analogue of ‘knowing chess’ in the ‘paper machine’ example.

But that’s the point of the whole exercise: the man implementing the paper machine, or the man in the Chinese room, or the Chinese room as a whole system, does not know how to play, or does not know Chinese. If you say it’s a mistake to think of implementing the paper machine program as knowing how to play, then it seems to me you should also hold that it’s a mistake to think of implementing the Chinese room process as understanding Chinese—the arguments are isomorphic in that regard.

The instructions—realized, for example, as voltage patterns—are physical themselves, and thus part of the physical system. There’s no difference made between hardware and software; typically, philosophers might consider some abstract model of computation, like a finite-state automaton, and consider as an ‘implementation’ a mapping of the states of the FSA to some physical system. The question is, when is such a mapping possible? The naive answer obviously is ‘always’ (at least if the physical system has a state space of high enough cardinality), and hence, any physical system can be seen as implementing any computation. But it’s a little more complicated than that.
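If it helps, here is a toy sketch of what such an ‘implementation’ mapping amounts to (both the automaton and the ‘physical’ states are invented for illustration): an abstract two-state parity machine, plus a mapping from physical readings onto its abstract states. The contentious part is what, beyond the mere existence of such a mapping, makes it a genuine implementation.

```python
# An abstract FSA (parity of 1s seen so far), and an "implementation" as a
# mapping from the states of a made-up physical system onto the FSA's states.

FSA = {("even", "0"): "even", ("even", "1"): "odd",
       ("odd", "0"): "odd",   ("odd", "1"): "even"}

# Toy "physical" states: pretend these are two distinguishable voltage levels.
physical_to_abstract = {0.0: "even", 5.0: "odd"}

def implements(trace, inputs, start="even"):
    """Check that the physical trace, read through the mapping, tracks the FSA."""
    state = start
    for volts, symbol in zip(trace, inputs):
        if physical_to_abstract[volts] != state:
            return False
        state = FSA[(state, symbol)]
    return True

print(implements([0.0, 5.0, 0.0, 0.0], "1101"))   # True: even, odd, even, even
```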

But if that works, you can just take the resulting program, and port it to another machine, thus skipping the learning. But then, you can take that program, and implement it on a Chinese room. And then, you’re getting semantics from syntax: all the operator will ever do, no matter how you frame it, is to carry out syntactic manipulations on strings of symbols; yet nevertheless, he can come somehow to understand what those symbols mean.

Give me your best cite for your claim, then.

I think it would be self-evident.

How many times do I have to tell you-“Self Evident” does NOT mean “Evident only to one’s self”.

Well, technically, it’d be suicide. :wink:

Or, you go to sleep. At least in the elevator, your consciousness is maintained through the transitions. But once consciousness stops, the continuity is disrupted, and [or so I say] only memory bridges the gap.

I’m convinced that it’s true even ideally, philosophically … but it depends on what we define as “me”. Which is the question here, after all.

Precisely what I was thinking (except I’d say “implements” rather than “simulates”). Searle limits himself to a single locus of consciousness in the man. IMHO, that unspoken limitation pinpoints what Searle misses in his own argument.

Right. My prediction is that eventually, we’ll simply do it, and people will have to get used to treating mechanical beings as conscious, and eventually only zealots and philosophers will argue it.

Of course, when we do finally do it, it’ll open some serious ethical worm cans! “I, Robot” scratches the surface.

Please don’t feed the trolls. Lekatt is free to make arguments. If he or she just sticks to baseless claims, well, we can ignore them pretty easily.

The bottom line here is that at any level where something is understood, you can find a level of structure below it where nothing is. A dead brain - without working electrical impulses - understands nothing. A neuron understands nothing. Even some sections of a working brain don’t understand. A baby doesn’t understand Chinese - not even a Chinese one. He’s saying that it doesn’t matter whether the room understands Chinese, because a component of the room does not.
In the original model, the person could easily be replaced by a dumb robot which recognizes a card and pulls out the appropriate response card. That works just as well, but then there isn’t the shock value of the human not understanding Chinese.

FSAs are much too simplistic to perform arbitrary computation - they cannot even simulate a Turing machine. No tape. And while a program is a physical representation, a process is a series of states over time, not the static representation that a program is. If the program is implemented in hardware, you still don’t have much until you have a hardware process which changes state over time, just like a traditional program does.
I’m not sure I buy that you can implement anything in a state machine of bounded cardinality. If you could, I think you would be able to solve the halting problem - which is impossible. But I’d have to think about this more.
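To make the “no tape” point concrete (a toy example of my own, not from any of the papers): recognizing strings of the form a^n b^n needs unbounded counting, which no fixed set of states can do, while a single unbounded counter - a degenerate tape - handles it.

```python
# Why the tape matters: recognizing a^n b^n (n >= 1) needs unbounded memory.
# A fixed FSA has only finitely many states, so for large enough n it must
# confuse two different counts; one unbounded counter is enough to fix that.

def anbn(s):
    count = 0
    i = 0
    while i < len(s) and s[i] == "a":   # count the a's
        count += 1
        i += 1
    while i < len(s) and s[i] == "b":   # cancel them against the b's
        count -= 1
        i += 1
    return i == len(s) and count == 0 and len(s) > 0

print(anbn("aaabbb"), anbn("aabbb"))    # True False
```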

That you can port your program to another computer just shows that the entire system, software and hardware, is what is important. If you implemented an equivalent computer within the Chinese room and ported your program there, same thing. But you can’t port this program to the original Chinese room.
The only way for the original room to pass the test is to have it be a brute-force simulation of a person who understands Chinese, providing the proper response for every possible state. As you noted, that is probably impossible, and in any case not the way anything interesting to examine works. It is like having a computer which doesn’t execute any programs but rather has a table of all possible program/input combinations and outputs the proper response. Such a computer, which is of course impossible, in no way invalidates my claim that syntax does not imply semantics, since the semantics are encoded in the syntax.
Anyhow, lex and yacc would barf at the amount of lookahead you’d need.

Perhaps you’re discussing a different version than the one described in the Wikipedia article, which says

So, the state/statelessness and FSA discussion is beside the point of the Chinese Room argument, as presented by Wikipedia.

Well, the problem is that there doesn’t seem to be any level on which there is understanding. You seem to be on board with this in the chess case: merely implementing the program does not constitute knowledge of chess playing. What’s different in the case of understanding Chinese?

True, but then, we don’t have any Turing machines in the real world, either, since we only ever have access to finite resources—thus, the computations we can implement are exactly those that can be carried out by FSAs.

The halting problem for FSAs is always solvable, since you have only finitely many possible states to run through; either the machine halts at some point, or it repeats.
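That’s just cycle detection. A quick sketch, with made-up machines standing in for the FSA:

```python
# Halting for a finite-state machine on a fixed input is decidable:
# track the configurations seen; either the machine halts, or a
# configuration repeats, in which case it loops forever.

def halts(step, start):
    """step(state) returns the next state, or None to halt."""
    seen = set()
    state = start
    while state is not None:
        if state in seen:
            return False          # repeated configuration -> infinite loop
        seen.add(state)
        state = step(state)
    return True                   # reached a halting configuration

# Toy examples (made-up machines):
print(halts(lambda s: None if s >= 3 else s + 1, 0))  # True: 0,1,2,3 -> halt
print(halts(lambda s: (s + 1) % 4, 0))                # False: cycles 0,1,2,3,0,...
```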

I think I don’t get what you’re trying to say here. Why would a simple input/output mapping be the only way for the original room to carry on a conversation? In fact, as I said, I think such a thing could be trivially circumvented: just ask it ‘what was the last thing I said?’, or something similar. The answer to this cannot be stored in a lookup table, it seems to me.
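To spell out why (toy code, my own framing): a stateless lookup table keys only on the current input, so two conversations that end with the same question get the same answer; a history-dependent question forces you either to key on entire conversation histories - which blows up combinatorially - or to keep state.

```python
# Stateless lookup table: the same question always gets the same answer,
# regardless of what came before.
lookup = {
    "how are you?": "fine, thanks",
    "what was the last thing I said?": "???",   # no single right entry exists
}

# To answer history-dependent questions, the table would need one entry per
# entire conversation history -- or the room has to keep state, like this:
def stateful_reply(history, question):
    if question == "what was the last thing I said?":
        return history[-1] if history else "you haven't said anything yet"
    return lookup.get(question, "I don't follow")

history = []
for q in ["how are you?", "what was the last thing I said?"]:
    print(stateful_reply(history, q))
    history.append(q)
# fine, thanks
# how are you?
```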

And still, any program, any computation whatsoever can be implemented using the Chinese room, so if there is a computation that gives rise to understanding Chinese, then that, too, can be implemented. So, if we imagine that this implementation is taking place, what exactly now is it that understands Chinese? I think you agree with me in that it can’t be the man himself—he just sees strings of symbols, which he manipulates according to formal rules. So is it, then, the system itself? As I said, Searle anticipates this response and invites you to imagine the man himself incorporating the system, effectively carrying out the computations internally. In the chess case, you seem to agree with me that the analogous strategy does not give rise to ‘knowing how to play chess’ within the man, but not in the Chinese room case (and as I said, I have doubts about this myself).

But consider for a moment just how bizarre the ‘systems’-reply ultimately is: you’re ascribing understanding, a mind, to a collection of things—a room, a person, a set of cards, a rule book, perhaps some scrap paper, and so on. What would it be like to be such an entity? Is there some form of self-consciousness, a knowledge to the effect of ‘I am a room with a man with a book with some cards, etc.’? If not, how is the room to understand, for instance, references to it by its interrogators? And what other phenomenology is associated with it?

I think these are not easy questions, and the question of how manipulating symbols is supposed to give rise to understanding is the hardest of all. You seem to think differently—perhaps you see something I don’t. But IMHO, your replies so far miss the thrust of the argument: you keep coming back to issues of implementation, of hardware and software, of lookup tables and so on. But all of these things are completely irrelevant.

The question is simply: is there a program that gives rise to understanding of Chinese? If so, when implemented on a Chinese room, what exactly is it that understands? Where is the understanding? Where is perception in Leibniz’ Mill? And couldn’t you imagine an exact physical duplicate of the Chinese room, in which there is no understanding—in which every symbol-string is produced simply through a causal chain from the input string? But if that’s possible, what is it that gives rise to understanding in one Chinese room, but not in its exact physical duplicate?

The versions I read about were stateless, but we all agree that these are impractical, so I’m not surprised he added this. But if the room emulates a computer, then you can only conclude that the room doesn’t understand if you start with the assumption that no computer can understand. If any computer and program can, you can emulate it with the room. The guy is just the dumb CPU, and irrelevant.

Walks like a duck, 's a duck. P-zombies are an utter absurdity.

Implementing as in running - no. Implementing as in writing - yes. You cannot write a chess program without knowing how to play chess. The chess program may do better at it than you, but you need to encode the rules.
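Even something as small as the knight’s move has to be written down explicitly by someone who already knows it. A sketch (my own, not any real engine’s code):

```python
# Encoding one rule of chess explicitly: a knight moves in an L-shape.
# Whoever writes this already has to know the rule.
KNIGHT_OFFSETS = [(1, 2), (2, 1), (2, -1), (1, -2),
                  (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_moves(file, rank):
    """Squares a knight on (file, rank) may move to (0..7 each, empty board)."""
    return [(file + df, rank + dr) for df, dr in KNIGHT_OFFSETS
            if 0 <= file + df < 8 and 0 <= rank + dr < 8]

print(len(knight_moves(0, 0)), len(knight_moves(3, 3)))   # 2 8
```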

I will happily concede that FSAs can’t understand Chinese. I might be wrong, but no one is doing AI with FSAs.

It could be if all possible questions were stored, but by original room I meant one with only a lookup table. No argument with this not being able to work.

Exactly. The man is the CPU, which does not understand.

A few problems with this. To continue with the CPU analogy, there is the concept of microcode, which consists of simpler instructions operating on bare machine resources that implement the normal instruction set of a machine. I worked on this for my PhD. While the CPU in the normal case is dumb, you could microprogram the CPU to “understand” Chinese if you can program the computer to understand Chinese.
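For anyone who hasn’t run into microcode, a crude sketch (the micro-ops and names are invented, nothing like a real microarchitecture): the visible ADD instruction is itself a little program of simpler micro-operations.

```python
# Crude sketch of microcode: the machine-level instruction ADD is itself
# implemented as a sequence of simpler micro-operations on internal latches.
# Micro-ops and names are invented for illustration.

def exec_micro(op, args, regs, latches):
    if op == "READ":            # copy a register onto an internal latch
        latches[args[1]] = regs[args[0]]
    elif op == "ALU_ADD":       # add the two input latches into the result latch
        latches["R"] = latches["A"] + latches["B"]
    elif op == "WRITE":         # copy the result latch back to a register
        regs[args[0]] = latches["R"]

# The microprogram that *implements* the visible "ADD dst, src" instruction.
MICROCODE = {
    "ADD": [("READ", ("dst", "A")), ("READ", ("src", "B")),
            ("ALU_ADD", ()), ("WRITE", ("dst",))],
}

def run_instruction(instr, dst, src, regs):
    latches = {}
    binding = {"dst": dst, "src": src}
    for op, args in MICROCODE[instr]:
        exec_micro(op, tuple(binding.get(a, a) for a in args), regs, latches)

regs = {"r1": 3, "r2": 4}
run_instruction("ADD", "r1", "r2", regs)
print(regs)   # {'r1': 7, 'r2': 4}
```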
Applied to the person, what does “understanding” mean? Since we cannot see within our minds, we don’t know for sure that our actual understanding of Chinese isn’t implemented in this way at a level below the conscious. How would he propose preventing the person from abstracting the instructions into an actual understanding? Our subconscious understands lots of stuff - mine knows how to program quite well - so he could understand it without knowing he did.
Chess is even clearer. If the instructions are just a series of moves, he is no more than a robot and doesn’t understand it, paper or no paper. But when we learn chess at an early age, we are programmed with the legal moves. That is understanding chess - not being good at it, but understanding it.
And it is odd to claim that programs which can beat grandmasters don’t understand chess. They understand it in a different way from humans, but they can describe why they made certain moves better than humans can.

But our mind is exactly a collection of things - neurons in our case - plus programming in the form of learning. No, the room will not be able to be self-conscious in the way you describe, but then it is not programmed that way; we evolved ours. My old border collie was not conscious - he failed the mirror test - but he could learn, he could anticipate, and he could even abstract. Our subconscious minds can solve all sorts of problems without visibility into how we solve them.

He is assuming that the room can be made to look like it understands. If it cannot, there is no problem. If it can, then he seems to think the implementation matters, because he is concerned about the person. His main problem seems to be in accepting that you can have a room that understands with a person who does not as a component. Or a person who understands with some sort of core which does not.

There is no such program yet - but my understanding of the purpose of this argument is that he is claiming there can be no such program. Refuting his argument does not prove there can be, of course - you need an existence proof for that. Since truly understanding Chinese involves a lot more than symbol manipulation, it is not an easy task and not one we know how to do yet. He kind of trivializes it by assuming that the paper and pencil method can understand Chinese.
The big failure of AI has been to mostly work on things like “understanding” chess or “understanding” Chinese without working on what consciousness involves. I’m not aware of work on programs which self-examine, but this is what distinguishes our consciousness from my dog’s mind. To really work, the Chinese room would have to answer questions about how it was feeling. If it could do that, in a consistent and believable way, would you consider it conscious - man or no?

Forgot all of this

What do you mean by causal chain? A program is a causal chain. As far as we know the human understanding of Chinese is a causal chain. Or are you back to the cards?

Real tricky question of interpretation. Knowing the rules doesn’t necessarily mean knowing “how to play the game.”

(Even more subtle, knowing how to play the game doesn’t necessarily mean knowing the rules! You could have an “educable” program, that makes moves at random. When it makes an illegal move, it is punished. When it makes a legal move, it is rewarded. In time, it will make fewer and fewer illegal moves…but it doesn’t necessarily know the rules. It will have “modeled” the rules from observation. But what it has deduced might not be the actual rules. For instance, it might imagine that knights must move “two spaces orthogonally and then one space in a perpendicular direction.” Alternatively, it might imagine that “knights move one space orthogonally and one space diagonally.” These are accurate descriptions of the real rules…but these are not the real rules!)
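A toy version of that educable program (entirely made up, and for a trivially small “game” rather than chess): it never sees the rulebook, only reward and punishment, and ends up mostly making legal moves even though the rules are written nowhere inside it.

```python
import random

# Toy "educable" player: it never sees the rules, only reward/punishment.
# The "game" is trivial: from each position, only some moves are legal.
LEGAL = {"start": {"a", "b"}, "mid": {"c"}}           # hidden from the learner

weights = {(pos, m): 1.0 for pos in ("start", "mid") for m in "abcd"}

def choose(pos):
    moves = list("abcd")
    return random.choices(moves, weights=[weights[(pos, m)] for m in moves])[0]

random.seed(0)
for _ in range(2000):                                 # train by trial and error
    pos = random.choice(["start", "mid"])
    move = choose(pos)
    if move in LEGAL[pos]:
        weights[(pos, move)] *= 1.05                  # reward legal moves
    else:
        weights[(pos, move)] *= 0.8                   # punish illegal ones

# After training it almost always plays legally, yet the rules are nowhere
# inside it -- only tendencies "modeled" from observation.
print(sorted(weights.items(), key=lambda kv: -kv[1])[:3])
```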

The wiki article gives an improvement of the “system” response: the “virtual” consciousness. A computer system creates lots of virtual things, like folders, documents, even virtual worlds you can (virtually) walk around in. The consciousness created by a program is a virtual one.

Regarding the difference between a virtual thing and a real one, sometimes it matters (weather simulations) and sometimes it doesn’t: we don’t care that a desktop calculator is virtual, as long as it gives the correct results for calculations.

My position would be that all consciousnesses are virtual. Your neurons don’t know they’re you, they’re just firing (or not) based on inputs. It’s the information processing that matters, not the substrate. Of course, we can’t prove that, at least not yet.

As Voyager says, our minds are composites already. Not only do neurons not know they’re part of us, we don’t have any experience of neurons. The room would simply have interesting (or boring) conversations as a text stream. Or, in the model where computation is done in the room but inputs come from and are fed to a remote robot, the mind would tend to think that it’s wherever the robot happens to be.

One of the classic philosophical arguments about whether consciousness is a physical thing regards its location. As far as we know, there’s no specific location for consciousness (other than it seems to co-reside with the brain). The argument is, it must not be physical if it doesn’t have a precise location.

That argument flies in the face of physics, of course. I tried explaining that to the philosophy class full of people who weren’t science/engineering types, and they didn’t buy it. Silly lit-types! Heck, we use imaginary numbers to correspond to physical phenomena. They didn’t get it. I admit I’m not conveying it here, but I suspect y’all know what I’m talking about. In any case, if something is virtual, the location argument falls apart. It is what it is and interacts with whatever it interacts with, wherever it does that, and it depends on (possibly distributed) physical hardware. But it’s not (just) the hardware.

Right; I was thinking the same thing. Another example: you could program a generic board-game learning machine that learns to play chess, without knowing that chess even exists.

The location of the “background knowledge” is an interesting issue for the Chinese Room argument. Searle insists that the knowledge resides with the experts who codified the knowledge. But … heck, they all died. Where is that knowledge now? Oh, it’s no longer “knowledge” … so then, what is it and how can it possibly work, so that the room can converse intelligently in Chinese?

The Chinese Room argument is an interesting one, but Searle’s replies to all the criticism show that he’s really just arguing his preconceptions, and believes in some kind of somewhat magic hardware that can produce consciousness, which can’t be implemented digitally (because “digital machines can only do syntax, and syntax is not semantics”).