Is Computer Self-Awareness Possible?

This is just not right. There’s nothing about a lookup table, nothing about the smallest unit of parsing being a “sequence,” nothing like any of that. All that’s specified is:

  1. The instructions tell the man how to manipulate certain symbols.
  2. The instructions make no mention of semantics.

That’s it. There’s no substantive model of computing going on here. He’s saying plug in whatever model and whatever program you like. Make the Chinese Room work exactly like that. And there will still be no understanding.

For the third time in this thread :wink: what you’re articulating is called the ‘Systems Reply’ and it’s answered by Searle in the very work under discussion. If you want to say “sure the man doesn’t understand, but the room itself does (or at least might)” then Searle’s response is–make the man the whole thing. No room–just have the man himself internalize the entire ruleset. Now he’s the whole machine in question, and he still doesn’t understand Chinese.

Absolutely not. His specific claim is that the Chinese Room is something that does pass the Turing Test. “If,” he’s saying, “you think the right program can make a computer pass the Turing Test, and if you think passing the Turing Test is sufficient for understanding, then I’ve got an argument for you. Have a guy in the Chinese Room implement the very same program. Still passes the Turing Test. But there’s no understanding there. Therefore there is no set of computations that is in and of itself sufficient for understanding.”

This is functionally equivalent to a lookup table. It is true that he doesn’t explicitly mention sequences, but responses to specific symbols without context aren’t going to come anywhere close to seeming to understand Chinese. Thus, I’m assuming that the instructions for a certain symbol say if the previous n symbols were this, respond in one way, if that, respond in another way.

Nothing in my implementation is about semantics. You can implement a lookup table in a grammar quite easily, so it’s pure syntax.
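
To make the kind of thing I have in mind concrete, here is a rough sketch (my own toy code, not a claim about any particular model): a context-sensitive lookup table whose response to a symbol depends on the previous n symbols, while the rules themselves never say what any symbol means.

```python
# Purely illustrative: a context-sensitive lookup table keyed on the last n
# symbols plus the current one. The symbols are opaque tokens; the rules say
# nothing at all about what any of them mean.
RULES = {
    ("S1", "S2", "S7"): "S9",  # previous two symbols S1, S2 and new symbol S7: respond S9
    ("S3", "S2", "S7"): "S4",  # same new symbol, different context, different response
}

def respond(history, symbol, n=2):
    """Look up a response using the last n symbols of context plus the new symbol."""
    key = tuple(history[-n:]) + (symbol,)
    return RULES.get(key, "S0")  # fall back to some default squiggle when no rule matches
```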

But this is all about whether computers can understand, and AI. So, this is definitely a model of the way he seems to think computers operate. Perhaps it is also trying to argue for a soul, that consciousness can not emerge from smaller non-conscious entities. But the model doesn’t work very well for the brain either.

If the goal was to demonstrate that consciousness can not arise from a purely syntactic processor, he wins. That is kind of trivial, though.

And probably for the third time, that wasn’t the point. The point was that the room - if it responded in a way that made it seem like it understood Chinese - would be identical to a room with a person who actually did understand Chinese giving the responses. Cards or no cards make no difference.
If the room did respond in a way indistinguishable from the case where someone who understood Chinese was in it, then you’d be asserting it did not understand Chinese with no evidence or support. It won’t, of course, but the constraints cripple the room, not the problem of understanding.
Remember, for the room to act as if it understood Chinese, it would have to be able to respond appropriately to questions about its internal “thought” patterns.
And, for those who don’t accept free will, how is the case where our responses are determined solely by external inputs any different from them being determined by the instructions?

Like I said, there is no chance this thing would pass a Turing Test, in part because it is not powerful enough to even implement a Turing Machine, and thus is not equivalent to any computer which can run any Turing-computable algorithm or heuristic. It is kind of like saying that because my simple pocket calculator can’t do integration, my computer can’t either. His example isn’t a computer, and thus he can’t say it demonstrates anything about what computers can or can’t do.

(bolding mine)

That’s not only not necessarily true, it’s probably not actually true. The human brain seems to process sentences one word at a time. As it goes through the sequence, it seems to come up with various “hypotheses” about what’s being said, and narrows down possibilities as it goes. (In fact this seems to happen at the even smaller scale of phonemes adding up to words).

In any case, if what I described is functionally equivalent to a lookup table, then everything any computer can ever do is functionally equivalent to a lookup table (and I agree that it is) and so it’s odd to say there’s something wrong with the argument requiring something functionally equivalent to a lookup table. The argument is about what computers can do, after all.

That was exactly how he and his contemporaries understood his goal. It’s utterly mysterious to me how all the big-shots in AI and Philosophy of Mind could have taken so utterly seriously an argument for a conclusion as “trivial” as you say.

Perhaps what’s going on here is that Searle actually ended up winning so hard that people working on AI such as you can’t imagine anyone ever having thought otherwise.

Or perhaps what’s happening is that since the time of Searle’s paper, Searle has lost so hard that the very meanings of “syntax” and “semantics” have shifted, such that a lot of things that would have been called pure syntax before are now being called semantics.

There’s very good evidence. Have the room internalized within the man. Now ask the man, “What you just wrote down in Chinese–what did it mean in English?” His response will be, “No idea. Was hoping you could tell me.”

The entity carrying out the perfect Chinese Room program does not understand Chinese.

On the assumption that computation consists purely in carrying out a program, it follows that computation is not sufficient for understanding.

Nothing about the setup of the Chinese Room prevents this. This can all be encoded within the ruleset contained inside it.

That’s a good question, one definitely invited by Searle’s line of inquiry–and one he himself expresses some interest in finding an answer to. On his view, there’s got to be something more, and he thinks the answer isn’t going to be found by studying computers but rather, probably, by studying biology.

Why do you think it can’t implement a Turing machine? What’s missing that would be required for that?

Voyager, I haven’t followed up on any of these links, but a Google search seems to show that there are a lot of people out there taking seriously the idea that a computer is a purely syntactical processor.

Am I misunderstanding you when I think that you would disagree?

The reason I think you would disagree is–you seem to be saying all of the following:

  1. A computer can be made to understand by giving it the right program.
  2. Understanding necessarily involves semantic processing, and not only syntactic processing.
  3. You can’t get semantic processing out of purely syntactic processing.

From the above, it seems to follow that computers are not necessarily purely syntactic processors.

So then is that what you think? If so, why do you think so many other people are taking seriously the idea that computers are purely syntactical processors?

I was thinking the same - but it’s tough not to split up posts when responding

No, I could not learn a language with no prior language if I were only supplied a text. On that we agree. But that is not the situation in the mind.

But of course you know what a ‘tree’ is, and you know what a tree is and you know what a “tree” is. They are all of the sensory information you stored in your brain when you perceived them. That’s what “knowing” is.

When someone else refers to the word “tree”, you have all of that sensory information available to you - it is recalled - you understand the word spoken because you found a mapping and ultimately the associated stored sensory data.

I haven’t read the paper yet, but based on my view of what is happening in the brain, I just don’t see how it could be a problem.

Maybe the problem is if people want to assign a much more extravagant set of attributes to “understanding” or “meaning” - for me it just seems like those words represent proper functioning of the brain equipment - I have accessed the proper storage, or the set of ideas you just transmitted go together in a consistent manner - no internal paradoxes created (kind of, you get the idea).

I’m not sure you understand the example I gave you. I never mentally drew a picture of my tree.

While standing in front of my house, at some distance, and viewing my house and the tree behind it, I super-imposed a 2nd image of my house (approx) over the image of the tree in my visual field.

Do you think that any derivable info is knowledge we already had, regardless of the number of steps to get there or the brainpower required to see an insight that combines things in a way we have no previous experience with?

I agree we aren’t making up info out of thin air - but if we store some info that lets us accurately derive other info, I would consider much of the derived “new”.

Well, we to some extent process one word at a time because we hear one word at a time, but we definitely do lookahead as you say. Lots of jokes work by breaking the hypothesis most people have built about how the sentence is going to go. However, this is clearly a semantic, not syntactic, exercise. There is some syntactic lookahead, in the sense that we expect a part of speech. If I say “I threw a running” it sounds odd, since you expect a noun or adjective after the article. But without the semantics of “threw” there is no way of choosing from a million different nouns. Ball, fit, game all work, and might be guessed depending on the semantics of previous sentences. Building is not going to be guessed, though it is also a noun, unless you were role playing Superman or Godzilla.

Letters into words, which my phone does, is a bit more syntactic since the search space is a lot less, but when I do an acrostic I can fill in the rest of the word a lot faster if I have guessed the semantics of the reading versus the very beginning of the puzzle. There are some cases where the search space is very small, like th_.

No, the sentence completion you mention can’t be done solely with a lookup table. First, the table would be insanely big. Second, there is no way of choosing among the thousands and tens of thousands of completions. We clearly have heuristics which do a much better job than this. Computers with some sort of semantic understanding of speech can do a better job also.
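
Just for a rough sense of scale (these numbers are purely illustrative, not from anyone’s paper): even a modest vocabulary and a short context already blow the table up astronomically.

```python
# Back-of-the-envelope, illustrative numbers only: a lookup table keyed on just
# ten words of context over a 50,000-word vocabulary.
vocabulary_size = 50_000
context_words = 10
entries = vocabulary_size ** context_words
print(f"{entries:.1e} possible keys")  # roughly 9.8e+46
```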

Consider the sentence “I’ll be arriving at one.” You could add “o’clock” on the end, but that is redundant. In most cases adding “pm” at the end is also redundant, because the semantic context in which we hear the sentence says people don’t drop in then. However if the previous sentence is “The trip is going to take 15 hours” then the default changes drastically. How would you process that with syntax only?

As for computers, any computer which attempts to solve this kind of problem is going to build a semantic model of the world and the sentences. They’ve been doing that for years. Even when I took AI there were programs which could understand math word problems. The Eliza program, on the other hand, is an example of a pretty much purely syntactic approach, and it fools no one. Except perhaps Weizenbaum’s secretary and Searle.

Back then AI people felt pretty insecure, since the average person had never used a computer and the results of AI research weren’t as prevalent. Pat Winston went on and on about Dreyfus losing to a computer in chess. If you remember, he doubted that computers could ever beat people. Today I suspect the free chess program you get with Windows can beat most of its users.

I took a course on computer language theory around this time, and I assure you that the meanings have not changed. I can’t assure you that Searle had a clue about the difference.

Having the rules internalized within the man is identical to having the rules written down. It is also equivalent to a robot with visual scanning capabilities reading the input, matching it to a rule, and then producing an output - without understanding it. The question is whether any of these three things would seem identical - from the outside - to a man who understands Chinese. You are forcing a man into a function of a robot in the original model, so of course he is equivalent to a robot.

If its responses are identical to those of someone who does understand Chinese, I don’t know how you can assert that. The components don’t understand, but then our neurons don’t understand either.

We’ve got person A, who understands Chinese. We have him communicating with persons B and C: one of them is relaying the responses of another person who does understand Chinese, the other is relaying the responses of the room. If person A cannot distinguish the responses of B and C, despite best efforts, how can you claim that the room doesn’t understand Chinese as well as the person does, unless you are just asserting that rooms can’t really understand Chinese, no matter how it looks?

Basically you are saying that the room has to be hooked up to Borges’ library, storing every possible conversation. Sure, trivially you will not be able to distinguish a system which contains every possible conversation and responses to it from a person - but you clearly are getting into absurdity here. If AI researchers proposed this as the method of modeling understanding, they’d be laughed out of their universities. They don’t, of course.

I certainly agree that studying biology is the first step in understanding human thought processes. One big weakness of AI is that it has been implementing fairly simple models of understanding, or developing heuristics to do small subsets of the things that humans do. It has also been limited by computing power.

I already said - it can’t perform all the operations of a Turing machine, some of which involve writing and erasing a tape. It only comes with a pre-loaded tape.
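
For concreteness, here is a toy Turing-machine step loop (my own sketch, not anything from Searle’s paper) showing the write-and-move cycle I mean; the room as described never gets to overwrite a cell like this.

```python
# A toy Turing-machine step loop, just to make concrete what "writing and
# erasing a tape" involves. The machine below is a made-up example (it flips
# bits until it reaches a blank cell); it is not from Searle's paper.
def run_tm(tape, rules, state="start", blank="_", max_steps=1000):
    """rules maps (state, symbol) -> (new_state, symbol_to_write, move), with move in {-1, +1}."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]
        cells[head] = write  # the write/erase step in question
        head += move
    return "".join(cells[i] for i in sorted(cells))

flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", +1),
}
print(run_tm("0110", flip_bits))  # -> "1001_"
```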

Just give the man some effing tape! :p:p:p

You’re not saying that if we just gave the poor man a pencil and some tape he’d suddenly be able to understand Chinese given the right purely syntactical instructions for using that pencil and tape, are you?

If that’s not your point, then surely you’re missing Searle’s point.

Anyway, there’s this thing called the principle of charity–you want to ascribe to your opponent the best possible argument given what he actually wrote. If your argument against Searle is just “he didn’t give the guy a tape and something to write on it with,” then you’re just making a trivial point about the setup which can be so trivially patched up it’s hardly worth mentioning.

If you really think Searle didn’t intend the man to be able to use some tape and a pencil, then just give them to the man yourself and talk about the resulting scenario.

Then instead of saying it can’t be done with a lookup table, did you mean to say it can only be done with an extremely large lookup table?

How do you give a computer semantic understanding? Is it by means of a computer program? Can the man in the Chinese Room not follow exactly the same program without thereby understanding anything at all? If yes, then it follows that computation is not sufficient for understanding. If the computers are really understanding semantics, it’s not because of the program. It’s due to something else.

Or do computers get semantic understanding by some means other than a computer program?

Because you might have missed it, here is the text of an additional post I made right after the one you quoted:

Voyager, I haven’t followed up on any of these links, but a Google search seems to show that there are a lot of people out there taking seriously the idea that a computer is a purely syntactical processor.

Am I misunderstanding you when I think that you would disagree?

The reason I think you would disagree is–you seem to be saying all of the following:

  1. A computer can be made to understand by giving it the right program.
  2. Understanding necessarily involves semantic processing, and not only syntactic processing.
  3. You can’t get semantic processing out of purely syntactic processing.

From the above, it seems to follow that computers are not necessarily purely syntactic processors.

So then is that what you think? If so, why do you think so many other people are taking seriously the idea that computers are purely syntactical processors?

He’d have a fighting chance then, wouldn’t he, since he’d be able to learn the language and reflect his learning in the new writing - and, more importantly, writing about how he is learning the language, including abstractions of grammar and the like.
Now, I don’t know if a person could learn Chinese under these constraints. If a person can’t, a computer can’t either. You could argue that a person is using his intelligence to do it, and a computer could not. And I agree that an empty computer would not do very well - but we aren’t empty at birth, having been programmed by evolution with some basic language understanding, a la Chomsky. So, it is fair to give a computer the same break.
Of course I’m not saying the computer could learn the language, as this hasn’t been demonstrated yet. I’m just saying that the system has a fighting chance of learning the language.

As I said a while ago, you build a semantic net. For instance, you say that a table is defined as a flat surface with more than two legs. A table may be made of wood, metal or plastic. Using this network, if the computer is asked about a table made out of water, it can reject it. Like us, it needs to be able to modify the semantic net with experience - for instance it can be shown a table made of ice.
The semantics is not built into the program - it is built into an ever-changing data structure, which the program can follow using some rules. The program stays exactly the same as the data structure grows.
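
A minimal sketch of what I mean (toy code of my own, not a description of any actual system): the knowledge lives in a data structure the program queries and extends, and the program itself never changes.

```python
# A toy semantic net: the knowledge is plain data that the program can query
# and extend; the program stays the same as the net grows.
semantic_net = {
    "table": {
        "is_a": "flat surface with legs",
        "made_of": {"wood", "metal", "plastic"},
    },
}

def plausible(thing, material):
    """Does a thing made of this material fit what the net currently says?"""
    return material in semantic_net.get(thing, {}).get("made_of", set())

def learn_material(thing, material):
    """Extend the net from 'experience', e.g. after being shown a table made of ice."""
    semantic_net.setdefault(thing, {}).setdefault("made_of", set()).add(material)

print(plausible("table", "water"))  # False: rejected
learn_material("table", "ice")
print(plausible("table", "ice"))    # True: the data changed, the program didn't
```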

Is it understanding the semantics? With a rich enough network, computers have already been able to answer questions about children’s stories and solve word problems. I don’t think you get true understanding until you build in another level of analysis, where the program looks at itself, but I don’t think you can argue that anyone or anything can answer questions about the meaning of a story using syntax alone.

I read the beginning of Searle’s speech, and he seems to claim that semantics can be expressed by syntax based on something called proof theory. I’ll have to look it up. I got to the part where he gets very disturbed at the prospect of building a Turing machine out of pipes. I don’t understand why yet. He also seems to be hung up on 1s and 0s and assigning semantics to them. But we can express words as 1s and 0s, and the semantics we assign to them would not change if we could fluently read raw ASCII.

I can’t answer until I understand what he means by the mapping of syntax into semantics. However, remember Dissonance? Though he knew how to program, he had a very simplistic view of what computers can do. Most people do. If one thinks they can only add, subtract and multiply, and follow very simple instructions, I can understand why one would think it absurd to claim they can understand. But if you understand how program flow can be affected in quite complex ways by the computer’s experience, you might think differently. It is clear that the concept of a computer understanding is very threatening to some people. It being demonstrated might be as revolutionary as evolution. That might be a reason also.

The Chinese room doesn’t understand Chinese but it can hold conversations in Chinese.

Since when do humans understand Chinese?

Searle posits that human brains have understanding because they have causal properties that computers don’t. But you can’t directly test this fraudulent room for the property because…because…something. It’s just assumed there’s a way to do it but we don’t know how, so instead we have to settle for this thought experiment. To me, the room always demonstrated that humans aren’t conscious either.

I don’t think this is right. Not all universal machines require the ability to erase, for instance – cellular automata, like e.g. this one, are capable of universal computation without ever ‘going back’ on their tape. In any case, Searle pretty clearly intended for his room to be computationally universal, and explicitly left the precise rule manipulation schemes open – otherwise, we would not have been able to draw any general conclusion.
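
For reference, here is a minimal sketch of the sort of automaton I have in mind (I’m assuming something like Rule 110, the standard example of an elementary cellular automaton shown to be computationally universal): each new row is computed and written fresh from the previous one, and nothing already written is ever revisited or erased.

```python
# Elementary cellular automaton, Rule 110: each generation is written once from
# the one before it; cells are never erased or revisited.
RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    """Compute the next generation (cells outside the row are treated as 0)."""
    padded = [0] + cells + [0]
    return [RULE_110[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

row = [0] * 30 + [1]
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```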

As I said, I don’t think this follows. There’s no facility to communicate the understanding of Chinese to the English-speaking ‘part’ of the man (for that, you’d have to implement a program part that effectively translates from Chinese to English), but that doesn’t mean that there’s not some Chinese-speaking part that understands, in the way a native speaker does. It’s like if I executed in my head a symbol-manipulating program that corresponds to your thought processes – would that necessarily mean that I understood your thoughts? I think to think so would be a level confusion; rather, I would implement a simulation of you, having your thoughts, on my ‘lower level’ architecture, and would know about that no more than the Sheffer machine knows about AND or NOT.

Then you know what ‘quixopotl ai cuarantes y lonagas mochas’ means, it means ‘levieux alais-la ciliete ca-tue ilieges unais’. Which means ‘lebenich seian latage anich niese unzus’. This is exactly isomorphic. Your ears talk to you in the first language, your eyes in the second, and, I don’t know, your sense of touch in the third, add other languages as you see fit. So your sensory impression of something is the first sentence, and the second sentence, and the third sentence. Do you now understand what the sentence means?

So when someone else says the sentence ‘quixopotl ai cuarantes y lonagas mochas’, you have all the information of the other two sentences available to you – it is recalled – you understand the sentence. But of course, you don’t, do you?

My idea of understanding is, I think, a pretty conventional one: knowing what a word, an image, an idea means, what it refers to. The problem is, using the model you outlined, it seems impossible to ever get there, as every definition is ultimately circular and hence empty.

Then how was there an “image of the tree in my visual field”?

I always wanted to try out that reasoning on the RIAA. I mean, if I create a compression algorithm that can extract some piece of music from some piece of data, then the piece of music is ‘new information’ in that sense. So nobody should be able to sue me for possessing such data that can generate music in that way… right? :stuck_out_tongue:

When it comes to the tree, absolutely, because you have associated the word “tree” with previously stored sensory information.

When it comes to your made up languages, I don’t know because I don’t know how they relate to the “tree”, or to other words that were learned through sensory experience. If there is a mapping or association to things that are known, then those other languages could be known.

It sounds like you are arguing that humans are unable to do one of these things:

  1. Store sensory information in “chunks” that somewhat map to individual objects we have experienced
  2. Associate other sensory information with those “chunks” of data - spoken or written words
  3. Use the spoken or written words to communicate

I understand the foundation of your point - that the mappings between 3 arbitrary languages with no connection to anything already known can not be used to gain understanding of any of those languages.

What I don’t understand is what you think this has to do with human brains.

We have a mechanism for learning associations of sensory input, right?

We learn about our world this way, right?

We also have the ability to use this same mechanism to agree on and learn a set of standardized sensory stimulus (spoken and written words) for the purpose of communicating with each other, such that when person A communicates word N, person B will recall substantially similar sensory information and experiences.

So where exactly is the problem? Forget about the word “meaning” or “understanding” - where in this process are we either unable to learn or unable to communicate?

Either you think we can’t store sensory information, or you think that even though we can store sensory information and we can associate a word with it - that this is still empty and devoid of any value. Do you think one of these things?

I just don’t understand why you think it’s circular or empty and your responses aren’t detailed enough to make it clear. So far you have been responding by stating the 3 other languages, but I get that, what I don’t get is why you think your 3 language example is at all relevant - it doesn’t look at all like what is happening with humans and our learning mechanism - we have a mechanism that allows us to make new associations based on input arriving at our senses.

Because I was looking at the house and the tree.

The photons were bouncing off those objects and stimulating rods and cones and, etc. etc. etc…and I ended up with some representation of that visual information in my brain as I stood there staring at it.

It seems weird that I am having to explain how a tree arrived in my visual field when I said that I was looking at it, which makes me think you and I are making different assumptions about what is happening in general in this example (whether that’s how the brain works or not, just for the purposes of discussion).

So, does this response mean that you think anything that is derivable from the information already stored inside a human brain is considered part of that human brain’s set of information? That data derived is not new data?

For example, if a mathematician spends 5 years working on a proof just using information that is already stored in his/her head - the final proof would not be new information, right? It would just be a different view of the information already stored?

Whether we decide to call it “new” or “derived” doesn’t really change much - I agree that any derived information is certainly based on the basic information stored prior to deriving - but I also believe the brain has multiple mechanisms that assist it in deriving new information, one of which is modeling/simulation (which was the original point).

Here’s a concrete example of where we might use simulation:
If a car starts out heading north and takes the following turns, which direction will it be heading at the end: L, L, R, L, R, L, R

Note: To solve that problem, I just mentally imagined a car taking the turns, it was the most natural way for me to solve it, although I know I could have solved it by netting out different types of turns.
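
Just to check the answer without imagery, here is a quick sketch (mine, nothing deep) that treats the heading as a position on the compass ring and rotates it once per turn.

```python
# Solve the turning puzzle by bookkeeping instead of mental imagery: track the
# heading as an index on the compass ring and rotate it per turn.
HEADINGS = ["N", "E", "S", "W"]

def final_heading(start, turns):
    i = HEADINGS.index(start)
    for turn in turns:
        i = (i - 1) % 4 if turn == "L" else (i + 1) % 4  # L rotates counter-clockwise, R clockwise
    return HEADINGS[i]

print(final_heading("N", list("LLRLRLR")))  # -> "W"
```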

If you keep separating the man from the processing, you have artificially created this problem.

If the Chinese room can be any form of computation, then it can be of the form that both understands Chinese and understands English.

By focusing on the man the “system” has been broken apart and it’s the same as saying that a Chinese person’s mouth doesn’t understand Chinese.

The person does, but his mouth doesn’t.

Ok. And neither does any subset of a Chinese person’s brain.

The problem is the wording of this statement.

On the one hand it refers to “carrying out a program” as if the context was the entire system.

But the previous argument always points to the “man” that is merely a cog in the wheel.

Although you say Searle says “the room can be whatever computation you want” - ok great - but when taken from that perspective there is nothing left in the original argument that can be used to prove or disprove whether the room knows Chinese.

I can imagine situations in which it doesn’t understand Chinese (like the one Searle described) and I can imagine situations in which it did understand Chinese (a room that is exactly a Chinese person who understands Chinese, meaning when we say “room” we are referring to “Fred” (not his real name) from China).

The only thing Searle has accomplished is to point out that not all Chinese rooms understand Chinese.

I tend to lean towards something like this also. I think that maybe, just maybe, consciousness is less magical than it appears. But I’m also pretty open on this topic.

The question is, how does this association come to be? I really don’t know how I can be clearer. In a representationalist model, information arriving from the outside is presented to the self, the mind, the I, whatever you want to call it. The problem is that this information, in order to become interpreted, must refer to some thing, some event, some state of affairs in the real world. But all that is ever presented to the self is bits, or strings, of information – sentences in different languages – and relations between these. So the self is in the situation of having sentences in different languages, and associations between these sentences; and as we have established, from there, it is impossible to discover the meaning of these sentences – what state of affairs, event, thing in the real world they refer to.

The problem you have is that mental states seem completely transparent, i.e. as if you had through them direct access to things in the outside world – but in simple representationalist accounts, this transparency is inexplicable (well, I can’t think of how to explain it, at any rate). But seeing something, hearing something, feeling something or experiencing something any other way is in fact the same thing as being given a sentence – it’s all information, represented in different ways. Like there is no language that is immediately meaningful, there is no mental content that is, either.

Perhaps think about input from the senses as literally being comprised of strings of 1s and 0s – what the eyes see, the ears hear etc. gets digitalised, then sent to the ‘processing center’. So, in the mind, what arrives are binary strings – one a representation of what is being viewed, one a representation of what is heard, etc. How does the mind figure out the meaning of these strings? It is the same problem you have when given the sentences in unknown languages. You say you don’t know what they mean because you don’t know how they relate to the tree, but neither does the mind – because that knowledge would necessitate it to have the capacity of ‘seeing through’ at least one of those bit-strings, and see the tree lurking behind it (the way you seem to see through your conscious state directly at the outside world behind it).

This is the key problem. How does the mapping to things that are known come to be, when ‘knowing things’ (i.e. knowing that ‘tree’ means tree) already necessitates the existence of this mapping?

I’m not arguing that humans are unable to do anything of that, I’m arguing that most theories that depend on some kind of representation of conscious content before the conscious self fail to give an explanation for these capacities – notably, in this case, for the capacity to understand meaningful pieces of information.

But how? Picture a blank slate mind (for simplicity), the mind of a newborn child just about to open its eyes/ears to the world for the first time. What it receives is a string of bits from the eyes, and a string of bits from the ears, and other strings of bits from wherever else. How does it make sense of them? How does it recognise one string of bits as the image of its mother, another as the sound of her voice, yet another as her smell? It has no experience to relate to, nothing known to make connections with – and in this picture, it can never gain them. Everything starts out as unknown to it as the meaning of the made-up language sentence, and it stays that way.

Of course, you might then argue that perhaps, a baby arrives in the world with a set of pre-formed concepts (and in a way, this is likely to be the case), conveyed to it by a sort of genetic memory. But this just pushes the problem further down the line of ancestry: at some point, these mappings between inner and outer world must have originated. But how, if each such mapping depends on an already pre-existing one?

It’s literally the same problem as sitting in a room, where you get, through various conduits, the three sentences delivered to you. You in this case represent the self, the conduits the information channels to the eyes, the ears, and the nose, and the sentences the data coming in. There is no way to discover the meaning in this case. The only possibility would be to refer back to something you already know the meaning of – but this implies that somehow, you (or perhaps some distant mammalian ancestor) must have come to know the meaning of that something, which you of course can’t have done, since in order to do that, you must have had to back-refer it to something the meaning of which you already know… And so on.

Again, you’re fooled by the apparent transparency of mental states. It’s not the case that looking at the scene containing your house and tree simply projects a picture onto your visual cortex for you to admire – again, if that were the case, vision would be inexplicable. You ‘draw’ the image of something you view to yourself as much as you draw anything to yourself (which, according to my thoughts on the matter, is of course pretty much not at all).

The question is, if you compare the heights of two objects in your mind with each other, and you genuinely do not know the height of one of those objects – then how is the comparison possible? It’s a single equation with two unknowns: (house height) * x = (tree height), where only (house height) is known.

It’s indeed the case that a set of axioms and rules of inference – in principle! – contains all the information about the derivable theorems. It’s just not very ‘close to the surface’. What a mathematician does (without aiming to be disparaging) is akin to decompressing a compressed data file – this too might take quite a while, depending on the compression used, but few people would consider the decompressed result ‘new information’.

Of course, that information may be useful in different ways – it’s kind of hard viewing a movie by just looking at the source file, so the decompression does make it easier to bring the relevant information to the fore. Similarly, even though a set of axioms implies all derivable theorems, the answer to the question of whether or not a given theorem is derivable from those axioms is often far from obvious, and knowing it – in the sense of having a representation in which that knowledge is ‘close to the surface’ – may be useful in other endeavours.

See, I just cancelled every (L,R) pair… Though in order to find out that the remaining L means ‘West’, I still imagined a compass rose.

I’m not denying that there is a strong sense of ‘having imagery in your head’, per se – I just want to argue that just because it seems that way, there need not be any actual pictures present; and indeed, assuming that there aren’t makes the task of explaining how consciousness works much easier (in my opinion, it upgrades it from ‘impossible’ to ‘faintly imaginable’).

The question is whether there is such a form, and the conclusion the argument is aimed at is that no, there isn’t.

It refers to the internal state (sensory input) that was triggered by the external world. There is no need to discuss (during our exchanges) the external world other than the fact that it is triggering sensory information.

Our internal storage and computation mechanisms act on information that is internal to our brain, it arrived there due to external actions, but everything going on is only concretely tied to internal signals. This isn’t to say we can’t have abstract thoughts that relate to the external world, but our learning about the external world and our representation of the external world is based substantially on a projection of various aspects of the external world onto our sensory inputs.

They refer to our internal sensory information.

If we get lucky, and we have a properly functioning mechanism (due to evolution), our internal information storage, mappings and transformations will substantially match the signals arriving at our senses due to the outside world under similar conditions.

In other words, our internal processing will be consistent with the external signals arriving at our senses - if we are walking towards a wall, the changing signals in our visual senses will be consistent with the expected change in signals based on years of experience with those changing signals over time.

I just don’t see the problem.

We have sensory input. Over time we learn how to interact with our world. It becomes second nature.

Again, maybe you are elevating “meaning” to a level above what I would.

“Meaning” doesn’t exist as an attribute independent of the storage of the information.

The strings represent the information, the consistency with which they can be used to interact with the world successfully (e.g. accurately determining that walking into the wall will produce pain and will not allow “me” to get where I want to be) is the way in which they acquire “meaning”.

The baby may be a blank slate (for argument’s sake), but its brain has built-in capabilities for learning.

At first it makes no distinctions between objects in its visual field. But due to the built-in capabilities to extract different aspects of a visual scene (e.g. straight lines, etc.), the built-in capabilities to store and modify information, and the built-in capabilities to transform information and predict, it begins to isolate the patterns of data that happen to coincide with mom in the real world.

The only bootstrapping required is the built-in capability to acquire information in the way we do.

I assume there is a lot built in, but whether they are concepts or capabilities only, I’m not sure.

No, it is completely different.

The only way it could be argued to be the same is if the three languages are all projections of an external world, and the messages arriving in the three languages are all consistent from a time perspective, and that our actions cause the languages to provide new input that is consistent with the external environment after it has been altered by our actions.

If you had all of this, then sure, send me the three languages and I would use them instead of the three senses being replaced - and I would be able to interact with the external environment just as if I had those senses and I would have units of information that “represent” things in the external environment in the exact same way.

Exactly my point. Your math computation method wouldn’t work.

Whereas a method of adding artificial objects to a scene, such that the artificial objects can be kept substantially the same size thanks to the capabilities of our brains, can be used to estimate more accurately.

You could do it on paper, right? If the brain can do it on paper, why can’t it skip the steps of translating physical movement to the hand and just do it in the brain?

Whether there are actual pictures present or not wasn’t my point - my point was that the mental manipulation takes place and that it can utilize the projection of an image internally as a powerful computation method.

I know, but the conclusion is not compelling.

If the “room” is Fred and he knows Chinese and English then the room knows both.

Basically we have 2 extremes:

  1. The room has a guy in it that doesn’t understand Chinese and responds based on being told what to respond.
  2. The room is a guy that understands Chinese

Nowhere in the argument was the gap bridged.

What do you mean by “separating the man from the processing”? To my mind I was insisting on not allowing objectors to make that separation. I was insisting on finding ways to consider the man and the processing inseparable. You want to separate the processing from the man and place it in the room as a whole? Then I insist, in order to explain how the argument works, that we need to have the room internalized so that the man isn’t separated from the processing.

Here’s a quasi-Searlian argument:

  1. Suppose computation is sufficient for understanding.
  2. A human mouth performs all the computations necessary to constitute understanding of a human language.
  3. Therefore, the human mouth must understand human language.
  4. But the human mouth does not understand human language.
  5. Therefore our initial assumption is false. Computation is not sufficient for understanding after all.

Where does this argument go wrong? In line 2 of course. No one thinks the human mouth performs all computations necessary to constitute understanding.

Now here’s Searle’s actual argument:

  1. Suppose computation is sufficient for understanding.
  2. The man in the Chinese Room performs all the computations necessary to constitute understanding of Chinese.
  3. Therefore, the man must understand Chinese.
  4. But the man does not understand Chinese.
  5. Therefore our initial assumption is false. Computation is not sufficient for understanding after all.

The first argument was trivially shown unsound, because premise 2 was not something anyone would accept.

But everyone assents to number 2 here–because Searle says if you believe 1 is true, then just put whatever program in the room you think is needed to make 2 true. If you believe 1 is true, then you believe there is some such program.

Do you think there is a subset of a Chinese person’s brain which carries out all of the computations sufficient for understanding Chinese and yet doesn’t understand Chinese? (You can’t believe that actually–it’s a contradictory statement.)

That’s all Searle needs–he’s trying to prove that computation isn’t sufficient for understanding. If there can be a Chinese Room that doesn’t understand Chinese (and if it’s true that Chinese Rooms as described by Searle are computationally equivalent to Chinese understanders*) then computation isn’t sufficient for understanding.

*This is the assumption everyone seems to grant Searle but which I disagree with.

Take a look at the second argument I outlined above. If you think it’s unsound, then you disagree with a premise or you think there is reasoning from premise to sub-conclusion or to conclusion that is invalid. Which premise or piece of reasoning do you find faulty, and why?

When I read it isn’t “sufficient”, I guess I was reading that as “computation alone can not create understanding, there is something beyond computation that is required”.

But, if “sufficient” really just means that having some computation that looks like it understands Chinese isn’t really enough to understand Chinese, the computation needs to really understand Chinese before we can say it understands Chinese - then I am ok with that.

These are the types of rooms I think that can exist (based purely on computation):

  1. Does not look like it understands Chinese AND doesn’t understand Chinese
  2. Does not look like it understands Chinese AND it does understand Chinese (it’s just confused in its responses)
  3. Does look like it understands Chinese AND doesn’t really understand Chinese (it got lucky)
  4. Does look like it understands Chinese AND does actually understand Chinese

It seems like Searle said: type 3 exists, therefore type 4 can not exist.

#2 and #3 are both potential problems.

By inserting some agent or actor to be the entity that performs a bunch of computations you have immediately constructed a scenario that is problematic when you get to point #3.

Because of the way that is worded, the following is being communicated:
The perspective and context are from some actor or agent that is “externally” performing computations, and then you make a claim about that person’s “internal” state regarding understanding Chinese.

However, if you replace #2 and #3 with the following, then all is good:
#2 The machinery called “the Chinese room” performs all of the computations necessary to constitute understanding Chinese
#3 The machinery called “the Chinese room” understands Chinese

It would even be ok to reduce the requirements, but then #3 becomes unknown:
#2 The machinery called “the Chinese room” performs some computations and the result appears to understand Chinese
#3 The machinery called “the Chinese room” may or may not understand Chinese

That’s exactly the right way to read it.

Is computation sufficient for understanding?

If so, then there is a set of computations C such that carrying out C is, all by itself, enough to make something understand.

Does the man in the Chinese Room carry out C?

If so, then the man in the Chinese Room should understand Chinese.

Does he?

If not, then it turns out computation is not sufficient for understanding.

Why? Because we have a thing (the man in the room) which carries out the right computations (C) but doesn’t understand Chinese. Computation alone can not create understanding. There is something either other than, or beyond, computation that is required. Computation is not sufficient for understanding.

As I said before, here’s Searle’s argument in a nutshell:

  1. Suppose computation is sufficient for understanding.
  2. The man in the Chinese Room performs all the computations necessary to constitute understanding of Chinese.
  3. Therefore, the man must understand Chinese.
  4. But the man does not understand Chinese.
  5. Therefore our initial assumption is false. Computation is not sufficient for understanding after all.

Do you disagree with a premise (line 2 or 4)?

If not, then do you think that the argument’s reasoning from these premises is invalid? Does 3 not follow from 1 and 2? Does 5 not follow from 3 and 4?
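
For what it’s worth, the reasoning itself is just modus tollens; here is a minimal formal sketch (my own, in Lean, with placeholder propositions and premises 1–3 collapsed into a single conditional), which checks only the shape of the inference, not the truth of any premise.

```lean
-- Placeholder propositions; this verifies only the argument's shape (modus tollens).
variable (ComputationSuffices ManUnderstandsChinese : Prop)

example
    (h123 : ComputationSuffices → ManUnderstandsChinese)  -- premises 1–3
    (h4 : ¬ ManUnderstandsChinese)                        -- premise 4
    : ¬ ComputationSuffices :=                            -- conclusion 5
  fun h => h4 (h123 h)
```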