Is Computer Self-Awareness Possible

Of course computation isn’t sufficient for understanding. Understanding is necessarily a subjective event. Computation is objective. The ‘computation’ needs to be experienced by a being for understanding to take place.

A computer, as I understand how it works today, does all its computations in a sort of mental vacuum. They aren’t experienced; it is pure math, not in reference to the machine itself. It is not ‘in reference’ at all: within the computer, these computations do not serve as a reference from the machine’s perspective, but only from the users’. If we were to figure out what it is that constitutes ‘a being’, and construct our computer within that, perhaps computer self-awareness is possible.

See my post #279.

I explicitly state that both #2 and #3 are problematic (really #4 as well, due to its use of the word “man”). Can you respond to my post #279? I would like to see your response to the last two sections: the one in which I enumerate the possibilities and point out that Searle said not #3 therefore not #4, and the last section in which I described exactly what was wrong with #2 and #3 and how to fix them.

My post was a response to the entirety of your #279, though I didn’t explicitly quote the parts you mentioned.

Searle is not saying “type three exists therefore type four can’t exist.” Searle explicitly believes type 4 can exist. He thinks we’re type four. And he thinks we could very well be able to build other entities of type 4 someday.

I almost said that, rather, he’s saying “type three exists therefore something other than computation is required.” But type three is a red herring. His point isn’t that the Chinese Room behaves just like an understander of Chinese. Rather, his point is that the man in the Chinese Room follows exactly the right program that’s supposed to make anything following it understand Chinese.

The man follows the program, but doesn’t understand. Hence, following the program wasn’t sufficient. Computation is not sufficient for understanding.

You said the following two lines from my statement of Searle are problematic:

By calling 3 “problematic” do you mean that it’s problematic because it follows from 2 and 2 is problematic, or do you mean 3 doesn’t follow from 2 together with 1? (1 was “suppose computation is sufficient for understanding.”)

By calling 2 problematic, you’re probably saying the man in the room doesn’t perform the necessary computations for understanding Chinese. Why do you say this? Do you think there’s a program such that following that program is, in and of itself, sufficient for making the thing following the program understand Chinese? If you do think that, then on what basis do you claim the man in the room isn’t following that program? It’s a difficult question to answer, since it is basically stipulated that the man in the room is following whatever program you like. You have to be saying that somehow it’s impossible for the man to follow the program you have in mind–that’s the only way I can think of to deny a stipulation in a thought experiment.

But why would it be impossible? If it’s impossible for semantic reasons, then Searle wins–you’re acknowledging that understanding can’t be given by a purely syntactically definable entity such as a computer program.

If it’s not for semantic reasons, then for what reasons? Why is line 2 quoted above problematic?

You said line 4 (that the man in the room doesn’t understand Chinese) is also problematic, because it talks about the man. But the whole argument talks about the man. Again, Searle is arguing that:

S: For every program y there is an object x such that x can follow y without understanding Chinese.

That is yet another way of saying that computation is not sufficient for understanding.
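To make that “yet another way of saying” explicit, here is one possible formalization of S; the predicate names are mine rather than Searle’s, and Understands(x) abbreviates “x understands Chinese”:

```latex
% S, and an equivalent "no program is sufficient" reading:
\forall y\,\exists x\,\bigl(\mathrm{Follows}(x,y)\land\lnot\mathrm{Understands}(x)\bigr)
\;\Longleftrightarrow\;
\lnot\exists y\,\forall x\,\bigl(\mathrm{Follows}(x,y)\rightarrow\mathrm{Understands}(x)\bigr)
```

The right-hand side reads: there is no program y such that everything following y thereby understands Chinese; in other words, no program is by itself sufficient for understanding.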

Searle’s ‘x’ is the man in the Chinese room. His opponent is allowed to supply whatever program she wants for y.

It’s not a distraction to focus on the man. The man is crucial–the man is an entity which is following y, and yet which is not thereby understanding Chinese. Since there is an entity following y without understanding Chinese, it follows that y is not sufficient for Chinese understanding after all. And since y is stipulated to be whatever program you like, it follows that no program is sufficient for understanding Chinese.

In my opinion, understanding resides in the combination of state and transformations to that state. The man is neither of these, and certainly not both of them together.

A man or a CPU alone that turns the crank to cause the transformation to happen, or is the thing that follows the rule to update state X with value Y, is not the thing that can have understanding. It is a cog in the wheel and does not represent the most important part of the system.
The man does not represent “computation”. The man represents the electricity that allows the computation to happen.

That’s why I keep saying the “man” is the wrong focus and Searle is not achieving much by focusing on the man.

Said another way - you could have a program such that the sum of state and transformations does understand Chinese - and this program could be executed by a man using some medium for storage and calculation - and the system would understand Chinese but the man wouldn’t.
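As a rough illustration of this “state plus transformations, with a separate executor” picture, here is a minimal Python sketch of my own (the names and data are invented for illustration, not drawn from Searle or from anyone’s post): the program is a block of state plus rules that transform it, and the executor applies the rules while holding none of that state itself.

```python
# Minimal sketch: the "program" is state plus rules transforming that state;
# the "executor" (the man, or the CPU) just applies whichever rule comes next.
# All names here are illustrative assumptions.

# The state lives in the room's books and scratch paper, not in the executor.
state = {"tape": ["input symbol"], "scratch": []}

# Each rule is a description of a transformation of that state.
rules = [
    lambda s: s["scratch"].append("looked up " + s["tape"][0]),
    lambda s: s["tape"].append("output symbol"),
]

def executor(rules, state):
    """The 'man': walks through the rules and applies each one.

    It carries no state of its own and knows nothing about what the
    symbols mean; it just turns the crank.
    """
    for rule in rules:
        rule(state)
    return state

print(executor(rules, state))
```

On this picture, swapping the man for a CPU, or for the electricity that drives it, changes nothing about the computation itself, which seems to be the point being made above.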

Is “state and transformations to that state” a form of computation?

If no, then you agree with Searle.

If yes, then are there computations that can’t be performed by just any Turing-complete system, but only by particular Turing-complete systems?

If so, then how can that be?

If not, then do you think the man in the room is Turing-complete in the sense that he could follow any program you care to give him?

If so, then how can you think the man is not in the right “state and transformations to that state”? The man can follow any set of “states and transformations of states” you care to give him, can’t he?

Of course, Searle agrees with this. But, says Searle, the CPU and the man in the room are following exactly the right program. Since they’re following the right program, but fail to understand, computation isn’t sufficient for understanding.

Can you clarify why you think that the CPU and the man in the room aren’t following the right program? What does it mean, to you, to follow a program?

Remind me what it is you think is going on when the whole “room system” is internalized to the man himself.

Why, in your view, is the electricity in a computer not undergoing computation, in particular, the particular computation that is involved in following the program being run on the computer? What is it to follow that program, such that the computer as a whole has it, but the electricity inside it lacks it?

Yes.

No.

Yes.

When I refer to “state” I am referring to the “state” of the machine, not the man. As far as we know the man has no state - he is a dumb executor of instructions.

But yes, he can carry out the instructions, meaning he can perform the transformations against the internal state that are described in each step.

Wrong.

The man or the CPU is causing the computation to happen.

But as I stated previously, computation = the state and the transformations of that state; it does not equal the piece of the machine executing the transformation.

It’s the math, not the pencil writing down the math.

To follow a program means to perform the actions listed.

The problem I have with that is the continued separation of the man from the system. Just because the system is inside the man, doesn’t make the man represent the entire system.

RaftPeople, I don’t think we’re closing any ground here. To me, it seems patently obvious that the situation of the self in the mind receiving data from the senses is exactly isomorphic to the situation of a man in a room being supplied sentences in unknown languages; thus, if in the latter situation it is impossible to decipher the meaning of the sentences he receives, it must be just the same in the former.

Just picture it as nerve endings supplying the results of interaction with the outside world, either firing or not firing, according to their firing conditions being either met or not. The ‘internal state’ being triggered is just such an activation pattern; so is any sensory input. So if any new input refers to our ‘internal sensory information’, it is a pattern of symbols, an unknown sentence, referring to another pattern of symbols, another unknown sentence. Best as I can tell, you seem to think that given enough such mappings, understanding somehow happens – isomorphic to saying that given enough unknown sentences and ways to translate between them, eventually their meaning becomes clear.

Or alternatively, you seem to think there are certain stimuli that possess an intrinsic quality – you mentioned pain as an example. But pain is also just a pattern of nerve endings firing – an unknown sentence. There’s nothing intrinsically painful about the information arriving at the ‘processing center’, it’s information, a string of symbols, same as any other. Of course, this may trigger reflex responses – recoiling from the stimulus, crying out, etc. But this is hard-wired, it could be just as well implemented in a computer that possesses no mind at all. And of course, the notification the self gets about this response is another string of symbols, another unknown sentence (or set thereof).

All that the self ever receives about the outside world are strings of symbols – patterns of neuron firings, if you will. If it understands any of them, it must have, at some point, learned how, and thus, have been in a state of non-understanding previously (either in the history of the individual, or in the history of the phylum, if we allow for some sort of genetic memory; though of course, one could argue that what the genetic memory tells us is just yet another string of symbols). It’s the transition from non-understanding to understanding – equivalent to the transition from not knowing the meaning of the sentences to knowing them – that’s left unaccounted for in such a model of the self.

This is at the heart of Searle’s argument as well. You can identify the man with the self in the picture I outlined here. But if, as you agree, computation done by the man does not enable him to understand Chinese, then how can computation done by the self enable it to understand Chinese? In other words, how can you – as opposed to the system comprised of you and ‘the room’ – get to learn Chinese? If you are your self, and carry out these computations, then you must conclude that you can never actually learn it – your self + the computation, i.e. the room, would know Chinese, but you, i.e. your self, wouldn’t.

(And as for the tree – let’s say I draw it on graph paper. How do I know how many squares high to draw it, if I don’t have any idea how high it is? If I decide on a number of squares randomly, how do I then decide how high to draw the house? My ability to draw it depends on my knowledge of its height – at least, if I wish to draw it in the correct ratio. And if I fail to do that, the drawing won’t tell me anything.)

Okay, I’m going to have to say something a little anticlimactic here: I think there’s something to what you’re saying. It touches on issues in extended cognition which I’m very interested in and which I do think are relevant to questions about the Chinese Room.

I tend to think that people can be the bearer of representational states the physical substrate of which extends beyond the boundaries of their own bodies. Hence I have no problem saying the man himself is in the relevant states and going through the relevant transformations. On the other hand, for me to say this is for me to accept the hypothesis of extended cognition, and that’s very controversial so I can’t blame you for not going with it.

Meanwhile, in the “internalized” version, you say the man “doesn’t represent the entire system”. What you need to be saying here, I think, to be consistent with what you said before, is that even in the internalized version, the man isn’t in the relevant states and going through the relevant transformations. This seems a little harder to defend to me, but maybe not impossible. When a normal computer runs a calculator program in addition to all the other programs it’s running at the same time, do you think that computer is undergoing the states and transformations relevant to the “calculator” computations? Or do you think there’s a sense in which the computer isn’t the entity performing those computations?

The transition from not-understanding to understanding takes place as we interact with the environment and get sensory feedback from that interaction, and build a coherent model of our environment such that we can operate within it and successfully predict responses to our actions.

Over time we learn and store sensory information and rules regarding our own interaction with the outside world (e.g. feeling something hot usually leads to pain). We build up enough of this information that we are able to navigate our way through this world, find food, etc.
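A minimal sketch of that interaction loop, under assumptions of my own (the toy environment function and the dictionary-of-expectations “model” are illustrative, not a claim about how brains implement this):

```python
# Act, predict the sensory consequence, compare with what actually arrives,
# and adjust the internal model on surprise. Purely illustrative.

model = {}  # learned expectations: action -> predicted feedback

def environment(action):
    # Stand-in for the real world's response to an action.
    return {"touch_hot_thing": "pain", "touch_cold_thing": "numbness"}[action]

def act_and_learn(action):
    predicted = model.get(action, "unknown")
    actual = environment(action)
    if predicted != actual:
        model[action] = actual  # update the model when prediction fails
    return predicted, actual

print(act_and_learn("touch_hot_thing"))  # ('unknown', 'pain') -- surprised, model updated
print(act_and_learn("touch_hot_thing"))  # ('pain', 'pain')    -- prediction now succeeds
```

In these terms, the claim above is that understanding just is the model reaching the point where its predictions about sensory feedback reliably succeed.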

To me, that represents understanding. I know you agree we do that, so do you think that low level description does not really add up to understanding?

If you think that is not enough to call it understanding, maybe you could describe what the brain would need in addition to that to call it understanding?

“I” am a system. Of course “I” can learn Chinese. You can’t separate me from the system and still call me “me”.

The man isn’t a system. The man is 1 piece of a system. The man merely performs the next step in the program. The man isn’t simultaneously aware of all of the current state of the system like “I” am. Sure the man can go retrieve some portion of state as required for the next calculation, if instructed to do so - but that is very different from being intimately integrated with all of the state and the computational machinery.

“My ability to draw it depends on my knowledge of its height – at least, if I wish to draw it in the correct ratio.”

So you are arguing that nobody can reasonably accurately draw a picture that substantially matches the ratios of objects in the scene unless they know the actual height of those objects in advance? I am looking through your words trying to find where you modify that statement to something reasonable and I’m not finding it, so I have to assume what you wrote is what you meant.

Your ability to draw it only depends on the ratio, not the height. You can’t possibly be arguing that I just intuitively know the height of every object in my visual field.

This is the procedure:

  1. Look at a house with a tall tree behind it
  2. Draw on plain white paper an approximation of that scene such that the house and the tree are in approximately the same ratio
  3. Now draw the outline of a second house of approximately the same size on top of the first
  4. If the 2nd house doesn’t exceed the height of the tree, then draw a 3rd and a 4th, etc.

Questions:
a) Now, given that you know the height of the house, do you not see how this process can help you estimate the height of the tree with greater accuracy than simply taking a guess? (A worked sketch of the arithmetic appears below.)

b) Have you never actually used methods like this to attempt to increase accuracy in the absence of proper tools?

c) If you agree that there is some benefit to these types of tricks, my next question is, instead of doing all of this on paper, why not just do it in your head? I know I do these things successfully and I assume most people do.
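For concreteness, here is the arithmetic that the stacking trick in question (a) amounts to, as a short Python sketch; the on-paper measurements and the 6 m house height are made-up numbers, not anything from the thread.

```python
# Hypothetical measurements taken from the drawing, plus one known real height.
house_on_paper_mm = 20.0   # height of the drawn house
tree_on_paper_mm = 68.0    # height of the drawn tree
real_house_height_m = 6.0  # the one real-world height we are told

# "Stacking houses" is just measuring the tree in units of drawn houses:
houses_to_reach_tree = tree_on_paper_mm / house_on_paper_mm   # 3.4 houses
estimated_tree_height_m = houses_to_reach_tree * real_house_height_m

print(f"Tree is roughly {estimated_tree_height_m:.1f} m tall")  # ~20.4 m
```

The only real-world input here is the house’s known height; everything else comes from the ratio preserved in the drawing, which is exactly the point under dispute.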

I read the link, and I would agree somewhat that we draw arbitrary boundaries; I would also say that the more detailed our analysis of things like that becomes, the more difficult it is to draw boundaries. Things always seem simpler on the surface. So then I think: let’s just build something that works instead of trying to figure out where those boundaries are.

I think it hinges on what exactly it means for the room to be “inside” the man. I was viewing it as a decoupled container, merely carrying the thing that actually does the real valuable work. But I was also aware that maybe Searle or you mean something different when you say that.

Interesting question.

Does the computer “know” everything the calculator knows, from the perspective of being a calculator? It certainly has access to all of the information and is the thing performing the calculations.

I’m not sure how to answer the question or even how to map it to the room inside the man issue. Part of the problem is that I automatically assign certain qualities to the man, and when the room is placed inside him I do it in a specific physical manner by default, which influences my responses regarding that situation - whether I am picturing what is intended is difficult to say.

But that sensory feedback is again only comprised of symbol strings, of neuron firings, of unknown sentences. Honestly, do you really not see the problem?

This statement is isomorphic to: unknown sentence 1 is often correlated with unknown sentence 2. You can draw up an unlimited amount of such correlations; but I fail to see how this will eventually lead to understanding the meaning of any of these sentences.

I’ve told you what I mean by understanding – knowing what real world thing, event, state of affairs a given string of symbols refers to. It’s just that, in a pure representationalist account, this is impossible to achieve.

The man is a system, too – it’s just that he as a system performing a computation, apparently, isn’t sufficient for understanding, while the system that encompasses the computation, i.e. the room, is. But your self, your inner homunculus, being given the rules of Chinese, corresponds to the man, not the room. It doesn’t understand Chinese, therefore. So if you think that you can learn Chinese, you must conclude that you are not that homunculus – that you are not your self, in other words. The problem is that any attempt to clearly delineate a ‘self’ in the mind yields such paradoxical tangles – which I take as an indication that this is not the right way to proceed.

Well, do an experiment. Get a sheet of graph paper, and draw a line of, say, 13mm length. Then, draw a second line.

How long is the second line?

If you don’t know that length in advance, then obviously, that question has no definite answer. But it’s the same thing you’re proposing, drawing the house (whose height you know), and then the tree (whose height you don’t know), in order to find out the height of the tree. But you can’t extract that information if it wasn’t put in in the first place.

So, you are arguing that when you look at a house and a tree, even if you didn’t know either of their heights - it would be impossible to draw a picture that retained the approximate proportions of what you are looking at?

Are you saying that is impossible to do?

If you think it is possible, then tell me which of the steps I listed a human brain will get stuck on.

Take this painting, by one of my favourite artists, René Magritte. Its title is, very aptly, ‘The Human Condition’. To me, it essentially illustrates the way we perceive the world: it appears, almost, as if there is no canvas at all – as if we are directly ‘looking through’ it at the scene beyond the window. This is what I mean by the ‘transparency’ of conscious states: it appears as if, in some way, we have direct experience of the outside world.

But this is an illusion. In fact, it is an illusion captured in a painting by Magritte as well, the famous ‘Treachery of Images’. At first, it seems paradoxical, asserting that what is manifestly a pipe actually isn’t one (‘ceci n’est pas une pipe’). But upon a moment’s reflection, it becomes clear that the painting is actually just expressing a truth – it indeed isn’t a pipe; it’s a painting of a pipe. It’s a collection of colored pigments, arranged in a certain shape. It could not be farther from a pipe, as anybody who tries to fill it with tobacco and light it will quickly figure out.

But then, neither is the tree in the picture ‘The Human Condition’ a tree, nor are the clouds clouds, the hills hills, etc. The canvas does not open up a view to the outside – it completely blocks it! Only our familiarity with the pictorial symbols on the canvas has fooled us into the impression of ‘transparency’, when the canvas is in fact completely opaque.

In fact, had we never seen a real tree, we would not recognise the thing on the painting.

This is the crucial observation, for the self in the mind is in the situation of in fact never having seen a real tree! All it has seen are paintings, and while it can of course form associations between them – the things on this painting look similar to the things on that painting, etc. – this doesn’t – can’t! – help in making a connection to things in the ‘outside world’.

If this still doesn’t help, think of the canvas as not portraying a ‘direct’ representation, but rather, a coded one, containing the same information. Something like this horrible hackjob. Pretend that the 0s and 1s code for the original picture. This is what the self gets from the outside, and it is all it ever gets. And with only this, it is impossible that the self ever figures out the meaning, i.e. the real-world referent, of the code.

Are you arguing that you can draw a line of unique and definite height in the experiment I suggested?

Of course, if I know the relation between house and tree, I can draw them in proportion – but the drawing tells me nothing new, as I knew that relation going in.

Just because you envision a particular model of perception and understanding, and see a problem, doesn’t mean everyone will see that same problem.

You seem frustrated that I don’t simply accept your view, but that is the nature of debating and exchanging ideas. I am happy to do it in good faith, but clearly something I accept as fine, you see as a problem, and it’s not clear why we have different views.

I think that is the nature of our disagreement.

You think understanding requires some connection to the real world beyond simply interpreting and reacting to our senses in a way that is consistent with the real world.

I think understanding is just the extent to which we interpret our sensory signals, map that into our internal model of the world based on past experience and are able to successfully predict the results of our interaction with the environment. That’s it.

I agree that trying to clearly delineate a sharp boundary between “self” and the rest of the system seems problematic.

But I also think that doesn’t rule out a higher processing portion of the system that brings together lower level information and calculates/interprets.

I think it’s messy.

Given that you didn’t tell me anything about the length to draw, either absolutely or in relation to the already drawn line, or anything else, of course not.

But your example is not similar to what I was saying at all.

Ok, great, so we agree you can draw them in proportion.

Questions:

  1. Now, do you think you can draw additional houses of approximately the same size on top of the house you have already drawn?

  2. Assuming you answered yes to #1, if I tell you the height of the house, do you think that you can use the drawn stacked houses to approximate the height of the tree?

I missed this post.

I personally don’t have a problem with the fact that our access to the real world is through our senses. Where we clearly disagree is in the use of the term “meaning”. When I use that term I don’t need some special connection to the real world other than through our senses.

As a matter of fact, the situation is much, much worse than you describe. A “real” tree has so many physical attributes that we can’t and won’t ever sense (e.g. its ultraviolet representation, its magnetic representation, etc.) that it’s almost ridiculous to even talk about the “real” tree.

All we get is a tiny bit of sensory input and discussing the “meaning” of the “real” tree, IMO, doesn’t actually make any sense.

It’s more that I reject a particular model of perception, understanding and conscious experience that’s very common, because I see that problem. And I think that you follow that model, at least to some extent, so I’m trying to explain the problem I see.

Not any more than I think understanding a sentence requires knowing what it describes, refers to, is about, etc., i.e. the same criteria of understanding one would require in the ‘Chinese room’ argument.

I think the same thing, but the model of the self being supplied with sensory data is unable to provide such. Our ‘internal model of the world’ comes into being through our sensory data, which is just strings of symbols – so all we have is strings of symbols referring to other strings of symbols, which, in the sentences example (and in the Chinese room argument), you don’t seem to think is sufficient for understanding, but here is supposed to be, somehow.

If you tell me the height of the house, and I know the relation between the house and the tree, I don’t have to draw anything, because I already know the height of the tree. That’s the point.

Well, I’ve defined what I mean by ‘meaning’ a couple of times, so why don’t you give your definition, and we see where we disagree?

To me, it makes as much sense as discussing the meaning of this sentence does, because they’re not different concepts.

Really, the key problem is that you agree that in the case where you are given bits of text in unknown languages, and only know the relations between these bits of text, you can’t discover the meaning of the texts, i.e. can’t learn the languages they’re in; but for the self, being in the exact same situation of being given bits of data and relations between these bits of data, something else apparently happens. You keep saying that it refers to the ‘internal model’ of the world, or to past sensory experience, or anything like that, but these are just bits of data – bits of unknown texts that have been collected in the past – as well. I just fail to see how you can agree with me in one context, and disagree in the other.

Of course, you might try to turn things around and argue that knowing the relation between these bits of data is all the understanding we’re ever gonna get, and in some sense, this is a reasonable stance – we can certainly act consistently, knowing enough about these relationships. If we know that the string ‘tree’ is associated with the particular set of colored blobs tree, we can say “tree” whenever we see a tree, and so on.

In the same vein, one might become a prolific translator between unknown bits of text, knowing that, for instance, sentence A (too lazy to go back and dig up my examples, or cook up a new one…) translates to sentence B, and so on. This is in fact essentially how machine translation works.

But the punchline then is that if you think that this constitutes understanding, you should also believe that the man in the Chinese room can come to understand Chinese in exactly the same way. Because that’s exactly what he does: match symbols to symbols according to some rules. Same as the self, matching, for instance, ‘tree’ to tree.

There is a huge difference between sensory input and text in an unknown language. Sensory input is intimately tied to the real world and our interactions with it. We can move our arms and adjust our model based on the expectation of sensory feedback and the actual sensory information that arrived.

To understand the text, you need to understand its mapping into your lifetime of sensory experience and model building, because that is what the text ultimately refers to and is a placeholder for. But there is no mapping from any of the three languages to your internal model.

The text is not intimately tied to the real world. It refers to the model that has been built up over a lifetime of experience, but you have no way of discovering that mapping.

Scenario #1
InternalModel -> SensoryData -> RealWorld

Scenario #2
Languages A, B and C -> (LINK BROKEN) InternalModel -> SensoryData -> RealWorld
We have a mapping between the languages A, B and C, but we don’t have access to the mapping between the languages and the internal model of the real world.

And because languages are based on our internal model of the real world, you can’t figure that out.

But our original internal model that was based on sensory input does make sense of its interactions with the real world, because it has access to the mapping through interaction and feedback.

No. A text in an unknown language is a string of symbols with some mapping to a state of affairs, an event, a thing in the real world; sensory input is a string of symbols with some mapping to a state of affairs, an event, a thing in the real world. Think about what you receive, say, via your eyes: light impacts the retina, exciting different photodetectors to a different level; this is translated into an electrochemical code, which is then relayed to the central nervous system. That the code here is some sequence of electrochemical signals doesn’t make it any different from a case in which the code is a sequence of letters, or words, or sentences, or 0s and 1s – between each of these, there exists a mapping, translating one into the other; they’re effectively and essentially equivalent.

The same is the case for sensory input.

As is the case with, say, a newborn ‘blank slate’ baby receiving its first sensory inputs.

The problem is, how do you build up such a model, if any understanding of information from the outside world relies on the pre-existence of a model? That’s blatantly circular.