Is Computer Self-Awareness Possible

I don’t know what “definable purely syntactically” means. Syntax is the structure of a string, semantics is the meaning of that string.

Exactly. The problem with machine translation is that when words have several meanings, a translation from one language to another that is syntactically correct becomes semantically meaningless if you pick the wrong definition. You cannot figure out the right definition syntactically, since both meanings might be nouns, say. You need to understand the semantic context of the sentence (and perhaps the reading) to do it correctly. As I mentioned, this has been understood for a long, long time.

I review papers written by non-English speakers, and I quite often find a mistake that clearly came from using not quite the right word in context, so this isn’t just a computer problem.

If syntax is the structure of a string, then something is definable purely syntactically when it is definable entirely in terms of the structure of the strings that make it up.

A good example would be dealing with a problem its programmers hadn’t anticipated. For instance, if you’ve got a robot that can make its ‘own’ decisions on how to navigate complex terrain, but it hasn’t been told about rain - and then it finds itself in a downpour.

A grammatically correct paragraph is definable in terms of the structure of the strings that make it up. I don’t think a “meaningful paragraph” or a “well-written paragraph” would be.

In terms of software, a program that parses is. I’m not sure about compilation, because of typing rules. A program that meets a set of requirements is not.
If one could prove that a program is correct through pure syntactical analysis, he would soon become richer than Bill Gates.

Sure it is: the paragraph you just typed is meaningful and well-written, and also definable purely in terms of its syntax. It’s an A, followed by a space, followed by a g, followed by an r, etc.

The requirements on the program (which involve our intentions towards it) can’t be defined in terms of the syntax of its strings–since the requirements aren’t part of the program itself but also involve part of the world outside it (namely our intentions). But the program itself (as opposed to its requirements) is defined purely syntactically.

Half Man Half Wit, I have been thinking about my recent mental activities to see if mental imagery was used to compute something that I didn’t already know the answer to.

This is my recent real life example:
I was talking on the phone to someone about having a tree cut down in my backyard, and they wanted to know how tall it was. I walked out in front of my house; the tree is considerably taller than my house, so I didn’t even have a very good guess at first.

So I mentally projected an image of my house on top of my actual house to see how far it goes up the tree. This image didn’t look exactly like my house (it was really nothing more than a non-detailed placeholder), but the region of space it occupied was substantially similar to my house. I could see there was a substantial amount of tree left, so I tried it again, and this time the stacked image seemed to reach a little above the top of the tree. My conclusion was 65 to 70 feet tall.

The tree guy ended up looking at it and estimating before we talked about my estimate and guessed 70 to 75 feet based on his experience.

Assuming the tree guy is pretty good at estimating based on his years of experience, it would seem my mental gymnastics did help me to reach a reasonable conclusion that I did not know in advance.

Not sure if this isn’t a bit of a level confusion – a machine that simulates a mind is, at the user level, functionally indistinguishable from a mind, or else it wouldn’t simulate a mind. What you do when you ‘type certain keys’ is in effect leaving the user level, I think. But I’m not sure I understand accurately what you mean; perhaps we should fix some jargon in order to communicate better, which I’ll attempt to do in a minute.

Yes, that’s more or less the gist I got from your governed/following distinction.

OK, now for some concepts and notation; none of this is likely new for you (or most others), but I think it’s useful to establish a common baseline. What I mean when I talk about a (Turing) machine or a computer is essentially a computable (recursive, which intuitively means that it can be implemented as a finite series of elementary steps, i.e. an algorithm) function from a set of inputs to a set of outputs – let’s use the set of binary strings for both, just for definiteness. So a computer is something that maps binary strings to binary strings according to a certain prescription: C(i) = o, where i represents the ‘input’-string, and o represents the ‘output’. This is a completely general definition of computation, though we usually don’t think of it that way – the outputs we typically associate with computers are far from simple bit strings, but in all cases, there exists another computer C’ such that C’(o) is something like a set of pixels, i.e. a graphic, or anything else we typically ‘get out of’ a computation. We’re also not used to supplying the computer some input and then letting it run – rather, we interact with the program. But it’s possible – and usually convenient – to regard a program with a complete set of inputs as a separate program in itself, so that essentially, what we do when we supply input is in a sense completing the program, and a program with different inputs is a different program – a different string ‘i’ for the machine to act on.
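
To make that picture concrete, here’s a rough Python sketch (the names and the toy ‘reverse the string’ machine are mine, purely for illustration):

```python
# A 'computer' in the above sense: just a function from bit strings to bit strings.

def C(i: str) -> str:
    """A toy computer: it simply reverses its input bit string."""
    return i[::-1]

def C_prime(o: str) -> str:
    """Another computer that turns a raw output string into something we'd
    ordinarily 'get out of' a computation, here a crude row of 'pixels'."""
    return "".join("#" if b == "1" else "." for b in o)

# Supplying input is, on this view, completing the program: a program together
# with a fixed input is itself a (zero-input) program.
def C_with_input_fixed() -> str:
    return C("1101000")

print(C("1101000"))           # '0001011'
print(C_prime(C("1101000")))  # '...#.##'
print(C_with_input_fixed())   # '0001011'
```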

Now, computers have a peculiar property, commonly called universality: a sufficiently complex computer has the ability to emulate any other possible computer. This means that for any computer C there exists a string c that on some universal machine U implements a simulation of C, such that U(c,i) = o whenever C(i) = o. Thus, for any input i, U equipped with c will react identically to C – this is what I mean when I say ‘functionally indistinguishable’; what I call ‘user level’ is limited to supplying input strings. Of course, if you have ‘bottom level’ access to U, you can change c to c’, and hence, change the way it reacts to input; this is what you do, essentially, if you ‘press certain keys’ (like ‘esc’ to abort the simulation, etc.). In real life, user level and bottom level – or generally, the multiple lower levels present in even the simplest actual computers – are often not very clearly delineated, and level crossings happen so frequently that we generally don’t notice them. For instance, right now I’m interacting at the user level with my browser, but in a minute when I get bored/stuck again, I’ll leave this level, switch briefly to a lower level to change between programs, and then enter the user level of my pdf reader.
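
Here, too, a hedged sketch might help; think of U as something like an interpreter, where the ‘description’ c is just source code (the specifics are only illustrative):

```python
# A toy 'universal machine': it takes a description c of some machine
# (here: Python source defining a function step(i)) plus an input i,
# and behaves exactly as the described machine would.

def U(c: str, i: str) -> str:
    namespace = {}
    exec(c, namespace)           # 'load' the description of machine C
    return namespace["step"](i)  # run the described machine on the input

# A description of the toy machine from the previous sketch (reverse the string):
c = "def step(i): return i[::-1]"

def C(i: str) -> str:
    return i[::-1]

# From the 'user level', i.e. supplying input strings only, U equipped with c
# is indistinguishable from the dedicated machine C:
assert U(c, "1101000") == C("1101000")
```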

So, in brief, what I think about when I think about a computer simulating a mind is a universal machine U equipped with a simulating string m such that U(m,i) = o whenever M(i) = o, i.e. the computer reacts to any input in the same way as a mind does (where, since we’re still discussing binary strings, we assume there exists a machine that can translate from o to any ‘output’ a mind might typically give – actions, speech acts, thoughts, declarations of eternal love etc.). An important corollary is that in general, you can’t tell on which level you are. Without explicitly provided pathways that allow level-breaking, there’s no way to tell, for example, the precise architecture of the system you’re using, or what language the program was written in, or even if you’re operating on a ‘virtual’ or ‘real’ machine. Taken to the extreme, if we live in the matrix, there is no way to find out. In this sense, a simulated human mind, if such a thing is possible, would seemingly have to be indistinguishable from a real one – or else, it wouldn’t really be a simulated human mind at all.

I think a better distinction could be made using the notion of a formal system. Recall, a formal system is a set of symbols, a grammar that governs how to form valid expressions from those symbols, a set of axioms, and rules of inference, with the pleasing property that there is no need for any meaning or interpretation of those symbols in order to carry out derivations – it’s all possible in a perfectly dumb, ‘mechanical’ way, just shuffling around symbols according to fixed rules.

There’s an obvious correspondence between formal systems and (Turing) machines: one can equip a machine with the set of axioms and the rules of inference, and just let it run, and it will happily generate the formal system’s theorems in order of appearance. Conversely, for all the outputs a computer may generate, there exists a formal system in which those (and only those) are theorems.
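
For definiteness, here is what ‘equip a machine with the axioms and rules and let it run’ could look like, using Hofstadter’s little MIU system purely as a stand-in for a formal system (the choice of system and the breadth-first ordering are my own simplifications):

```python
# Breadth-first enumeration of the theorems of a toy formal system (MIU),
# emitted 'in order of appearance', purely by mechanical symbol-shuffling.

from collections import deque

AXIOMS = ["MI"]

def successors(s):
    out = set()
    if s.endswith("I"):            # rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):          # rule 2: Mx -> Mxx
        out.add("M" + s[1:] * 2)
    for k in range(len(s) - 2):    # rule 3: III -> U
        if s[k:k+3] == "III":
            out.add(s[:k] + "U" + s[k+3:])
    for k in range(len(s) - 1):    # rule 4: UU -> (deleted)
        if s[k:k+2] == "UU":
            out.add(s[:k] + s[k+2:])
    return out

def enumerate_theorems(limit=10):
    seen, queue, theorems = set(AXIOMS), deque(AXIOMS), []
    while queue and len(theorems) < limit:
        t = queue.popleft()
        theorems.append(t)
        for n in sorted(successors(t)):
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return theorems

print(enumerate_theorems())  # e.g. ['MI', 'MII', 'MIU', 'MIIII', 'MIIU', ...]
```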

But still, there is a difference between carrying out derivations within a formal system, and being the machine that instantiates that formal system, or at least so it seems to the intuition. Certainly, the ontology seems different: in the first case, we have an active agent, actively carrying out symbol-manipulations in order to, say, generate valid Chinese sentences, and in the second, the formal system provides a passive definition of the agent himself. One concrete difference that might be pointed to in support of this conclusion is that in the first case, the agent might easily decide the Gödel sentence of the formal system he is using, while in the latter, it is of necessity undecidable.

This is already approaching thread-killing length, so I’ll snip it here… There are some more interesting things one might say about levels; one is that on the level we’re at right now, the question of machine self-awareness seems downright preposterous: certainly, a function from binary strings to binary strings can’t be self-aware! This is, I think, really the gist of what all those ‘anti-simulationist’ thought experiments say, from Leibniz’ mill to the Chinese room, etc. But they forget that more is different – speaking of self-awareness on this level might make no more sense than speaking about wetness on the molecular-level description of water, or speaking of an arm on the cellular level. This isn’t an anti-reductionist stance so much as merely the observation that on different levels, different sets of questions and answers provide intelligible descriptions… But anyway, I’ll leave those thoughts unfinished for now.

This is, as I understand it, already a major departure from what Searle is saying: which is as I understand it that essentially, no amount of rules, no matter of what kind, suffices to create understanding – an intuition which I share, in the sense that I agree that no amount of rule-following will allow the man in the room to understand Chinese (whether this means that there is no understanding at all, though, is a point of contention).

But maybe I’m misunderstanding you – do you think the man in the room, or the room (the ‘system’) possesses understanding?

The reason I keep going on about this is that the Cartesian theater problem is something that’s very hard to get rid of. In terms of the preceding post, when confronted with the problem of whether a ‘function from strings to strings’ can be self-aware, people generally divide into two camps: those who regard the formalization as a reductio of the possibility of machine consciousness, and those who, typically guided by the idea that after all, a brain is a machine, too, get via one route or another to the point where essentially the bit string is interpreted in ‘meaningful’ terms – in your picture, this is roughly where a simulation of one kind or another is being created, I think. I believe neither is a viable route, curiously for the same reason: the former embraces an out-and-out dualism, where the mind is simply made of different stuff, while in the latter, dualism arises implicitly in the genesis of meaning (what Dennett calls ‘Cartesian materialism’). Some in the latter camp believe they can escape this conclusion through back-referring to things in the outside world, creating a sort of ‘denotational’ theory of meaning, but they forget that there are no things of the outside world within our minds. I.e. the idea is to make the word ‘tree’ have meaning by having it refer to (‘point at’) an actual tree, but of course, there are no trees in the mind – merely images, ideas of trees, which would then have to refer to an actual tree, of which there are none in the mind, etc.

The problem is that to all appearances, the two alternatives are exhaustive: how else to create meaning if not by association with something meaningful? Nevertheless, I think a middle road is possible, and actually necessary.

Yes, I always thought it interesting that our perception operates in such a ‘scientific’ way: generate hypotheses about what may be seen, and then use the data to knock them down one by one. It’s much more efficient than actually building up an image from the raw data, and hence, quicker, with the downside that it is prone to false positives – but a nonexistent tiger seen and fled from is less of a problem than an existing tiger missed!

That’s not what I mean. I fully appreciate that data is stored in compressed form – and that’s really what ‘rules’ are: efficient methods of data compression – and that decompression, or rule application, happens in order to predict scenarios. The knowledge we have is implicit – procedural – rather than explicit – stored in the form of actual data. But model building, in the real world, serves one purpose: to see the effects of rules we don’t precisely know how to apply. In the mental world, this purpose is negated.

Explicit model building would be like a computer first creating an image out of the stored data, then projecting it onto a screen, then filming that screen using a camera, and using the data that comes from the camera in order to make decisions. I don’t think this is a limiting view – you can take terms like ‘image’, ‘project’, or ‘film’ as metaphors for any general process which creates a model out of raw data, then ‘perceives’ (i.e. brings to attention) that model, and makes decisions on that basis.

The key point is that the raw data itself, using our procedural, implicit knowledge, would suffice to reach the same conclusions, make the same decisions. Just as there is no need for the computer to project an image onto its own screen, there is no need for us to place a model – thought of however abstractly you wish – within some sort of ‘inner workspace’ in order to see how it reacts to manipulations.

Nevertheless, it certainly seems as if this is precisely what we’re doing (as in your other post, which certainly describes a sequence of events I am familiar with, as well)! My tentative answer to this is that it seems that way to us in precisely the way that our field of vision seems to extend through blind spots – there is nothing there that actively creates content to fill in the blank, it’s just that missing content is only noticed if there is something that looks for content.

Now suppose you have no ‘self’, you’re a zombie. Then, there is nothing that could look for an internal representation, a model, of whatever. So since no questions are asked, you never realize that you don’t actually have any inside model. But what you do have is a collection of – raw and procedural – data that can be queried for what a visualization (or other kind of model) of whatever would look like (feel like, sound like, be like, etc.). So, in the manner of the zombot interrogating himself to determine whether or not he is conscious, you query that data for properties of the visualization: is it round, big, small, black, reflective, beautiful, etc. So now you have a collection of data about what it would look like, to the self, if you had one, to look at your internal model, if you had one. This new data again can be queried in order to gain ‘higher level’ states, iteration by iteration.

However, if you actually had a self, you could not be fooled: you could look inside, and see that there is no actual model there. But you have no self, and no model – just the knowledge of what it would be like, up to a certain level of self-interrogation, of introspection, to have a self and a model. However, this means, since you can’t tell the difference, that it would seem like you have a self; it would seem like you had a model. But in the mind, how things seem and how things are are indistinguishable (my example was that it can’t just ‘seem like’ you have a headache – if it seems that way to you, you actually do have a headache). Thus, the model and the self bootstrap each other, spurred on by introspection – there is no self that looks at a model, any more than there is a model to be regarded by a self; rather, both determine each other, in the same way a river’s flow determines the shape of its bed, and the shape of the bed determines the river’s flow.

It’s not a confusion–I’m intentionally arguing that these levels you’re referring to aren’t relevant. What determines whether we’re at the “user level” or not is not anything about the machine itself–it’s about the relation between the machine and the user. I as a user can decide I will count certain machine interactions as relevant to its mindlikeness, but that’s just what I as a user am doing–it has nothing to do with what the machine itself is, ontologically speaking. Just because I can decide to ignore the non-mindlike interactions, that doesn’t make the machine a mind. (I can ignore all the times my table doesn’t act like a mind as well, but I can’t claim, on the basis of those very few times it acts like it has a mind, that it actually has a mind.)

I’m suddenly called away but I’ll answer the rest when I can. Hopefully the above explains some of what I’m saying.

Here is my sentence:
A grammatically correct paragraph is definable in terms of the structure of the strings that make it up. I don’t think a “meaningful paragraph” or a “well-written paragraph” would be.

Here it is in a syntactically equivalent form:

A rapidly obese hamster is edible in terms of the skeleton of the saxophones that make it up. I don’t vomit a “colorful gas” or a “poorly-cooked building” would be.

Meaningful now? The syntax diagram of my second sentence is identical to the first, subject to my errors.

As for the program, we are worried about whether the program meets a given set of requirements, not the structure of the requirements. There is no way to know that syntactically. In fact, one of the big problems with requirements is that syntactically correct English-language requirements can have major semantic problems, such as being self-contradictory or incomplete. How do you find that out syntactically?

Ah–I think you’re talking about the syntax of the words, while I’m talking about the syntax of the letters. Your example of a syntactically equivalent string preserves word-level syntax, but not letter-level syntax. I’m not sure exactly what significance that miscommunication (?) has for our conversation so I’ll just throw the ball back at you. :wink:

As far as I can tell, you’re agreeing with me–I said the requirements can’t be defined purely syntactically, and it looks like you’re further explicating the very same thought. Have I misread you?

I’ll wait for a more fleshed-out answer, but in the meantime, I think I disagree here. To me, the machine selects what interactions you get to see on the user level, and only if a way to do so is explicitly provided can you go beyond that level. For instance, the two primitive ‘machines’ I defined back on the other page, the ‘Sheffer machine’ and the ‘Boolean machine’, could be ‘stacked’ in the way I described – the Boolean machine defined using the primitives of the Sheffer machine. You as a user might be presented with a panel in which you can enter the values of A and B, and the operation to use, and the machine’s output would consist, say, of an LED being either off or on.

From this user level, there is no possibility to tell whether you are working on a Sheffer machine implementing the Boolean machine, or on a Boolean machine implemented directly on a hardware level, with dedicated and, not, or, etc. gates to perform the requested operations. The only way to do so would be if some means of special access were provided – some way to view the lower-level architecture, either through some software-based means, or directly via screwdriver-interfacing. In both cases, you’re leaving the user level, though in the former, the user level itself provides a means for you to do so (or more accurately, the lower level and the user level are both accessible), which is fairly typical for common computers and programs. But if such a ‘bridge’ to a lower level exists, then the Sheffer machine isn’t implementing the Boolean machine, but something that includes the functionality of a Boolean machine plus whatever is needed to provide a bridge (or more likely, both Sheffer and Boolean machines are implemented on some other, more powerful architecture that governs such level-crossings). But even then, the user can only move within the confines dictated by the emulated machine.
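
To make the stacking concrete, here’s a rough sketch; I’m assuming the Sheffer machine’s primitive is the Sheffer stroke (NAND), and the ‘panel’ and the particular gate definitions are just mine for illustration:

```python
# Two implementations of the Boolean machine, one user-level panel.

def nand(a: bool, b: bool) -> bool:   # the Sheffer machine's only primitive
    return not (a and b)

# Boolean machine implemented *on top of* the Sheffer machine:
SHEFFER_STACKED = {
    "not": lambda a, b: nand(a, a),
    "and": lambda a, b: nand(nand(a, b), nand(a, b)),
    "or":  lambda a, b: nand(nand(a, a), nand(b, b)),
}

# Boolean machine implemented 'directly in hardware' (dedicated gates):
NATIVE = {
    "not": lambda a, b: not a,
    "and": lambda a, b: a and b,
    "or":  lambda a, b: a or b,
}

def user_panel(machine, a: bool, b: bool, op: str) -> str:
    """All the user ever sees: enter A, B and an operation; read one LED."""
    return "LED on" if machine[op](a, b) else "LED off"

# From the user level, the two machines are indistinguishable:
for op in ("not", "and", "or"):
    for a in (False, True):
        for b in (False, True):
            assert user_panel(SHEFFER_STACKED, a, b, op) == user_panel(NATIVE, a, b, op)
```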

So I don’t think that the user decides which interactions ‘count’; rather, the machine decides what interactions it lets the user see, and thus defines the user level, beyond which the user can’t see – that in fact seems to me a working definition of ‘user level’: whatever the user can see. If a machine faithfully implements a mind, there is no way in principle for the user to see beyond that implementation – otherwise, the machine would not implement a mind, but rather something containing the functionality of a mind, plus a way to see beyond that implementation.

I’d been meaning to address this, but forgot about it. At first, it struck me as an insightful objection, but now I’m wondering if it’s actually true – isn’t it possible, given an as-large-as-necessary database of Chinese conversations, to abstract rules from the interrelationship of the characters, without any ‘understanding’ of the conversations? I can certainly think of similar, if simpler, examples from science, where without an understanding of the ‘deeper physics’, phenomenologically correct rules were abstracted from observational data – one prominent example is Planck’s introduction of his constant into the model of black-body radiation, which he at first viewed as a mere mathematical trick, and which only later was realized to be the first hint of the quantum nature of reality. Indeed, this is arguably how physics usually proceeds, if one neglects those few, outstanding examples where new understanding has been gained through pure reflection alone, as in the case of Einstein’s famous thought experiments.
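
As a toy illustration of what I mean by abstracting rules from the characters alone (the ‘corpus’ here is a made-up placeholder, and a character-bigram chain is of course far too crude to actually converse, but the principle scales with data):

```python
# Abstract 'rules' purely from which symbol follows which; nothing here
# grasps meaning, it only tracks symbol statistics.

import random
from collections import defaultdict

corpus = "你好吗 我很好 你呢 我也很好"   # placeholder 'conversation database'

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)                 # record: symbol b was seen after a

def babble(seed: str, length: int = 8) -> str:
    out = seed
    while len(out) < length and follows[out[-1]]:
        out += random.choice(follows[out[-1]])   # pure symbol statistics
    return out

print(babble("你"))
```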

[quote=“Frylock, post:232, topic:582787”]

Ah–I think you’re talking about the syntax of the words, while I’m talking about the syntax of the letters. Your example of a syntactically equivalent string preserves word-level syntax, but not letter-level syntax. I’m not sure exactly what significance that miscommunication (?) has for our conversation so I’ll just throw the ball back at you. :wink:
[/quote]

Letter-level syntax :confused::confused::confused: Am I being whooshed here? The mapping of a string of letters into a meaning is semantics, partially.

That’s about the requirements. Whether the program meets the requirements can’t be determined without semantics either.

As long as the domain of the conversation is limited.

For example, if the input is “Is it raining?”, no amount of analysis of conversational history will provide the additional sensory input to arrive at the right answer.

[quote=“Voyager, post:235, topic:582787”]
Letter-level syntax :confused::confused::confused: Am I being whooshed here? The mapping of a string of letters into a meaning is semantics, partially.
[/quote]

I’m not talking about mapping anything to meanings.

Take the following string of letters:

Dogs are canines

The following string is letter-syntactically identical to the above string:

Lpfruknsuqkoxosr

The second string simply replaces each symbol (letter or space) from the first string with a new one, following the rule that when the same symbol appears in the first string in more than one place, so also the same symbol appears in the second string in the two corresponding places.
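
If it helps, that rule is easy to state as a small check (reading it as a one-for-one substitution; the function name is just mine):

```python
# Two strings are letter-syntactically identical when one can be obtained
# from the other by a consistent one-for-one substitution of symbols.

def same_letter_syntax(s: str, t: str) -> bool:
    if len(s) != len(t):
        return False
    fwd, back = {}, {}
    for a, b in zip(s, t):
        if fwd.setdefault(a, b) != b:    # same source symbol, different target
            return False
        if back.setdefault(b, a) != a:   # same target symbol, different source
            return False
    return True

print(same_letter_syntax("Dogs are canines", "Lpfruknsuqkoxosr"))  # True
print(same_letter_syntax("Dogs are canines", "Dogs are felines"))  # False
```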

Syntax is all and only about the spatial relations between tokens*, and rules for manipulating them in those terms.

*By “tokens” I mean instances of types, where the types might be letters, words, or whatever you like. Letter-syntax (a word I made up in this post) is syntax concerning tokens which are instances of letter-types. Here are three instances of the “A” letter-type: A A A.

Every passage written in English is definable purely in terms of its letter-syntax. This does not get you the semantics of the passage–but that’s the point. English passages are completely definable in terms of their syntax, but the semantics of the passages requires something beyond the syntax.

Similarly, computer programs are definable purely in terms of their syntax, (we can fully specify them simply by writing down strings of letters after all), but their semantics comes from something other than their syntax.

Searle believed that a lot of Strong AI researchers thought you could get any computer to understand Chinese simply by giving it the right program. Searle believed this to be a mistake, and his reason for thinking this was, a program is definable purely syntactically, but understanding of Chinese is not definable purely syntactically. The man in the Chinese Room (argues Searle) is doing all the right syntactical manipulation of strings, but does not have understanding thereby.

Searle believed in a distinction between what we might call in this context “ascribed semantics” and “original semantics”. I ascribe semantics to a string when I give the string meaning by my act of interpretation. But (thought Searle) a mind has semantics not in an ascribed way but “originally”–its semantics don’t exist as the result of anyone ascribing any meaning to it. The mind’s concepts mean what they mean independently of any act of ascription.

This distinction continues to be controversial (Dennett denies there is any such distinction, many others think there is such a distinction but that it doesn’t have the significance Searle thinks it has, and I myself have the view–which I don’t think anyone else has!–that almost all semantics are original, even most of those that people would typically think are derived), but it’s a distinction Searle thought accurate and relevant, so I mention it here in hopes it may clarify something. Probably I’ve accomplished the opposite!

At all times I am talking about the “system” and never the man - the system understands (under the proper conditions). Because, to me, understanding is related to properly pulling together the different pieces of stored information and then, if required, being able to correctly manipulate that information and arrive at new positions that substantially match others’ processing. To play “what if” and correctly infer the consequences of actions, etc.
Regarding rule following: it all depends on what you call a “rule”. There is a continuum of processing that can occur, as I see it.

One end of the spectrum
Pure input to output mapping - no intermediate processing

Middle of the spectrum
Many tricks, some mapping, some intermediate processing to build/realize a dynamic structure (model) with references to past experiences stored - and possibly manipulation of the model to help determine response, etc.

Other end of the spectrum
Pure processing, no stored input-to-output mappings, every new request requires re-evaluation of a complete internal model from start to finish, and evaluating possible results - nothing stored

I think we humans are in the middle (kind of) of the spectrum and it’s what we call understanding. Even if the other two ends of the spectrum can both arrive at the “correct” answer, neither of them feels like understanding to me. They can be effective strategies without being what I would call understanding.

A few things:

  1. The description of the Cartesian theater that I read on the website you linked did not seem like a compelling argument, due to its simplistic and extreme nature. If modeling in the mind is represented as essentially re-drawing sensory input on an internal screen and then having to interpret it - then I agree - but when I use the term “model”, that is not what I am picturing.

  2. I’m not opposed to the idea that “I” am really an after-the-fact representation of what happened and that there is no “I” interpreting a model and making a decision. But I also wouldn’t say things have to be that way, and I also wouldn’t throw out the possibility that both things are happening simultaneously. The brain is clearly a complex machine with unique characteristics; I see evidence for both.

Why the requirement to point to a real tree? I would think it’s quite the opposite, the tree reference is pointed to or refers to our past internal states when a tree was encountered (e.g. sight, etc.).

Not only is it not negated, it’s one of our most powerful tricks.

Mentally exploring the application of different rules with different objects and ideas provides us with tremendously flexible problem solving capabilities.

Did you read my post above about estimating the height of a tree by mentally stacking copies of my house on top of itself?

The process of setting aside a similar size space in my mind and then moving it to sit on top of my house assisted me with my estimation.

Do you think I could have just stood there without any mental manipulation and arrived at a similarly good guess? Do you think my mental gymnastics were not valuable in that case?

Ok, let’s be explicit.

What information that already existed in my head regarding my house would have allowed me to project a similar size object relative to my position and angle onto the scene I was viewing?

I would argue the information doesn’t exist in precise enough form for that task. Instead my mind grabbed an object from the current visual scene and created a “place holder” object of similar size and superimposed it onto the scene (at some abstract level).

As stated previously, I think we have many methods and tricks for information processing. Filling in a blind spot, to me, does not seem to occupy the same category of tricks as purposefully, explicitly manipulating a mental image over an extended period of time.

Two completely different mechanisms at work.

Disagree.

If the modeling process is the method of computation, then whether you have a centralized self or just a collection of neurons, it can still be the method of computation. The replacement for the self is merely the “result” analyzer function, or maybe the entire modeling apparatus is a multi-step load, transform, react process performed by the exact same set of neurons, one that then triggers the response from those neurons.

I see a few critical things that to me represent modeling:

  1. Access to stored entities/objects/ideas
  2. A dynamic and flexible capability to pull together any number of objects and ideas into “short term” or “current” processing
  3. The ability to explicitly manipulate these objects over an extended period of time and internally review the results

Whether we are a zombie or not, these same capabilities can be used to infer future results, and I think they exist to provide a general problem-solving ability instead of only allowing problem solving to occur with items that have been explicitly related in the past.

[quote=“Frylock, post:237, topic:582787”]
Syntax is all and only about the spatial relations between tokens, and rules for manipulating them in those terms. […] Every passage written in English is definable purely in terms of its letter-syntax.
[/quote]

I suppose you could write a grammar with "u"s as word separators. At this level, however, you would be hard pressed to distinguish it from German or Spanish. But you could write a grammar in the normal way under this coding, which can be proven by taking a normal English grammar and adding productions which map letters to their English equivalents. Spelling does not count - I had a professor who said that, since he was not a good speller, he wrote his Fortran compiler to accept lots of misspellings of keywords that mapped to the right ones. When I later wrote compilers I discovered how trivial this is to do.

Manipulating them in the sense of breaking down higher level grammatical constructs into tokens, perhaps. Doing pure parsing you just recognize that a bunch of tokens match a production of the grammar, which should be unambiguous if your grammar is correct. Now, if you write a parser using lex and yacc, say, you do something to tokens, like putting them into a hash table, but that is adding semantics on to the parsing so the code does something useful. It is possible to just parse.
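
A toy illustration of the difference, with Python standing in for lex/yacc and a trivial grammar of my own choosing (expr -> NUM ('+' NUM)*): pure parsing only accepts or rejects, while attaching a semantic action makes the code do something useful.

```python
import re

def tokens(src):
    return re.findall(r"\d+|\+|\S", src)

def parses(src) -> bool:
    """Pure parsing: only decides whether the string matches the grammar."""
    toks = tokens(src)
    if not toks or not toks[0].isdigit():
        return False
    k = 1
    while k < len(toks):
        if toks[k] != "+" or k + 1 >= len(toks) or not toks[k + 1].isdigit():
            return False
        k += 2
    return True

def evaluate(src) -> int:
    """Same grammar with a semantic action attached: compute the sum."""
    if not parses(src):
        raise SyntaxError(src)
    return sum(int(t) for t in tokens(src) if t.isdigit())

print(parses("1 + 2 + 3"))    # True  (it is possible to just parse)
print(parses("1 + + 3"))      # False
print(evaluate("1 + 2 + 3"))  # 6     (semantics added on top of the parse)
```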

If by definable you mean “a properly formed string within the rules of that language” then I agree - and I’ve been saying this for quite some time. But this is far different from the claim that you can understand a program or sentence based on syntax only. I think using “understand” to mean “determine that it is a properly structured string” is pushing the meaning of understand off a cliff.

I think you have correctly described where Searle goes off the tracks. Except that even he allows some semantics, because deciding which card to respond with for an input is a semantic operation. If the room just said “correct Chinese” or “incorrect Chinese” then it would be pure syntax.

Which anyone who has had the pleasure of seeing a child learn the semantics of speech and the world knows is absurd.

I’d suspect that the first semantics came from associating a given string of grunts with the semantic concept “the lion is behind that bush.” Even dogs can do this, if not originate it. My golden certainly knows the meaning of certain strings, but it is a fairly direct mapping, because she interprets “Nebula, do your business” differently from “do your business.” A book on dog training I read said that humans and primates in general repeat things for emphasis, and this doesn’t work for dogs, since they hear “Go, go, go” as a different string from “Go.” Our old dog, half border collie and a genius, could handle either of these, but he could also reason abstractly to some degree.

I think other people have mentioned that Searle’s model could never pass a Turing test in any reasonable way, since it is not self-modifying and does not have enough state. He seems to have an overly simplistic model of how computers work.

To be definable syntactically, it doesn’t have to be a well-formed formula. Formulas that are not well-formed are still definable purely syntactically. That’s what makes them formulas, for one thing–their purely syntactic definability.

Who do you think has said this?

Someone who buys into the derived/original distinction might agree with you that all words have semantics only in the derived sense (in fact most who buy the distinction do say this, though I don’t) but the idea is that the human mind utilizes concepts (not words–concepts) which don’t have their meaning because anyone ascribed that meaning to them, but rather by some other means.

This is a very bad objection against Searle. The Chinese Room can be as self-modifying as you like, for one thing–there’s nothing about the scenario that disallows this–and it is explicitly stated that you can put as many states into the Chinese Room as you like. None of that gets anywhere near the meat of the matter when it comes to what’s wrong with the thought experiment.