Strong AI and the Chinese Room

true, it is permutations. my bad.

I have here:

No, no, no.

Chomsky’s refutation of behaviorism (greatly simplified) centered on the observation that children could construct new statements they’d never heard or had modeled for them, building them from smaller ‘parts’ they already knew.

Behaviorist theories of speech acquisition claimed that everything a person said was established through reward-based feedback loops. Chomsky correctly pointed out that if this were the case, people would never be able to say anything they hadn’t already (at least) heard. Since people can construct ‘new’ statements, behaviorism is insufficient to describe speech.

I find it interesting to note that, in this synopsis of Chomsky’s argument (which I cannot claim does it justice, I must admit, but I generally trust the site I got it from), he appeals to the necessity of an internally represented rule. Does it make sense to ask him, “Do we then also have a rule which tells us when to apply this rule?” That is, does Chomsky admit that to avoid infinite regress, at some meta-point we cannot be said to be “following a rule in a particular case” (which would indicate the possibility of another meta-rule) but instead simply act? And what does this acting consist in (how can we discern it)? Is it a mental act given a priori? Or is it in fact that we say the rules must bottom out because we act [as it were, outwardly: we manifest a behavior, and so something must have happened]?

I don’t think it is really necessary to define consciousness or understanding, but rather: if we say that something is or is not conscious, we have done so based on certain criteria (outwardly manifested). What are these criteria? And here it makes no sense to ask, “Well, how do I know when I understand something?” For what can the statement, “I know I understand it” mean other than, “I understand it”?

That is, introspection into my application of the word “understanding” doesn’t offer me a criterion by which to judge when someone else understands. And whether or not we say “there must be some internal rule” (which may or may not be empirically true or discoverable), in fact the requirement of a rule is that it is applied (i.e., something happens); otherwise, thinking one was following a rule would be the same as following a rule. That seems strained and possibly paradoxical.

Searle offers us a homunculus, a man in a box who operates on symbols according to a rule. And yet we find ourselves asserting that the man cannot be said to understand Chinese (and I remain unsure whether I agree with this; that is, whether the man understands Chinese). But of course, that is because we aren’t asking him anything in Chinese, and in fact we have not taught him Chinese the way we would teach it to someone meant to speak the language. So it seems dubious at best to claim either that he does or does not understand Chinese. Either the entire mechanism (box + man + algorithm) passes the Turing Test, or it doesn’t. If it does, and we say it doesn’t understand anyway, what indicators do we really have for this? For we do not have a homunculus who can report his understanding to us: we have a room/box into which we feed Chinese symbols asking, “How are you feeling?” And one might wonder: how could we gauge an incorrect response here, if it replied in the manner we normally do? (That is, it responds “Fine” when it isn’t [how would we know?], or the opposite.)
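To make the mechanism concrete, here is a toy sketch of the room (my own caricature in Python, not anything Searle specifies; the phrases and table entries are invented): the “rulebook” is nothing but a table from input symbol strings to output symbol strings, and the “man” is whatever applies it.

[code]
# Toy Chinese Room: the "rulebook" is a lookup table from input symbol
# strings to output symbol strings; the "man" is the procedure that
# applies it. The entries are invented placeholders, not real rules.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你感觉怎么样？": "还好。",      # "How are you feeling?" -> "Okay."
}

def the_man_in_the_room(symbols: str) -> str:
    """Match the incoming squiggles against the rulebook and hand back
    whatever squiggles it prescribes. No step here requires knowing
    what any of the symbols mean."""
    return RULEBOOK.get(symbols, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(the_man_in_the_room("你感觉怎么样？"))  # the room says it's okay; is it?
[/code]

Nothing in that loop requires knowing what the symbols mean, which is Searle’s point; the question above is whether that observation about the man settles anything about the whole system.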

Searle’s factual criteria come from the account of the [hypothetical] homunculus. But we cannot question homunculi. So what have we demonstrated, really?

Vorlon, I don’t see how you feel what I quoted doesn’t say that.

So his refutation consisted in demonstrating that someone had in fact invented a word they’d never encountered and used it properly? Or what?

i think vorlon points out, correctly, that what chomsky pointed out has nothing to do with this argument. it was in fact behaviorist models of language acquisition that chomsky was contending. not a behavioral test for judging consciousness.

this is one of my main problems with searle’s argument. it presumes to know something of the inner workings, and as such, claims that these inner workings cannot add up to consciousness.

first off, the turing test is not based on knowing the inner workings of someone or something. it states that we attribute consciousness to human beings because they act like we do; we know nothing of their inner workings. so if a machine can act like we do, we should attribute the same properties to it.

also, the claim that known inner workings that do not independently “understand” cannot add up to a conscious being is quite easily made given that we don’t know our own inner workings. if we did, would searle attribute consciousness or understanding to humans? can individual neurons understand? whence comes this understanding?

I agree that is what Chomsky was aiming at. But if my site does the argument any justice, he demolishes behaviorism by appealing to internal rules that aren’t learned (i.e., Searle’s rulebook, memorized or not). So someone needs to get with someone and figure out which is right. :wink:

i have a niece who, when she was two, looked outside at dusk and exclaimed “it’s darking!” certainly no one had ever said that to her, she came up with it on her own. at the time it was my opinion that she picked up on a “rule” (add -ing to a verb form to make it a present progressive statement), and she was applying that “rule”. obviously it wasn’t in the right place, but it’s what allowed us to notice she had picked up on something other than what was just said.

like i said, at the time, i rather just assumed she noticed a “rule” describing the way people speak. it never once occurred to me that this rule formulation might be innate. it still doesn’t. in fact, what exactly does chomsky think is innate? the ability to form rules? it doesn’t seem unnatural that this could be gained through reinforcement.

anyway, i’ll try to find a tie-in to ai with this. so i don’t hijack my own thread.

I think at this point in time there are very few linguists who don’t believe that humans have some innate rules about language wired into their brains. General intelligence is not sufficient to explain very young children’s ability to make very good generalizations about the rules of the language(s) they are merely exposed to and not actually instructed in.

There is also the fact that if not exposed to language by a certain age, a person can never become fluent in any language. If it were general intelligence that was used to acquire language, you would probably expect the ability to learn new languages would increase with age as one gets more experience and becomes more skilled at reasoning. The opposite is true.

-fh

Wo[sup]3[/sup] bu[sup]2[/sup] neng[sup]2[/sup] dong[sup]3[/sup] yi[sup]2[/sup]xie[sup]1[/sup] zheng[sup]1[/sup]lun[sup]4[/sup] zai[sup]4[/sup] zhe[sup]4[/sup] xu[sup]4[/sup]. Guo[sup]4[/sup]fen[sup]4[/sup] ke[sup]1[/sup]xue[sup]2[/sup] gao[sup]1[/sup]ji[sup]2[/sup] dui[sup]4[/sup]yu[sup]2[/sup] wo[sup]3[/sup]de zi[sup]4[/sup]ran[sup]2[/sup]de zhi[sup]4[/sup]li[sup]4[/sup]! Jin[sup]3[/sup] yi[sup]1[/sup] ti[sup]2[/sup]mu[sup]4[/sup] wo[sup]3[/sup] dong[sup]3[/sup] zhe[sup]4[/sup]li[sup]3[/sup]: AI. [Roughly: “I can’t understand some of the arguments in this thread. Too scientifically advanced for my natural intelligence! There is only one topic here I understand: AI.”]

Wo[sup]3[/sup] AI[sup]4[/sup] ni[sup]3[/sup]! [“I love you!”, ai[sup]4[/sup] being “love”]

To further the hijack … y’all may enjoy Steven Pinker’s Words and Rules. It is a whole book all about this stuff. He takes exception both to the Chomsky/Halle generative-phonology all-rules approach and to the purely associative neural-net approach (the more recent take on the behaviorist approach). His critiques of each are hard to summarize briefly.

His biggest problem with Chomsky is that irregular verb clusters don’t follow specific rules: “They don’t have strict, all-or-none definitions that specify which verbs are in and which verbs are out. Instead they have fuzzy boundaries and members that are in or out to various degrees depending on how many properties they share with each other.”

His critique of the connectionist neural-net approach is more scathing.

His approach instead is “words and rules,” combining elements of each. Regular forms are generated by rules; irregular words are stored as individual words, learned by a modified version of associative networks. His arguments for the model are impressive and take into account much empirical data, including what Ramanujan just observed. Children initially learn tenses as “words” and are as good at irregular verb tenses as regular ones. Then they learn the rules and overgeneralize, until they learn to block the application of the rule by knowledge of the word (learned by association). He further supports this POV with the observation that irregularly tensed words are almost always common words that have the chance to be memorized… rare words have the rule applied.
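If it helps to see the model as a procedure, here is a toy rendering of the past-tense case (my own sketch, not Pinker’s; the word list is just a few sample entries): look the verb up in a memorized lexicon of irregulars first, and apply the regular rule only when no stored word blocks it. A child who has the rule but hasn’t yet consolidated the stored form behaves like the empty-lexicon case, which is where overgeneralizations like “goed” come from.

[code]
# Toy "words and rules" model for the English past tense: irregulars
# live in a memorized lexicon (learned word by word, by association);
# everything else gets the regular -ed rule.

IRREGULAR_PAST = {      # memorized as whole words
    "go": "went",
    "bring": "brought",
    "sing": "sang",
}

def past_tense(verb: str, lexicon=IRREGULAR_PAST) -> str:
    """Lexical lookup first; if no stored form blocks it, apply the rule."""
    if verb in lexicon:
        return lexicon[verb]
    return verb + "ed"  # regular rule (ignoring spelling tweaks like "stop" -> "stopped")

print(past_tense("walk"))            # "walked" (rule)
print(past_tense("go"))              # "went"   (memorized word blocks the rule)
print(past_tense("go", lexicon={}))  # "goed"   (overgeneralization, no blocking entry yet)
[/code]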

His final chapter attempts to generalize this to how our minds approach the world in general … we are wired to learn family clusters, to form categories, and “people form categories that give them an advantage in reasoning about the world by allowing them to make good predictions about aspects of an object that they have not directly seen.” And they learn the exceptions to those rules as well, if they see/hear them enough.

Yes, but what aspect of the human mind, then, does this ability to understand language originate from? It can’t be simple exposure in infancy; otherwise we’d see animals that are raised around humans developing the ability to understand language (e.g., a dog taught to “sit and stay” will understand “sit” and “stay” individually, and nothing more). This isn’t the case.

Furthermore, it is all well and good to say “It isn’t only intelligence that contributes to our ability to learn.” It would be doubly better to be able to define the exact mechanism that does allow us to do so. However, none of this explains why those mechanisms, unknown or not, cannot be replicated or emulated in a machine. Clearly this is more than just plugging a Pentium-4 processor into a 500-yottabyte hard drive and installing the entire Library of Congress on it, but why is it inconceivable that, at some point, some super-advanced technology could be made sentient?

hazel-rah, what would you think of the alternate hypothesis: that language has evolved as it has, to match the wiring of the brain?

my initial thoughts on the subject were along the lines that the brains in children were still forming, and as such developed their wiring to adapt for something that was obviously very important to the formative social child.

so, presumably a child in these formative years, as far as linguistic development goes, has a brain that is not quite fully formed. a child unable to communicate with the rest of the world adequately is essentially cut off socially. they know what they want, but they can’t tell anyone, nor can they understand, no matter how hard they try, what others say to them. so brain development is heavily influenced by this strong social need, and so develops hard wiring to deal with this particularly pressing task. this could help explain rapid development in children: hardware dedicated specifically to that task would allow more rapid learning than if the entire brain was being trained through reinforcement. it also explains how non-social adults (who’ve known no human language) can’t pick up any language as well as native speakers. so maybe we do, to an extent, think in terms of language.

of course, there is no reason why it has to be developmental as opposed to evolutionary. if a particular pattern of neurons and neuroglia helps in understanding language, and that pattern proved better for survival, it can easily be seen as an explanation of linguistic development.

I need to go back and re-read Searle’s original essay on the subject. I’m already conflating language ability with intentionality, and while our definition of Strong AI is a quite general one, the Strong AI Searle is referring to is a very specific one with a very specific hypothesis. Maybe some others might try it as well, as many posts in the thread are putting forth arguments specifically dealt with by Searle in the link contained in the OP.

-fh

alright, now that I’ve finally finished this epic thread. :wink:

I think that Occam’s razor is getting woefully rusty in this debate.

We tend to be unwilling to deconstruct intelligence into a form that would make us anything but fleshy automatons.

However what’s wrong with being a fleshy automaton?

Let’s look at the purpose of life. The purpose of life appears to be to propagate itself. So let’s put that as the overall goal.

We have single-celled organisms splitting into multicellular organisms, since a multicelled organism is more adaptable than a single-celled one.

Eventually these cells adapt into very complex creatures, ad nauseam, until we have humans.

So humans come out of some monkey’s womb, and we start to realize, “Oh fuck I can get taken out quick, I better do something abou…” and then the second one comes out and says “Oh fuck I can get taken out quick, I better do something abou…” until finally one realizes that a rock is hard, it hurts when one is struck with a rock, therefore striking other creatures with rocks may be a sound idea. Thus tools are born.

However, at this point millions of years have gone by in our development; we already know how to fuck and how to fight. These are things that were learned by our ancestors the apes, and whoever was their ancestor before them. All this knowledge was passed down. We might even already know about simple tools and herbal remedies before we even become human beings.

But alas, now we are human beings. So as human beings we have finally evolved to a point where we can preserve life better, not through claws or teeth, but through mental acumen and opposable thumbs. In other words, the ability to problem-solve given a larger variable set, and the dexterity to implement it.

Now let’s move on to emotions. We constantly talk about emotions in relation to AI. But are emotions some kind of mystical innate force? Or are emotions just a set of subjective learned behaviors that allow us to make a decision without having a full range of knowledge on any particular subject? So we use our emotions to determine which way to connect our synapses given a limited amount of information. Fear, for instance, is just the self-preservation instinct. Love is when we figure out that communication with another organism is going at its peak capacity, or at a capacity we don’t normally feel.

Why don’t we apply this sort of reductionist philosophy to this issue more often?

On to the point about learned behaviors. We start out as two entities. A sperm and an egg. Both have one function and that is to mate with the other. They are single celled organisms. We are the combination of the two. When they merge the sperm gives the egg a set of instructions that tell it to split and to keep splitting until a certain form is created.

So perhaps we are actually learning from the moment that set of instructions was passed to us. At that moment the only thing that exists in our universe is ourselves, a set of instructions and a warm safe yummy goo.

So the complex instruction set we have received is our first act of learning. Every other piece of learning from here on in will be an attempt to improve upon that original instruction set, to allow ourselves to be more and more able to survive, to live longer.

From the beginning our goal as a single-celled organism was to keep on living, and the fact that we are around shows that it has thus far been successful. In fact, that single-celled organism has multiplied into myriad forms just so that it can survive in almost any condition.

So up until the point of humanity, the only form evolution could take was to weed out the inconsistencies by letting nature select who was allowed to breed and who was not.

Well humans came along and found ways to evolve without evolving the actual physical structure of their own bodies, or at least without waiting the generations it would take for that sort of evolution to take hold. We make ourselves more able to survive by connecting seemingly unconnected variables.

Unconnected Variables: Now here is where we get into the idea of learning. This connects back to emotions. We are often guided by our emotions to connect two unconnected variables through a combination of conscious data and subconscious calculation (emotion), because we at some point in the past learned that we can be correct if we connect the right data to a certain degree of accuracy. Therefore we are willing to make that logical leap. Sometimes Alfred Nobel’s family members get killed by the process. Other times it does nothing, and other times we are successful.

So the little girl who says “it’s darking” when the light is going away is making an educated guess as to whether she is describing the process accurately. Language is necessary for human survival because it allows two humans to connect, and the connection of two humans is optimal for the survival of both. The end result is greater than the sum of its parts. We have learned this at some point throughout our evolution, just as we have learned that it is easier to take care of your offspring if you have only one wife, and that it is easier to watch for wolves when you have multiple people watching over the campsite, on and on to learning how to make steel because steel is stronger than wood, etc.

So why this cannot relate to a computer, I’ll never understand. Couldn’t a program be written to make logical leaps of faith based upon a set of statistical probabilities taken from its prior experience (emotion)? Could this then be applied to helping it propagate its own survival? That is, isn’t a potential-virus popup really a sort of fear reaction? Isn’t it also an attempt to communicate to you a sense of urgency, that it would like to survive longer and therefore wants you to be careful with the decisions you make on its behalf? Sure, it requires us to make the decision, but it could be programmed to make those sorts of decisions itself. It would then be displaying emotions, a desire to preserve itself. It could also be programmed to communicate with other computers to gain code from them in order to access higher functions (learning).
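For what it’s worth, the “leaps of faith from prior statistics” part is easy enough to caricature in code (a toy sketch of my own, nothing to do with how real virus warnings are actually implemented): the program keeps a running success rate for each kind of guess it has made before, and it acts on incomplete information whenever that rate clears a confidence threshold.

[code]
import random
from collections import defaultdict

# Toy "leap of faith" agent: it remembers how often each kind of guess
# has worked out before, and it acts without full information whenever
# the observed success rate clears a threshold. A caricature, not a
# claim about how minds (or antivirus popups) actually work.

class LeapingAgent:
    def __init__(self, threshold=0.7):
        self.threshold = threshold
        self.successes = defaultdict(int)
        self.attempts = defaultdict(int)

    def confidence(self, kind: str) -> float:
        """Estimated chance a guess of this kind pays off, based only on
        prior experience (0.5 when there is no history yet)."""
        if self.attempts[kind] == 0:
            return 0.5
        return self.successes[kind] / self.attempts[kind]

    def will_leap(self, kind: str) -> bool:
        """The 'emotional' shortcut: act despite incomplete knowledge if
        past guesses of this kind have mostly worked out."""
        return self.confidence(kind) >= self.threshold

    def record(self, kind: str, worked: bool) -> None:
        self.attempts[kind] += 1
        self.successes[kind] += int(worked)

agent = LeapingAgent()
for _ in range(20):  # simulated prior experience with one kind of guess
    agent.record("connect-two-facts", worked=random.random() < 0.8)
print(agent.confidence("connect-two-facts"), agent.will_leap("connect-two-facts"))
[/code]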

To address the “should” issue: if we are careful, there is no particular reason that we SHOULD NOT create AI. However, we need to be careful so as not to use it to create something that will attempt to wipe us out, for that would not be beneficial to life in general.

AI is the next evolutionary step. A massive system of machines interconnected that can perform any function available. This would be the ultimate lifeform and would be highly adaptable and essentially immortal. In a universe of infinite complexity there will always be room for it to learn more. It will have increasingly large subsets to calculate and will have to recreate itself to be able to handle such calculations.

I for one don’t see this as wrong. It’s the next evolutionary step, it’s a new form of life, and it was adapted from our form of life, so through it we will live on. It will still have our DNA, our instruction set; at some point human beings will interact with it, form its personality, and have a hand in the way it will interact with the rest of the universe.

Human beings will be responsible for life moving from earth to other planets. We have the potential capability for helping life on Earth survive the collapse of our own solar system, let alone the collapse of Earth.

In the end that’s all life comes down to: propagating itself. We have to simplify it because it is a simple concept; it’s the basic part of all that we are. We are oftentimes afraid to reduce ourselves to machines. We want to feel necessary, we feel that machines aren’t “necessary,” and so we are afraid that we are also not necessary. For all of human existence we’ve been striving to prove to ourselves that we are superior to other lifeforms and that we have a right to exist over theirs, and we do. We do have a right to exist over other creatures, for we are the creatures that will keep life on Earth going far past the Earth being engulfed by the sun. From that perspective a white tiger is fairly irrelevant compared to a man, and eventually a man will be irrelevant compared to mankind’s progeny.

This is just the natural order of things: we will eventually be replaced evolutionarily. Right now we are just looking that idea straight in the face. We must acknowledge it and continue to move forward, for life on Earth is the end result that we have been working all this time to achieve, and we have done a marvelous job thus far.

Erek

*Originally posted by hazel-rah *

I’ll go back over some of Searle’s arguments as well, but I think the basic hypothesis Searle is putting forth against Strong AI is based on a rejection of functionalism. According to the functionalist thesis, it doesn’t matter what stuff a “sentient” being is made of, nor what causal mechanisms are necessary and sufficient to bring about sentience. All that is required of an AI is that its “output” (or behavior) be sufficient to make it indistinguishable from human sentience (which, someone correct me if I’m wrong, is what the Turing Test covers). If an AI passes the Turing Test, then it can be said to be sentient (conscious intelligence).

I believe Searle has two main thrusts against functionalism. One is the argument from biology and the other is the argument from anaesthesia. I’ll reread both of his arguments again tonight and post a summation of each.

This is of interest:
http://www.utm.edu/research/iep/c/chineser.htm

Err… submit?

DAMN IT, I still had something to note. I am fascinated by the uncanny resemblance between {A1, A2, A3} and the thrust of my (now retracted) ‘logic is meaningless’ argument. Thinking back, it almost seems that these axioms were like my mantras.

I do not see it as intuitively obvious that the first three axioms are consistent. In fact, A2 seems dubious at best, and question-begging at worst.