Strong AI and the Chinese Room

Sure, a computer can be conscious, if only we were able to define what causes consciousness to emerge from the parts.

Some other definitions offered up in the past (bolding added):
http://boards.straightdope.com/sdmb/showthread.php?threadid=117034&perpage=50&pagenumber=2

Thus the self, strange-looping into itself, creates consciousness. Any system that functions with a similar structure can thus be presumed to have some form of consciousness emerging within it.

I’m not even sure that the idea of duality is necessarily a huge stumbling block to the idea of AI, if you believe that souls come into being as a result of the growth of a mind; maybe even if you believe that a deity of some sort creates a soul in response to the presence of a suitable receptacle.

I don’t think there is any way you could, just as there’s no way to tell for sure whether I have the same kind of inner life as you, or whether I see the same blue you see.

Precisely.

I’d demand that a person who discounts the possibility of AI because computers can’t have experiences produce proof that they themselves have experiences.

So what would we require to presume that an AI has a conscious inner experience similar to our own?

We presume that of other humans because we experience such an inner life ourselves, and since other humans act enough like us and are built enough like us, they likely do as well.

The popular (but not entirely accurate) way of presenting Turing’s test would say that if it acts like us (enough that you can’t tell from its behavior that it isn’t) then it must have the same internal experience. A point raised in this thread is that the output of behavior, e.g. translation of Chinese, does not imply the same internal experience.

So is the nature of that internal experience a function of a separate soul, or of the nature of the materials we are built of, or of how that material is organized?

If your thought patterns, with all of their massive nonlinearity and strange-loopy self-referentiality, were somehow exactly replicated in an AI of sufficient size and complexity, would that AI experience your consciousness? Or at least one similar? Would it be you in the machine?

Which is the point of defining what it is about how we are organized that results in a consciousness of ourselves, and then judging an AI’s sentience not only on its outputs but on the organization of the system that produces them.

Here’s a question for all of you: How do you know that I am not an AI?

I just started a thread about this:
http://boards.straightdope.com/sdmb/showthread.php?s=&threadid=148876

I need to watch my back so partly warmer doesn’t try to kill me in my sleep.

Golly. I'd expect someone who'd spent time as an AI project manager to be able to understand the difference between Artificial Life and an AI program. Given the amount of hardware needed to support an artificially intelligent entity, it's not at all likely that one could spread in an unrestricted fashion, bad sci-fi movies to the contrary. There are technologies (nanotechnology and genetic manipulation) that pose this kind of problem, but they are unrelated to the current discussion.

The biggest problem with artificial intelligence is that it could displace knowledge workers, but again, given the level of hardware required, that's not too likely in the near future.

To get back to the OP, the primary flaw in Searle’s Chinese Room argument is that it assumes it’s actually possible to create the “lookup tables” that convert from English to Chinese in a mechanical fashion. There are an infinite number of possible inputs, few of which correlate one-to-one with a Chinese utterance. So unless the intermediary can reason about the input and generate a reasonable response, you can’t, even in theory, build the system that Searle proposes.

Take, for example, the famous translation of the Coke slogan “Coke adds life”. I believe it translated, more or less, to “Coke will raise the spirits of your dead ancestors”. In order to get a reasonable response, you’d have to reason about what the advertising slogan meant to say, and find a corresponding utterance in the Chinese culture. This is a creative endeavor, as anyone who has ever tried to translate imagery from one language to another would attest. Not only is it creative, but there’s no one unique answer. This pretty much eliminates the lookup table approach.
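If it helps, here’s a toy illustration of that point (entirely my own; the “zh:” strings are invented placeholders, not real Chinese):

```python
# Toy contrast: per-word lookup vs. utterance-level lookup.
# All "zh:*" strings are invented placeholders, not real Chinese.
WORD_TABLE = {"coke": "zh:cola", "adds": "zh:add", "life": "zh:life"}

def word_by_word(sentence):
    # Gloss each word independently; the idiom's intent is lost entirely.
    return " ".join(WORD_TABLE.get(w.lower(), "zh:???")
                    for w in sentence.split())

# An utterance-level table would need one entry per meaning-in-context,
# and a figurative line admits many defensible renderings, not one:
UTTERANCE_TABLE = {
    "coke adds life": ["zh:idiomatic-rendering-1",
                       "zh:idiomatic-rendering-2"],
}

print(word_by_word("Coke adds life"))  # a literal gloss, not a translation
```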

People are programmed to consider certain signals to be evidence of “humanity”.

Our visual system is designed to identify faces as categorically distinct. More interestingly, robots designed to simulate human facial expressions typically cause people to respond to them as if they could understand what was said to them (verbally and nonverbally), despite the robot having absolutely no ability to process information in non-expressive ways.

I suspect that humans are fundamentally unable to accept computers/robots/programs as either “alive” or “intelligent” because they don’t have the characteristics that we instinctively associate with either life or sentience.

There is not an infinite number of possible inputs or outputs if the input uses proper grammar and word usage, and if all nonsense inputs are avoided. To lessen the vast number you could use a keyword lookup system based on which keywords appear and in what combinations. Granted, the number is very, very large and it would take the person a very long time to look things up. If he can’t find an entry, he could pass out something which is the translation of “I don’t understand”. You could even make a lookup table for those error replies based on keywords.
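Something like this minimal sketch of the scheme, say (the keywords and canned replies are invented placeholders):

```python
# Minimal sketch of the keyword-lookup scheme described above.
# Keywords and canned replies are invented placeholders.
RULES = [
    ({"hello", "greetings"}, "canned reply to a greeting"),
    ({"name", "called"},     "canned reply giving a name"),
    ({"weather"},            "canned reply about the weather"),
]
FALLBACK = "the translation of 'I don't understand'"

def respond(utterance):
    words = set(utterance.lower().split())
    for keywords, reply in RULES:
        if keywords & words:   # fire on any matching keyword
            return reply
    return FALLBACK            # the error entry described above

print(respond("Hello there"))            # canned reply to a greeting
print(respond("Colorless green ideas"))  # falls through to the fallback
```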

And? You could create a symbolic lookup table that translates words into symbolic images, and another to turn those images into another language. Heck, you could cross-link images with each other, to express “deoxyribonucleic acid” as “double helical chain of nucleotides held together by deoxyribose”, if there was no direct Chinese translation for DNA.

let’s be careful not to misrepresent the argument presented. it is, after all, a gedankenexperiment. such a system of lookup tables built out of books or a digital program might be impossible in terms of practicality, but those are just the sorts of issues we seek to ignore in formulating thought experiments.

there are a finite number of english words and symbols, and a finite number of chinese words and symbols, so the number of combinations possible is finite, for all sentences of finite length. yes it would take a while, but again, that is an issue that is not relevant.

searle would maintain his argument even if we used something more optimized than lookup tables, such as an ANN interpreting symbols determined by a hash table lookup. hidden markov models are commonly used to solve problems such as these, too, and can even produce “mistakes”, since a human would not believe another human was infallible.

so, no, it’s not impossible. not easy, to be sure. but it’s possible, and the practicality of the situation is not relevant.
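for concreteness, here is a bare-bones Viterbi decoder for a toy HMM. the states, symbols, and probabilities below are all invented for illustration; it just shows the sort of machinery meant:

```python
# Bare-bones Viterbi decoding over a toy HMM (everything here is invented
# for illustration, not from Searle or any real system).
def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = (probability of the best path ending in state s at time t,
    #            the predecessor state on that path)
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 1e-9), None)
          for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s].get(obs[t], 1e-9), p)
                for p in states)
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        last = V[t][last][1]
        path.append(last)
    return list(reversed(path))

states = ("QUESTION", "STATEMENT")
start_p = {"QUESTION": 0.5, "STATEMENT": 0.5}
trans_p = {"QUESTION": {"QUESTION": 0.3, "STATEMENT": 0.7},
           "STATEMENT": {"QUESTION": 0.4, "STATEMENT": 0.6}}
emit_p = {"QUESTION": {"what": 0.5, "is": 0.2, "this": 0.3},
          "STATEMENT": {"this": 0.4, "is": 0.4, "fine": 0.2}}
print(viterbi(["what", "is", "this"], states, start_p, trans_p, emit_p))
```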

Well, it is true that you could have a finite set of utterances, given a finite set of symbols and sentences of a finite length. But you can’t depend on a finite sequence of utterances and replies. The same utterance will require a different reply depending on context. So your lookup table would have to be context sensitive (I’m probably not using this in the strict formal language sense) in that for each utterance, you’d have to look up the entire previous discourse. So either you limit the length of the discourse (which makes the argument completely artificial) or you’re proposing something that isn’t even theoretically possible. Either way, I think you have a straw man.
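To make the explosion concrete, here’s a sketch (my own toy table, with invented entries) of what a discourse-keyed lookup would have to look like:

```python
# Why a context-sensitive lookup table explodes: the key is the entire
# prior discourse, not just the current utterance (entries invented).
TABLE = {
    # key: (previous utterances..., current utterance)
    ("do you like tea?",): "Yes, very much.",
    ("do you like tea?", "yes, very much.", "why?"): "It is calming.",
    ("do you like coffee?", "no.", "why?"): "It keeps me awake.",
}

def reply(history, utterance):
    key = tuple(h.lower() for h in history) + (utterance.lower(),)
    return TABLE.get(key, "I don't understand")

# The same utterance ("why?") needs a separate entry for every possible
# preceding conversation, so the table grows without bound unless the
# length of the discourse is capped.
print(reply(["Do you like tea?", "Yes, very much."], "Why?"))
```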

Note that if you add sensory input into the mix (which most AI researchers would agree is necessary), then you start dealing with fuzzy, noisy analog data (e.g. floating point numbers), so the idea of a finite grammar isn’t even remotely possible.

Searle’s *gedankenexperiment* argument proposes an algorithm that is not only impossible to actually implement but one which no sophisticated researcher would ever propose for implementing an AI system. So arguing that the mechanism that implements this algorithm is not intelligent is pretty vacuous.

I think that was what Searle was trying to illustrate when he wrote about the Chinese room… that language cannot be reduced to a manipulation of symbols.

Huh? How was that shown?

Certainly there is.

Bob laughed.

Bob laughed and Lily cried.

Bob laughed, Lily cried, and Ted jumped.

Bob laughed, Lily cried, Ted jumped, and Erica gave blood.

Bob laughed, Lily cried, Ted jumped, Erica gave blood, and Edward laughed too.

Bob laughed, Lily cried, Ted jumped, Erica gave blood, Edward laughed too, and Fraser watched TV.

and so forth.

That was one of the ways that Chomsky demonstrated that behaviorism was insufficient to account for human linguistic behaviour.
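To see how cheaply that unboundedness is generated, here’s a throwaway sketch (clause list invented) of a single conjunction rule yielding ever-longer sentences:

```python
# A trivial conjunction rule already generates arbitrarily long
# grammatical sentences, so no finite list can contain every possible
# input (the clauses are invented).
import itertools

CLAUSES = ["Bob laughed", "Lily cried", "Ted jumped", "Erica gave blood"]

def sentences():
    """Yield an unending stream of ever-longer conjoined sentences."""
    for n in itertools.count(1):
        parts = [CLAUSES[i % len(CLAUSES)] for i in range(n)]
        if n == 1:
            yield parts[0] + "."
        else:
            yield ", ".join(parts[:-1]) + ", and " + parts[-1] + "."

gen = sentences()
for _ in range(3):
    print(next(gen))
```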

I would like Searle to teach English to someone with no eyes, no hands, no ears, no sense of smell, and no sense of touch. Until such time, I remain convinced that the Chinese Room understands Chinese as much as anyone does: when questioned, it gives the proper response. That is, more or less, all I have to go on for everyone else.

Attributing consciousness when consciousness is defined as “inner experiences” is quite irresponsible. It would be the equivalent of seeing one black swan and asserting with great confidence, near-perfect certainty, that all swans are black. And I suppose, in a degenerate sort of way, if one could never see a different swan than this it would be true in a solipsistic way…

:confused: Can you expand on this demonstration? I don’t see it as immediately obvious.

Look at it in the keyword scheme:

[Bob laughed] (,) [Lily cried] (,) [Ted jumped] (,) [Erica gave blood] (,) [Edward laughed] {too} (, and) [Fraser watched TV] (.)

So even in that pyramid you gave as an example it all boils down to:
[Bob laughed]
(,)
[Lily cried]
[Ted jumped]
[Erica gave blood]
[Edward laughed]
{too}
(, and)
[Fraser watched TV]
(.)

Extend this and you will eventually have the beginnings of a usable, finite system.
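For instance, a first pass at the unit-splitting step might look like this sketch (my own, assuming the connectives are just “,” and “, and”):

```python
# Sketch of the bracketing scheme above: split a compound sentence into
# clause units, which could then be looked up independently.
import re

def clause_units(sentence):
    # Split on "," or ", and", the connective tokens in the scheme.
    parts = re.split(r",\s*(?:and\s+)?", sentence.rstrip("."))
    return [p.strip() for p in parts if p.strip()]

print(clause_units(
    "Bob laughed, Lily cried, Ted jumped, and Fraser watched TV."))
# ['Bob laughed', 'Lily cried', 'Ted jumped', 'Fraser watched TV']
```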

actually, if you stipulate that sentences be finite in length, you are quite incorrect in your account.

the number of sentences may be obscenely large, but it boils down to all finite combinations among a finite number of finite sets, of which there are finitely many.

so you would eventually run out of things to add onto your sentence.
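to put a number on it, a quick back-of-the-envelope sketch (the vocabulary size and length cap below are invented, illustrative figures):

```python
# With a vocabulary of V words and sentences capped at L words, the
# number of possible word sequences is V + V**2 + ... + V**L.
# Astronomically large, but finite. (V and L are illustrative guesses.)
V, L = 50_000, 100
total = sum(V**k for k in range(1, L + 1))
print(f"roughly 10**{len(str(total)) - 1} possible sentences")
```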

To be technically correct, it would be finite permutations. The order is important, or conceivably could be important, given the methodological rule with which one was constructing the sentences. Of course, given such a methodological rule, we find ourselves smack dab in a behavioristic description, even if we are not asserting behaviorism proper, so I’m going to try and find Chomsky’s refutation of behaviorism.