Hello all! Long-time listener, first-time caller.
From this thread.
I haven’t read the argument Cecil is quoting, so I’ll have to presume that he’s explaining it well enough to argue against it.
I’m not sure this would convince someone the box understood Chinese, in the normal sense people think of ‘understand.’ First, in order to understand Chinese squiggles and return the appropriate squiggles, the ‘rulebook’ would likely have to be larger than. . .well, logically larger than every book ever written in Chinese, and probably a great deal larger than that, since any expression or question or observation, including every sentence from every book and any thought anyone could think, would need an appropriate response. So it would take you. . .I’d guess several hours at best and possibly several days to look up the right reply. I wouldn’t think the box understood Chinese; hell, I’d probably think it was broken, or that it had someone inside who took the message and walked around asking random people if they understood the squiggles. A scientist would never think the box understood Chinese. She would quickly realize that some kind of simple lookup or comparison routine was running.
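If it helps, here’s roughly what I mean by a ‘simple lookup or comparison routine,’ sketched in Python. The table entries are made-up stand-ins, not a real rulebook; the point is that nothing in it involves meaning, only string matching:

```python
# A sketch of the "simple lookup routine" the box would be running.
# These two entries are hypothetical stand-ins for the rulebook.
rulebook = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice."
}

def chinese_room(message: str) -> str:
    # Pure symbol matching: no meaning involved, only string comparison.
    # Anything not already in the table gets no sensible reply at all.
    return rulebook.get(message, "？")

print(chinese_room("你好吗？"))            # looks like understanding
print(chinese_room("你昨天读了什么书？"))  # instantly exposed: "？"
```

Every question the box has never seen before is a dead giveaway, which is exactly what a scientist would poke at first.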
And this is an important point. People interpret language on the fly, which is pretty spectacular. The box in the example can’t do this, can’t remotely do this, and wouldn’t convince anyone.
OK, here’s the crux of my point. I don’t think you could do this. I don’t think one person, without learning Chinese, could memorize the rulebook described in the earlier example. And that’s where I think the example breaks down. If I’m right, and it’d be impossible to memorize a rulebook containing, let’s be ultraconservative and say, 10,000 pages, then this argument is meaningless. It’d be like saying, ‘Gravity isn’t absolute, because I can imagine a ball floating in the air, ignoring gravity.’ “Yes, but you can’t make a ball that does that.” Just because John Searle, the guy who put forth this argument, can imagine memorizing a rulebook containing the entirety of Chinese syntax doesn’t mean it’s possible.
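To put a rough number on why I call 10,000 pages ultraconservative: even a back-of-the-envelope count of the messages the rulebook has to cover dwarfs it. The character count and message length below are assumptions for illustration, not real statistics about Chinese:

```python
# Rough count of distinct short Chinese messages (all figures assumed).
common_characters = 3000   # assumed working set of common characters
max_length = 20            # only counting messages up to 20 characters

possible_messages = sum(common_characters ** n for n in range(1, max_length + 1))
entries_per_page = 50      # assume 50 rule entries fit on a page

print(f"{possible_messages:.2e} possible messages")         # ~3.49e69
print(f"{possible_messages / entries_per_page:.2e} pages")  # ~6.98e67 pages
```

Even if grammar rules instead of raw entries compressed that by dozens of orders of magnitude, you’re still nowhere near something a person could hold in their head.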
I don’t know when computers will be able to fool people into thinking they’re talking to a person, but I don’t think Searle’s argument makes the Turing test meaningless. I think it just means a human couldn’t pass the Turing test using the method he describes.
Furthermore, I think Turing was right: if I can’t tell the difference between a person’s responses to my behavior and a computer’s responses to my behavior, then the computer is thinking. Think of it this way. . .what other test is there?