The AI debate always leaves me dissatisfied.
The dissatisfaction begins with the unfortunately popular Chinese Room thought experiment. (Searle’s “brains cause minds” followup embarrasses me for the species as a whole.) Nevertheless, a great many people find it, if not wholly convincing, at least on the right track. The sense is that if we had a machine that passed the Turing test, one could look at the code and say, “See, it’s not really intelligent after all.”
One can see this well illustrated in the comments on this Slashdot posting about a computer go program, running on a massive number of cores, defeating a professional player (it received the largest normally acceptable handicap). This program can’t possibly be intelligent, some said, because it uses Monte Carlo methods (a refinement of Monte Carlo go methods called UCT greatly improved its play). Basically, without the technical details, it plays a bunch of random games and decides what to do based on the cumulative results of those games.
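For the curious, the flavor of the idea is easy to sketch. This is not that program, and it leaves out the UCT refinement; it is just a hypothetical flat Monte Carlo move chooser for tic-tac-toe, with names like choose_move being my own: for each legal move, play a pile of random games to the end and keep the move whose playouts win most often.

```python
import random

# Flat Monte Carlo move selection, illustrated on tic-tac-toe.
# A toy stand-in for the go program: no UCT, no search tree,
# just random playouts tallied per candidate move.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell is None]

def random_playout(board, to_move):
    """Play uniformly random moves until the game ends; return the winner (None for a draw)."""
    board = board[:]
    while True:
        w = winner(board)
        moves = legal_moves(board)
        if w is not None or not moves:
            return w
        board[random.choice(moves)] = to_move
        to_move = 'O' if to_move == 'X' else 'X'

def choose_move(board, player, playouts=200):
    """For each legal move, run random playouts and keep the move that wins most often."""
    opponent = 'O' if player == 'X' else 'X'
    best_move, best_score = None, -1.0
    for move in legal_moves(board):
        trial = board[:]
        trial[move] = player
        wins = sum(random_playout(trial, opponent) == player for _ in range(playouts))
        if wins / playouts > best_score:
            best_move, best_score = move, wins / playouts
    return best_move

if __name__ == "__main__":
    empty = [None] * 9
    print("Monte Carlo opening move:", choose_move(empty, 'X'))
```

UCT replaces this independent sampling with a tree that steers the random games toward moves that have been doing well, but the core is the same: random games, tallied.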
So, someone opens the box, sees the man inside, and declares it unintelligent.
But there are two problems I have with these Chinese Room-style “disproofs.”
First Problem
You don’t get to open the box. I don’t mean in the Turing Test, I mean in real life. The Chinese Room thought experiment is a conjuring trick: by placing a readily identifiable intelligent creature in the box, one has an excellent locus for “something which must understand [Chinese],” and the disproof follows almost immediately. (It is actually worse: the original argument was given from the first-person perspective. But there is no such thing as a criterion for understanding in the reflexive case; one just understands, or doesn’t.)
Second Problem
In real life, opening the box doesn’t help anyway. That is to say, if I doubted you were intelligent, opening your skull and examining your brain matter would not, in fact, settle the question for me. Mucking about in your head would get me nowhere at all toward settling the question. So peeking at the code, opening the box, and so on are ruses as far as the bigger question is concerned.
My Big Problem
But that’s not my problem with AI. My problem with AI is that I don’t know what it would mean for a computer to be intelligent. Mice are intelligent. (I would think this whether or not I knew they could run a maze.) Whales are intelligent. But come to think of it, I don’t even get to question whales or mice. So what is the Turing Test doing for us? Very little. While passing the Turing Test would be a great indication of intelligence (to me), it would only be a kind of intelligence. In fact, a kind of intelligence that I don’t demand of most of the creatures on the planet.
Do we have to come up with some earth-shattering, once-and-for-all, unambiguous, black-and-white definition of intelligence? I don’t think so. But we would have to come up with an idea of what it would mean for a computer to be intelligent. It wouldn’t behave like a human, or a mouse, or a squid. What would it do? Can we even answer this question? I cannot ask a dolphin even extremely basic questions, and we’re both mammals. What the hell could I sympathize with in the case of artificial intelligence? My computer does something I do not expect: is it a bug, or is it acting on its own? (Must these be exclusive?)
It feels strange to suppose that a deterministic machine made up of relatively simple components, whose individual workings I could understand, could just be intelligent, but I must remind myself that the transistors and machine code are red herrings. Neurons don’t determine my ascription of “intelligent” to animals, so it is unfair to point to an algorithm or NAND gate and say, “See, can’t be intelligent.”
So what would count as artificial intelligence?