What would count as artificial intelligence?

The AI debate always leaves me dissatisfied.

The dissatisfaction begins with the unfortunately popular Chinese Room thought experiment. (Searle’s “brains cause minds” followup embarrasses me for the species as a whole.) Nevertheless, a great many people find it, if not wholly convincing, at least on the right track. It’s as if, given a machine that passed the Turing test, one could look at the code and say, “See, it’s not really intelligent after all.”

One can see this well illustrated in the comments on this Slashdot posting about a computer go program, running on a massive number of cores, defeating a professional player (it received the largest normally acceptable handicap). This program can’t possibly be intelligent, some said, because it uses Monte Carlo methods (a refinement of Monte Carlo go methods called UCT greatly improved its play). Basically, without getting into technical details, it plays a bunch of random games and decides what to do based on the cumulative results of those games.
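To make “plays a bunch of random games” concrete, here is a minimal, hypothetical sketch of flat Monte Carlo move selection in Python. It is not the program from the story and it leaves out the UCT refinement; the `state` object and its methods (`legal_moves`, `play`, `is_over`, `winner`, `side_to_move`) are assumed interfaces, not any real library:

```python
import random

def random_playout(state, move):
    """Play one game to the end by choosing uniformly random legal moves.
    Returns 1 if the side to move at `state` eventually wins, else 0.
    (`play` is assumed to return a new state.)"""
    s = state.play(move)
    while not s.is_over():
        s = s.play(random.choice(s.legal_moves()))
    return 1 if s.winner() == state.side_to_move() else 0

def flat_monte_carlo(state, playouts_per_move=1000):
    """Score every legal move by its win rate over random playouts
    and return the move with the highest estimated win rate."""
    best_move, best_rate = None, -1.0
    for move in state.legal_moves():
        wins = sum(random_playout(state, move) for _ in range(playouts_per_move))
        rate = wins / playouts_per_move
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move
```

UCT improves on this by concentrating playouts on the moves that are currently looking best instead of sampling every move equally.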

So, someone opens the box, sees the man inside, and declares it unintelligent.

But there are two problems I have with these Chinese Room-style “disproofs.”

First Problem
You don’t get to open the box. I don’t mean in the Turing Test, I mean in real life. The Chinese Room thought experiment is a conjuring trick: by placing a readily identifiable intelligent creature in the box, one has an excellent locus for “something which must understand [Chinese],” and the disproof follows almost immediately. (It is actually worse: the original argument was given from the first-person perspective. But there is no such thing as a criterion for understanding in the reflexive case; one just understands, or doesn’t.)

Second Problem
In real life, opening the box doesn’t help anyway. That is to say, if I doubted you were intelligent, opening your skull and examining your brain matter would not, in fact, settle the question for me. Mucking about in your head would get me nowhere at all toward settling the question. So peeking at the code, opening the box, and so on are ruses that sidestep the bigger question.

My Big Problem
But that’s not my problem with AI. My problem with AI is that I don’t know what it would mean for a computer to be intelligent. Mice are intelligent. (I would think this whether or not I knew they could run a maze.) Whales are intelligent. But come to think of it, I don’t even get to question whales or mice. So what is the Turing Test doing for us? Very little. While passing the Turing Test would be a great indication of intelligence (to me), it would indicate only one kind of intelligence. In fact, a kind of intelligence that I don’t demand of most of the creatures on the planet.

Do we have to come up with some earth-shattering, once-and-for-all, unambiguous, black-and-white definition of intelligence? I don’t think so. But we would have to come up with an idea of what it would mean for a computer to be intelligent. It wouldn’t behave like a human, or a mouse, or a squid. What would it do? Can we even answer this question? I cannot ask a dolphin even extremely basic questions, and we’re both mammals. What the hell could I sympathize with in the case of artificial intelligence? My computer does something I do not expect: is it a bug, or is it acting on its own? (Must these be exclusive?)

It feels strange to suppose that a deterministic machine made up of relatively simple components, whose individual workings I could understand, could just be intelligent, but I must remind myself that the transistors and machine code are red herrings. Neurons don’t determine my ascription of “intelligent” to animals, so it is unfair to point to an algorithm or NAND gate and say, “See, can’t be intelligent.”

So what would count as artificial intelligence?

I don’t have an answer to your question, but these are a few of my thoughts:

  1. The Chinese Room doesn’t do much for me as an argument against AI.
  2. The Slashdot story and the Deep Blue chess player don’t feel to me like AI, even if they are impressive accomplishments.
  3. To me (and I’m sure this isn’t very defensible when analyzed, just gut feel), it will feel closer to AI when the smarts are generalizable to other contexts, inferences can be made based on similar situations, etc.

Intelligence: The ability to come up with novel solutions to salient problems.

Intelligence can be narrow or broad; human-like or very dissimilar to human intelligence; machine-based or biologically based.

What is salient is specific to the intelligence in question. Whales may be much more intelligent than humans if the salient issue is keeping track of objects in several square kilometers of volume without the aid of technology, but much less intelligent if the salient task is writing a joke.

Intelligence does not necessarily mean sentient. Sentience may be a harder beast to define on the basis of outside observation and is often confused with intelligence in these discussions. Sentience refers to how the entity perceives: does it have a sense of self, does it experience qualia? That is of course hard to prove even in the case of other humans - we assume they do because we do and they behave as we do - but other than by that inference we have no way to know. Without that similarity, do we take the entity’s word for it that it has an experience of self?

We have a lot of programs that display some characteristic of intelligence, and some of those are very good at what they do, but none of them do more than the one thing they were programmed to. If you have a machine that can learn to do novel things well, you have something that could reasonably be called intelligence.

For those following along on the edge, the Chinese Room thought experiment goes something like this: A person is sitting in a room, with a huge book. Every so often, someone slips a piece of paper into the room, covered with squiggles that the person has no comprehension of. But the book has pictures of the squiggles that he can look up, along with instructions for drawing apparently meaningless squiggles of his own, which he then passes out of the room.

Unbeknownst to the man in the room, the squiggles are Chinese writing, and the person outside passing him the squiggles is a native speaker of Chinese who is conducting a Turing test. Based on the instructions in the book, the person is drawing the correct Chinese characters to answer whatever the native speaker outside is writing, and thus the Turing test is passed.

But wait, comes the objection, this shows that the Turing test is a fraud, since the person in the room does not, in fact, understand Chinese! Here is where the argument fails: The person might not understand Chinese, but the book does. And if you can’t accept the notion of a book being intelligent, why can you accept the notion of a book containing instructions so detailed as to allow a person to write down meaningful communication in a language the person does not know?

Actually, there’s a simpler counter to the Chinese room objection. If you pass in a slip asking something like “What did the last slip say?”, the book won’t have the answer. But any Chinese speaker would be able to answer that.

This is thinking a bit too literally for the thought experiment, IMO. The box, by hypothesis, would pass the Turing Test.

I’ve seen at least one formulation of the Chinese Room thought experiment that mentions “previous squiggles” as part of the raw data used by the person consulting the book. IMO, that just makes it more obvious that the instructions in the book are part of an overall “system” that does understand Chinese. (Also, this addendum makes the attempt to undercut that interpretation with, “OK, what if we remove the book and just have the room occupant memorize the instructions without actually understanding how they work?” all the more ludicrous.)

It’s a fine definition. So what would it mean for a computer to be intelligent? What kind of “salient problem” does a computer face? If you were wondering whether a computer was intelligent, how would you know?

Interesting point about sentience/intelligence. Thanks.

Going off on a bit of a tangent, I recall an example quoted in William Poundstone’s Labyrinths of Reason: a short story in which a man orders a hamburger, it arrives burnt to a crisp, and he storms out of the restaurant without paying, followed by the question, “Did the man eat the hamburger?”

This is cited as an example of a question that could only be answered by some level of understanding of the story (and of the background world where it takes place) – no amount of mere mechanical parsing (of the sort that might answer questions like “What condition was the hamburger?”) or logic-chopping (strictly speaking, either answer is logically possible) will do the trick. Presumably, the “Chinese Room” is expected to answer similar story-based questions (posed in Chinese).

What in a mouse or whale makes you think they are intelligent?
My problem with the Turing test is that it is conversational. Conversation is just one very small part of what makes us intelligent, and I am sure there are many people out there with assorted disabilities who are definitely intelligent but would fail a conversational test.

I would be more appreciative of tests similar to those given to small children to test for mental development.

To me, an intelligent computer would, as said already, be able to come up with solutions to a problem that were not programmed in.

I think there are cases already that would count as such. I understand that a building was designed where every structural element was a hydraulic actuator connected to a very simple computer and sensors, intended to keep the building stable during an earthquake. Soon enough, the building was doing things that the engineers not only could not predict but didn’t really understand, except for the fact that it did work to keep the building stable. You might argue that it was just an emergent property of a swarm, but then again, is intelligence something else?

Ditto for a system that was tested where airplanes would talk to each other and control traffic themselves, without a centralized Air Traffic Control. It soon developed very sophisticated behavior: very efficient, and scary looking to plain human eyes.

(and I am sorry I don’t have a cite for either; this was a long time ago, in an article about flocking behaviors imitating intelligence. I was actually about to start a thread asking about this)

A blonde dyeing her hair brown.

Thank you, I’ll be here all week, try the veal! :smiley:

For me the basic problem with the Chinese Room is that it’s a static set of solutions to all problems. I don’t consider that to be intelligent any more than enumerating all possible game conditions (e.g. in chess).

What humans and animals do that seems intelligent is that we solve problems with limited resources. We can’t simply enumerate all possibilities, so we classify, categorize, abstract, use logic to infer consequences, and then map all that to situations that are not identical but similar to previous situations.

The Chinese Room could be simplified by asking, “What if we just enumerate all possibilities?” And I would say: great, knock yourself out (just like that AI prof in Texas back in the early ’80s who was keying in every word known to man along with some representation of what each word meant). It’s a brute-force approach that doesn’t generalize, and it just isn’t what I would consider AI in the first place.

The technically correct answer to that question is “Unknown, insufficient information.”

Assuming our AI has knowledge of human culture (or, in the Chinese Room version, assuming the book has knowledge of human culture), it might answer “probably not” instead. Either answer is possible from a human, so the question doesn’t help us assess intelligence at all.

The Chinese Room idea is a fraud anyway; as Chronos said, it’s just shifting the emphasis from a person/AI to a book, which people instinctively say can’t be intelligent. But in the thought experiment, it is. The only thing the experiment demonstrates is that a person can act as a relay even without understanding the information they are relaying. Which is stunningly uninteresting.

As for the OP:

If we’re going with DSeid’s definition (a fair one, I’d say), then one can quite easily say that computer AI is intelligent. Let’s take an FPS bot as an example. It can (I assume; it sounds simple, though I’m not a programmer) be programmed to try various things, and only stick with the things that reduce its death rate. I imagine this would quickly result in strategies such as strafing an opponent (or just moving randomly). Obviously not novel to anyone who has played FPS games, but it would be novel to our AI, and indeed probably to many non-gamers. So it’s come up with a novel solution to the only problem it faces (killing you).
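That “try things and keep what lowers the death rate” loop is easy to sketch. The following is a minimal, hypothetical illustration; the strategy names and the survival-time reward are invented for the example, and no real game engine is assumed:

```python
import random

# Hypothetical candidate behaviours the bot can try between respawns.
STRATEGIES = ["stand_still", "strafe", "move_randomly", "take_cover"]

class LearningBot:
    """Epsilon-greedy learner: usually pick the strategy with the best
    observed average survival time, occasionally try something else."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.totals = {s: 0.0 for s in STRATEGIES}  # summed survival times
        self.counts = {s: 0 for s in STRATEGIES}    # times each was tried

    def choose_strategy(self):
        if random.random() < self.epsilon:
            return random.choice(STRATEGIES)        # explore: try something at random
        def avg(s):
            return self.totals[s] / self.counts[s] if self.counts[s] else 0.0
        return max(STRATEGIES, key=avg)             # exploit: best average so far

    def record_result(self, strategy, survival_time):
        """Called when the bot dies: credit the strategy with how long it lived."""
        self.totals[strategy] += survival_time
        self.counts[strategy] += 1
```

Over many respawns the averages steer the bot toward whatever keeps it alive longest, which is novel to the bot even if it’s old news to human players.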

Obviously this is an extremely limited AI, but it exists in an extremely limited world. I don’t really see why the same experiment couldn’t be conducted in real life. In fact, I would say it already has. It’s called evolution. Problem for us is, it takes a bloody long time. Learning from all the countless possible mistakes one can make takes a long, long time.

So I would say the question is not so much “how can we develop AI?” as “how can we create a shortcut to AI, skipping the incredibly long learning process?” A rather trickier problem!

The argument that crops up here hinges on the meaning of “novel”. Association tables, finite state machines, etc. are pretty easy to implement. The issue is that the “various things” you mention are likely hard-coded into a decision process, thus making them distinctly not novel (at least by some/many/most definitions).
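For contrast, a hard-coded decision process of the kind being objected to might look like the hypothetical finite state machine below: every state and transition is written in by the programmer ahead of time, so nothing the bot does can be novel in the above sense.

```python
# Hypothetical hard-coded finite state machine for a bot.
# Every state and transition is fixed by the programmer in advance.
TRANSITIONS = {
    ("patrol", "sees_enemy"): "attack",
    ("attack", "low_health"): "flee",
    ("attack", "enemy_dead"): "patrol",
    ("flee",   "healed"):     "patrol",
}

def next_state(state, event):
    """Look up the pre-programmed response; stay in the same state if no rule matches."""
    return TRANSITIONS.get((state, event), state)
```

An association table works the same way: exhaustively listed responses, with no room for an answer the programmer didn’t anticipate.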

In the Chinese Room experiment, the person passing the notes understands Chinese just as much as my voicebox understands English. They are both just relays, as mentioned above. The intelligent being is the book.

I’m going to answer the basic question in the title, because there is a serious and problematic confusion in the issue as a whole.

A thing can be said to be Intelligent if it can analyze, experiment, or inductively reason a solution to an unforeseen problem and can do so non-randomly.

That does not mean it can understand language (though it might) or solve any type of problem (though it might). Nor does the thing have to have any particular interest in the problem. Most animals, by this standard, are not intelligent, or have highly limited intelligence.

A thing may be said to have Consciousness when it can choose what it wishes to do. It may or may not have many options, or even be Intelligent. Animals are definitely conscious; they can choose between certain options depending on what they feel like. Animals can be lazy or friendly, and can learn and unlearn behaviors; some are just stubborn, and others eager.

A thing may be said to have other attributes, too. I will mention one.

A thing may be said to have Meta-Cognition when it can observe the contradictions between different levels of analysis. In humans, this often manifests as Humor, where we may observe the contrast and incongruity between different modes of thinking, language, physicality, and normalcy.

Thank you. I think that I stole the intelligence definition from Pinker’s How the Mind Works btw but I cannot swear to that being where I harvested it from.

And I would answer as Discordia did. Salient depends on the domain. Is the intelligence in question playing Go, or generating and testing a hypothesis about the nature of Dark Energy, or playing poker, or creating a sonnet, or what? Salient domains can be narrow or broad and are arbitrary.

It would seem that the “Chinese Room” thought experiment assumes the opposite of what it seeks to establish – by postulating that an algorithm (the rules in the book) for generating convincing Turing-test responses is possible, it stipulates that what we understand as “thought” is a purely mechanical process. Parabolic reasoning rather than circular reasoning, one might call it.