My first thought is that for something to be intelligent…
well…
Let’s say I play some super duper chess computer and it kicks my butt…so I say…Ok, let’s play Monopoly now…
If it can’t do that…then it fails the Turing test to me. A chess program that kicks my butt…not necessarily intelligent. Something that can learn and play 2 completely different, unpredetermined games against me, even if it loses both…very possibly intelligent.
That’s easy enough to fix: Just replace “conversation” with “interaction”. If I can interact extensively with an entity without being able to determine that the entity is not intelligent, then I shall assume that it is.
Quoth RaftPeople:
People always get hung up on these tabulated solutions, because that’s “too easy”. But the fact is, for problems significantly more complex than Tic-Tac-Toe, it’s considerably harder than a “genuinely intelligent” approach. To tabulate a solution to chess, in a straightforward manner, would require considerably more space than the entire observable Universe, and you’d inevitably end up with situations where you’d have to go find a cell in the table that’s billions of lightyears away to know what your next move should be. Now, this could be brought down to a manageable size via data compression methods, but any data compression method which brought the chess table down to a manageable size would itself be effectively a chess-playing algorithm.
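To make the “tabulated solution” point concrete, here is a minimal, hypothetical sketch (all names invented, nothing from this thread): it enumerates every reachable Tic-Tac-Toe position with a negamax search and freezes the results into a lookup table. For Tic-Tac-Toe the table comes to a few thousand entries; chess has something on the order of 10^44 or more legal positions, which is why the same construction is physically hopeless there, and why any compression that shrank such a table to a usable size would effectively have to be a chess engine itself.

```python
# Hypothetical illustration: tabulating a complete solution to Tic-Tac-Toe.
# The tabulator itself is a negamax search; the table is just its frozen output.

from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),    # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),    # columns
         (0, 4, 8), (2, 4, 6)]               # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has completed a line, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def best(board, player):
    """Negamax: (score, move) for `player` to move; +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None                                  # board full: draw
    opponent = "O" if player == "X" else "X"
    best_score, best_move = -2, None
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score = -best(child, opponent)[0]               # opponent's best is our worst
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

TABLE = {}                                              # position -> best move

def tabulate(board=" " * 9, player="X"):
    """Enumerate every reachable, non-terminal position and record the best move."""
    if board in TABLE or winner(board) or " " not in board:
        return
    TABLE[board] = best(board, player)[1]
    opponent = "O" if player == "X" else "X"
    for m in (i for i, cell in enumerate(board) if cell == " "):
        tabulate(board[:m] + player + board[m + 1:], opponent)

tabulate()
print(len(TABLE), "positions tabulated")                # a few thousand, for this game
```

Note that the table is only buildable because the tabulator already contains the game-playing search; the “dumb” table is just the frozen output of the “smart” algorithm.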
But knowing or not knowing how your neurons exactly work has nothing to do with whether I consider you intelligent. Why would it change for a computer?
What about Left4Dead’s AI “Director”? It faces the relatively novel problem of generating an experience that is challenging but not overwhelming, in which survival is never guaranteed, while still maintaining emotional highs and lows. Sometimes I really feel like the director is just being spiteful, even while I’m enjoying the play immensely. Now, the director cannot “learn”, AFAIK, in the sense that it has no game-to-game memory. So, how would this director have to act to demonstrate intelligence?
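For what it is worth, the general idea behind such a director, as it has been described publicly, is adaptive pacing: estimate how stressed the player is, apply pressure while stress is low, and back off once it peaks. Below is a heavily simplified, hypothetical sketch of that loop; it is not Valve’s code, and the class, thresholds, and stress formula are all invented for illustration.

```python
# Hypothetical sketch of an adaptive-pacing "director" (not Valve's implementation):
# track a player-stress estimate, build pressure while stress is low, relax after a peak.

import random

class Director:
    BUILD_UP, RELAX = "build_up", "relax"

    def __init__(self, peak=40.0, calm=20.0):
        self.stress = 0.0            # running estimate of player stress (made-up scale)
        self.phase = self.BUILD_UP
        self.peak, self.calm = peak, calm

    def observe(self, damage_taken, enemies_nearby):
        """Fold recent events into the stress estimate, decaying the old value."""
        self.stress = 0.9 * self.stress + damage_taken + 2.0 * enemies_nearby

    def decide_spawn(self):
        """How many enemies to spawn this tick, based on the current phase."""
        if self.phase == self.BUILD_UP and self.stress >= self.peak:
            self.phase = self.RELAX              # the player has had their scare
        elif self.phase == self.RELAX and self.stress <= self.calm:
            self.phase = self.BUILD_UP           # the lull is over, ramp up again
        return random.randint(1, 3) if self.phase == self.BUILD_UP else 0

# Toy loop. Note there is no memory across runs: every game starts from scratch.
d = Director()
for tick in range(20):
    spawned = d.decide_spawn()
    d.observe(damage_taken=random.choice([0, 0, 5]), enemies_nearby=spawned)
    print(tick, d.phase, spawned, round(d.stress, 1))
```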
This is where I’m having a hard time. I like DSeid’s definition, as a first pass, but as I consider it, I realize that it is a bit too vague to help. I’m using my computer of the future and it does something that makes me question whether it is intelligent–what? I’m playing against a bot and it behaves in such a way that surprises me in its aptitude–how? In some ways I feel like acting independently of my commands is a sign of intelligence, but in others, obeying my commands is a sign of intelligence. Computers face novel situations every instant by some standard, or do incredibly routine things constantly and never face novel situations by others. Is AI here or is AI impossible, then?
OK, so a computer is supposed to “learn” something. I know how I teach a child to perform some new arithmetic operation, and I have some vague, ill-defined yet perfectly adequate tests for initial mastery. Is this what it means to learn? If so, then a computer can never learn until we equip it with sensors like a child’s. But then, if a computer uses a Monte Carlo simulation technique to achieve proficiency at Go, and begins to track openings and situations to create a move-book, has it learned? (It may “learn” quicker than an adult would, actually, given the performance of the bot mentioned in the OP.) Why am I so compelled to explain the computer’s process when I have literally no idea about the process behind the child’s learning? It is this that makes me wonder whether AI is here right now but we refuse to see it because our standard for computers is a moving goalpost.
That’s what is kind of compelling, to me, about the Turing Test. It’s what we judge other people by constantly: we give some stimulus, and expect some range of responses. We have no concern with what is going on behind the scenes.
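As a concrete, purely illustrative example of the Monte Carlo idea mentioned above: choose a move by playing many random games from each candidate and keeping the one with the best average result. Real Go programs layer tree search, opening books, and learned patterns on top of this, but the bare playout loop is tiny; Tic-Tac-Toe stands in here only because its rules fit in a few lines, and every name is made up.

```python
# Purely illustrative "flat Monte Carlo" move selection: evaluate each candidate move
# by the average result of many random playouts, then pick the best-scoring one.

import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_playout(board, player):
    """Play random moves to the end; return the winning mark or None for a draw."""
    while True:
        w = winner(board)
        if w or " " not in board:
            return w
        m = random.choice([i for i, cell in enumerate(board) if cell == " "])
        board = board[:m] + player + board[m + 1:]
        player = "O" if player == "X" else "X"

def monte_carlo_move(board, player, playouts=200):
    """Choose the move whose random playouts average out best for `player`."""
    opponent = "O" if player == "X" else "X"
    def score(move):
        child = board[:move] + player + board[move + 1:]
        total = 0
        for _ in range(playouts):
            w = random_playout(child, opponent)
            total += 1 if w == player else (-1 if w == opponent else 0)
        return total
    candidates = [i for i, cell in enumerate(board) if cell == " "]
    return max(candidates, key=score)

print(monte_carlo_move(" " * 9, "X"))   # often the centre (4), purely from the statistics
```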
The reason you don’t need to know how my neurons work is because you have enough empirical evidence that a human brain can be used as a tool to solve a wide variety of problems. However, if you did have additional information about the human brain, and found out that it really was a lookup table specific to a particular set of inputs, I assume you would (I know I would) change your mind about whether it was intelligent.
Do you think that static lookup tables (like the Chinese room) should be considered intelligent?
If the static lookup table is sufficiently complex as to produce coherent conversation, like the Chinese room is posited to, then yes, of course. Saying “But it’s just a static lookup table!” is no more valid an objection than saying “But it’s just a tangle of neurons!”.
for me, the only really interesting thing about the quest for AI is the possibility of creating something that experiences an inner thought-life like I do (or like I think I do - but let’s not get into that argument).
I realise, of course, it’s philosophically impossible for me to ever know whether that truly is the case, and that’s what this whole thing is about, but that’s what I’d be looking for - not the ability to fool me into thinking I was talking to a human, but evidence suggesting that some kind of self-conscious cognition was really happening under the hood. It needn’t be clever, just real.
Perhaps some of the possibilities for such evidence would include things like:
the ability to express thoughts and desires not anticipated by the programmer
spontaneous expression of empathy
questions about morality, origins
Of course, all those things could simply be programmed to appear in order to fool me - and so there’s another reason - a practical one this time - why I can’t ever be sure I’m talking to a being with a true sense of inner thought-life…
…But that concern aside, if the designer of an AI told me he was surprised when his creation asked where it came from, asked for justification behind some morally ambiguous instruction, or declared that it would rather study insects than play chess, I’d be on my way to accepting it as ‘true’ AI.
The point of the CRA is not that AI is impossible or that consciousness is false (that may also be true, but it’s not the moral of this tale). The point is that the man, the book, or the system does not understand Chinese, because it can’t go from the symbols, the syntax, to what the symbols mean, the semantics.
The point is, you can’t make a machine and have it perform trillions of symbol operations and call it conscious. It would be at best a frighteningly realistic consciousness simulation. Some may believe it to be conscious because it would behave similarly to a person. But it wouldn’t, deep down inside, be more conscious than a calculator performing 2 + 2.
The real moral is that if you want to get true AI, you need to figure out how the brain does it and get a working theory. You have to examine it and transfer its causal powers to inorganic material like silicon or whatever. At the present moment we have no clue. Obviously conscious machines are possible – here I am talking to you. My brain is an organic machine.
At least, that’s what Searle said. He may be full of crap, but it made a lot more sense than people spinning in circles talking about billions of pieces of paper with Chinese scrawls on them, and other nonsense like trying to ask him the right question to make his head explode. It’s an analogy to explain why we can’t put AIs on our desktops, or on computers as we currently know them. We need something completely different.
However, the demonstration is fatally flawed. One, we don’t get to question homunculi in the brain; two, even if we did, they couldn’t answer our questions in Chinese anyway. It’s like a philosophical conjuring trick: it distracts you by planting a human being in the middle of a conversation about understanding.
Knowing how a neuron works does not do anything to convince me whether or not someone understands Chinese. Do you perform a brain examination to determine whether someone understands English?
It doesn’t mean it didn’t, since the thrust of the argument seems to be that knowing anything about the operation of neurons isn’t how we demonstrate intelligence.
I think that we can say that looking up an answer in a static table of answers based on input is not doing the things we normally consider to be part of intelligence. If we didn’t know how the Chinese room operated then you would be correct, we couldn’t say one way or another; but once we do know that it’s a static lookup table, it would seem we can say that it does not represent planning, learning, etc.
How an individual neuron functions, or even how every neuron functions, is not the same as knowing how the entire system functions, because in the case of the human brain the system is made up of so much more than that. However, if we knew how the entire system operated (neurons, glial cells, chemical communications, etc.), then we could (probably) make a determination as to whether those functions did or did not represent intelligence.
It’s possible I’m not understanding your point; you seem to be stating that “even if we know how a system works, and the process is not really capable of any of the things we consider to be part of intelligence, we should still call it intelligent if it gets the right answer (appears intelligent).” Is this correct?
My feeling is:
If we know how a system works we can then either say it is intelligent or it is not intelligent
A static lookup table is not intelligent, because it does not perform any of the functions we consider to be part of intelligence other than getting the right answer (see the sketch just below this list)
Because the Chinese room violates the laws of information theory, physics, etc., it’s probably not too valuable unless we limit it to a specific domain, in which case it fails the minute we leave that domain; so again, it probably doesn’t help us much.
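Here is a minimal, hypothetical sketch of the distinction being drawn (the prompts and names are invented): a “static” responder whose table is frozen at build time, next to one that revises its table from feedback. Neither is offered as a model of the mind; the point is only that the static version’s future behaviour can never depend on anything that happens to it, which is roughly what “no planning, no learning” cashes out to.

```python
# Static lookup: every input-output pair fixed in advance; nothing it
# encounters ever changes its future behaviour.
STATIC_TABLE = {"ni hao": "ni hao", "zai jian": "zai jian"}

def static_room(prompt):
    return STATIC_TABLE.get(prompt, "...")   # unknown input -> canned shrug, forever

# A (crude) learner: the same lookup machinery, but it updates its table when told
# a better answer, so its behaviour tomorrow depends on what happened today.
class LearningRoom:
    def __init__(self):
        self.table = dict(STATIC_TABLE)

    def reply(self, prompt):
        return self.table.get(prompt, "...")

    def correct(self, prompt, better_answer):
        self.table[prompt] = better_answer   # future replies change with experience

room = LearningRoom()
print(room.reply("xie xie"))        # "..." -- it has no entry yet
room.correct("xie xie", "bu ke qi")
print(room.reply("xie xie"))        # "bu ke qi" -- the table is no longer static
```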
erislover: By definition (says Searle), whatever the brain is doing to create our subjective experience is not a formal syntax system which is manipulating symbols. It’s semantics. And there is a large gulf between the two. It’s for this reason we have a deeper understanding of what 2 + 2 really is compared to the fastest computer, even if we tried to program it to “understand.” For this reason, if we ever make true AI, it will have to be something a lot different than what we have now. It’s going to have to do brain things.
Maybe this is wrong. I don’t know enough about computer programming or neuroscience to answer that. But that’s the premise.
The CRA is an attack on the idea that you can just make a machine that can process y operations per second and make a “people program” and have it be an actual mind with internal subjective experiences. You may bring up the problem of other minds, true, but if this is all correct then presumably if we ever do make an artificial sentient we can contrast and compare and see some differences. I think. The problem is we aren’t anywhere near being able to do this yet, so we’re sorta groping around.
Yes, but an intelligent but deaf person would never ask you to go play some music for him instead of playing chess. Computers have very limited sensory capabilities, and their universe is limited by them. They have no input about insects, morals, or their own creation. Because of this, they will never turn to those things.
Which is also my issue with
A computer’s ability to interact with me is very limited, both in input and in output. Its intelligence will then be equally narrow, no matter how deep it goes, and this doesn’t make it less intelligent, just less perceivable as intelligent by us.
I don’t believe it is by definition, but I do agree that Searle feels there is a very clear distinction between syntax and semantics. The CR thought experiment fails to establish the existence and scope of the gulf, though, which is, to me, problematic, since that was its point.
I am sure this is the point of the argument. It just fails miserably, IMO.
We have made tremendous progress in understanding how each of the individual cells operates, how they communicate, how systems are organized, what is active when, etc. Yet even for the simplest system there is no understanding of how qualia emerge, even if we can fully describe what is happening as they do. The ability to look at the basic structures and predict what behaviors will result is not even there - and I honestly doubt it ever will be. (This is because any neural system capable of complex behaviors is massively nonlinear in its development and function and therefore shares many characteristics of chaotic systems - albeit a chaotic system impacted by regular external inputs. There are predictable attractor basins that the system falls into, but the ability to predict the behavior given only starting conditions is liable to be impossible - ignore this comment if the analogies are unfamiliar to you, please.) Understanding complex behaviors and functions will not come from extreme reductionism and from understanding from the bottom up, but rather from an understanding of what happens at the top levels.
In any case erislover’s point stands - how do you know that you have it if you can not clearly define what it is? A major theme expressed here by several posters is that we must avoid the very common and very self-serving trap of defining intelligence as “being like us.” That is wise.
We have a very particular set of intelligences adapted for our sets of inputs and our salient problems. These include but are not limited to our particular sensory inputs, and the problems inherent in existing within large social groups of other selfish individuals who we must both compete and cooperate with, and surviving in various changing environments and with variable food sources and various competitors (including others using intelligence as a tool) by adapting both our individual and our group behaviors.
As Sapo points out, a different set of inputs will give rise to a different set of salient problems - and that is for another human! A different species of intelligence (another animal, a machine entity, an alien) may be adapted to very different inputs and to very different sorts of needs (salient problems) - of course it will look different from us and may perform badly on those behaviors that we consider salient. But in relevant domains it may be much more intelligent than us, to the point that we may be unable even to understand the questions.
All of which is a long-winded way of asking you: given all that, what do you consider to be the “functions we consider to be part of intelligence other than getting the right answer”? I ask only that you exclude things that merely define it as sharing our set of inputs and salient issues.
I sense many posters here handling the question of intelligence as a yes-no issue. It obviously is not. There are degrees of intelligence, from hardly at all to brilliant, and an infinite number of potential domains to be intelligent or not intelligent in.
Unfortunately, eris, that must result in a definition that is somewhat vague, else it would not be generalizable. The next step is to define the metrics for “novel” and the method for determining what the salient problem is for the entity in question.
Obviously, as humans, we hold an intelligence that can be applied to multiple domains to be a valuable aspect of intelligence. I would hold that such is just one additional dimension of intelligence - how many domains can it be applied to, and how novel a salient problem can it be applied to? Both breadth and depth of problem-solving skill are important aspects to quantify in a complete description of any particular intelligent system. But neither is more critical to the definition than the other.
I just want to emphasize: intelligence must be a measurable behavior and can exist with and without sentience.
I can claim that an ant colony exhibits intelligent behavior, and that claim can be proven or disproven based on demonstrable examples of the behaviors the colony engages in when presented with novel circumstances. Claiming that the colony has sentience, however, would be nonfalsifiable, even if the systems organization that appears to lead to sentience in humans were well described and it were shown that information is transmitted in similar patterns in the colony; such might allow me to infer sentience, but not prove it.
eris’s objections are valid; the CRA fails because it is not in actuality an attack on the intelligence of the room but on the sentience of the room - and any analysis of human sentience would fail in the same way - no neuron understands Chinese either.