Is Making Intelligent Computers Such a Good Idea?

Wolfpup, Voyager,

Thanks for the comments.

I am proposing that the mechanics of the computer's decision-making process are significant in attributing intelligence to a machine. Moving up one step from the ALU to the CPU may simplify the discussion. The CPU has address registers, data registers and some kind of logic unit. The only thing it can ‘do’ is fetch two operands, combine them in the ALU and store/test the result. Whether the computer has one or a dozen ALUs doesn’t change the process. There is no component or section of the computer that provides the kind of overview necessary for intelligence.
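A toy sketch makes the point concrete (an illustrative pseudo-machine in Python, not any real instruction set):

```python
# Toy sketch of a machine's entire repertoire: fetch two operands,
# combine them in the "ALU", store or test the result. Repeat.
memory = {0: 7, 1: 5, 2: 0}              # addresses -> values
program = [("ADD", 0, 1, 2),             # mem[2] = mem[0] + mem[1]
           ("JZ", 2, None, 99)]          # jump to 99 if mem[2] == 0

pc = 0
while pc < len(program):
    op, a, b, dest = program[pc]         # fetch
    if op == "ADD":
        memory[dest] = memory[a] + memory[b]   # combine
    elif op == "JZ" and memory[a] == 0:  # test
        pc = dest
        continue
    pc += 1

print(memory[2])  # 12 - arithmetic and tests, nothing more
```

No step in that loop surveys the program as a whole; each cycle sees only two operands and a result.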
Is it different at the software level? Why would that be so? Programming consists of having a programmer (logician) divide a project into its minimum components, then create a list of tasks that, if repeated tirelessly, will accomplish the goals of the project. At the Object programming level the programmer can invoke huge classes of software. The compiler accepts statements that are conversational for humans. Depending on the syntax of the compiler and the properties of the class, the programmer could define two pairs of lat/lon coordinates and receive a list of coordinates for a great circle route in reply. When the list is compiled and the resulting program is loaded into a computer, the computer will produce the list. However, the computer never sees the Object level program. The list composed by the programmer is compiled into a computer language. The resulting instructions do nothing more than present two operands to the ALU and test/store the result.
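For illustration, the great-circle example might look like this at the source level (a hypothetical sketch using the standard spherical-interpolation formulas, not any particular class library):

```python
import math

def great_circle_points(lat1, lon1, lat2, lon2, n=4):
    """Return n+1 waypoints along the great circle between two
    lat/lon pairs (degrees), by spherical interpolation."""
    la1, lo1, la2, lo2 = map(math.radians, (lat1, lon1, lat2, lon2))
    # angular distance between the endpoints
    d = math.acos(math.sin(la1) * math.sin(la2) +
                  math.cos(la1) * math.cos(la2) * math.cos(lo2 - lo1))
    pts = []
    for i in range(n + 1):
        f = i / n
        a = math.sin((1 - f) * d) / math.sin(d)
        b = math.sin(f * d) / math.sin(d)
        x = a * math.cos(la1) * math.cos(lo1) + b * math.cos(la2) * math.cos(lo2)
        y = a * math.cos(la1) * math.sin(lo1) + b * math.cos(la2) * math.sin(lo2)
        z = a * math.sin(la1) + b * math.sin(la2)
        pts.append((math.degrees(math.atan2(z, math.hypot(x, y))),
                    math.degrees(math.atan2(y, x))))
    return pts

# e.g. New York to London - conversational for the human
for lat, lon in great_circle_points(40.64, -73.78, 51.47, -0.46):
    print(f"{lat:6.2f}, {lon:7.2f}")
```

Conversational for the human, but every trig call in there ultimately decomposes into the same fetch-combine-store cycle described above.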
What Turing taught us is that such a simple process can produce amazing results. But it’s not intelligence.

Your description is sound but I think your conclusion calls for a definition of “intelligence” that you haven’t provided. How are the processes in the machine any different than what happens at the lowest level of the human brain to produce “intelligence”? (Not a rhetorical question.)

I’m not sure what you mean by “object programming level”, but I guess you mean “source code” (object code is the low-level machine code executed by the CPU; I suppose you have heard of object-oriented programming, which is not related to object code).

The main difference between a brain and a computer (IMHO) is that the brain does dynamic hardware updates constantly. This can be simulated by software, however. So I would think a sufficiently complex machine with sufficiently complex instructions could be considered intelligent (though perhaps not conscious), for some values of intelligence. Whether humans will ever be able to design and build a machine and software at that level of complexity and performance remains to be seen (note in particular that I challenge the premise in the OP of “not if, but when”).
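For instance, here is a deliberately minimal sketch of the simulation I mean - connection weights that change as a side effect of use (a toy Hebbian update, nothing more):

```python
import random

# The "wiring" (weights) between units changes as a side effect of
# activity, Hebbian-style: units that fire together wire together.
N = 4
weights = [[random.uniform(-0.1, 0.1) for _ in range(N)] for _ in range(N)]

def step(activity, rate=0.01):
    out = [sum(w * a for w, a in zip(row, activity)) for row in weights]
    for i in range(N):
        for j in range(N):
            weights[i][j] += rate * out[i] * activity[j]  # the "rewiring"
    return out

pattern = [1.0, 0.0, 1.0, 0.0]
for _ in range(100):
    step(pattern)
# The wiring now reflects the network's history of inputs -
# structure changed by use, with no reprogramming involved.
```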

Actually, the BeepKillBeep official seal of approval is much appreciated! :slight_smile:

OK, a few things here. One is that I have the impression that you didn’t really read my post #65, and particularly the last paragraph, because it addresses this issue.

Second, your continued obsession with ALUs is a completely irrelevant distraction. No one cares at this level of discussion how CPUs are built. The real question is: can intelligence arise as an emergent property of computational methods, or, alternatively, can a Turing-equivalent machine exhibit intelligent behavior? The implementation of the Turing-equivalent platform is irrelevant, by definition of what “Turing equivalent” and “computation” mean.

Now it’s pretty clear (again, post #65) that most computer scientists and AI researchers believe the answer is “yes” and that indeed such intelligent behavior has already been demonstrated and is rapidly advancing. It actually seems really hard to argue that when a system like IBM Watson outsmarts the smartest people in the room, that it isn’t exhibiting intelligence. Some might argue that apparently intelligent behavior really isn’t, because a computer is doing it, but that’s just a meaningless semantic quibble.

The more interesting question is whether there are inherent limits to what computational methods can achieve, which revolves around the question of whether a system performing syntactic operations on symbols can, at a sufficient level of complexity, eventually develop true “understanding” as an emergent property. This is what John Searle’s Chinese Room argument tries to answer in the negative, but it’s an argument that most computer scientists consider misguided and silly. That link is a good read if you’re interested. My own take on it is that the “systems reply” adequately demolishes the argument, and Searle’s attempt to refute the systems reply is deeply flawed.

One way to look at this issue is to consider the principle that a sufficiently great quantitative change in the complexity of a system yields qualitative changes in its fundamental properties. There is a species of roundworm with about 300 neurons in its brain. Few would say that this creature is intelligent or sentient. The human brain contains some 86 billion neurons, and a correspondingly vast number of synapses, and what a difference that makes! It’s not the neuron that’s magical, it’s the sheer numbers that lead to rich information content and powerful processing capabilities.

In the same way, a calculator with programmatic features that let you save a sequence of, say, 32 instructions including test-and-branch instructions might be able to do a few simple tasks, but one could make a convincing argument that whatever it did, it wouldn’t be exhibiting intelligence. The argument would sound much like the one you’re making, and here it would be right. But if you look at IBM’s Watson system for the Jeopardy challenge and the DeepQA framework it uses, it has about a million lines of code in 130 major software components, running on 90 IBM Power 750 servers providing 2,880 parallel execution threads and 16,384 GB of RAM stuffed with code and data. This is now something fundamentally and qualitatively different than our programmable calculator, not something that in any meaningful sense could be called “a simple process”. It’s not quite yet on the scale of a roundworm to a human, but we’re getting there. If one still quibbles about “real” understanding and “real” thinking, there is some future generation, some future iteration, of such systems where the quibbling will stop, because the evidence will not only be irrefutable, but we’ll be seeing real creativity and the first signs of self-awareness.

What you are missing here is that many programs are not just a set of instructions to solve a problem, but a set of instructions that can adapt to solving problems based on new inputs and feedback from results. Chess programs are better at chess than their creators, and not just from being faster. Genetic algorithms have designed hardware that no one had thought of.
msmith537 notes that we don’t know why learning systems come up with the answers they do. You said you worked with neural networks - do you understand what they are doing after being trained? This seems to be a big problem, especially in things like medical diagnosis systems which need to get certified.
None of this proves that intelligent systems are possible, but ignoring adaptive behavior of lots of our systems doesn’t disprove it.
My Android phone has learned my most common text messages, with no input from me except typing them. No one at Google wrote code for me - the code they wrote learned.
Just like we do.
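To make that concrete, the core of such a feature fits in a few lines (a toy bigram predictor - obviously nothing like Google's actual code):

```python
from collections import defaultdict, Counter

# No canned responses: just statistics accumulated from whatever
# the user actually types.
model = defaultdict(Counter)

def learn(message):
    words = message.lower().split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1            # "learning" = counting

def predict(word):
    following = model[word.lower()]
    return following.most_common(1)[0][0] if following else None

for msg in ["on my way home", "on my way to work", "running late on my way"]:
    learn(msg)

print(predict("my"))   # -> "way": learned from use, not programmed in
```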

This is actually my thesis area. DARPA has a whole program going on right now about “explainable AI”, or AI that can communicate and explain its decisions, especially in an acceptance-testing context (“can I verify the AI isn’t falling for obvious red herrings?”). There are a few issues with the idea, notably that I think sometimes it’s okay if we don’t understand why an AI does something as long as its results are better, and forcing it to explain itself may hinder progress if we’re incapable of understanding its logic. However, in general I like injecting a little camaraderie with our AIs and giving us tools for understanding how they think, and also for understanding where we may have gone wrong in designing their training data.
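To give a flavor of what “explaining a decision” means in the simplest possible case (a toy example, nothing like the systems in the DARPA program): a linear scorer decomposes its decision into per-feature contributions you can read off directly.

```python
# Toy "explainable" classifier: for a linear scorer, each feature's
# contribution to the decision is directly inspectable.
weights  = {"has_fur": 2.0, "barks": 3.0, "meows": -4.0}
features = {"has_fur": 1.0, "barks": 1.0, "meows": 0.0}

contributions = {f: weights[f] * v for f, v in features.items()}
score = sum(contributions.values())

print("decision:", "dog" if score > 0 else "not a dog")
for f, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {f}: {c:+.1f}")            # the "explanation"
```

The research problem, of course, is that deep networks don't decompose anywhere near this neatly.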

That is a very good thesis topic!
We had a special issue a while back on adaptive hardware - that is, chips or systems that change their design in the field to adapt to either failures or new workloads. I wondered how you would be able to verify these, since verifying hardware that doesn’t change on the fly is hard enough already.

Wolfpup,

I am preoccupied with testing the output of the ALU because it is what I deal with daily when I write computer code. The computer doesn’t do anything else.

Re your #65 - IBM does not claim that ‘Watson’ is intelligent. Watson’s performance is the result of some very clever programming. But functionally it is a search routine with novel methods of input and output. Any perception of intelligence is contributed by naïve observers.

Today the mail lady delivered a package that is germane to this discussion. The package contained ink cartridges for my printer. It was a complete surprise. The printer determined that it was low on ink and of its own volition went on-line and ordered replacement cartridges, charging them to my AMEX card.

This was an intelligent act based on a judgement made by a processor within the printer. Is my printer intelligent?

That’s a very timely subject. I’ve been reading a few papers on this here and there, and it is definitely something that I know the AI community is interested in. I hope you come up with something great, because it really would be very valuable! Good luck to you!

Watson isn’t intelligent, but it is a bit more than a search program. Knowing the answer to Jeopardy questions is only half the problem. Figuring out the actual question is the other half. That language understanding is the real advance of Watson. Any idiot with Google can look up who is buried in Grant’s Tomb.
I was on the show (along with, it seems, half of all Dopers), so I’m pretty familiar with how it works.

The difference between Jeopardy and a standard quiz show is like the difference between simple American crosswords and English cryptics. The answers are about the same, but the trick of the English ones is figuring out what the clue is.

Another example is language translation. Do you think that is a simple lookup? That translates “out of sight, out of mind” to “blind and crazy.” You need semantic understanding of the material to be translated to do anywhere near an acceptable job. That isn’t simple arithmetic.
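A deliberately dumb sketch shows why per-word lookup can't work (toy glossary, made-up entries):

```python
# Word-for-word "translation" - the naive lookup approach.
glossary = {"out": "hors", "of": "de", "sight": "vue", "mind": "esprit"}

phrase = "out of sight out of mind"
print(" ".join(glossary.get(w, w) for w in phrase.split()))
# -> "hors de vue hors de esprit": grammatical garbage, meaning lost.
# The real French equivalent, "loin des yeux, loin du coeur",
# can't be produced by any per-word table.
```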

Not true.

“Novel method of input” means deep natural language understanding, picking up on cryptic clues that may be puns or jokes. “Novel method of output” means reliably assessing a confidence level for answers and executing optimum game strategy. Whether the resulting intelligent behavior is “real” intelligence becomes a semantic argument, but there’s strong support for the view that it is.

Is a roundworm intelligent? You appear to have missed my argument about the qualitative significance of scale. The roundworm has about 300 neurons in its brain; a human has some 86 billion. But their individual functionality is basically the same. Intelligence isn’t a magic threshold; it’s a continuum that emerges from scale.

all of the possible board patterns???

Bullshit.

No human has the ability to internalize the more than 10^70 possible board patterns.

They internalize the several most likely patterns, extrapolate which patterns could follow up on those, and try to use those solutions that they believe the opponent will not see in time.
How many “several” is varies per player, and is the primary determinant of skill.
For a rank amateur, this number is single-digit.
For a very, very good amateur, this number is several thousand.
For a Grand Master, it reaches low millions. (good enough for maybe 10-12 moves extrapolated ahead)
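In algorithmic terms that's a depth-limited search over a handful of candidate continuations per position. Here's a runnable toy of that search shape (using Nim rather than chess so it fits in a post - take 1 to 3 stones, last stone wins):

```python
# Depth-limited negamax: "extrapolate likely continuations and pick
# the line the opponent can't refute." Chess differs in the move
# generator and the evaluation, not in the shape of the search.
def value(stones, depth):
    if stones == 0:
        return -1              # previous player took the last stone
    if depth == 0:
        return 0               # search horizon: call it unclear
    return max(-value(stones - m, depth - 1)
               for m in (1, 2, 3) if m <= stones)

def best_move(stones, depth=10):
    moves = [m for m in (1, 2, 3) if m <= stones]
    return max(moves, key=lambda m: -value(stones - m, depth - 1))

print(best_move(10))   # -> 2, leaving 8 stones: a lost position for the opponent
```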

Wolfpup,

Re your #90 - Where does IBM make the claim that Watson is intelligent?

Computer voice recognition is an awe-inspiring achievement. To attribute it to computer intelligence is akin to attributing events to acts of God. The earliest voice recognition patents date to around 1960. It took a lot of years of hard work. However, the result is just a list of code. The computer does not understand the words. And it does not ‘know’ the answers to the Jeopardy questions.

Intelligence is the ability to generalize. A human Jeopardy contestant could discuss the relationship of two answers he/she gave during the program. Watson could not.

MarvinKitFox,

Clearly ‘all’ was overstated.

There are some interesting discussions in the chess forums. The various strategies, ‘gambits’, of chess are known and named. These are followed by chess programs.

Not strictly related to this conversation, but may be of interest to some.

Thanks for the link.

AI is a useful technology.

What do you suppose the “I” in “AI” stands for?

This is not about voice recognition. That was just a suitable input medium for Jeopardy. It’s more fundamentally about understanding the question well enough to construct a productive search strategy and evaluate the results – i.e., to understand the question well enough to reliably answer it. This is all the more remarkable when you realize that Jeopardy routinely couches its questions in puns, double meanings, and other clever wordplay. That Watson can see through this wordplay to the real meaning – in cases where even some humans might not – and act on it to produce the desired results is “understanding” in any rational, standard meaning of the word.

That’s a very non-specific objection, and not very meaningful. It’s saying that Watson is not really intelligent because it can only play Jeopardy. In fact that isn’t quite true, since the Watson DeepQA technology has been redeployed to practical applications in other areas, primarily by domain-specific retraining. But no, generality is not the essence of intelligence, it just tends to be the nature of human intelligence because we are generalists, and it’s both a strength and a weakness. For the foreseeable future – but not forever – machine intelligence will be specialized and confined to specific problem domains. But in those domains it will be far superior to human intelligence.

Wolfpup,

Your link hardly offers “strong support”.

Watson’s Jeopardy program is a best-match algorithm. Perhaps a stunningly theatrical one, but nevertheless a best-match program. All distinctions between syntactic subtleties were resolved before the program was compiled.

The concept of quality being a function of quantity applies to aerodynamics, not search algorithms.

Wolfpup#96,
“But no, generality is not the essence of intelligence” - actually, it is. Intelligence is the ability to make inferences in order to resolve unfamiliar information - to generalize from experience. A neural net is trained with one set of data and tested with another.
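That discipline is easy to sketch (a least-squares line standing in for the net; the train/test principle is identical):

```python
import random

# Fit on one data set, judge generalization on data never seen.
random.seed(1)
data = [(x, 3.0 * x + 1.0 + random.gauss(0, 0.5)) for x in range(40)]
random.shuffle(data)
train, test = data[:30], data[30:]

n = len(train)
sx = sum(x for x, _ in train);  sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train);  sxy = sum(x * y for x, y in train)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

mse = sum((y - (slope * x + intercept)) ** 2 for x, y in test) / len(test)
print(f"fit: y = {slope:.2f}x + {intercept:.2f}, held-out MSE = {mse:.3f}")
```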

BTW: The I in AI is just a label. “The map is not the territory.” (AK)

Wolfpup#96,
What component of Watson “understands” the question? Where in the computer is the question addressed as a whole entity?

Not that I think Watson is a strong AI, but what component of you understands a question? What component of me understands it? Where in your brain is the question addressed as a whole entity?

It seems to me that intelligence is an emergent effect of the operation of your brain. At the lowest level, neurons, there’s no intelligence. There is no intelligence in the connection between two neurons. Somewhere along the way, after layers of abstraction, we agree that there is consciousness and intelligence. There’s no intelligence at the lowest level of a computer, but you add assembly language, an operating system written in C, higher-level programs written in LISP or Haskell or whatever, learning programs on top of that, years of training and learning, and, at some point in the future, you may get real intelligence.