Is Making Intelligent Computers Such a Good Idea?

Wolfpup,

Thanks for the discussion.

  1. “cognition operates on symbolic computational principles”: OK, and computers can implement the process, but they do it by passing binary words through the ALU, not by directly manipulating symbols. Micro-coding can reduce the number of instructions required, but it does the same thing.
    No, it’s not a matter of the interconnection of logic components. Everything goes through the ALU. It may be micro-coded for multiple passes to implement a Multiply or something similar, but it still does nothing more than ADD, AND, OR, XOR. The ALU is the lowest common denominator of all computers.
  2. Computer-controlled systems may exhibit wondrous events, but the computer portion is just the ALU and its memory. The computer has no overview of the process or its symbols. The idea of the computer doing “intelligent things” is a value judgement of the observer. The computer is “doing” only the four operations of the ALU.
    The CPU has only one ALU (a few more in multi-cores). Each brain neuron is an ALU because it provides a testable output that is the result of an array of inputs passing through its transfer function. Commercial computers today run as many as 10 cores, each with its own ALU. That’s a long way from the number of neurons in the brain.
  3. Modeling cognitive functions will lead to a better understanding of the brain, not thinking machines. Computers do not “play” chess. Computers shuffle data, test the carry bit and display previously stored messages. No matter how fast or how often you do it, that is not thinking.

:dubious:

That is not how I understand chess programs to work. If all positions were previously stored, that would be a feat in itself, or we could declare that chess is solved. Computers do store best replies for the opening of the game, but eventually games reach positions that have never been stored before.

The situation is more complicated with Go, and yet a computer recently beat a top human Go master.

Now, one can agree that that is not thinking as we know it, but then again your argument here reminds me of how different the approach to designing jet planes is from trying to imitate how birds fly. Engineers recognized early on that the way nature gave flight to birds is not an effective template for designing airplanes. Because there are no supersonic birds…

I do see computers becoming intelligent, though not in the same way as humans. Making intelligent computers will be a good idea IMHO, but much will also depend on what humans teach AI to do.

[sub][The good robot Astroboy destroys an evil robot that was going to kill Astroboy’s human friends]
Evil Robot creator: “Ooh you destroyed my robot!”
Astroboy: “And it was your fault for making him bad rather than good!”[/sub]

Voyager,

A hardware multiply is just a shift and add array with a dedicated ALU.

Cache memories and instruction queues with prefetching speed up the process, but the ALU is the object of all instructions. Perhaps a quibble is allowed for the MOV instruction, but that’s just a NOP.
A CPU consists of a group of registers that are connected to an ALU. If you remove those registers and the ALU, you no longer have a CPU and hence no computer. Size is not the issue.
Perhaps you have designed a computer that functions without presenting operands to an ALU. Please share it with us.
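Just to make the “shift and add” idea concrete, here is a little sketch in Python of a multiply built out of nothing but bit tests, shifts and adds. It’s purely illustrative (my own toy function, not anybody’s microcode, and real hardware multipliers are built differently, as discussed further down the thread):

[code]
def shift_add_multiply(a, b, width=8):
    """Multiply two unsigned integers using only shifts, ANDs and ADDs.

    Illustrates the 'shift and add' scheme being discussed; real hardware
    multipliers use faster structures such as arrays of partial products.
    """
    product = 0
    for bit in range(width):
        if (b >> bit) & 1:           # test one bit of the multiplier (AND)
            product += a << bit      # add the shifted multiplicand (ADD)
    return product

print(shift_add_multiply(13, 11))    # 143, same as 13 * 11
[/code]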

GIGObuster,

Good points.

The misapplied term ‘artificial intelligence’ implies mysticism.

Computers are destined to do amazing things but that needn’t imply ‘thought’.

I’m sorry, but all of this is missing so many fundamental points about computational cognition and AI that I barely know where to start. And analogizing a processor core to a neuron is just madness.

Your characterization of AI – or rather, the attempt to denigrate the “intelligence” part as non-existent – is so archaic that it was debunked at least as far back as the 1960s, if not earlier. A philosopher and AI skeptic named Hubert Dreyfus reasoned that since computers could not in any practical sense look very many plies deep into the chess search tree (ruling out brute-force approaches), and obviously couldn’t “think” and strategize like a human, it would be impossible for a computer to play anything but a mediocre game of chess at best. He made a bet with the MIT AI lab that a computer could never beat him, so a clever programmer there named Richard Greenblatt built a program to do just that. And it did. While today Greenblatt’s MacHack program is obsolete and can be beaten even by a humble Android tablet, it remains a landmark achievement, and its aggressive and often innovative style of play is still impressive. Dreyfus didn’t understand AI then and I don’t think he ever did. Computers have in the meantime become chess grandmasters.

But what happens whenever computers achieve another milestone in AI capability is that the skeptics suddenly declare it to be not “real” intelligence, because a computer did it, and because they think they have some semblance of understanding of how it worked, as if it were some kind of magic trick.

This is the fallacy of John Searle’s Chinese Room argument, which we’ve discussed here many times. Your arguments evoke all the classic fallacies of that argument, failing to grasp that just because the components of the system are not individually intelligent doesn’t mean that the system as a whole is not. All intelligent systems, in fact, including ourselves, are built up from “dumb” mechanistic components.

There’s only one objective way to define “intelligence” or “thinking”, and that’s to specify a task that everyone generally agrees requires those attributes in order to complete successfully, and then see if the person or computer can do so. IBM took on the challenge with the Jeopardy TV game show, where it was generally agreed that the people who won were pretty smart, not just because they knew a lot of facts, but because they also had to interpret trickily worded questions, strategically place bets, and assess their level of confidence in their proposed answers. As you know, the IBM Watson program beat the human champions by a pretty wide margin. Spinoffs of that technology are now being taken to practical applications like complex medical diagnosis and business strategies. It’s a great example of AI starting to move into serious areas of human intellectual endeavor.

An ALU has a very specific meaning. Hardware multipliers are not ALUs, in that they don’t have the full functionality of even a primitive ALU. I took my Computer Arithmetic class a long time ago so I’m not up on current multiply algorithms. They probably do use shift and add, but they don’t use a general purpose ALU to do it. That would be very slow.

A move instruction is hardly a NOP. The instruction scheduler in a modern CPU (by modern I mean the last 25 years or so) would not schedule anything for the ALU and, assuming no data conflicts, would schedule an arithmetic instruction that takes multiple pipeline cycles so that it overlaps with the move.
But I wasn’t thinking even of move. Branch instructions also do not use the ALU.
Move instructions accessing memory are a lot more complicated, of course, and also do not use the ALU.

I never said that there is no ALU. I said that the ALU is a very small part of a processor. In your definition of a CPU, you left out instruction fetch. I can conceive of a useless computer which has no ALU - just instructions to move data around between memory and registers. A computer with no instruction fetch isn’t going to be doing anything.

May I ask where you get your information about what a computer does? It sounds like the block diagrams they used to show in CS101, which stressed ALUs because people thought of computers as fast adding machines. It is nowhere near the Hennessy and Patterson level.
This is all relevant because it is not surprising that those who view computers as fast adding machines “know” that they can never become intelligent. Computers are far more.

Chess programs, at least back when I was paying attention to them, are basically search algorithms. The search space is the tree of all possible moves, which is not generated as a whole but only a few moves ahead, and the smarts of the algorithm lie in pruning useless branches of the tree and in being efficient enough to go down enough levels to find tricks that aren’t obvious until five moves ahead.
The benefit of chess algorithms is that similar heuristics are applicable to a lot of other areas. AI has come up with lots of good heuristics which other people use.
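For anyone curious, here is a bare-bones sketch (Python, purely illustrative) of the kind of tree search being described, with alpha-beta-style pruning. The evaluation function and move generator are empty placeholders of mine, standing in for what a real engine would supply:

[code]
# Minimal game-tree search with alpha-beta pruning (illustrative only).
# evaluate(), legal_moves() and make_move() are placeholders for a real
# engine's position evaluation, move generation and move application.

def evaluate(position):
    """Heuristic score of a position from the side to move (placeholder)."""
    return 0

def legal_moves(position):
    """All moves playable from this position (placeholder)."""
    return []

def make_move(position, move):
    """Return the position after making a move (placeholder)."""
    return position

def alphabeta(position, depth, alpha=float("-inf"), beta=float("inf")):
    """Search 'depth' plies ahead, pruning branches that cannot change the result."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)
    best = float("-inf")
    for move in moves:
        # Negamax convention: the opponent's best score is the negation of ours.
        score = -alphabeta(make_move(position, move), depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # This branch can't improve the outcome, so prune it.
    return best
[/code]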

Storing openings and endgames is done for efficiency, but it isn’t really necessary.

Incorrect. Modern processors can and often do have multiple ALUs, and instruction scheduling hardware directs an instruction to use ALU2 if ALU1 is being used by another instruction. Since ALU operations are relatively slow, they often take more than one clock cycle to complete, all the more reason to have multiple ALUs. You can have multiple multipliers also.
How many depends on the tradeoff between area, scheduling complexity, and how fast you want the CPU to handle benchmarks. Instruction level simulation is done to find this stuff out long before anyone writes any Verilog.
ALUs, by the way, by definition can do both logic and arithmetic, and are more than one bit. Neurons, wonderful as they are, are not ALUs. And ALUs are not neurons.
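To illustrate the difference: here, roughly, is the “array of inputs passing through a transfer function” unit mentioned earlier, written as the kind of artificial neuron used in neural nets. The weights and sigmoid are arbitrary choices of mine, not a model of a biological neuron; the point is only that it computes one fixed function of its inputs rather than executing instructions the way an ALU does:

[code]
import math

def artificial_neuron(inputs, weights, bias):
    """Weighted sum of inputs pushed through a sigmoid transfer function.

    A toy model of the 'array of inputs passing through a transfer function'
    idea; the weights, bias and sigmoid are arbitrary, not biology.
    """
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))   # squash the output into (0, 1)

# Three inputs, arbitrary weights: the unit just maps numbers to a number.
print(artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.3, -0.1], bias=0.2))
[/code]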

I can’t feel much concern about artificial intelligence doing something evil, but I see a lot of potential problems in AI doing something stupid, quickly: blithely following its instructions and crashing the stock market, or being fooled by a ray of sunlight and driving the automated car off a cliff, and doing so at a pace faster than any human can react and intervene.

Voyager, Wolfpup,

I was addressing Wolfpup’s point about the minimum level of granularity. All computer decisions are made by testing the result of some operation that involves the ALU. My point is that these are mundane, serial operations on numerical values. The computer has no ability to view the aggregate with anything approaching ‘thought’.

This does not violate any grandiose conjecture. It’s just that the first thoughtful computer won’t be an adding machine.

4 yrs USAF analog computers; 11 years IBM Advanced System Development Division; 23 years National Semiconductor microcontrollers, neural nets.

It’s like the “Paperclip Maximizer” thought experiment. Basically a superintelligent AI is told “maximize paperclip production” and then proceeds to convert all matter in the solar system into paperclips.

There are lots of decisions made below the instruction-set level by comparisons and tests that don’t involve the ALU. I assume you are familiar with interrupts. That’s one. Processing of high-speed serial I/O is very complicated, but does not involve the ALU.
I’m familiar with microcontrollers - my group had a project generating simulation models for them. To save cost and area they don’t have the architectural features that high-end microprocessors have. One ALU is plenty for them.
And I doubt AI will run on microcontrollers.
In any case there is no reason that run-of-the-mill computers can’t become intelligent with the proper programming - programming that no one knows how to do yet. And they might have hardware add-ons to help, just as the Belle chess-playing computer had custom ASICs to accelerate its chess playing.

The first AI needs someone standing by it with a ruler, saying “no, bad AI!” when it tries tricks like that. Kind of like the safety plugs you put in electrical outlets for toddlers.

I think the problem that’s worst for PR is that AIs are by nature different from humans in how they operate, and I think that will hold true even if we get general AI. So even if a domain-specific AI (say, a self-driving car) is better than a human in 99.9% of cases, it may have a tendency to screw up in rare situations a human thinks are self-evident or easy. And then, even though the AI is in general more proficient, the reaction becomes “see!? It made such a stupid mistake! How can we trust it!?” Like your ray of sun example.

Posit a car that never crashes except for the 0.00001% of the time when random system noise just causes it to go haywire. Even though it’s effectively perfectly safe, the fact that we can’t define exactly when it will go wrong, or that the reasons it goes wrong seem silly (“a ray of sunlight made the cliff look like the road”), scares people more than the real risk factors do.

As Jragon said - it doesn’t need to be perfect, just better than we are. Humans can get tricked by rays of sunlight and drive their cars off cliffs. An automated car should be acceptably safe when it does this demonstrably far less frequently than humans do.

I’m more concerned about… not exactly evil AI, but sociopathic AI: the machine that decides (or acts exactly as though it had decided) that humans are just a suboptimal use of carbon and other resources. For a general intelligence, constraints are damn near impossible to ‘program in’, because in order to be truly adaptable the AI needs to be able to modify and improve itself, and that improvement might include a change in the way it regards humans.

First of all, you misunderstood my comment about minimum granularity. I was speaking from a logical (i.e., software) standpoint, not from the standpoint of what the hardware and logic gates are doing. From a logical standpoint the essence of computation is syntactic operations on symbols, an insight that was brilliantly elucidated by Alan Turing when he developed the theoretical concept of the Turing machine.

Your focus on the underlying electronics seems to have blinded you to this basic understanding. A computer is not a calculator, even if they use the same kinds of logic gates. The fundamental lesson of the Turing machine concept is that a completely new kind of entity, capable of completely general computation, comes into being when the device is capable of executing a stored program of instructions that operate on stored data. And the profound observation that comes from cognitive science is that these kinds of syntactic operations on stored symbolic representations are very much like how human cognitive processes work.
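As a toy illustration of what “syntactic operations on symbols” means, here is a minimal Turing-machine interpreter in Python (the bit-flipping rule table is an arbitrary example of mine, not anything from Turing). The machine blindly rewrites symbols according to a lookup table, with no notion of what the symbols mean:

[code]
# A tiny Turing machine: purely syntactic rules rewriting symbols on a tape.
# The rule table below implements a trivial example (flip bits until a blank);
# the point is only that the machine manipulates symbols it doesn't "understand".

def run_turing_machine(tape, rules, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        new_state, write, move = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
        state = new_state
    return "".join(tape)

# Rule table: invert every bit, halt at the first blank.
rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing_machine("1011_", rules))  # prints "0100_"
[/code]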

A calculator can only calculate, but a Turing-like computer can, as IBM has demonstrated, beat the best humans at Jeopardy, play chess at a grandmaster level, perform medical diagnoses, and eventually beat us at virtually any intellectual challenge. Please read my post #65 again, as it addresses the various fallacies you keep bringing up, which are old and long debunked.

And the corollary to that is that we’ll have such a dependency on these systems that it will be difficult for us to do much about it. This is already the case. Imagine the modern world suddenly without computers. Civilization would collapse.

The old adage used to be that you can just turn the computer off if it does something bad. Well, you can’t. I think we’ve all been in the situation where the customer rep in a large organization is trying to do something for us but says, “the computer won’t let me”. Try telling him to go in the back room and unplug it. It doesn’t work that way. Data centers and the computers in them are now so vital that some institutions, like major banks, protect them with literally military style defenses.

Of course if one has a highly capable general AI smarter than any human, who better to give control of those military defenses to than the AI itself? :wink:

The thought goes that once you create an AI that intelligent, WE become the toddler. IOW, it quickly learns how to circumvent or manipulate humans to achieve its ends.

I think the biggest problem is that we don’t know how an AI would think. Popular fiction tends to portray AI as acting very close to how humans act. Usually as a glib, sarcastic Pinocchio or a weaponized sex robot. In reality, I suspect it would act in a manner that is totally alien to us.

If we extrapolate from computer chess, there is no characteristic “way that an AI thinks”; it varies with the implementation. There are stark differences in playing style between different programs, for example. The MacHack program I mentioned earlier is unusual in this way. In my limited experience most chess programs, whether weak or strong, tend to be somewhat conservative in their play, but MacHack seems to have a sense for a weak player and goes for the kill very quickly.

And it is, indeed, usually alien to the way a human thinks, even when it’s clearly superior. Most good chess players, although they can articulate the basics of good chess strategy, can’t really explain how they “see” a board position. Whatever it is that goes on in our minds, AI does it differently, and for the most part it does it better. This, I think, is going to be the fundamental paradigm of AI.

This is precisely and 100% correct, and gets the BeepKillBeep official seal of approval (sadly, it is far less prestigious than it sounds).