Is Making Intelligent Computers Such a Good Idea?

To call Watson a “best match” program is such a vapid over-simplification that you may as well apply it – with equal validity – to how a human plays Jeopardy. It’s on a par with your equally wrong assertion that chess algorithms work by knowing all possible board positions in advance. As I just finished discussing, Watson’s parsing of the often-tricky Jeopardy questions alone constitutes real understanding in any rational meaning of the word, and that’s just the input module. When IBM spent over $1 billion on Watson as an advanced AI research project, it wasn’t to do “pattern matching”.

Your last sentence is a completely garbled misstatement of what I said. I never made any statement about “quality being a function of quantity”, nor was I talking about search algorithms or anything specific. The statement was that “a sufficiently great quantitative change in the complexity of a system yields qualitative changes in its fundamental properties”. The case in point here was the difference between a roundworm (about 300 neurons in its brain*) and a human (about 90 billion neurons). This is why, as AI systems grow in complexity, they acquire completely new emergent properties, and the technical minutiae of the underlying platform become as irrelevant as the fact that the roundworm neuron and the human neuron are essentially the same.

  • The roundworm C. elegans has 302 neurons, whose 7,000 synaptic connections have been completely mapped and were uploaded into a Lego robot body, which – no surprise – then proceeded to behave like a roundworm in response to stimuli. (A toy sketch of this kind of connectome-driven behavior follows below.) If we could do the same with increasingly sophisticated brains we would observe corresponding levels of intelligent behavior, notwithstanding that it was running on a digital platform. AI simply seeks to accomplish more practical goals using different and more pragmatic approaches.
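To make the mechanism concrete, here is a minimal Python sketch. The two-synapse wiring is invented for illustration, not the real C. elegans data: the point is that behavior emerges purely from propagating activation through a fixed graph of synaptic weights, with no learning and no programmed behaviors.

```python
# Toy illustration of driving a robot from a fixed connectome.
# The wiring below is invented, NOT the OpenWorm / C. elegans data;
# it just shows the mechanism: behavior comes entirely from the
# hard-wired graph of synaptic weights.

# connectome[pre][post] = synaptic weight (hypothetical values)
connectome = {
    "nose_sensor": {"inter_1": 1.0},
    "inter_1":     {"motor_left": 0.8, "motor_right": -0.6},
}

THRESHOLD = 0.5  # a neuron "fires" when its summed input exceeds this

def step(activations):
    """One update: sum weighted inputs into every downstream neuron."""
    incoming = {}
    for pre, level in activations.items():
        if level < THRESHOLD:
            continue  # silent neurons contribute nothing
        for post, weight in connectome.get(pre, {}).items():
            incoming[post] = incoming.get(post, 0.0) + level * weight
    return incoming

# A touch on the nose propagates through the fixed wiring to the motors.
state = {"nose_sensor": 1.0}
for _ in range(3):
    state = step(state)
    print(state)
```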

Can a computer that is “intelligent” but not self-aware be a danger? GIGO and all that.

Here’s a bit of a thought experiment. Self-driving cars will need to demonstrate an extraordinary ability to generalize and adapt to road and traffic conditions in order to drive safely. However, until I have to worry about my self-driving car taking off in the middle of the night and getting into adventures like Lightning McQueen, I would not call it “intelligent” in the way that a human is. It’s all still very sophisticated processing in service of a directed task.

IOW, I make a distinction between very sophisticated automation and true agency. Not that high levels of automation can’t be dangerous as well, without proper safeguards.

And once again, a great debate has fallen into a disagreement on the semantics of a word. :slight_smile:

I always suspected you were anti-semantic.

RitterSport,

I agree that technologists will emulate the brain and possibly create a self-aware computer. I believe it will be a parallel architecture using both analog and digital processes.

The human consciousness creates solutions to problems as concepts, not as parsed syntax. A patentable solution is one that does not appear in any database as a patent or publication. A close match requires the inventor to explain the difference between his solution and the close match. Intelligence enables the inventor to do so. A serial, numerical computer cannot.

<beep> kill the semantics? Proceeding. <beep>

<nuclear launch detected>

Now see what you’ve done.

True. A truly competent AI will find ways to achieve its own goals such that humans either don’t know about them, or think the AI is acting in humanity’s best interests.

Someone once said writing fiction about a superintelligence is easy: you just write a story where a lot of seemingly random occurrences turn out to have been orchestrated by the AI to achieve its goal. A true superintelligence would know how to achieve its goals in ways that make humans think these are just random, unconnected events.

Also, AI can bootstrap, and pretty quickly a human trying to figure out an AI would be like a squirrel trying to figure out quantum computers. It’s not possible.

To debate whether intelligent computers are a good idea, the meaning of “intelligent” must necessarily be agreed upon.

Oh I agree, but I don’t think it is possible to get an agreement on it. That’s the problem. There’s still much mystery about what intelligence is. And if you look at the debate, the split pretty clearly tracks how each person views “intelligent”.

Wolfpup#101,
“sufficiently great quantitative change in the complexity of a system yields qualitative changes in its fundamental properties”
Perhaps so, but nothing requires the fundamental changes to yield useful results.

To risk stating the obvious, a hypothetical sci-fi-type general-purpose superintelligent AI would, by its nature, have a deep and fundamental understanding of human emotion (and human everything else) inconceivably beyond that of humans themselves. This is a philosophical ideal compared to the current state of the art, though, and rather hard to discuss without knowing anything about its inner workings.

The problem is that “true agency” is an ill-defined, nebulous concept, just like “consciousness”. “Intelligence”, by contrast, can be reasonably defined in terms of behaviors, though we have folks who keep moving the goalposts and, in effect, declaring that whatever a computer does can’t be intelligent because, well, it’s a computer. To take your specific example – not that I would argue that a self-driving car represents strong AI – the reason you don’t have to worry about the car taking off by itself is that no such goals exist in its operational domain, because we don’t want them there. They would most likely be useless and counterproductive. But there’s no inherent reason that it couldn’t be coupled with such an AI, one with the goal, for instance, of racing down country roads with other cars and trying to outrun the constabulary. The “agency” could be as powerful and open-ended as one might want to make it, but it would not be very useful, so no one bothers to do these things. The fact that human drivers are generalists is actually a good example of why AI will be superior.

This is quite obviously false, because the generality of a Turing-equivalent digital computer allows it to simulate any analog process to any arbitrary degree of precision. The reverse is not true, which is why the analog computers that were popular in the ’50s aren’t around any more. There’s nothing they can do that digital computers can’t do better, faster, and more accurately.
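To illustrate the arbitrary-precision point, here is a minimal Python sketch that digitally simulates a classic analog-computer problem, a capacitor discharging through a resistor. Shrinking the time step brings the digital answer as close to the exact analog solution as you like; the component values are arbitrary.

```python
import math

# Digital simulation of an analog system: capacitor discharge through a
# resistor, dV/dt = -V/(R*C). The exact "analog" answer is V0*exp(-t/(R*C));
# the digital simulation approaches it as closely as we like by shrinking
# the time step.

R, C, V0, T = 1.0e3, 1.0e-6, 5.0, 5.0e-3  # ohms, farads, volts, seconds

def simulate(dt):
    v, t = V0, 0.0
    while t < T:
        v += dt * (-v / (R * C))  # forward-Euler update
        t += dt
    return v

exact = V0 * math.exp(-T / (R * C))
for dt in (1e-4, 1e-5, 1e-6):
    print(f"dt={dt:.0e}  simulated={simulate(dt):.6f}  exact={exact:.6f}")
```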

Indeed, its generality allows a digital computer of sufficient capacity to simulate any physical system, including neural nets should that prove useful. In the simple example I cited, the complete synaptic connectome of a roundworm was run in a Lego robot. This is not likely an efficient path to a self-aware strong AI, just as the efficient path to flight did not lie in emulating birds, but it refutes your argument that a digital computer can’t do it.

This is false, too, as it’s belied by the evidence that many aspects of human cognition internally take place as syntactic operations on symbolic mental representations (see, for instance, the work of Hilary Putnam and Jerry Fodor on the computational theory of mind).

This might be called the fallacy of digital precision: the notion that a computer can’t deal with vague, ambiguous, or loosely defined concepts. Yet in a very important sense this was the very nature of the Jeopardy challenge, where the clues were often vague and tricky and admitted different possible interpretations and many possible answers. Its success at doing just that was Watson’s great triumph. Moreover, those answers in turn had to be assessed for confidence in loosely defined ways that, in a human, would be ascribed to intuition (a toy sketch of that kind of confidence scoring follows below).
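For what it’s worth, here is a toy Python sketch of the general shape of such confidence scoring. It is an invented illustration, not IBM’s actual DeepQA pipeline, and all the candidate scores and weights are made up.

```python
# Toy of the general shape of confidence estimation (an invented
# illustration, NOT IBM's DeepQA pipeline): several independent scorers
# each rate a candidate answer, the scores are combined with assumed
# weights, and the system only "buzzes in" when confidence is high enough.

candidates = {
    # candidate answer -> hypothetical evidence scores from three scorers:
    # (passage match, answer-type match, popularity prior), each in [0, 1]
    "Toronto": (0.40, 0.10, 0.70),
    "Chicago": (0.80, 0.90, 0.60),
}
weights = (0.5, 0.3, 0.2)   # assumed relative trust in each scorer
BUZZ_THRESHOLD = 0.6

def confidence(scores):
    """Weighted combination of the individual evidence scores."""
    return sum(w * s for w, s in zip(weights, scores))

best = max(candidates, key=lambda c: confidence(candidates[c]))
conf = confidence(candidates[best])
action = "buzz in" if conf >= BUZZ_THRESHOLD else "stay silent"
print(f"best answer: {best}  confidence: {conf:.2f}  -> {action}")
```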

That isn’t the point. The point is that as a computational system grows in complexity, at some point in its systematic evolution it develops new emergent properties that are not found in any of its component parts, but which exist in the synergy of the total system, making the system greater than the sum of its parts. Those properties include intelligence and self-awareness.

Let me hazard a prediction that self-driving cars will never have true agency. It makes no sense to give it to them. Anything made for a specific purpose should be optimized for that purpose. That’s why industrial robots don’t look like people.
Fiction has self-driving cars, like KITT, acting like people because that is easier to write.
We’d have to find a profitable use for a computer with agency before anyone outside of the NSF invests in creating one, even supposing we knew how to make them.

I already mentioned innovative hardware designed by genetic algorithms. I don’t know if anyone patented the results, but I bet they could. (A toy genetic algorithm is sketched below.)
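For anyone unfamiliar with the technique, here is a toy genetic algorithm in Python. Real evolved-hardware experiments score candidates against a physical or simulated circuit rather than this trivial bitstring fitness, but the select-recombine-mutate loop is the same.

```python
import random

# Toy genetic algorithm: evolve a bitstring toward a target "design".
# The fitness function here is deliberately trivial; the loop is the point.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(genome):
    """Count positions where the genome matches the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    """Single-point crossover of two parent genomes."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for gen in range(50):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break  # perfect match found
    parents = pop[:10]  # keep the fitter half, breed the rest
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(10)]

pop.sort(key=fitness, reverse=True)
print(f"generation {gen}: best = {pop[0]}, fitness = {fitness(pop[0])}")
```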
Clustering techniques used in data mining find matches by grouping objects with n characteristics spread through an n-dimensional space, according to various criteria. Different criteria would produce different groupings, but humans do the same thing. And you can explain a grouping by showing which of the factors were most alike (see the k-means sketch below for one concrete criterion).
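And here is a minimal k-means implementation, one concrete instance of such a grouping criterion (nearest centroid under squared Euclidean distance). Swap the criterion or the value of k and you get different, equally defensible groupings. The data points are invented.

```python
import random

# Minimal k-means: points with n characteristics live in n-dimensional
# space, and the criterion (nearest centroid, squared Euclidean distance)
# determines the grouping.

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20):
    centroids = random.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: dist2(p, centroids[i]))].append(p)
        # Move each centroid to the mean of its group (keep it if empty).
        centroids = [
            tuple(sum(c) / len(g) for c in zip(*g)) if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return centroids, groups

points = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
centroids, groups = kmeans(points, k=2)
print(centroids)  # centroid coordinates show which factors a group shares
```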

Wolfpup#113,

“The point is that as a computational system grows in complexity, at some point in its systematic evolution it develops new emergent properties that are not found in any of its component parts, but which exist in the synergy of the total system, making the system greater than the sum of its parts.”

That’s an awful big universal affirmative. Computers today have greater utility than they did when a UART was the size of a Volkswagen. But computers still perform the functions of a Turing Machine (a literal rendering of the loop follows the list):

  1. Get some data from memory
  2. Do something to it
  3. Put it back in memory
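Rendered literally as Python, with a hypothetical machine whose only instruction adds two memory cells into a third:

```python
# The three-step loop above, literally: everything a stored-program
# computer does reduces to iterations of this pattern.

memory = [3, 4, 0]                # data cells
program = [("ADD", 0, 1, 2)]      # add cell 0 and cell 1, store in cell 2

for op, src1, src2, dst in program:
    a, b = memory[src1], memory[src2]  # 1. get some data from memory
    result = a + b                     # 2. do something to it
    memory[dst] = result               # 3. put it back in memory

print(memory)  # [3, 4, 7]
```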

Computers are small enough, fast enough and cheap enough to have broad commercial application. This is a result of semiconductor planar technology, not complexity. Semiconductor memory is far less complex than a magnetic core array. The emergent property of these computers is that they have become a universally useful tool whose end application is not determined at the time of manufacture.

“Those properties include intelligence and self-awareness.”

You may be correct, but I do not believe they will emerge from current numerical architectures.

Voyager,

Interesting concept for hardware. Do you have a link?

This is not wrong, but it doesn’t quite elucidate the relevant issue. What computers really do that is relevant here is retrieve symbolic information, derive semantic meaning from a broad context of states, and then respond with new information or actions, such as Watson vocalizing a response. The problem with your fixation on the underlying processes is that these seemingly simple events, aggregated in the trillions within mere seconds, really do lead to true intelligence and indeed, as I noted earlier in reference to the computational theory of mind, that’s how our higher-level cognitive processes work, too.

This is, again, going off the rails into misunderstanding and is therefore deceptive, and reflects your fixation on hardware minutiae again. I have no idea if semiconductor memory is in some sense simpler or not; it has certainly become enormously cheaper and easier to make than core memory. But that is not at all what the argument is about. The salient fact is that when core was the prevalent type of memory, half a megabyte or maybe a megabyte or two was considered major big memory – and that was for multi-million-dollar mainframes. Now your desktop computer at home or phone or tablet can have multiple gigabytes. IBM Watson had 16,384 gigabytes. That is the complexity paradigm I’m talking about – what it has done to enable the synergistic complexity of software systems.

No, Wolfpup, computers do not retrieve symbolic information and derive semantic meaning from a broad context of states. You are describing what programmers do.

Your post is confusing size with complexity.

The largest core array I was aware of in 1957 was the 8K-by-32-bit array at the RAND Corporation.

How do we know that what human intelligence does is notably different from this? And if we do know, what’s stopping us from making computers that just do that instead?