Artificial Intelligence - Possible or Probable?

Yes, really. The best computers we have today can’t compete with the intelligence of even a mouse. We’re close to insect level. But insects aren’t vertebrates.

I graduated with a CS degree in 1979 and took a course in AI. Although I have not seriously studied the field since then, my casual observations lead me to think that there has been little progress, and certainly no breakthroughs, toward a man-made non-biological device with a “perceived capability of independent thought.”

There hasn’t been much progress towards strong AI, but there’s been a lot of work on intelligent systems and machine learning. Nowadays, intelligent systems are literally ubiquitous, and it’s difficult to describe the impact of machine learning research without sounding like I’m exaggerating. Tom Mitchell has a nice whitepaper (pdf) that describes the field and some of its impact.

True, I do believe a computer will eventually pass the Turing test, but if it happened today I wouldn’t believe it was doing anything more than brute-force guessing its way through billions of possibly correct answers, rather than displaying true intelligence or understanding.

Like QED says, we can barely understand and duplicate an insect brain. Which is cool, but insect brains are on a different branch of the evolutionary tree. When we understand and can duplicate how the simplest mammalian brain works, we’ll be on to something. I’d expect that to make the news; a truly intelligent, learning, understanding AI isn’t just going to appear out of nowhere tomorrow.

Once we truly understand how a shrew or mouse brain works and can duplicate it, I do think we can let the incredible speed and processing power of modern computers do in a short period of time what evolution took 125 million years to do, which is basically just trying out random mutations and keeping what works, along the path to intelligence. Once we figure out that first mammalian brain, computers should be able to run through 125 million years of random mutations pretty quickly.
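Just to make the “random mutations, keep what works” idea concrete, here’s a minimal Python sketch of that kind of mutate-and-select loop on a toy problem (matching a target bit string). The target, mutation rate, and fitness function are made-up illustrative choices, nothing to do with actual brain evolution:

```python
import random

# Toy illustration of "try random mutations, keep what works":
# evolve a bit string toward a target using mutation plus selection.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(genome):
    """Count how many bits match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

genome = [random.randint(0, 1) for _ in TARGET]
generation = 0
while fitness(genome) < len(TARGET):
    candidate = mutate(genome)
    # Selection: keep the mutant only if it is at least as fit.
    if fitness(candidate) >= fitness(genome):
        genome = candidate
    generation += 1

print(f"Matched target after {generation} generations")
```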

Why attempt to copy the human brain? Our airplanes do not fly like birds do. It would be much more productive to look at ways to complement human reasoning (expert systems). Technically it’s much easier, and potentially just as valuable.

More ethical, too. I’d feel better about a super chess computer that could figure out novel solutions to our problems without my having to worry about whether it was conscious.

Can you prove that a randomly chosen person displays true intelligence or understanding? It’s remarkably more difficult than you might think.

Well, then it’s debatable whether there is any such thing as ‘true’ intelligence or understanding, at least as determinable from the outside – if nobody can say with better than chance success whether they are conversing with a human or a computer, then the two have to be credited with equal intelligence and understanding, at least when viewed from the outside. It might well be that a computer able to pass still isn’t in any way conscious, but then, strictly speaking, the same qualification applies to your fellow humans as well.

Oh, I’m not sure we should view the mammal brain as all that special – while it’s demonstrably one structure that leads to the development of intelligence, that doesn’t mean it’s the only one. Indeed, if somebody came up with a giant spreadsheet that correlates every possible input with a reasonable output, then by the ‘outside view’ definition that construct would be intelligent as well.
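For what it’s worth, the degenerate version of that giant spreadsheet fits in a few lines of Python. The table entries below are obviously made up; the idea only works if you imagine an entry for every possible input, which is exactly what makes it a thought experiment:

```python
# A (tiny) lookup-table "mind": every input maps directly to a canned output.
# Scale this to every possible input and, by the outside-view criterion,
# the table would count as intelligent.
RESPONSES = {
    "hello": "Hi there. How are you today?",
    "what is 2 + 2?": "4, last time I checked.",
    "are you conscious?": "How would either of us know?",
}

def reply(prompt: str) -> str:
    # Unknown inputs expose how shallow the table really is.
    return RESPONSES.get(prompt.strip().lower(), "Interesting. Tell me more.")

print(reply("Hello"))
print(reply("Are you conscious?"))
```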

But this is all probably veering too far into GD territory.

Then again, a mouse can’t compete with the intelligence of even a rudimentary computer, either. While I don’t doubt that we’ll eventually have a computer that can do everything a human can, there’s very little incentive to push in that direction. If I want something that can carry on a conversation with a human, I can hire a human. But if I want something that can perform multi-dimensional nonanalytic integrals quickly and reliably, for instance, hiring a human to do it isn’t really an option, so there is an incentive to get a computer capable of doing that.
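To make that last example concrete, here’s a rough sketch of the sort of job we hand to computers rather than humans: a plain Monte Carlo estimate of a multi-dimensional integral with no closed form. The integrand, dimensions, and sample count are just illustrative choices:

```python
import math
import random

def integrand(x):
    """A 5-dimensional function with no simple closed-form antiderivative."""
    return math.exp(-sum(xi * xi for xi in x)) * math.cos(sum(x))

def monte_carlo_integral(f, dim=5, samples=200_000, lo=-2.0, hi=2.0):
    """Estimate the integral of f over the cube [lo, hi]^dim by random sampling."""
    volume = (hi - lo) ** dim
    total = 0.0
    for _ in range(samples):
        point = [random.uniform(lo, hi) for _ in range(dim)]
        total += f(point)
    return volume * total / samples

print(f"Estimated integral: {monte_carlo_integral(integrand):.4f}")
```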

In fact, the human brain (or that of any other animal, for that matter) probably cannot even be considered Turing complete except in a very restricted sense. This is not to say that the brain is not an incredible biochemical data processing and interpreting system, but it is fundamentally unlike the semiconductor microprocessor and support system sitting on your desk in many salient ways. Cerowyn details this extensively so I don’t see any reason to reiterate, but it should suffice to say that we have only the most tenuous grasp on what is necessary and sufficient for human cognition, much less how to build a computer, operating system, and utilities that could be indistinguishable from ‘natural’ human intelligence. The Turing test is really a very crude, somewhat subjective, and high-level assessment of intelligence. Even if you could build a machine that passes a Turing test based upon a statistical blind evaluation across a representative human population, it wouldn’t prove that the machine is intelligent, only that it meets the criteria for Turing’s evaluation of communication capability. On the other hand, a machine could have legitimate synthetic intelligence and be completely incapable of interacting on a natural-language level.

When we do achieve genuine synthetic or artificial intelligence, I predict that both the hardware and software will be indistinguishable from the nervous system of a living organism in structure and operation, and it will look nothing like your Xbox.

Stranger

In what ways do you think the intelligence of a mouse can’t compete with a rudimentary computer? Are you thinking number crunching?

I think Turing machines were only mentioned with reference to passing a test for intelligence. The question of interest seems to be “can a machine be built that demonstrates human-like intelligence,” and based on the existence of human brains I can’t see how it would be reasonable to answer in the negative.

Sure, why not?

Really? What restricted sense do you mean? I mean, modulo the same lack of infinite memory that we forgive in many other things we call Turing complete, it certainly seems to me that a human brain is, with time and effort, capable of performing a simulation of many Turing-equivalent systems. Conway’s Game of Life springs to mind, for example.

I definitely get your point about the brain being fundamentally unlike electronic computing systems, but from a strictly theoretical standpoint, it seems to me fair to say that the brain is Turing complete. However, if you know of any good arguments on the matter I’d be very interested in them. I don’t know of any academic investigations into the question, and I didn’t really look into the Turing completeness of the brain while I was in school, so I’m just going by my intuitive impression here.
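Since Conway’s Game of Life came up as an example of a Turing-equivalent system a person could work through by hand, here’s a bare-bones Python version of one update step. The glider starting pattern is just an example, and this is only a sketch of the standard rules:

```python
# One step of Conway's Game of Life on a set of live cells, the same
# rules a person could work through with pencil and paper.
def step(live):
    """live is a set of (x, y) coordinates of live cells."""
    neighbour_counts = {}
    for (x, y) in live:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    cell = (x + dx, y + dy)
                    neighbour_counts[cell] = neighbour_counts.get(cell, 0) + 1
    return {
        cell
        for cell, n in neighbour_counts.items()
        # Birth with exactly 3 live neighbours; survival with 2 or 3.
        if n == 3 or (n == 2 and cell in live)
    }

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(4):
    print(generation, sorted(glider))
    glider = step(glider)
```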

The brain is definitely Turing-complete. Given an infinite surface on which to scribble stuff, we can follow the rules that define a Turing machine.

So, we know the brain can do anything a Turing machine can do, albeit slowly and rather unreliably. The question is whether a Turing machine can do anything the brain can do.
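And just to show how little machinery “following the rules that define a Turing machine” actually involves, here’s a minimal simulator sketch in Python. The example transition table is a made-up machine that flips bits until it hits a blank:

```python
# A minimal Turing machine simulator: state, tape, head, transition table.
def run(tape, transitions, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in transitions:
            break                  # halt when no rule applies
        new_symbol, move, state = transitions[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# (state, read) -> (write, move, next_state): flip 0s and 1s, moving right.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}

print(run("0110", flip_bits))   # prints "1001"
```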

I’m not into this “outside view” thing. If it walks like a duck and quacks like a duck, that’s great for deductive reasoning, but not so great when I can create waddling, quacking robots.

Well, it’d be easy enough to cut open one of your robots to see that its insides aren’t that ducklike at all, but how do you tell a machine – or human, for that matter – that seems intelligent (through producing apparently intelligent responses to outside stimuli) from one that actually is intelligent?

Well, couldn’t we also look at the computer’s programming and see how it’s going about its answers? What would be interesting is if we looked and discovered it had “evolved” its code in ways we no longer understand.

Sure, we can see what the computer’s programming looks like. But that’s not enough. If the question is “does the computer work the same way as a human?”, then we also need to see the human’s source code to be able to compare them.

But number crunching isn’t intelligence. I can multiply two big numbers on a calculator quicker than I can in my head (and much quicker than a mouse could), but the calculator itself isn’t any more intelligent than a stack of paper multiplication tables is. The calculator doesn’t grasp the concept of numbers, or know what multiplication actually means.

A mouse, on the other hand, can grasp concepts (albeit basic ones), and can truly recognize patterns. It’s vastly more intelligent than any supercomputer ever created.