What is the state of AI?

This thread got me thinking about artificial intelligence. Well, not thinking about it; just mildly curious.

What is the state of AI? I remember back in the ’80s hearing about programs that could “learn” and apply new data to different problems. I’ve heard about “fuzzy logic”. There are robotic “pets”, such as Sony’s dog robot, and Honda has “Awesome-O” :wink: There are robots that behave like ants or cockroaches.

But how far away are we from HAL? Are there any computers/robots now that have self-awareness? How would we know? That is, a human being can look at another human and say, “That person is self-aware.” But if our brains are just electrochemical computers, then how can we “know”? “I think, therefore I am,” says Descartes. How much does something have to think before it is self-aware? Can we say that a computer/robot isn’t self-aware just because we can terminate the program? If so, then what about a dog?

What is the standard for sentience?

Ooooh, jeez. Is “there isn’t one” an acceptable answer?

Truth be told, as amazing as ASIMO is, I think it’s quite safe to say that we’re a long way from producing (much less defining what qualifies as) sentience. Not to say that there isn’t a lot of cool research being done (ACT-R, SOAR, Cog, Kismet, and so many others). Seems to me that Ray Kurzweil and his ilk are way too optimistic.

To be honest, we’re not much further along than we were 20 years ago. The big problem is that computers are very fast but sequential, whereas brains are very slow but massively parallel.

When I took AI, in 1972, it was all going to be solved in ten years. One of the books we used was from 1959, and its authors were convinced that their newfound computing power was going to solve everything in about ten years. When the Intel 386 was announced, with *gasp* 16-bit words, that was going to enable AI. (This was from that noted computing journal, USA Today. :slight_smile: )

As far as I can tell, they’re nowhere even close. There is better pattern recognition, better speech synthesis, a good set of algorithms and heuristics, but nothing like a real theory of intelligence. Nobody seemed to realize that more computing power does not automatically equate to better programs. It just lets you run the stuff you already have faster, and speed was never the crucial issue.

My guess is that a brain simulator, with some way of copying the interconnection of neurons from a living thing, will be able to exploit the greater computing power we have without the need for understanding how intelligence works. If the simulation shows intelligence, then it will be easier to figure out what is going on. Of course, being able to do a worm would be a good start.
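
Just to make the idea concrete, here’s a toy sketch of what a connectome-driven simulator does at its core: step every neuron forward from a wiring diagram alone, with no theory of intelligence anywhere in the code. The weights, threshold, and leak factor below are invented for illustration; a real worm simulation would take the wiring from measured data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 302                                       # C. elegans has ~302 neurons
weights = rng.normal(0, 0.1, (n_neurons, n_neurons))  # made-up synaptic strengths
threshold = 1.0
leak = 0.9

state = np.zeros(n_neurons)                           # "membrane potential" per neuron
fired = np.zeros(n_neurons, dtype=bool)               # which neurons spiked last step

def step(external_input):
    """Advance the whole network one discrete time step."""
    global state, fired
    state = leak * state + weights @ fired + external_input
    fired = state > threshold
    state[fired] = 0.0                                # reset neurons that just spiked
    return fired

stim = np.zeros(n_neurons)
stim[:5] = 2.0                                        # poke a few "sensory" neurons
for t in range(10):
    print(t, int(step(stim).sum()), "neurons fired")
```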

Just curious, not trying to hijack or get this moved to “Great Debates” - do you really think that’s the problem in principle?

Yeah, I do. Trying to describe fundamentally parallel processes with a sequential language just isn’t going to work all that well.

State of AI?

Hmm, must be one of the red ones, seems like most of the ones that start with A are lined up behind George. Lessee, Alabama would be AL, Arizona would be AZ, Arkansas would be AR, Alaska is AK…

Come to think of it, I bet it’s one of them damn island territories. That would explain the “I”.

And it is not even true any more. Any decently large computer, even a PC, has lots of processors and is doing lots of stuff in parallel. Even deep in the heart of the CPU lots of stuff is being done in parallel or out of order, and new multi-threaded machines are doing lots of different job streams in parallel. Sure, we might access stuff from our memories in parallel, but database searches can be done in parallel also.

There’s a difference between that sort of parallelization and the massive parallelization that’s going on in the brain. We’re not talking about doing “lots of stuff” in parallel here; we’re talking about doing millions of things in parallel.

Interesting. Although “isn’t going to work all that well” is quite a different thing from “impossible in principle”. In fact, it doesn’t actually make the case for “won’t work” in the slightest.

Can you point me to any justifications?

[QUOTE=Voyager]
When the Intel 386 was announced, with *gasp* 16-bit words…[/QUOTE]

16-bit words had been around at least since the 8086/8088; they were just hampered by an 8-bit bus. The 80186/80286 had a full 16-bit bus. The 386 uses 32-bit words.

It will take much more than massive parallelism as it is used in computers now. I work for Teradata, and that’s how we build seriously big databases; 50 terabytes is only medium-sized. Still baby steps on the way to making a working model of the brain.

Will it? Assuming that there is some time lapse between parallel events, it seems to me that as long as the sequential steps are done within that time frame, there isn’t any principled reason sequential processing couldn’t work: a simulation of parallelism so fast as to be indistinguishable from the real thing.

Note that I’m not arguing that sequential processing really is sufficient, just that I’m not convinced that parallelism is necessary. And I’d love to see a counter argument.
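
Roughly what I have in mind, as a toy sketch (the update rule and names are just mine for illustration): double-buffer the state so every unit’s new value is computed from the same snapshot. Done sequentially, the result is bit-for-bit what a truly simultaneous update would produce, just later in wall-clock time.

```python
def parallel_step(state, update_fn):
    """Sequentially compute what a fully parallel update would produce."""
    old = list(state)                  # snapshot: every read sees the same instant
    return [update_fn(i, old) for i in range(len(old))]

# Example rule: each unit becomes the average of its two neighbours (ring topology).
def avg_neighbours(i, old):
    return (old[i - 1] + old[(i + 1) % len(old)]) / 2.0

state = [0.0, 0.0, 1.0, 0.0, 0.0]
for _ in range(3):
    state = parallel_step(state, avg_neighbours)
print(state)                           # identical to the "all at once" result
```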

Sorry, I seem not to be able to resist hijacking. If someone can supply a counter-argument, please start another thread. No more from me about this…at least, not in this thread.

At what level are millions of things being done in parallel in the brain? Sure, millions of neurons are firing at once, but millions of logic gates inside a processor are firing at once also - not to mention systems with hundreds of processors going at once in an Enterprise server.

As jkramer3 said, you do not need a massively parallel architecture to simulate one. That said, parallel simulation is done all the time, and it is not that hard to distribute; the major problem is partitioning the thing you are simulating to minimize inter-processor traffic. Not that we have enough computing power to do this today, but it is far more likely that we will in a reasonable amount of time than that we are going to figure out how to implement intelligence in software.
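
A toy picture of the partitioning problem (the numbers and wiring are made up, nothing to do with any real product): whatever crosses a partition boundary becomes inter-processor traffic, so an assignment that respects the locality of the wiring wins big over a naive one.

```python
import random

random.seed(1)
n_units, n_workers = 1000, 8

# Mostly-local wiring: each unit connects to a handful of nearby units.
edges = [(i, (i + random.randint(1, 10)) % n_units)
         for i in range(n_units) for _ in range(5)]

def cross_traffic(assignment):
    """Edges whose endpoints land on different processors = messages to send."""
    return sum(1 for a, b in edges if assignment[a] != assignment[b])

round_robin = [i % n_workers for i in range(n_units)]          # ignores locality
blocks = [i * n_workers // n_units for i in range(n_units)]    # contiguous chunks
print("round robin crossings:", cross_traffic(round_robin))
print("block crossings      :", cross_traffic(blocks))
```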

Texas

I personally don’t see the problem as one of hardware so much as algorithms. I don’t think all those people in the ’50s, ’70s, and ’90s were really that far off. A faster computer makes a better AI, but it really doesn’t jump the threshold from non-AI to AI, which is the problem we are stuck at. Let me put it in terms of Bloom’s taxonomy (is that still used? I’m not an educational theorist, so I have no idea whether Bloom is still considered credible; I learned this stuff about 15 years ago).

First level: Knowledge. Computers have this knocked out of the park; storing knowledge as data is what they do. Second level: Comprehension. Algorithmically, we really only have the basics down here. Comprehension requires an understanding of what the knowledge means. With the business intelligence software improvements of the last couple of years, and the ongoing push over the last couple of decades to turn ‘data’ into ‘information’, there is finally what I consider a good step toward AI outside of academia. Data mining processes and algorithms are a major step in turning data into information, which I consider a solid building block toward computers achieving Bloom’s Comprehension level.
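
For whatever it’s worth, here’s a deliberately tiny example of what I mean by turning data into information: raw transactions (data) get summarized into which items tend to go together (information), something a program could then act on. The item names and support threshold are invented purely for illustration.

```python
from collections import Counter
from itertools import combinations

# Raw "data": what each customer bought.
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"beer", "bread"},
    {"butter", "milk"},
    {"bread", "butter", "milk"},
]

# Count how often each pair of items shows up together.
pair_counts = Counter()
for basket in transactions:
    pair_counts.update(combinations(sorted(basket), 2))

# "Information": pairs bought together in at least 40% of transactions.
min_support = 0.4 * len(transactions)
for pair, count in pair_counts.most_common():
    if count >= min_support:
        print(pair, count)
```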

The third Bloom level is Application. I personally think you can map this to the Knowledge Management push: turning information into knowledge. Taking the information gleaned from the company data and applying the concrete objectivity of that data to the variables of the current situation passes for Application (and shades a bit into Analysis) in my mind. The big problem is that the KM software out there doesn’t do much of this parsing and applying itself; it is simply an assistant that helps a human mind keep the stuff organized while the mind does the real applying, so it doesn’t achieve the level very substantially.

And forget about the higher levels of Bloom. Outside of fairly contrived and very specific academic experiments, we haven’t touched the higher levels of intelligence in any useful way.

The bigger MPP systems are a bit of a help, but in my mind a faster computer is like bolting on bigger and bigger engines while our transmission is still a rubber band. We have a lot of work to do on thought-imitating algorithms, and we are probably a major breakthrough epiphany or two away from getting a nice clean AI like the ones in the sci-fi books.

Those epiphanies could have come 25 years ago; they may come tomorrow, in 25 or 250 years, or never.

So what you’re saying is, we’re about ten years away? :wink:

I may have stated that poorly. I should have said that it will probably take something different from the kind of parallelism we are using now. I don’t think the “parallelism” in brains is coordinated in any way that applies to how we program deterministic computers.

Interesting. I’d never heard of this before. (Yay - fighting ignorance!)

Here are a couple of links:
Cognitive only category of Bloom
Three categories of Bloom

Going by the first link, I’d disagree with you about “none of the higher levels being touched”. (Although, naturally, not enough to claim AI success.) Lots of research fits under “Analysis”, from any kind of machine learning to neural network analysis. Any cognitive architecture worth its salt has some component that satisfies part of “Synthesis”, such as generalizing data (e.g., “chunking” in SOAR). In addition, that type of rule-based system can also supply part of “Evaluation” by giving a trace of the rules being used.
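
To make that last point concrete, here’s a toy forward-chaining sketch (my own invention, nothing like the real SOAR or ACT-R machinery) showing how a rule-based system can hand back a trace of the rules it actually used, which is the kind of self-explanation I had in mind under “Evaluation”.

```python
# Working memory and a couple of made-up rules: (name, conditions, conclusion).
facts = {"has_fur", "gives_milk"}
rules = [
    ("mammal-rule", {"has_fur", "gives_milk"}, "is_mammal"),
    ("pet-rule",    {"is_mammal", "is_small"}, "possible_pet"),
]

trace = []
changed = True
while changed:                      # keep firing rules until nothing new is added
    changed = False
    for name, conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            trace.append(name)      # remember *why* we believe the new fact
            changed = True

print("facts:", facts)
print("trace:", trace)              # the rules actually used, i.e. an explanation
```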

Included in the second link are “Affective” and “Psychomotor” categories. Fascinating stuff - the AIBO has a rudimentary attempt at affect (first link I found; I’m sure there are others and probably better ones). For more serious stuff, check the U. of Birmingham’s Cognition and Affect project. Psychomotor…well, I’m not all that in the know about that, although I think one of the explicit goals of the ACT-R architecture (linked to in a previous post) was to integrate sensors with cognition (and there’s certainly more to it than that).

Any further recommendations for related topics?

I just started a thread on the parallel vs. sequential debate here, just to avoid further hijacking.