What is the state of AI?

That is interesting. I’ve always viewed synthesis and evaluation as being keyed mostly toward creating entirely new subjects, or judging scales out of clues from previously existing subjects. I’m going to have to do some research on what ‘generalize from given facts’ means in that context, as opposed to Comprehension-level summations. Transforming data doesn’t hit synthesizing in my mind, but thinking about it, we are probably closer than I said at first.

Evaluation is a complex one. Personally, I would put rules-based systems more in the Analysis-level ‘compare’ task, but sequentially implemented. To me, evaluation implies self-determining an appropriate judgement criterion that has not been previously used, then applying it against the element to be judged. I’m not aware of any computer system near that level, but I might be ignorant of one.

Well, as far as I know there isn’t any generalized usage of Bloom in AI. I just used it as a way to equate machine learning to what we understand about learning in general, but there are other theories of learning. Piaget was another big one when I learned this stuff, but as I said, it was 15 years ago, so it might be discredited by now.

And I am a bit of a bigot when it comes to computers under research and experimentation. I think a lot of big ideas get spread to attract funding, when the computer is really a very contrived system built to look flashy and bring in more money. When a computer hits the real world and does a functional job, only then do I consider ‘us’ at that level of technology.

Jeff Hawkins has a great new book out called On Intelligence. I have only just started it and likely won’t finish it for some time, but the first few chapters are fascinating. I think he would agree with jkramer3 that Kurzweil is off track.

From the inside front jacket:

If you are interested, I recommend you try it.

Here’s an interesting website on Knowledge Engineering:

http://www.netnam.vn/unescocourse/knowlegde/know_frm.htm

Just popping in to say that I haven’t abandoned the thread. But it’s a busy evening, so I don’t have time to explore all of the links. I do want to look at AIBO, since when I think of AI I don’t necessarily think of human intelligence. It seems to me that there are robots that have the “intelligence” of an ant. Does that mean that ants are merely clockwork toys? Or does it mean something significant about computers and robotics? I don’t know.

Anyway, I’ll check back later…

From a couple years back, I remember the AIBO had three measures that simulated affect. Oh, drat…hold on a minute…

Evidently, they’ve gotten more sophisticated. According to this, there are 10 different types of “emotion”. I’d expect it’s pretty interesting to play with…

About the cockroach robots - basically, I think what you’re talking about can be summed up under the heading “behaviour-based robotics”. The seminal work is a 1986 paper by Rodney Brooks, titled “A robust layered control system for a mobile robot” (sorry, I couldn’t find a link to the paper itself), which introduced the subsumption architecture. Before that (I think…may have been concurrent) was Braitenberg’s Vehicles. Plenty of others; I don’t remember who it was that developed a robot that navigated like a rat (topological, as opposed to spatial, navigation), for instance. Google on “robot ethology” for lots more fun stuff.

These simple robots exhibited astounding capabilities and shook up the AI world. (Perhaps I’m overstating, but that’s my perception of the situation.) However, the trick is to imbue them with deliberative capacities - no mean feat, that.
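If it helps to make the idea concrete, here’s a rough Python sketch of a subsumption-style controller (the sensor reading and behaviour names are just made up for illustration; this isn’t Brooks’ actual architecture, only its flavour):

```python
# Toy subsumption-style layered controller.
# Layers are ordered by priority; the first one that wants to act
# suppresses everything below it.

def avoid_obstacle(sensors):
    """High-priority layer: veer away if something is too close."""
    if sensors["front_distance"] < 0.3:
        return "turn_left"
    return None  # defer to lower layers

def wander(sensors):
    """Low-priority layer: just keep moving."""
    return "forward"

LAYERS = [avoid_obstacle, wander]  # highest priority first

def control_step(sensors):
    # Pick the action of the highest-priority layer that responds.
    for behaviour in LAYERS:
        action = behaviour(sensors)
        if action is not None:
            return action

print(control_step({"front_distance": 0.2}))  # -> turn_left
print(control_step({"front_distance": 2.0}))  # -> forward
```

The point is that there’s no world model and no planner in there at all, which is exactly what made these robots so surprising.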

If you happen to be looking to get lost for hours reading research papers, might I suggest citeseer? A tremendous resource for computer-related research. Just type in some keywords and go…

Coming from a psychology perspective, I think it really is a matter of programming and engineering.

Humans are self-programming (implicit learning) for a great many tasks. We learn language not by being taught language, but by being immersed in it. We learn various motor and social skills only by watching them, absorbing them and then attempting them ourselves.

Humans have self-knowledge. We can build off of existing knowledge and then extend one body of knowledge to integrate it with another. I learn all about history. Then I learn all about sociology and economics. I can integrate all of that into “The Rise and Fall of the Roman Empire, Vol. 1”.

Humans have parallel processing not just for a few tasks, but for tons of tasks. We breathe, blink, swallow, digest, and keep our hearts beating and blood circulating without even thinking about it. And we can do stuff with our hands, feet, mouths and faces at the same time for different tasks. See “Singing Drummers” for examples. All the while the Singing Drummer might be thinking: Crap, I hope I have my ’60s retro montage alimony paid off now!

I really don’t think AI is anywhere near 10 years away. I give AI nearly 200 years at the least without serious breakthroughs in brain research and revolutions in circuit engineering. I think the first AIs will be scale replicas of the brain made out of chips and circuits. Also, I don’t think we have the programming skills to make the simplest “sentient” program. Without figuring out how to make “metal synapses and brain cells” and “self-knowing, self-learning, self-actualized” programs, we are far off.

Of course, that is IMHO from a BA in psychology perspective. I would love to be proven wrong.

Ada Augusta and Charles Babbage were thinking about AI in 1840 (concluding it would never be possible), and their mechanical Analytical Machine was no different, in principle, from modern “super” computers – come to think of it, the first computers in Denmark in the 1950s were naively called artificial brains.

First you have to define what you mean by AI. If the capability to solve complicated mathematical problems very fast will do, a simple chess computer seems to fit the description – certainly any decent chess program can beat 99.9% of the human population any day of the week. However, if what you’re really after is Artificial Sentience, I personally have come to agree with Ada Augusta.
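For what it’s worth, under the hood a chess engine is basically game-tree search plus a scoring function, nothing more mysterious. Here’s a stripped-down sketch of the minimax idea (toy hand-built tree, not real chess code):

```python
def minimax(node, maximizing):
    """node is either a numeric leaf score or a list of child nodes."""
    if isinstance(node, (int, float)):
        return node
    children = (minimax(child, not maximizing) for child in node)
    return max(children) if maximizing else min(children)

# Tiny hand-made game tree: maximizer picks a branch, minimizer replies.
tree = [[3, 5], [2, 9], [0, 1]]
print(minimax(tree, maximizing=True))  # -> 3, the best guaranteed outcome
```

A real program adds a position evaluator and alpha-beta pruning, but it’s brute search all the way down – nothing that looks like sentience.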

Can you tell me why they came to that conclusion? Just curious - I’ve never seen any discussion of Babbage’s views on AI (or whatever it would’ve been called back then).

Most laymen don’t appreciate how much parallel processing of this type (which used to be in mainframes only) is in your standard PC. Writing to a disk, printing, reading from a CD, all involve the main CPU sending requests to processors on the peripherals that do most of the work. Consider how much smarter your printer is now than it was five years ago. I guess it isn’t surprising that this sort of distributed processing architecture arises naturally.

Interesting. I think an underlying issue is that there are two roads that people take: software, implementing algorithms on a general purpose computer, and hardware, mimicking the brain in some way. Most standard AI is of the first class, neural nets are of the second. Simulating hardware still falls into the second class. I don’t think anyone can or should build hardware until we get a good simulation model working - I assure you, it is much easier. The mistake I see a lot is assuming that more raw computing power helps on the software side.

On the software side, I don’t think our programming skills are the problem, but rather we don’t know what program we should be writing. Software with incomplete specs never works. It is easy for a theorist to wave his or her hands, but when you are busy writing code, you need to know exactly how to implement every detail. If it isn’t in the spec you guess, and most of the time you guess wrong.

I am pretty certain that we will have the computing power of the brain available in 20 years - but that does not mean we’ll have the software to go with it. That could be 200 years for all I know. I think that with better non-invasive probing techniques, and this computing power, that we should be able to simulate at least a mouse by then, if not a human.

Sorry, ignorance checking in here, but if I may ask my impertinent question…

What niche in the world is artificial intelligence going to fill? What is the purpose behind it at all?

Something that came up in the thread I started over in Great Debates that might be of interest to those reading this thread - the GRACE project.

I have no affiliation with the project other than recognizing it as a current “standard bar” in robotics/AI.

This is not an attempt to be snarky, but…exactly. In explanation, I offer you the sig line I currently use for my email:

Research does not always have obvious immediate applications. For some people, creating a sentient AI is like going to the moon or climbing Mount Everest. They want to do it just to show it can be done.

Others see applications in AI-related techniques; I suppose this would be more machine learning than AI. There are certain problems that humans cannot solve efficiently because of the complexity or sheer volume of information involved. The goal is to use automation to help.

I’m currently a CS/AI undergraduate.

AI already has a multitude of purposes, from expert systems to theorem provers to potentially spotting cancerous cells in amongst millions of healthy cells.

Really, all the AI I’ve been exposed to in my course has been nothing more than search problems. We have seen some Hopfield and neural nets and taken a glimpse at the philosophy behind it all (it was pretty disheartening to find one of our first lectures discussing why AI is potentially not possible).
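Since Hopfield nets came up, here’s a minimal sketch of one just to show the flavour – store a couple of ±1 patterns with the Hebbian rule and recall one from a corrupted cue (made-up toy patterns, not coursework code):

```python
import numpy as np

# Two made-up binary (+1/-1) patterns to store.
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1],
    [ 1,  1,  1, -1, -1, -1],
])

n = patterns.shape[1]
W = np.zeros((n, n))
for p in patterns:          # Hebbian learning: W += outer(p, p)
    W += np.outer(p, p)
np.fill_diagonal(W, 0)      # no self-connections

def recall(cue, steps=10):
    state = cue.astype(float)
    for _ in range(steps):  # synchronous updates, for brevity
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

noisy = np.array([1, -1, 1, -1, 1, 1])  # first pattern with one bit flipped
print(recall(noisy))                    # settles back on [1 -1 1 -1 1 -1]
```

Even this is really just energy minimisation – search again, in a different guise – which rather supports the “nothing more than search problems” impression.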

From Ada Augusta’s notes (so called – it was highly unusual at that time for women to write scientific material intended for the general public (i.e. men) – the notes were also merely signed A.A.L.) to a paper by the Italian Menabrea – the notes were actually longer and much more interesting than the actual paper. There’s some discussion of how much was written by Ada Augusta and how much was actually written by Charles Babbage, though I tend to hold the view that Ada Augusta was the main author, especially of the less technical aspects. It has been some years since I read about this; looking through the books I found this:
Ada Augusta’s note ’G’ (which also includes the world’s first published computer program)

[I changed the original italics to hideous bolds since the quote forced the whole thing into italics] In which I think she comes out squarely against at least the Analytical Machine ever achieving intelligence of its own (though there is no consensus on the precise meaning of this part of the notes). Also, in note ‘A’ she warns the readers against having false expectations about what the Analytical Machine could ever become. I thought I had seen a more forceful denial of Artificial Intelligence by her – perhaps in one of her letters, but I can’t now find it. (Though it’s interesting, of a sort, to note that Mary Shelley, author of Frankenstein, started the book as a wager during a stormy (literally) night in an Italian castle with Lord Byron – father of Ada Augusta.)

Why did she or they come to that conclusion? First, the notes served several purposes; one was as sales material to the British government, which had already invested considerable sums in Babbage’s ventures and which he wanted to convince to invest further. Ada Augusta didn’t want Babbage to come off as a scientific charlatan and dreamer by making preposterous claims. Secondly, I should say the mere idea of a collection of winches, balls, wooden cards and what have you becoming sentient and (self-)intelligent is by itself preposterous, on a par with a bicycle becoming sentient. Perhaps the ready willingness to accept that computers can become sentient stems from the fact that their inner workings are more hidden away and complicated, though I’ll claim again that there’s nothing different, in principle, between a mechanical computer and an electric computer – or von Neumann’s self-replicating cellular automaton on a pad of paper, for that matter.

(I named one of my daughters Ada Augusta – what kind of nerd does that make me?!)

“Of what use is a newborn baby?”

Thank you for the information.

Ah…if only more AI researchers would show the same restraint.

Agreed. Nothing different in principle. Although I feel the need to state that I don’t see anything different in principle from biological mechanisms either. Mechanical, chemical, electrical…all of a piece. IMHO, anyway. Of course, I’m a firm believer that sentience is Turing computable (or can at least be simulated as such). But then, I also think that functionalism yields the only possible objective test for sentience, as lacking as that may be.

<christopher robin>Silly Rune. That makes you the best kind.</christopher robin>

This was really more my thought, as my field of study was philosophy. I fail to see where the two disciplines would mesh in agreement that the ideas behind AI are even remotely feasible. Certainly you would have a difficult time creating something non-human that could be considered cognizant or “self-aware”. We by and large don’t even grant these qualities to animals, let alone conceive of granting them to a machine. Artificial Intelligence seems at best a misnomer, a poor application of the term. To agree that a machine possesses any “intelligence” over and above that which a sentient being programmed into its operations would grant too many other inferences about “life” as we know it.

We humans don’t possess any more intelligence than what exists in our hardware. What’s the difference between us and a machine with similar programming?

We may not have any more intelligence than our hardware, but no one knows exactly how our hardware goes from electro-biological impulses to “Mozart’s 5th Symphony of Compton” or Einstein’s Theory of Real Estate. A computer, no matter how clever its programming, does not understand a simple concept we humans know very well: “I know this and can manipulate it.” The most clever computer we have does not actually know it knows anything.

Secondly, a smaller biological computer, say Dog, still has a certain amount of self-awareness that even the most complex computer does not have. Also, that Dog can learn new things without the need of teaching or programming. A Pup just looks at Momma Dog and, after watching long enough, self-programs to hunting behavior. A larger biological computer, Human, is way beyond said Dog. Now the Human computer self-programs: language, social behavior, motor behavior, thinking behavior, self-awareness behavior, etc. Minimal guidance during any given period can drastically change, or not even remotely change, the programming, depending on the brain of said Human.

That’s where I think the problem lies. No actual awareness + No implicit learning = No AI.