Artificial Intelligence for Beginners

I believe that Turing proposed that a machine passing the test could be considered intelligent; I’m not sure that consciousness is the same thing, or whether he intended to mean that.

Take cover!..

I meant to mention this in my earlier post.

I think that the bottom-up, self-assembling approach would require parameters or an environment to be defined so that the AI’s “fitness” could be measured against certain goals. “Mutations” that are harmful or inappropriate would be rejected (die). However, since we wouldn’t know in advance which mutations would be beneficial, we would want some examples of benign and possibly even moderately harmful changes retained. Retaining these would allow their incorporation into a future generation of the AI, where they may prove to be of some unforeseen benefit.
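The retention scheme described above can be sketched as a toy genetic algorithm. Everything here (the bit-string genomes, the fitness target, the population and reserve sizes) is invented purely for illustration; a real system would need a far richer environment:

```python
import random

# Toy sketch: evolve bit-string "genomes" against a fitness target, but keep
# a small reserve of discarded variants instead of letting every "harmful"
# mutation die outright.
TARGET = [1] * 16  # the "environment" that fitness is measured against

def fitness(genome):
    # Count how many positions match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(generations=50, pop_size=30, reserve_size=5):
    random.seed(0)  # deterministic for the example
    pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(pop_size)]
    reserve = []  # benign / mildly harmful variants retained for later reuse
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        best = pop[: pop_size // 2]
        # Retain a few of the rejected variants rather than discarding all:
        reserve = (pop[pop_size // 2 :] + reserve)[:reserve_size]
        # Next generation: mutated copies of the best, plus the reserve.
        pop = [mutate(g) for g in best for _ in (0, 1)][: pop_size - reserve_size]
        pop += reserve
    return max(pop, key=fitness)

print(fitness(evolve()))  # typically close to 16
```

The `reserve` list plays the role of the retained “mildly harmful” mutations: its members get re-injected into each generation instead of being thrown away, so they stay available for recombination later.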

I’m thinking AI sex is going to be one of the first things we shoot for.

[facetious]
As I am involved in the field myself, I, being the true AI scientist that I am, feel obligated to berate you with my hideously specific theories until you cower like the ignorant fool you are. And then make you buy my book.
[/facetious]

God I hate that dry humor doesn’t work on the internet.

Okay, here’s how I see it. I will assume that AI will spring from effective and thorough cortical simulations based on biological systems. As such, AI minds would be able to be copied from one body to another, but not ‘programmed’ as such. Rather, they would develop on their own from scratch. Guidelines would be possible through the use of emotions (detailed later), but not hard and fast programs. Kinda like kids do. Everything I say here, BTW, is merely my own honest opinion, based on what I know. If anyone thinks/knows otherwise, feel free to correct this lesser light.

Here are my answers to the questions where my opinions seem to differ:

  1. Without a sensible definition of what is meant by intelligence, what do you mean by “artificial intelligence”?

I would consider anything that can drive a car, hold an adult conversation, and play Go all reasonably well to be intelligent. The Turing test doesn’t really cover EVERYTHING an intelligence needs to make it in the world.

  2. Emotion (Y/N)?

Y. In your brain, emotion is largely the fiefdom of the limbic system. The limbic system, IMO, seems to be a correlation of learned associations (ex: that stovetops burn) and evolutionary impulses (ex: fear, lust) that combine to determine which potential action you’re currently considering gets the go-ahead. I see no reason why this couldn’t be implemented with some success in a cortical simulation. It would be necessary for robotic incentive, and would be one of the few sources of control over the robot’s behavior. For example, to avoid robots overthrowing their human oppressors, it may be best not to include things like fear and anger in the AI psyche.
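The gating idea above can be sketched in a few lines. All the action names and weights here are invented placeholders; the point is only the shape of the computation (learned association + evolutionary impulse, winner gets the go-ahead):

```python
# Each candidate action is scored as (learned association) + (innate impulse),
# and the highest-scoring action is selected. All values are made up.
learned = {"touch_stove": -0.9, "eat_food": 0.6, "explore": 0.2}
innate = {"touch_stove": 0.0, "eat_food": 0.3, "explore": 0.4}

def choose_action(candidates):
    # Winner-take-all over the combined score.
    return max(candidates, key=lambda a: learned.get(a, 0.0) + innate.get(a, 0.0))

print(choose_action(["touch_stove", "eat_food", "explore"]))  # → eat_food
```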

"Can you envisage artificial curiosity, artificial emotion, artificial imagination? My guess is that intelligence (or perhaps I really mean consciousness) requires a sense of self, and I’m not sure that a sense of self can exist without a biological body."

Yes, and no.

Yes - Curiosity and emotion come from the limbic system, while imagination is pretty much part and parcel of the whole cortex, so it would be hard to do away with it and still get a functioning entity.

No - There is evidence that consciousness may be, at least in certain circumstances, an illusion added in afterwards to preserve continuity. Experiments comparing motor neuron activation with the conscious decision to act show that the neuron activation occurred some time before the moment the individual registered as the conscious decision. In short, you were already moving before you decided to.

“I suppose it is possible to imagine that one day we grow a body in a lab and put the artificial intelligence in it, but I think what really interests me about this debate is not SF speculation, but the fact that it forces you to identify the characteristics that make us human. If we can build abilities X Y and Z into a computer, what abilities cannot be built in, and why?”

Keep in mind that as humans we are brutal, self serving and very, very rebellious. That’s what makes us so damn good at evolution. I think the question at hand is not what we can and cannot do, as we can probably eventually do anything, but rather what we should and should not do.

[small hijack]
As a side note to the other onlookers, what do you think the odds are of military top-secretness descending the instant true AI seems close to realization? I find it far too easy to imagine the top military echelon dreaming at night about the possibility of robotic supersoldiers that have no fear, need no morale, don’t count as casualties, and can self-destruct if captured by the enemy. Seems to me that the military of whatever country AI is created in would have a very keen interest in keeping it to as few eyes as possible.
[/small hijack]

A professor in computer systems was giving a test to his students. During the test, he kept hearing a “pling” sound. He investigated further and found a student at the back flipping a coin and then making a mark on his test paper. When the professor asked what he was doing, the student replied,

“The questions are all true or false. I flip the coin and if it’s heads, I mark True. If it’s tails, I mark False.”

The professor shook his head but didn’t argue with the student and left him to his own devices. After some time, the professor announced,

“You have about 5 minutes before handing in your papers!”

Just then, a furious set of “plinging” was heard. The professor walked to the back and asked the student what the hell was going on now. The student replied,

“What do you think? I’m checking my answers!”

THAT’s artificial intelligence.

with regard to the Turing test, Turing’s agenda was to get people to test AI with the same criteria with which we test human intelligence. that is, if they appear to be as intelligent as ourselves, we consider them intelligent. the same should be true for the intelligence of an artifact.

some time back, i started a thread about John Searle’s Chinese Room argument. it is more or less this: a man sits inside a room and receives a message in Chinese. he takes this message, looks up the symbols on a computer or in a book or some sort of database, finds the appropriate response, writes it down and returns it. the man never knows what the symbols mean. Searle’s claim is that the system is an unintelligent system that passed the Turing test; that since no part of the system knew Chinese, the whole thing could not be said to intelligently communicate in Chinese. my (obvious) response was that if a system lacking intelligent parts could not produce intelligence, then the human brain could not produce intelligence either, and Searle’s definition of intelligence is too strict. also (not so obvious), he invalidly applies the Turing test by imagining we could look inside and know how the system works.
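The room can be sketched as a literal lookup table. The entries below are invented placeholders; the point is that the code matches symbols it never “understands”:

```python
# Toy version of the room: a rulebook maps incoming symbol strings to canned
# replies. The "operator" matches symbols without any grasp of their meaning.
rulebook = {
    "你好": "你好!",           # greeting -> greeting
    "你会说中文吗?": "会。",    # "do you speak Chinese?" -> "yes."
}

def room(message):
    # Look up the message; fall back to a stock symbol if no rule matches.
    return rulebook.get(message, "请再说一遍。")  # "please say that again."

print(room("你好"))  # → 你好!
```

Whether scaling this up (to a table vastly too large to build in practice) would ever constitute understanding is exactly what the thread is arguing about.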

what it seems to me is that whether or not AI is possible depends on what you consider: a) artificial, and b) intelligent. to me, there is no reason to suspect that the human mind is in any fundamental way different from something that could be produced with circuitry. so far, the human brain has on the order of 10^8 times more computing power than any silicon-based computer. also, massively parallel computing sidesteps some of the practical limits of a serial digital computer, though not the theoretical limits of a Turing machine.

2 more things (sorry this is so long).

actually, what you described is basically exactly what the goals of research in artificial neural networks are. actual parallel computing is certainly different from the simulated parallelism of most digital computers, but most machine learning techniques nowadays focus more on finding something that works than on finding out precisely how it works.

also, someone said something about programs finding circuits that were more efficient than the theoretical limits. as far as i know, that’s not entirely true. genetic algorithms were used to find circuits more efficient (and some indeed with completely novel uses) than had ever previously been found, and in a very short time. if i’m not mistaken, some machines (maybe programs) even hold patents because of their results. there was a Scientific American article on it about 2 months ago (Feb., i think).

I’d prefer to argue the opposite - that intelligence is an outgrowth of emotion. Look at human development - you form emotional bonds and emotional understanding long before you are capable of reasoning or abstract thinking. There’s also lots of evidence that children who are emotionally deprived do not fulfil their intellectual potential, and there’s evidence from neuropsychology that if the emotional parts of the brain are damaged, reasoning falls apart.

I don’t think that saying we are ‘simply’ biological machines helps a lot in understanding consciousness (whatever the brain is, it’s not simple), though I take your point that getting a working AI does not necessarily mean it has to be a conscious intelligence. Which brings me back to the problem of defining intelligence…are you talking about creating an efficient, self organising system, or a system which is aware of itself and its abilities? Do you really call it “intelligent” if it has no awareness of itself?

If you look at how we attribute the notion of intelligence to non-human animals, we tend to assume that the more an animal appears to demonstrate awareness of itself, and emotional attachments to others of its kind, the more intelligence we credit it with. We tend to assume that a cat is more intelligent than a spider, though the spider is just as efficient a predator. This may be wrong, or due to anthropomorphism, but it shows that ideas about intelligence are bound up with ideas about self-awareness.

I’ve only played with a couple of neural net programs and read about a few more, but they all seemed disappointingly rigid to me; each node had weighted connections to a fixed number of adjacent nodes. I would prefer something a bit more free-form and self-organising with regard to which neuron is connected to which, and how.
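For concreteness, here is a minimal sketch of the fixed wiring being described: every input node connects to every output node through a weight matrix whose shape never changes, so learning can only adjust the values, never the topology. The sizes and inputs are arbitrary:

```python
import math
import random

# A minimal fixed-topology layer: the wiring (who connects to whom) is frozen
# at construction; only the weight values could ever change.
random.seed(0)
n_in, n_out = 3, 2
weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def forward(x):
    # Weighted sum through the fixed connections, squashed by a sigmoid.
    return [1 / (1 + math.exp(-sum(w * xi for w, xi in zip(row, x))))
            for row in weights]

out = forward([1.0, 0.5, -0.5])
print(len(out))  # one activation per output node
```

A “free-form” alternative would have to treat the connection list itself as mutable state, growing and pruning edges during learning, which is precisely what standard backprop-era nets do not do.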

Interesting post - thanks.

Just some random thoughts… Your definition of intelligence (apart from driving a car!) is quite demanding - all non-human animals are ruled out for a start. Holding an adult conversation and playing a game require consciousness, don’t they? So how does this square with your last point about consciousness being an optional extra? Could you hold a conversation with something that had no idea of what it was doing or even that there was an “it” in question? Can “it” be described as intelligent if it isn’t conscious?

I’ve got a feeling that this debate keeps getting bogged down because of the problem of defining “intelligence” - perhaps it would be better to talk about “artificial minds”?

BTW - were you referring to the Libet free-will experiment? This scared the hell out of me when I read about it … just wish someone would hurry up and come up with some contradictory evidence!

what’s so scary about a lack of “free will”?

what is there to cause belief in “free will”, and what makes a deterministic universe incompatible with our concept that we cause our own actions?

also, there is more than one experiment: the one already spoken of, and another in which neurons were stimulated to direct motion in a muscle, and the patient thought that he (she?) had voluntarily moved the muscle.

Emotion could perhaps be modelled using various feedback systems -
but you would never know whether an AI was conscious, any more than you know that the person next door is conscious.


Sci-fi worldbuilding at
http://www.orionsarm.com/main.html

No one knows what consciousness is, or even whether it exists at all. The same could be true of free will.

That’s why AI researchers quite rightly focus on Turing test equivalents, ignoring nebulous concepts like consciousness and intelligence.

Some argue that there is no such thing as consciousness: we are conscious of things; it is a verb, not a noun. Talking about consciousness as if it were an object, or saying that something possesses consciousness, is to misuse the word. I can’t remember the details, but I studied this position at uni.

Plenty of people know whether “free will” exists – or at least they know whether one specific interpretation of that phrase corresponds with reality.

Those people are scientists, and the answer is ‘no’.

Have a pleasant day.

Probably because standard neural nets (backprop, etc.) ARE rigid. Depressingly so. No one knows just how complex a model we need, but plasticity greater than that provided by backprop is pretty much universally agreed upon as being required.

Well, by “intelligence” I meant “sentient” or “conscious” or “human-like intelligence” - basically, something of intelligence equal to a human’s. And object avoidance while setting and achieving self-directed goals (ex: driving a car) is extremely demanding at this point in time. Basically, I said I wanted the end goals of three of the biggest approaches to AI out there. One step at a time =).

No, they don’t. There is still one hell of a debate about what consciousness IS, exactly, but most would agree that you don’t need it to process the current state of the game board, decide what moves you can make, have one picked by the association areas and limbic system as the “best” move, and do it. Nor is consciousness necessary to decide that uttering sounds that you would perceive as “you suck” makes you happy. As to what consciousness IS, no one really knows anything for sure. I’d say it’s just the association of a sense of self with the availability of sensory input to corroborate it, but that’s just me. Throw a brick, you’ll find someone who disagrees.

The devil is in the details. If you perceive consciousness as some metaphysical entity outside the body, or as some discrete part of the brain, then ‘no’ - your decision is made for you by bits of brain interacting. But if you view consciousness as distributed across the bits of brain themselves, as I do, then ‘yes’ - you are the bits of brain, so it’s your decision to act. When it comes to intelligence and neural networks, there are no hard and fast answers, at least not at this point.

Have a pleasant day.

it is a shame that they can be permitted no pride in discovering this; it was simply going to happen anyway, even though they are under the illusion that it is their own work.

Far too many individuals use the term “free will” to refer to a mystical, magical aspect of human thought which is somehow free from all restrictions, attributes, and properties, thus allowing it to explain any and all aspects of cognition.

And yes, the course of time is set. But it would not have happened without the scientists’ work. The fact that the work of the scientists is inevitable is not the point.

He was being facetious, TVAA.

What aspects of cognition couldn’t be explained by free will? Admittedly, you can’t fly, talk to the dead, or psychically communicate with your pet dog, but anything you think is thought of your own free will - if you define “your” right.

Unless you use “free will” to mean “cannot be predicted.” Then no, assuming you have some omniscient being watching you, he would probably know you better than you know yourself, and tell you how you’ll act before you can even think it. But that doesn’t mean it wasn’t your choice.

Ah. I’ve never thought of it that way; free will (for me) means that I can decide that I will eat cornflakes for breakfast, but then change my mind and make toast, only to let it go cold and have the cornflakes anyway.

Of course it could be argued that the cornflakes were inevitable and all that messing about was just my brain’s way of reassuring me that I am a real person. I believe that the brain does this sometimes, but I believe it is absurd to suggest that our entire lives are like it, otherwise these very words I’m typing are inevitable and I only have the illusion that I’m composing them myself.

That seems to be a very reasonable interpretation of “free will”. It’s a shame it’s not the standard one.

And yes, many individuals use “free will” to mean “cannot be predicted, explained, or reasoned about”. When one tries to point out the problems associated with this interpretation, they point out that it excludes reason. :rolleyes:

[hijack]
As a small survey, would anyone here care to hear my idea for a self-governing robotic society that would (ideally) prevent robotic uprisings and the eventual destruction of the human race? I’d volunteer it as a thread, but it’s very long and I wouldn’t want to waste the effort of typing it in if there isn’t an audience for it.
[/hijack]