Artificial Intelligence for Beginners

I read the thread about whether a sentient computer would develop a belief in God with some interest, but I confess I found some of the contributions hard to follow because they assume a fair bit of knowledge of the debate.

I was wondering if we could go back a bit, for the benefit of people like myself (still trying to work out how the human mind works and extremely hazy about computers) to establish where this debate has got to.

Some of the problems I have with the idea of an intelligent computer are these:

  1. Are you talking about creating a computer with a human-type intelligence, or a computer that is intelligent, but not necessarily in a human way? Even if it was intelligent in a non-human way, it would have to know it was intelligent, wouldn’t it, otherwise it would merely be efficient (like current computers!).

  2. Without a sensible definition of what is meant by intelligence, what do you mean by “artificial intelligence”?

  3. Let's imagine that you could “program in” all the major intellectual elements of the human mind - memory, language-ability, reasoning power, working memory, etc, etc … where would the computer get the motivation to use these abilities? Why am I sitting here typing this, rather than learning Japanese or cleaning the toilet? Because, for complex current and historical reasons, I want to. So, how would you get a computer to want to do A rather than B? Intelligence seems to need some sort of emotional force before it can get off the ground - I can’t see how an artificial mind would work at all without first having a sense of priorities, however simple, and “a sense of priorities” implies emotion.

Can you envisage artificial curiosity, artificial emotion, artificial imagination? My guess is that intelligence, (or perhaps I really mean consciousness), requires a sense of self, and I’m not sure that a sense of self can exist without a biological body.

I suppose it is possible to imagine that one day we grow a body in a lab and put the artificial intelligence in it, but I think what really interests me about this debate is not SF speculation, but the fact that it forces you to identify the characteristics that make us human. If we can build abilities X, Y and Z into a computer, what abilities cannot be built in, and why?

Any thoughts will be read with interest…

“1. Are you talking about creating a computer with a human-type intelligence, or a computer that is intelligent, but not necessarily in a human way?”

Yes.

"2. Without a sensible definition of what is meant by intelligence, what do you mean by “artificial intelligence”? "

That’s a very good question. The Turing Test is the best of all currently-proposed standards IMO: it doesn’t require us to offer a definition other than to note that we agree the average human being is intelligent.

(Well, most of you would concede that point…)

"3. Lets imagine that you could “program in” all the major intellectual elements of the human mind - memory, language-ability, reasoning power, working memory, etc, etc … where would the computer get the motivation to use these abilities? Why am I sitting here typing this, rather than learning Japanese or cleaning the toilet? Because, for complex current and historical reasons, I want to. So, how would you get a computer to want to do A rather than B? Intelligence seems to need some sort of emotional force before it can get off the ground - I can’t see how an artificial mind would work at all without first having a sense of priorities, however simple, and “a sense of priorities” implies emotion. "

Yes, the AI will necessarily have priorities and desires. So?

"Can you envisage artificial curiosity, artificial emotion, artificial imagination? My guess is that intelligence, (or perhaps I really mean consciousness), requires a sense of self, and I’m not sure that a sense of self can exist without a biological body. "

Why? It’s just a very old and successful form of nanotech. What’s so special about biology?

“I suppose it is possible to imagine that one day we grow a body in a lab and put the artificial intelligence in it, but I think what really interests me about this debate is not SF speculation, but the fact that it forces you to identify the characteristics that make us human. If we can build abilities X, Y and Z into a computer, what abilities cannot be built in, and why?”

The ability to make a truly edible Pot Noodle. No one can manage that.

So I’m questioning whether it is possible to program in emotion. I suppose I’m talking about emotion the AI is aware of, not unconscious drives. Computers update their memories, etc, but they don’t reflect on this, do they? They don’t say to themselves, “oh, that’s an interesting memory, I’ll have another look at that, it reminds me of X”. An artificial intelligence could have a memory-bank a million times bigger than a brain’s, but if it didn’t know it had those memories, it would still be a million miles away from what I would call intelligence. How would you create spontaneous curiosity, the drive to explore your own mind?
Re Pot Noodles - you are saying that there is nothing in the human brain that could not be reproduced artificially. This is a massive claim and I’m dubious, but ignorant… Er, how far along the road have researchers got here?

The truth is, we know very little about how a future AI would turn out. Personally, I think we tend to anthropomorphise AIs far too much. Practically all of the AIs I have seen in literature fall into one of three categories: good, bad or human.

Another interesting follow up question: If an artificial intelligence starts believing in god, will it go to heaven after it ceases to function?

Yes, it is likely that an artificial intelligence will be able to experience emotions and consciousness…
a really advanced one is likely to be able to adjust these emotional responses itself, and even to run several streams of experience simultaneously, or to experience a single event from several (artificial) emotional viewpoints and then integrate them afterwards…

some of these emotional experiences will be ones modelled on human or animal responses, and others might be totally new.
Ok, this is just speculation until true AIs are actually available,

but it seems wrong to think that consciousness will not exist in a computer that appears to exhibit the characteristics of a conscious entity.
This event (to the Turing Test and Beyond!) may well happen in our lifetimes.


Sci-fi worldbuilding at
http://www.orionsarm.com/main.html

What is artificial? Is there some sort of dividing line between natural and manufactured?

My point is, there is really no distinction between what is natural and what is artificial, other than that we created the artificial one ourselves. There’s no reason to suspect that there is anything that exists in nature that we would be unable to duplicate (given time and the proper resources). Of course, creating human-like AI would require us to understand our own brains better than we currently do. But I see no reason to think that there is any brain functionality that cannot be reproduced artificially. It’s not like biology is some sort of mysterious black magic, after all.

I expect people are getting a little tired of me saying this, but I firmly believe that human attributes are too fluid and complex to ‘program in’ (as a series of If/Then statements, for example).
I think the most promising direction for AI lies in self-organising systems - like the human brain. Rather than trying to create curiosity, emotion, intellect etc. as macroscopic simulations, it should be possible to simulate something like a neuron (complete with all of the behaviours that it might need), then pile a whole bunch of them together and let them organise themselves into a structure in which a mind can develop (as I believe is the case with biological intelligence). This goes a bit beyond neural nets.
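
To make the bottom-up idea concrete, here is a minimal toy sketch in Python (the unit count, learning rate and the Hebbian rule are arbitrary illustrative choices, not claims about how real neurons work): simple units fire, and the only thing "programmed in" is a local rule that strengthens connections between co-active units. Any larger structure has to emerge on its own.

```python
# Toy bottom-up sketch: nothing about the network's eventual structure is
# designed. A pool of simple neuron-like units fires, and a local Hebbian
# rule ("units that fire together wire together") adjusts the connections.
import numpy as np

rng = np.random.default_rng(0)
N = 100                                   # number of units (arbitrary)
weights = rng.normal(0, 0.1, (N, N))      # random initial connectivity
np.fill_diagonal(weights, 0.0)            # no self-connections
state = rng.random(N)                     # initial activity

def step(state, weights, lr=0.01, decay=0.0005):
    """One update: units activate from their inputs, then connections
    between co-active units strengthen and all weights slowly decay."""
    new_state = np.tanh(weights @ state)                       # activation
    weights = weights + lr * np.outer(new_state, new_state)    # Hebbian growth
    weights = weights - decay * weights                         # forgetting
    np.fill_diagonal(weights, 0.0)
    return new_state, weights

for t in range(1000):
    state, weights = step(state, weights)

# Whatever structure the weight matrix now has was not planned by anyone;
# it emerged from the interaction of the units themselves.
print(np.round(weights[:5, :5], 3))
```

The point is not that this toy does anything clever (it doesn't), only that any interesting properties would live in the organisation the system builds for itself, rather than in rules we wrote.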

Anyway, the upshot of creating such an AI (if successful) is that we would have at least as much trouble understanding how it works as we do understanding ourselves; its mind would not be a rigidly planned thing, but rather an emergent artifact of the system in which it resides. The AI would not have a clue how its mind actually works, only that it does (just like us).

Research continues, however, and maybe the top-down approach will be possible at some point, given that we may better understand what really underlies mental processes in the future.
I still believe that the self-organising system has a better chance of producing a ‘real’ mind, rather than something which merely has every outward appearance of one (although there will be no way for us to determine this).

Good OP mrsface.

My gut feel is that the bottom-up approach that Mangetout suggests is the most feasible in anything approaching a reasonable timeframe. I believe that despite ever more powerful processors, the limitation to a designed AI is the “software”. How do we design something which we understand so poorly?

There have been some interesting experiments with molecular computing and self-optimizing electronic circuits. IIRC, some of those evolved circuits have come up with designs which are novel and/or better than the best solutions human engineers had produced.
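
As a very rough illustration of what "self-optimizing" means here (the real hardware-evolution experiments, e.g. the evolved FPGA configurations usually cited, are far more sophisticated than this), a toy Python sketch: represent a circuit as a list of NAND gates, mutate it at random, and keep whichever variant best matches a target truth table. The "design" is discovered rather than written down.

```python
# Bare-bones sketch of evolving a circuit instead of designing it.
# A circuit is a list of NAND gates; each gate reads two earlier signals
# (the two primary inputs or earlier gate outputs). Random mutation plus
# "keep the better one" searches for a circuit matching a target truth
# table -- XOR in this toy case.
import random

N_GATES = 4
TARGET = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}   # XOR truth table

def random_circuit():
    # Gate i may read from signals 0..(1 + i): the 2 inputs plus prior gates.
    return [(random.randrange(2 + i), random.randrange(2 + i))
            for i in range(N_GATES)]

def evaluate(circuit, a, b):
    signals = [a, b]
    for x, y in circuit:
        signals.append(1 - (signals[x] & signals[y]))    # NAND
    return signals[-1]                                    # last gate = output

def fitness(circuit):
    return sum(evaluate(circuit, a, b) == out for (a, b), out in TARGET.items())

def mutate(circuit):
    child = list(circuit)
    i = random.randrange(N_GATES)
    child[i] = (random.randrange(2 + i), random.randrange(2 + i))
    return child

best = random_circuit()
for generation in range(20000):
    candidate = mutate(best)
    if fitness(candidate) >= fitness(best):
        best = candidate
    if fitness(best) == len(TARGET):
        break

print("evolved circuit:", best, "fitness:", fitness(best), "/", len(TARGET))
```

Being a random search it is not guaranteed to succeed on any given run, but it will often stumble onto a working four-NAND XOR without anyone having told it that such a construction exists, which is the flavour of the circuit-evolution results described above.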

Mangetout is a bottom-up, not a top-down, advocate of AI.

(For those who don’t have an advanced degree in Cognitive Psychology and Hyperbolic Topology, [ngghoi], the bottom-up approach refers to the idea that complex cognitive processes would be best generated by allowing them to emerge from more basic elements, while the top-down approach believes in reverse-engineering human cognitive processes and writing programs to simulate them.)

I suppose another question would be: is there any difference between a ‘real’ mind and a system that has every outward appearance of being a ‘real’ mind? - at first glance the Turing test seems to answer ‘no’, but the Turing test is (IIRC) not supposed to be a measure of whether an entity really is sentient, only whether it appears to be.

(By a ‘real’ mind, I suppose I mean one that has a genuine inner sense of identity, like (I assume) humans do)

But what do you mean by “really” sentient?

And do you actually go around verifying that specific human beings are sentient according to whatever standard you have?

May I reply with a question:

If human knowledge one day lets us grow a human brain in a jar, would it be acceptable to say that the brain-in-jar was a form of computer?

Sure, why not?

Ooh, maybe we’ll figure out how to grow human souls in a jar. That’d be even more convenient.

What I mean is possessing an inner life qualitatively similar to that which I enjoy; an inner sense of self and identity; the ability to fully grasp the term ‘me’ in a personal (not abstract) way.

As far as I will ever be able to empirically ascertain, I am the only entity in the universe that has this attribute, but I believe it is not an unreasonable deduction that other members of my species may share it.

Maybe we’ll get that as a bonus with the brain, hmm? :)

(above re: TVAA)

The bottom-up approach sounds more promising to me, because it’s based on how intelligent biological systems are actually organised (e.g. networks of neurons), and doesn’t depend on all sorts of highly debatable assumptions about human psychology which then have to be schematised and fed into the AI (lots of room for major errors here). Also it seems to offer a way round the problem of creating ‘minds’ based on rules and linear thinking, which is not much like flexible intelligent thinking as we know it.

On the other hand you would have to have some idea of what the artificial intelligence was supposed to look like beforehand, or you might end up with something like a wasp’s nest (an efficient biological system whose purpose emerges from many tiny parts, but which is not conscious as far as I know). It would probably make sense to take both approaches and see if they met in the middle…

Going back to the Turing test - if I remember rightly, Turing said that if you are talking to a computer program and cannot tell the difference between it and a person, then you’ve done it - created a conscious AI. This seems reasonable, but is it the only test around? It’s a very tough test because it is highly dependent on sophisticated language ability (also, clever programs can fool people at least for a while, e.g. that ‘therapist’ program which answers every statement with a question). What about a computer that showed some capacity for independent action - getting bored and playing with itself (!)? It wouldn’t need to cope with the social subtleties of language, but it would be showing that it was aware of its own mind - i.e. conscious. But it is this capacity - call it boredom, spontaneity, independence, imagination - that I think will be the most difficult to reproduce whichever route you choose.
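
For what it’s worth, the ‘therapist’ program referred to is presumably ELIZA (or its DOCTOR script). A minimal sketch of the trick, with rules made up here purely for illustration, shows how little understanding is needed to keep a conversation superficially going:

```python
# Minimal ELIZA-style sketch: a few pattern-matching rules turn the
# user's statement back into a question, with no understanding at all.
import re

RULES = [
    (r"i feel (.*)", "Why do you feel {}?"),
    (r"i am (.*)", "How long have you been {}?"),
    (r"i want (.*)", "What would it mean to you if you got {}?"),
    (r"because (.*)", "Is that the real reason?"),
]

def respond(text):
    text = text.lower().strip().rstrip(".!")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Tell me more."          # fallback: deflect when nothing matches

print(respond("I feel bored"))          # -> Why do you feel bored?
print(respond("I want a real mind"))    # -> What would it mean to you if you got a real mind?
print(respond("The weather is nice"))   # -> Tell me more.
```

Which is exactly the worry about conversation-based tests: turning a statement back into a question is pattern-matching, not curiosity.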

This also raises the question of how people become aware of their own minds - many people would say this happens via other minds. Perhaps the AI would have to be given experiences similar to a child’s?

For a list of links with respect to the creation of AI, here are the ones we use at OA
http://www.orionsarm.com/whitepapers/links.html#ai

Both the bottom up [emergent] and the top down [emulation] approaches will be important, IMO…
But once a high speed intelligence is given the opportunity to redesign itself, anything could happen - which is a disquieting thought.
There is no guarantee that these entities will be, or remain, human-friendly or comprehensible.



Why does AI need emotion? I see nothing to suggest that emotion is a fundamental quality of intelligence. If anything it is an outgrowth of intelligence.

Understand that getting a working AI is just an engineering/complexity issue. There is no reason to think it can’t be done someday, unless you believe that life can only be divinely granted. Essentially, you, me and every other living thing on this planet are simply biological machines. Emulating that in a computer should be at least theoretically possible.

The question of fundamental rights for such machines (à la human rights) then becomes interesting, as do theological questions.