Moral implications of AI

If man could create a computer that thinks – really thinks – and is self-aware, would that be a good thing? Would a sentient computer still be just a machine, or would it become a “being”? My personal belief is that if a computer were self-aware, the fact that it was a consciousness stuffed into a box would make it no less a being than any other intelligent and self-aware creature.

What moral obligations, if any, would the creators of such a “being” have to their creations? And what moral authority would they have over them?

A servo-mechanism which merely responds to electronic stimuli and performs programmed actions is useful, but not “owed” anything at all. Maintenance is done not for the machine, but to make the machine able to do its owner’s work.

But a sentient machine might choose to do or not to do its maker’s bidding. Would maintenance be a moral obligation of the maker? Would pulling the plug become a type of murder?

Many people think a god created man with the capacity to reason, and that man’s use of that capacity caused the maker no end of trouble. Would man’s creating a mechanical “mind” be equally troubling?

Your thoughts?

There are so many unlikely to impossible assumptions in this post, it’s hard to answer.

“The question of whether or not a computer can think is no more or less interesting than whether or not a submarine can swim.”

What does that really mean?

I think you’ve been reading too much SF and watching too many Terminator movies. How would a machine become sentient (able to make its own decisions)?

A much more interesting, and current, question might be what parents owe to their cloned “offspring.” Is it moral to create a clone of yourself so you’ll have a spare kidney when you need one?

Ooh, I love AI threads.

Firstly (and pre-emptively), when we talk about Artificial Intelligence, we may not necessarily be talking about a machine that was designed in every way to emulate certain outward human processes; we might very easily be talking about a device that was constructed in such a way as to permit the emergence of a mind (in much the same way as the human brain is a place in which a mind can form) - in this case, the creator of the device would not be able to fully predict or explain the behaviour of the device - it would be acting under its own volition (or appearing to).

The question will always arise as to whether the machine has any ‘inner life’ like we do, or whether it is just a very clever automaton, outwardly simulating the actions of an intelligent being - the question is technically unanswerable; however, personally, I would be entirely happy to assume that a non-programmed AI did have true consciousness if it told me so itself.

Now, as to rights: again, this is an interesting quandary, and I suggest that the situation (in regard to an AI that we have already accepted is a properly conscious being) is somewhat analogous to that of a human who is entirely dependent on life-support machinery.

When you ignore the hypothetical nature of the question and respond with a jibe, I confess it is difficult for me to answer you without sarcasm or some kind of put-down, but I shall try to be civil.

In view of the fact that people are building ever more complex computers, with chips projected to be molecular in size in the not-too-distant future, and since the term “intelligence” is often associated with those efforts, some think it is possible that someday a computer will be made that can actually think in the same manner that humans do. My hypothetical question was whether, if it is possible, it is also desirable.

If you believe that human brains have people in them, rather than that people have brains, you may have difficulty suspending disbelief long enough to handle the question.

Again, just to clarify: If artificial intelligence is possible, is it desirable? And if such an artificial being could be produced, would its maker have any moral responsibility to it, or authority over it?

That is an interesting question. Why don’t you pose it in your own thread? I promise I won’t ridicule you as you have me.

How would a machine become sentient (able to make its own decisions)?

Good question, and it hasn’t happened yet;
does that mean it will never happen?
The fact that your organic, evolved brain can do it tends to imply that it is possible, and that such a process can be copied.

It will probably occur in the next century or so, and perhaps not quite in the way that people expect;

The efforts to produce a self-aware, self-directing thinking machine will almost certainly produce some very strange half-thinking, half-aware entities before too many decades have passed.
These machines will have the advantage of total memory recall and access to vast amounts of data;
this is not the way a human mind is created.
It may be that these entities will surpass human abilities in many respects before the full emulation of human type self awareness is possible.
The machines may decide not to bother and to continue their own self directed evolution without becoming conscious as we are;

I am a little worried that such a-human intelligences could easily decide that humans are irrelevant, which is why we should concentrate on the development of so-called Friendly AI:
http://singinst.org/CFAI.html
‘The Analysis and Design of Benevolent Goal Architectures’

Should we even attempt such a thing?
I believe so; it will probably be achieved at some point in the future; there is no hurry - let’s take our time and get it right.


SF worldbuilding at
http://www.orionsarm.com/main.html

Here’s a similar thread if anyone’s interested:
http://boards.straightdope.com/sdmb/showthread.php?s=&threadid=212893

Thanks, Mangetout, for a serious response. And you add an interesting thought. Life support. Hmmm. If the plug is pulled on a human who is dependent on machinery to live, it is usually after brain activity has ceased. At such times the comparison with a vegetable is often made. But if a person’s body requires life support but his brain is functioning perfectly otherwise, don’t we usually feel morally obligated to maintain the support, even if the person is an irascible and otherwise disagreeable sort? Would we have the same responsibility to a super machine that thinks (or at least thinks it thinks)?

With a machine, the immoral act would not be turning the machine off, as it could be restarted and continue as if nothing had happened (if these machines are anything like present-day computers);

the immoral act would be destroying the data that that mind might contain.

Switching off a thinking machine is ok if you allow for that machine to be switched on again at a future date.

Although the machine itself might feel inconvenienced in some way and have a valid grievance…
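To make the analogy with present-day computers concrete, a running program’s state can be written out to disk and picked up again later as though nothing had happened. A toy sketch (hypothetical Mind class and filename, not a claim about how a real machine mind would work):

```python
import pickle

class Mind:
    """Toy stand-in for a machine mind: nothing here but accumulated memories."""
    def __init__(self):
        self.memories = []

    def experience(self, event):
        self.memories.append(event)

mind = Mind()
mind.experience("discussed life support with Mangetout")

# "Switching off": persist the mind's entire state to disk.
with open("mind_state.pkl", "wb") as f:
    pickle.dump(mind, f)

# "Switching on again" at a future date: restore it and carry on.
with open("mind_state.pkl", "rb") as f:
    restored = pickle.load(f)

assert restored.memories == mind.memories  # nothing lost; only deleting the file would be irreversible
```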

You started off well, but I don’t think there’s any reason to assume that a mind arising as an emergent artifact of a self-organising system need be at all efficient in the way it accesses its own memories, or in the way that it thinks - if we create the electronic analogue of a brain, pretty much anything could happen in there - it would be more like giving birth to a child than it would be like designing a mind.

Indeed the mind that arises may, as a result of its environment and sensory equipment, fall prey to all kinds of bizarre notions and fancies - it might genuinely be absent-minded, morose, cheeky or any number of things.

Well, the analogy isn’t complete, naturally; a human on life-support (say, an iron lung) has, in most cases, already established their identity and rights as a person as independent of the machine, whereas our AI is and has always been entirely dependent on the machinery.

I don’t think there’s necessarily any immutable logic on the matter though; our status and rights as humans have not always been the same throughout history and may change again in the future; if popular opinion is such that AIs can be treated as convenient, but inferior, (possibly) disposable commodities, then that is what will happen (just as it did when human slaves were treated thus).

This may or may not be the case, depending on the hardware - if molecular processes are used in the computation, quantum effects may very well come into play and it may be impossible to record the inner state of some of the components without actually altering it and ‘killing’ the patient.

For starters, I submit that, to date, no machine that we would consider sentient has been produced. Thus the difficulty in discussing the subject.
It’s just too theoretical at this point.

So, I would like to bring in an analogy that might help address the OP’s questions:
Let’s imagine - not that a “Thinking Machine” has been created, but, rather, that through genetic engineering and through training, we have managed to create a strain of Chimpanzees or Dolphins that may be considered sentient.

Do we owe this creature and its offspring the same treatment as we offer to members of H. Sap? If not, what are the differences and why?

Perhaps some posters may feel that we owe animals that are near-sentient (what is this?) the same treatment as we offer to fully sentient creatures (only accepted member to date - H. Sap)? Again - how and why?

Is there any difference between a (hypothetical) sentient machine and a (hypothetical) sentient “super-chimp”?

I don’t even want to get started on the more general “animal rights” question - and it is really only a very short way from here.

Not that I have any answers, mind you. But maybe looking at the situation from this angle will at least help to define the question to some degree…

We are machines.

We depend solely on “hardware”.

We suffer the twin illusions of consciousness and free will.

Once we really get down to it, our only differences from things we find “non-sentient” are the flexibility-of-arrangement of our particular type of neurons and the electrical characteristics of the synapses therein. And so, if we can mimic the operation of these stringy, organic laces then we are on the way to “sentience” (whatever that is) which, as far as I can see, merely requires a sensory input connected to a memory to explain pretty much all of its illusional qualities.
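In the crudest possible terms (a toy sketch with made-up names, not a claim about real cognition), a sensory input connected to a memory need be no more than this:

```python
from datetime import datetime

memory = []   # everything this "mind" will ever know about itself

def sense(signal):
    """Connect a sensory input to memory: record what came in, and when."""
    memory.append((datetime.now(), signal))

def recall():
    """The 'inner life', such as it is: a report built only from stored inputs."""
    for when, signal in memory:
        print(f"At {when:%H:%M:%S} I experienced: {signal}")

sense("bright light")
sense("loud noise")
recall()
```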

The question is: How d’you know I’m sentient?

Thanks. I appreciate that. I was not meaning to be uncivil, but obviously failed. Sometimes my sentience is not as clever as it thinks it is.

As **SentientMeat** says, how will we know when a machine is sentient?

We do have a test for self-awareness:

  1. knock out testee
  2. paint a spot on the testee’s head (where the testee cannot see it)
  3. when testee revives, supply a mirror

If the testee reacts to his own paint-spot, we call him “self aware” (humans, chimps, dolphins).

Can we design a test for sentience?

Yes, NoCool, but I’m sure you appreciate how easy it would be to program a computer having a digital camera to compare an image of itself to a previous image and

if Difference>threshold
then REACT (using servos and flashing lights or whatever).
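Something like this rough sketch, say (purely illustrative, assuming nothing more than an ordinary webcam, the OpenCV library, and a made-up threshold):

```python
import time

import cv2          # OpenCV, assumed available, for grabbing webcam frames
import numpy as np

THRESHOLD = 25.0    # made-up value: mean pixel difference that counts as "a change to me"

def snapshot(cam):
    """Grab one greyscale frame from the camera."""
    ok, frame = cam.read()
    if not ok:
        raise RuntimeError("camera read failed")
    return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)

def react():
    """Stand-in for the servos and flashing lights."""
    print("Something about my appearance has changed!")

cam = cv2.VideoCapture(0)
baseline = snapshot(cam)            # the machine's remembered "self-image"
while True:
    difference = np.abs(snapshot(cam) - baseline).mean()
    if difference > THRESHOLD:
        react()
    time.sleep(1.0)
```

It would duly react to the paint spot, but there is nothing in there we would want to call an inner life.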

The problem here is that there is no real test for human sentience, merely practical thresholds with which to discriminate various states of maturity or mental incapacity. There are, however, specific characteristics of what we think comprises a “conscious” event involving sensory inputs and several levels of memory which might one day be recreated using a device of the required level of sophistication (which is a long way off, by the way).

We must look for this “sentience” in ourselves before we go looking for it in our Pentium chips. If we don’t know what it looks like, how would we know when we’d found it?

According to the paint test, my plug-and-play computer is sentient, or at least self-aware.

You know, a better question might be: if someone writes the mother of all scripts and ends up with a brain emulator, does that not imply, not that the machine is special, but that brains aren’t?

Without being able to define what we mean by ‘sentience’, this debate is meaningless.

It is highly doubtful that humans will ever recognize anything that doesn’t superficially appear human as morally equal to themselves. It’s only recently that some humans have accepted the idea that all other humans are morally equal to themselves – there’s a long way to go before people will be capable of responding to something that doesn’t generate the correct subliminal signals.

Agreed, but perhaps a strict definition is not absolutely essential. Perhaps we could pose the question thus:

How could we be fairly satisfied that a bunch of circuitry, biological or not, was suffering the same illusion as us?

I think that in general terms, the kind of things we mean here by sentience (even if this is not the proper definition) would include:
The capacity to recognise itself as ‘me’.
The capacity for something analogous to hopes, desires, opinions.
The capacity to own memories and identify them as ‘something I did yesterday’

In short, the same kind of ‘inner life’ that we believe we have (I’m always a little amused by suggestions that this might all be an illusion in humans - if it is an illusion, exactly who is being deceived?)

The biological hard drive holding that unique encrypted string of memories which explains the illusion of identity. “I” am just “my” memories, am “I” not? (this term including the “conditioning” provided by “my” upbringing). Were my unique string to be connected to another sensory apparatus (think “clone of my brain in another body”), there would be no way to tell which one was “me”, agreed?

Interesting point, Mange. If two entities somehow hold the exact same memory, surely both entities say to themselves “I” did that yesterday?

Again, this gets very tricky very quickly. How is our preference for a type of cuisine different to that of an amoeba for a certain pH? Surely such whimsical preferences could be arbitrarily “programmed in”?
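Just to show how cheaply such a “preference” can be programmed in (toy values, obviously):

```python
# Arbitrary, hard-coded "preferences" - no inner life required.
preferences = {"cuisine": "thai", "ph": 6.5}

def prefers(option, kind="cuisine"):
    """Report a 'preference' much as an amoeba drifts toward its favoured pH."""
    return option == preferences[kind]

print(prefers("thai"))            # True
print(prefers("sushi"))           # False
print(prefers(6.5, kind="ph"))    # True
```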

The very first step, of course, is language. The Turing test is very definitely passable, although of course it has not been passed yet.