Moral implications of AI

Erratum: there would be no way to tell which one was “me”, agreed?

The illusion, if indeed there is one, is a compelling one though, isn’t it? - I am utterly convinced that there really is a ‘me’ in here and in the end, that’s what we’d be aiming for with AI - not just a machine that pretends to be thinking in every outward sense, but a machine that believes itself to be thinking.

We’ve touched on the idea of making duplicates before and I think that from a purely objective POV, I might just as well be dying every night as I go to sleep and in the morning, a brand new person is born, under nothing more than the illusion that they have always been ‘me’ - indeed this process could be argued to carry on continuously, even when I am awake.

I don’t think we could explicitly program a machine to ‘be’ a person in the same way that we ‘are’ people, not until we can fully understand the processes that cause the phenomenon; in the interim, I think that the best hope for AI will lie in two directions:

-‘Soft’ AI - the machine is merely programmed to algorithmically mimic some or all of the macroscopic, outward actions, affectations and idiosyncrasies of a real or imagined personality/intelligence - this might be a good approach for an AI designed to work as a servant, but I think we’d better not expect it to do anything creative or surprising, except by malfunction.

-Self-Organising AI - where we do nothing more than create a structure (either in hardware or software) that is capable of the same kind of feats of arrangement and connection as is an organic brain, then expose it to sensory stimuli and experiences that will (or so we hope) provoke it to ‘grow’ a mind in a similar way as does a human child. This might be a better approach for an AI designed to be a genuine peer, but we can’t necessarily expect the thing to obey or respect us, or indeed act in any kind of predictable way at all (indeed attempting to do so would be a form of psychology, rather than programming/analysis)

i agree that we need a working definition of what it is about human “sentience” that we consider special. if we don’t know what consciousness is, how can we say that it is what gives human life its value?

if we use “the ability to make decisions” as our criterion, surely the thermostat is sentient when it decides that the room is too cold and that the heat should be turned on.
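
for illustration, the thermostat’s entire “decision” fits in a couple of lines of python - a toy sketch of my own, with an arbitrary setpoint:

    # hypothetical thermostat: its whole "decision" is one comparison
    SETPOINT_C = 20.0  # arbitrary example setpoint

    def thermostat_step(room_temp_c):
        """return the action the thermostat 'decides' on"""
        if room_temp_c < SETPOINT_C:
            return "heat on"   # the room is "too cold"
        return "heat off"

    print(thermostat_step(17.5))  # -> heat on
    print(thermostat_step(22.0))  # -> heat off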

i feel as though the turing test is still the best we have to go on, and i feel that there can really be no better test without a complete understanding of how human consciousness works. we apply the turing test to every person we meet: they seem enough like us that we can attribute to them the qualities that we feel we have. there is, of course, no way to tell if they have the same qualities. similarly, it would be impossible to tell if a computer possessed those qualities.

personally, i feel that if we ever devised a machine whose inner workings we knew fully and whose intelligence was indistinguishable from that of humans, we would be less likely to raise that machine to the level of respect we give humans than we would be to lower humans to the level of respect we give the machine.

or at least tells us that it believes so.

Heh, heh - didn’t it arise from a discussion of some stupid Arnie film?

Yes, I think all your conclusions are sound, and the illusion is definitely compelling!

Yes also to Rama (these threads do attract the attention of the “heavyweights” don’t they?) Should a convincing prototype ever be developed, very few would bestow upon it true “personhood”.

Ignoring (for the moment) my notions of what else a machine would have to be able to do in order to be able to do its own thinking and to possess self-awareness – my gut says there’s no defensible reason to refrain from extending such a machine the full considerations due to any other sentient intelligent being.

Problem is, the only ones I’ve had much interaction with have also and simultaneously been members of my own species. That raises the possibility of a conflict of interest (short-term or otherwise) based on allegiances and group membership and whatnot. Naaah, on second thought conflicts of interest based on allegiances etc. have been common enough in same-species interaction. So they’d be “beings” in my book. (And, I would imagine, in their own, and that they’d more or less insist on respect and consideration and legal rights and etc if they weren’t already provided to them in advance).

I’d add:

The capacity to learn on its own and/or expand on its own ‘programming’ (this includes its own internal desire to see this happen and to make choices about what data it should seek next). By ‘learn’ I mean more than the simple acquisition of data, but also the ability to make intuitive leaps in knowledge from combinations of disparate data (e.g. it takes two apples from entity A and two apples from entity B and, without being told, it deduces addition and knows it now possesses four apples).

The capacity to care about its own survival and overall well-being.

“What are they talking about…Dave”
“You…Hal”

The problem is, if it passes the Turing test, how do you tell the difference between an “aware” and “sentient” computer/robot and one that is just a cleverly programmed chunk of silicon and metal?

Mangetout and Ramanujan have both brought up the “if it says so” thought. Certainly it would be easy to program my Palm to say “Hey! Leave me alone, I’m thinking deep thoughts in here!” But I suspect we would all agree that this is not evidence of awareness, much less sentience.

Are you referring to an organically grown AI? Or a computer that was programmed to do something else but started to speak spontaneously?

To push the conversation a different direction, why would a sentient computer have, want or need any “rights”? Isn’t that a human concept that might not even apply to machines?

Of course, if we agree with WaM that the capacity to care about its own survival and overall well-being is part of sentience (if I understood your remark correctly), then the right to survive would certainly be something it “wants.”

Is there any reason to suppose that we ourselves have the capacity for truly original thought? I know of no credible mechanism to even support “free will”, it being another of the delusions mentioned by SentientMeat.

Machines have already been built that can make inferences.

Some human beings, in some circumstances, seem indifferent to their “own survival and overall well-being”, so is this a valid test of sentience?

Maybe sentience is indivisible, and does not succumb to such reductionism as a list of properties.

Though sentience is by no means well defined, we are aware that we consider ourselves to possess this trait. Furthermore, through interaction with others we can come to the conclusion that those other people also possess sentience.
If through interaction with the AI machine we can draw the same conclusion of sentience that we draw for other humans, then that is a measure by which a machine can be labeled sentient or non-sentient.

Once we create such a machine, we will learn some of the true nature of creation. I believe we will understand that the creation does not owe the creator for the fact of its creation, but in fact the creator is the one that owes the creation every opportunity to continue its existence to the fullest of its capability.

Another moral question is constraint - how many electronic imperatives can or should we introduce into the mental architecture of an artificial intelligence in order to make it friendly/obedient?

Asimov wrote his three laws without any concrete suggestions as to how they should operate inside an artificial mind; but if they (or something similar) were introduced into a human being, perhaps by super-hypnosis or some kind of conditioning, they would be considered cruel and tyrannical.

Can we morally place any behavioral restraints on an AI once such a thing is near completion?
If not, would we have to simply trust to chance?

There is always the off button - or is there?

At the moment I’m talking about a hypothetical AI personality that arose as a wholly emergent artifact of a (hypothetical) ‘brain analogue’ - quite definitely not something that had been ‘programmed’ to ‘do’ (or say) anything specific (I thought I made that pretty clear already).

I may extend it to more formalised programmed systems or simulations if and when I ever become convinced that such would be faithful emulations of real mental processes, but, as I don’t see that becoming reality anytime very soon, I happen to think that rearing an AI has the most promise.

Depending on how the thing is built (and particularly if the thing is ‘grown’ rather than programmed), it may be just as difficult as doing the same kind of thing to a human - i.e. the more control you try to assert, the greater the risk of permanent damage.

Of course Asimov’s three (or more) laws were nothing more than a well-worn plot device, but I seem to recall that in at least one story (it may have been ‘Little Lost Robot’) he suggested that they were actually intrinsic to the positronic field equations and were pretty much an inevitable part of the architecture - fiction is often more convenient than reality.

If we truly believe that the mind in the machine is a real one, then the only constraints we can reasonably impose are the same ones we would apply to ourselves (i.e. they would be expected to obey the law of the land or suffer the consequences).

Forgive me, everyone, for jumping in here without reading down to the bottom of the thread, but I was struck by an idea which may flee before I get there.

Mangetout, isn’t the sort of “malfunction” you describe the essence of evolution, analogous to a mutation?

This thread is already much better than I had hoped. I may be masticating this “food for thought” for some time to come. Thanks to one and all.

Now I shall continue to bask in the brilliance of my fellow dopers. (Truly, no sarcasm here. I’m about to be 61. My “Intelligence” isn’t serving me as well as it once did, but I’m still learning from better minds than mine. I appreciate it.)

I’m not sure that a programmed response to varying temperatures (turn the heater “on” when the temperature dips below 72 degrees) can be equated with “deciding” that a particular temperature is desirable.

The question of what constitutes sentience is further blurred by the fact that we consider people with Down syndrome, or those with other developmental disabilities to be sentient, though their perceptions and responses are not on a level with the “normal,” at least as far as we know. I realize now that I kind of opened a can of worms here, but I’m really enjoying the responses.

Wonderful!

Possibly, but probably not; conventional algorithmic IF-THEN type of programming, when it goes wrong, tends to just stop working, or to produce a result that is useless and wrong. However, I suppose it is possible that a malfunction could cause a program to overwrite some of its own code, and if this were random and the population large enough, the very occasional ‘favourable mutation’ might come about.
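
As a rough illustration of what I mean (a toy sketch of my own, not a claim about any real system), here is a population of random bit-strings in which most ‘mutations’ are useless but the rare favourable one gets kept:

    import random

    # Toy illustration: random 'mutations' applied to a population of
    # candidate bit-strings; most changes are useless, but with a large
    # enough population the occasional favourable one is retained.
    random.seed(0)
    TARGET = [1] * 16  # arbitrary stand-in for a 'useful' program

    def fitness(candidate):
        return sum(1 for a, b in zip(candidate, TARGET) if a == b)

    population = [[random.randint(0, 1) for _ in range(16)] for _ in range(50)]

    for generation in range(200):
        # mutate: flip one random bit in a copy of each candidate
        mutants = []
        for cand in population:
            copy = cand[:]
            copy[random.randrange(16)] ^= 1
            mutants.append(copy)
        # selection: keep whichever of original/mutant scores better
        population = [max(pair, key=fitness) for pair in zip(population, mutants)]

    print(max(fitness(c) for c in population))  # fitness climbs despite random changes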

However, I think it would be quite reasonable to assume that a ‘grown’ AI has some kind of inner sense of self (if it appears that way), since the process of its creation would be similar to our own, whereas a programmed AI (that consists of nothing more than a mapped, predetermined set of every possible appropriate response to every possible stimulus) is just ‘going through the motions’.

I agree with everyone that it would be impossible to tell the difference and that in many ways, all we are is just a set of programmed responses, but the big (unanswerable) question is whether creating a set of programmed responses in a machine would result in the existence of a conscious observer - a ‘self’.

While I happen to agree with you that a “grown” AI will probably yield convincing personhood sooner, Mange, it should be remembered that this “growing” procedure is only a programming method at its very heart, i.e. like having a billion random programmers wherein the useless bits of code are discarded and the best bits of code are retained. There seems no fundamental reason why a prototype which was explicitly stepwise-programmed should not have this “sentience” thing we speak of.

As I said, from e.g. Susan Greenfield’s work it appears that all that is needed for a “consciousness illusion” is sensory input linked to various stages of memory (such that at any given instant, information is being processed from the “future”, “present” and different short/long-term “pasts”, creating a “fuzz of time”) with some linguistic ability built in.

When I sit in my chair and ask myself “What am I doing when I am thinking?”, I realise that I only need these three elements to explain my “feeling of sentience”.

i’m not sure, either. indeed, that is the question i meant for people to consider.

what do we do when we decide? we evaluate our options, determine which suits us best based on a set of criteria, and pursue the option we feel wins out. we have goals that we consider desires, we say we “want” to do something because it is a goal. it seems that the goal of the thermostat, though much simpler than any number of goals i think i might have (in fact, the sub-goals might be just as simple as the thermostat’s), is nonetheless a goal. what is it then that drives us to want to call our goals desires, while we are hesitant to say the thermostat wants the room to be warmer?
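
structurally, that sort of choosing can be written down in a few lines - a toy sketch, with the options, criteria and weights invented purely for illustration:

    # toy sketch of "deciding": score each option against weighted criteria
    # and pursue the one that wins out (all values below are made up)
    options = {
        "stay in and read": {"comfort": 0.9, "novelty": 0.2, "cost": 0.0},
        "go to the cinema": {"comfort": 0.5, "novelty": 0.7, "cost": 0.6},
        "take a walk":      {"comfort": 0.4, "novelty": 0.5, "cost": 0.0},
    }
    weights = {"comfort": 1.0, "novelty": 0.8, "cost": -0.5}

    def score(criteria):
        return sum(weights[name] * value for name, value in criteria.items())

    choice = max(options, key=lambda name: score(options[name]))
    print(choice)  # the "desire" that wins out

the thermostat is just the degenerate case of the same procedure: one criterion, one threshold, two options.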