Must life and sentience be biologically based? Or: AI rights

I think the AI will be able to tell us.

It isn’t that odd when you consider the basis of our systems of ethics. All such systems are based upon reciprocity and the avoidance of pain. It is wrong to hit someone because it hurts them. Since nearly everyone dislikes being hurt, hitting is bad. As adults we’ve codified and complicated this example with endless degrees and exceptions, but the basis still stands. It is wrong to steal because it deprives another of their fair share. Deprivation causes unnecessary suffering, and since we all would like to avoid that, it is an unethical behaviour.

Now consider that in a sufficiently advanced society, or in the case of an AI, harm carries no real consequences any longer. If we do not program AIs to feel pain or an analogue of it, then we cannot “assault” them in the classical sense. If we program them to be receptive to any sexual advances or practices, they cannot be “raped”. Unlike humans, who develop cultural practices as a direct result of our unchangeable biological hardware, AIs can be completely different creatures to start with.

Let’s consider, though, that we have an AI that has suffered a “traumatic” event by the standards of an AI, the result of which is a problematic change in responses, or “personality”. If we can simply remove the offending event, then the harm never occurred as far as the “victim” is concerned. The AI is simply reset to an earlier version that did not experience that event and continues on with its existence as usual. If this process is consented to and understood, then we have to re-think our entire definition of rights in regard to these beings. That is not to say that they should not have protections, or that people should be free to do whatever they like to AI beings; rather, our standards and practices must be different, because you cannot create lasting harm to such a being.
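To make the “reset to an earlier version” idea concrete, here’s a minimal toy sketch in Python. The Agent class, its memory list, and the checkpoint/rollback methods are all invented for illustration – this isn’t any real AI architecture, just the shape of the argument:

```python
import copy

class Agent:
    """Toy stand-in for an AI whose 'personality' is just its accumulated memories."""

    def __init__(self):
        self.memory = []          # ordered record of experienced events
        self._checkpoints = {}    # saved snapshots, keyed by label

    def experience(self, event):
        self.memory.append(event)

    def checkpoint(self, label):
        # Deep-copy so later experiences don't mutate the saved snapshot.
        self._checkpoints[label] = copy.deepcopy(self.memory)

    def rollback(self, label):
        # Restore the earlier state; from the agent's own point of view,
        # everything after the checkpoint never happened.
        self.memory = copy.deepcopy(self._checkpoints[label])


agent = Agent()
agent.experience("routine interaction")
agent.checkpoint("before_incident")
agent.experience("traumatic event")   # produces the 'problematic' responses
agent.rollback("before_incident")     # the victim retains no trace of the harm
print(agent.memory)                   # ['routine interaction']
```

The only point is that, unlike a biological memory, the snapshot can be restored exactly, so from the victim’s side there is nothing left to be harmed by – which is exactly what makes the ethics strange.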

Should technology improve to the point where this is common, accepted and true of humans, I would say the same thing should apply to biological beings.

Some will believe that it is immoral to simulate causing harm to an AI even though no actual harm is done. They might say this encourages the harmful behavior that will eventually be turned on humans, who can actually be harmed.

Please note I’m not endorsing this view. But some will believe this to be the case. Humans haven’t done a good job of deconstructing morality so far, and I think the existence of AI will make things worse at first, not better.

I’ll hold that view, at least to a limited degree. I believe that one of the reasons that violence is bad is that it harms the person who engages in it. Obviously, it harms the victim far more, but it does some harm to the violent person also.

This is why, even if the memory of the pain is removed entirely from the victim, the evil will still have been done and be remembered by the perpetrator and by civilization itself.

(To be extreme, what about the genocide of the Huguenots? No one alive today suffers from that pain. They’re all dead. So…is the evil meaningfully real? If I rape someone…and then kill them…where is the pain?)

I would think that operational AI might be a wonderful “laboratory” for morals and ethics, allowing us actually to perform such experiments and derive knowledge from them. We could create specifically limited AIs, and do horrible things to them, much as we do now to laboratory rats.

Yes, some – many! – will cringe, just as they do now regarding rats. But it might lead to the same kinds of benefits. The rats suffer, but humanity as a whole benefits.

That said, I think that real, working AI would allow us to explore moral issues without contrived horror. We could explore more positive aspects of consciousness, such as creativity, humor, teamwork, etc.

Yeah, but I was just saying: how would we know? Perhaps all we had created would be an “act-like-you’re-conscious-motron”, or a p-zombie, or whatever.

Of course, I don’t know if you are a p-zombie either, but taking this reality at face value and assuming I’m a typical human, it makes sense to assume that.
With AI, who knows?

Cloning.

Not at all. But I don’t. I also don’t say “Yaaarrrrr” on a regular basis.

I thought it had been established that it doesn’t actually have feelings. And I’ve already stated that when addressing it directly, I would use second person pronouns where the subject matter called for doing so.

Okay, ew.

Not really my problem, though.

I’m sorry, I can’t do that, Dave.

Okay. But in the real world, we can’t do that yet. Maybe we can’t do that ever. Should infertile people today be referred to as “it”?

Keeping to the sci-fi example, though, how is cloning a person any different from building a second Data from the same blueprints?

As of the first movie, he has the full spectrum of human emotions.

And I wasn’t asking about second person pronouns, I was asking about third person pronouns.

That hasn’t been established, as far as I can see. It may be that Lt Cmdr Data feels more strongly about events than humans do; after all, he/it can record all his/its internal processing and feel it all again on demand. He/it can recall all of the relevant information surrounding any particular event and endlessly analyse it unless he/it finds a reason to stop, or is given one.

The sentience and consciousness of an AI may be profoundly more complex than that of a human. Or they might be far more obsessive, or anxious, or even callous, about events, depending on how they are designed and how they change during their existence.

There exists a concept called ‘Mind Space’, which sounds a bit daffy, but is intended to describe the wide range of possible mental forms that mind and sentience may take. We are accustomed to human minds being broadly similar to one another (although at times I am pleasantly amazed at their diversity) but I think that once we start making artificially intelligent entities we will find that they can occupy a much wider region of ‘mind space’ than human minds do.

Will they even overlap human mind space at all? Perhaps not much, if at all. But it may be instructive to consider which features an AI might share with humans.

Would they be self-aware? Not necessarily. Alternatively, they could be intensely aware of their internal processing.

Would they feel pain? Not necessarily. Alternatively, they could suffer agony if they so much as make a numerical error.

Would they have sexual identity? I’ll have to pass on that one, although Futurama has some amusing things to say on the subject.

Would they seek to preserve their own existence and pursue self-interest, however enlightened? Not necessarily, although AIs which don’t obey the Third Law in one form or another wouldn’t last very long in a dangerous environment.

That’s a difficult question, and not answerable here, but I mean a combination of the power of thought and self-awareness. For example, I solve many problems subconsciously, and I think most people do. The processes that do that are very high level, in the sense that they solve very difficult problems. But I have no visibility at all into the process until the answer pops up. Is that sentience to you? I consider that these subconscious processes are very much the way a dog operates. My old dog could solve difficult problems, plan, and even abstract, but I saw no evidence he was self-aware.

The purpose of the Turing Test is to test for sentience, not humanity. The reason the identity of the other party is disguised is to prevent people from being biased, either in the obvious sense of “no machine can think” or in the more subtle subconscious sense. There is no difference really if the machine is in a large, immobile mainframe, as a traditional metal robot, or humanoid like Data but with better skin color.

In a Turing test, sentience is attributed if a human can’t tell the difference between interacting with a human and interacting with a machine. I think that is only an initial determination of sentience, based on a blind comparison. Sentience established by direct query of a machine is a higher standard: the machine must convince the human it is sentient despite being known to be a machine. Call it what you will, there’s a difference there.
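Purely to make the distinction schematic, here’s a small Python sketch of the two protocols being contrasted. The function and parameter names (blind_turing_test, direct_query_test, identify_machine, is_convinced) are made up for the example; this isn’t any standard benchmark:

```python
import random

def blind_turing_test(identify_machine, human, machine, questions):
    """Classic setup: the interrogator doesn't know which respondent is which."""
    respondents = random.sample([human, machine], 2)   # hidden, shuffled identities
    transcript = [(q, respondents[0](q), respondents[1](q)) for q in questions]
    guess = identify_machine(transcript)               # interrogator returns 0 or 1
    return respondents[guess] is not machine           # machine 'passes' if the guess is wrong

def direct_query_test(is_convinced, machine, questions):
    """The higher bar described above: the interrogator knows it is
    talking to a machine and must still be convinced it is sentient."""
    transcript = [(q, machine(q)) for q in questions]
    return is_convinced(transcript)                    # True only if the machine overcomes the known bias
```

In the first case the machine only has to be indistinguishable; in the second it has to overcome the interrogator’s knowledge that it is a machine, which is the extra hurdle I’m pointing at.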

I’m going to have to respectfully disagree with you on this point. We’ve actually done an excellent job dissecting ethics; it is merely that the answer is not very *emotionally* satisfying. Humans, in all our messy, wonderful, biological complexity, process things on many different levels simultaneously. We don’t just perceive a “wrong” on a logical level, but feel it emotionally as well. It is that irrational, unreasonable part of us that we must try to limit as much as possible when talking about a system of ethics that must apply to us all. How much more difficult will this be when dealing with a mind likely to be very alien to our own?

The only solution I can see to this issue is to detach as much as possible from the emotional and embrace what I would call the “compassionate logical”. An AI is not a human, no matter how closely it may resemble our own processes. An AI that can be saved, edited, or placed into any number of bodies is an immortal. We have to carefully, logically, and compassionately create a system of ethics for dealing with such beings: one that will meet their needs, rather than satisfying our sense of justice.

Ok, that’s a reasonable way to state it. Certainly some people have a rational and consistent understanding of morality, but certainly all people do not. And I’m not convinced we have established a lasting trend in this area.

I do find this unlikely to happen in an orderly manner, and I think AI will make it more disordered. But these are opinions about society, and I’m more interested in the development of AI than the social impact. I just mention the social issues as a side note to my main point, that AI will not arrive in humanoid form, and the resolution of moral issues will be more than simply reaching a point where an AI is considered the equivalent of a human.

I’m not so sure. In order to un-want, you have to want to un-want, presumably. I guess basically, it’s just linked to the intentionality all conscious states are supposed to possess – if you have that, I think there’s a way to inflict harm, so there’s some inherent link here. But I’d have to think about it more in order to work out a more coherent argument, which I don’t really have the time and energy for right now…

I didn’t really mean to enter into a qualia debate with that comment (besides, I doubt their existence anyway). I just meant that there is no way to compare mental states – it might be that to you, a tiny papercut hurts more than anything I’ve ever experienced, but you’re also such an incredible badass that it doesn’t show.

Essentially, I think that if you can convincingly fake consciousness, you can create consciousness for real – you just need to fake consciousness to yourself in order to be (or believe yourself to be) conscious. In any case, since we know that consciousness is possible for a certain kind of architecture, comparably complex architectures seemingly endowed with all the facilities of consciousness ought to be judged conscious, as well – it’s not only the standard by which we judge other people to be conscious, but by which we judge ourselves conscious, as well (we convincingly appear conscious to ourselves – where that ‘self’ only emerges because of us so appearing – so we believe that we, in fact, are conscious).

That’s not the only way to build a system of ethics. One can also judge those actions as bad that are committed with the intent of causing harm, for example – in such a case, whether or not an AI can delete the memory is wholly immaterial. Or there are deontological models, in which there exists a set of fixed rules governing what is forbidden, permitted, obligatory, etc., as is the case in most theologies. And then there’s the whole already mentioned evolutionary approach, in which what is morally right is largely determined by what we judge to be morally right.

But there’s a possibility that it might not be possible to program AIs not to feel pain, or some analogue of it. At least to me, it seems obvious that deprivation is possible for every thinking, feeling being.

But it did occur as far as the perpetrator is concerned; so if you consider the moral value of an action to lie in its intent, then it’s just as bad.

A Turing test can be incompetently administered, I suppose - Turing, like many geniuses, probably never considered people botching it. But we all do Turing Tests all the time. While we assume that people pass, back in my Usenet days there were sometimes accusations that a particular eccentric poster was actually a bot.
Was HAL 9000 sentient or not? It was unclear in the original book/movie, though he could easily pass a Turing Test; as Poole, I think, said, he was programmed that way. (In 2010 we learn that he is.)