Must life and sentience be biologically based? Or: AI rights

I think many in this thread are assuming that an AI would experience similar suffering to a human. But there is no reason to assume that.

So why talk about wiping the memory of, say, terrible pain? Just don’t program pain into the machine in the first place; give it some sort of damage-avoidance instinct instead.

Conversely, there’s no reason to assume that we experience maximum suffering either. Perhaps I could make a machine that could experience more pain than all the humans that have ever lived put together. What are the moral implications of that?

But, I don’t think we understand consciousness very well at all. In fact, I don’t think we’ve made any progress on the hard problem of consciousness, other than ruling out certain theories. I hope we’ll understand consciousness better prior to creating it, but it really doesn’t look like that’s going to be the order of events.

It can’t pass along its “he”-ness to a replicant of itself by contributing its own genetic information.

I’ll call it “you” to its face (or on a phone call). I won’t call it “he” when I’m talking to other people about interactions I’ve had with it.

As to the bolded portion of your quote, I’d counter: does one call one’s sex toys “he” or “she”?

Well, I know that guys with RealDolls call them “she,” but that’s just the kind of creepiness that supports my point of view.

Well, couldn’t you just ask a sentient machine, “Do you object to what I’m doing to you?” If it doesn’t like what you’re doing, it’ll tell you directly!

But I agree it’s possible that it will be difficult for humans and true AI to have many frames of reference in common, or we may find out that they’re pretty much just like us.

I’m hopeful for the ‘Culture’ future of benevolent AIs rather than the ‘Terminator’ future of malign ones (although Terminator 2 did explain that the AI Skynet acted in self-defence, a very understandable gesture). Still, the one thing we do know about the future we’ll actually get is that it’s unpredictable (beyond ‘we create AI’ or ‘we don’t create AI’).

Well, we talk about masculine and feminine attributes in people beyond their physical sex, so why wouldn’t we permit a sentient AI the common courtesy of referring to it with the masculine pronoun, if that’s how it wishes to identify as an individual?

btw on a sidenote there was a quote I read on this forum some time ago that’s stayed with me (approximately), “Every person exists on a sliding scale of masculine and feminine attributes, a totally feminine woman or a totally masculine man would be nothing more than a horribly one-dimensional stereotype of a human being.”

Kudos to whoever came out with that one.

I think you’ve got it right here. I don’t think machine sentience will arrive in humanoid form, whether physically, mentally, or emotionally (if it is emotional at all). It would take deliberate effort to make machines that are identical to humans in that way. A Turing test is not going to determine sentience for machines. A machine will be able to say “I am a machine, and I am sentient,” and humans could still be convinced through interaction that it is sentient. There will be a question about the morality of making a machine that does feel and experience the world the way humans do, since that doesn’t seem like the natural course of machine development.

I agree that we may have a disconnect here. What is the difference between the copy and original?

One went on vacation. The other didn’t.

That a backup copy isn’t the same person doesn’t strike me as controversial.

What would be more interesting (to me anyway) is the question whether an exact copy of my mind at the moment of my death would constitute a continuation of me were it to be made to inhabit some body after my death.

No? Any reasonable Turing Test would include questions about the self and self introspection, like “what are you thinking?” It is obviously not a perfect test, but then we can’t perfectly test each other for sentience either.

This is also the transporter question. If you are disassembled and reassembled elsewhere, are you the same person? Does it matter if the same bits are reassembled, or different ones?

I don’t really pick a side in this argument. I’m like Bones: you’re not getting me in one of those things.

Just to clarify, what do you all mean by “sentience”?

The reasoning behind the Turing Test is that it is impossible to determine sentience in anything, human or machine, and that a machine which could mimic a sentient human closely enough is effectively sentient.

A machine may not experience pain in the same way we do, but for us pain is just an interrupt. Perhaps a sentient machine will be designed so that a thermal diode indicating an overheating condition would signal something very much like pain, in order to force the computer to stop whatever it is doing and deal with the problem. So perhaps we should consider it a bad act for someone to fake thermal diode alarms just to keep a computer from concentrating on its assigned task.
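
Something like this toy sketch, maybe (the sensor read, the names, and the threshold are all made up for illustration; it’s just the ‘pain as an interrupt’ idea, not anyone’s actual design):

```python
# Hypothetical sketch of "pain as an interrupt": a thermal alarm that
# preempts whatever task the machine is working on. The sensor read,
# names, and threshold below are invented for illustration.

import random
import time

PAIN_THRESHOLD_C = 90.0  # temperature above which the "pain" signal fires


def read_thermal_diode():
    """Stand-in for reading a real temperature sensor."""
    return 60.0 + random.random() * 40.0


def handle_overheat(temp):
    """The damage-avoidance response: drop the current task and cool down."""
    print(f"PAIN signal: {temp:.1f} C - suspending task, reducing load")
    time.sleep(1)  # placeholder for throttling or shutting down work


def run_task_with_interrupt(task_steps):
    for step in task_steps:
        temp = read_thermal_diode()
        if temp > PAIN_THRESHOLD_C:
            handle_overheat(temp)  # the "pain" takes priority over the task
            continue
        print(f"working on {step} at {temp:.1f} C")


if __name__ == "__main__":
    run_task_with_interrupt(["step-1", "step-2", "step-3"])
```

The point being that the alarm pre-empts the assigned work, which is exactly why faking it would be a way of working against the machine’s interests.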

If these options became available for humans, would that mean the morality of certain acts committed towards them had suddenly changed? It seems odd to me to have a morality contingent on technological/biological feasibility…

But in order for that to achieve its desired effect, it would presumably have to involve a stimulus the AI seeks to avoid. Certainly, a pleasurable feeling would not have the desired effect! So any such ‘damage avoidance instinct’ would constitute a kind of pain response, as the infliction of damage would induce in the AI a wish to remove itself from the situation.

And besides, there’s much more to doing harm than just causing pain. In general, any act contrary to an agent’s wishes or needs may be considered ‘harm’ in a broad sense; so I think sentience entails some form of susceptibility to harm.

I don’t think there’s any useful metric for suffering, so I’m not sure it makes sense to talk about ‘maximum suffering’ and the like.

I have hopes for things working out the other way around: creating consciousness will help us understand it. After all, that’s the order things normally happen in. (I’m also not sure there’s any ‘hard problem’ at all, but that’s of course a whole 'nother debate.)

In the Turing test the machine cannot credibly reveal that it is a machine; otherwise it can be distinguished from a human. I think the higher standard is being unable to establish any substantive difference between the human and the machine even while knowing which is which. That is the level at which I think people would need to assume, for moral/legal/ethical purposes, that a machine is equivalent to a human.

I don’t really care that much about this aspect. Personally I don’t think a machine should be considered human. But it’s not like I’ve carved this into stone either. At the moment it’s not something I have to worry about.

As mentioned earlier, I doubt the evolution of sentient machines will produce something very human-like. Much of this argument only applies to later developments, where machines are intentionally made to be indistinguishable from humans beyond the level of sentience.

Has it suffered? Do its senses lead to some sort of distress, and does that distress lead to cognitive difficulties? In general, I don’t think machines we build to serve us will be programmed to suffer. They will probably be programmed to say “thank you” and “yes sir,” etc., and just move on without brooding over it. You could probably hack off one of its limbs, and it would just recalculate how to perform the tasks it’s working on. Sentient, yes. Highly intelligent, yes. Suffering? I doubt it.

Partly because of:

But, also because that won’t be as viable as a product for our use.
Now, it may be possible to send its cognitive functions into the sort of loops that correspond to mental disease in humans. But, for the most part, they won’t suffer. Their productiveness would be impaired, and that would be something to be corrected in the programming.

I expect they will be programmed to avoid destructive loops or “mental problems.” Just pick an option and move on without brooding over it. Don’t fret about whether it should have made a different decision. Even if it later turns out that a different decision would have been better, I don’t think it will be emotionally crippled by it. It will just file that information, and use it to make better decisions next time.
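
Roughly what I have in mind, as a toy sketch only (an assumed design with made-up option names and random “results”, not any real architecture): pick an option, act, file the outcome away, and move straight on. The record only feeds the next decision; nothing loops back to brood over the last one.

```python
# Toy sketch of "decide, record the outcome, move on". Everything here
# (option names, random "results", the epsilon value) is invented.

import random
from collections import defaultdict

outcome_history = defaultdict(list)  # option -> list of observed results


def choose(options, epsilon=0.2):
    """Usually pick the option with the best average result so far;
    occasionally try something else. No agonising, no re-running the choice."""
    tried = [opt for opt in options if outcome_history[opt]]
    if not tried or random.random() < epsilon:
        return random.choice(options)
    return max(tried, key=lambda opt: sum(outcome_history[opt]) / len(outcome_history[opt]))


def act_and_record(option):
    result = random.random()                # placeholder for the real-world outcome
    outcome_history[option].append(result)  # just file it away for next time
    return result


if __name__ == "__main__":
    for _ in range(20):
        act_and_record(choose(["plan_a", "plan_b"]))
    print({opt: round(sum(r) / len(r), 2) for opt, r in outcome_history.items() if r})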

Not necessarily. We have plenty of behaviours that are not based on pleasant or unpleasant sensations per se.
Of course, it may be the case that for a complex but definite response to nociception, in a conscious organism, something like pain is necessary. But we don’t know that.

Although that harm may be at a very different level from the harm done to a human being, and that was my point.

True, but the possible dangers and rewards here are particularly great.
And by “danger” I don’t mean just for us; we also need to consider the minds we may be creating.

How do you define ‘levels’ of harm in any meaningful way? Certainly, if there are things an agent wants, there are ways to cause it harm through deprivation; and I’d think that sentience entails wanting. So qualitatively, I think there’s no difference between brain- and chip-based sentiences; perhaps there’s a quantitative one, but again, how does one assess that? It seems akin to saying ‘my blue is bluer than your blue’ when it comes to perception.

One could perhaps measure the intensity of some avoidance reaction, but even this can only be a secondary measure of subjective harm. A curtailed range of avoidance reactions does not imply less susceptibility to harm – that a paraplegic does not run away from the fire doesn’t mean that he fears it less than I do!

What drugs are these you speak of, sir?

That the instant the copy or the original experiences different input they diverge and become two different people. If both copy and original are up and running at the same time you have two different people who are very similar but not the same.

But as I said, I’ve seen this question argued extensively in other places and threads, and the two sides never seem to quite grasp the other’s view on the matter.

Human brains have limits to malleability that machines may not (I think will not) have. An artificial sentience could want something for a specific purpose, then un-want it just as quickly; humans can’t do that as readily or reliably. As for the qualia stuff, there’s no doubt that people perceive things differently, but the significance of that is in doubt. Yes, an artist’s perception of the color blue can make a big difference, but that doesn’t require a magical explanation. The machine with the more sensitive optical sensors will perceive blue better than the one with the cheap knock-off sensors. Whether the difference arises in the eye or further along in the brain, it doesn’t say much about sentience in general.

By that logic, an infertile human would also be an “it.”

Do you similarly object to people referring to the Enterprise as “she?”

Also, to what extent would you take Data’s feelings into account? If he told you that he finds it personally insulting to be referred to as “it,” would you still insist on calling him “it” regardless of how he felt about it?

Yes, that’s also pretty common.

I’m not quite sure I follow. Certainly there are levels of harm; you can harm me a little or a lot.
Knowing how much we’re harming an artificial intelligence is a much harder question. Heck, I don’t know how we’d even know for sure that an intelligence we create is conscious.