I’ve come across the idea before that it is inconceivable that machines, AIs, or artificially constructed entities can ever be considered alive or sentient in the human sense, no matter how advanced. A corollary of this idea seems to be that, by that definition, it would be OK to treat an advanced and apparently self-aware AI or robot in any manner you please.
Personally I find the first idea questionable; although we have as yet nothing even approaching an artificially constructed human-level intelligence, there is nothing to say that it would be infeasible in the future.
If we take it that an apparently self-aware and sentient human-level AI has been created, do we have any duty to treat it humanely, and even grant it certain rights, or is it merely a tool we can use for our own ends in whichever way we like?
I apologise that this OP is a mess; I’m having trouble expressing what I mean, but hopefully some of the intent of my question comes through.
Definitely one of those “cross that bridge if we come to it” situations. I doubt lawmakers give much thought to currently fictional entities! But if someone does build C-3PO, then I imagine we’ll have a few decades of legal turmoil and lobbying. Certainly there are sections of humanity who will seek to mistreat, and deny rights to, any intelligence not described in their magic book, and will feel justified in doing so.
More interesting, I think, is whether we think this is possible. I can appreciate that the computing power of machines will one day exceed humanity’s (well, it does in many ways already), and that the debate over consciousness is going to be highly contentious.
I gave this a lot of thought a while back, after reading some opinions on whether Mr. Data from ST:TNG is a “he” or an “it.” I decided that I don’t have a problem with granting individual autonomy to a sentient AI.
I do have a serious problem with conferring gender status upon something with no DNA. So, while LCdr. Data is an honest-to-Roddenberry Starfleet officer, it’s not a “he.”
Sure, but I do think it’s an interesting question to ponder, and that it offers (what I consider to be) a disturbing insight into some people’s thought processes. I would argue that if an entity, to the best of our abilities and tests, demonstrates self-awareness, insight into its condition, and human-like intellect, it behooves us to treat it humanely; others appear to disagree.
I discussed this with a computer scientist once, and he stated that there is simply no way a true AI will ever be possible with current technology (i.e., making faster and more complex conventional computers), but that it might be possible to create an artificial intelligence by some other technological means. That sounds reasonable to me.
I’d be interested in that discussion; if anyone feels it’s too far off-topic, feel free to start another thread. I for one would read it.
The author of http://freefall.purrsia.com/ had his newly self-aware robots solve the problem of gender by drawing an arbitrary dividing line on words used per day: robots who fell below the line were male, those who fell above it were female… (just a playful use of the gender stereotype about verbose women and uncommunicative men)
That’s a fun and interesting comic; ignore the ‘furry’ main character if that bothers you (general you). It covers a lot of ideas found in wider science fiction in a plausible and thought-provoking manner, and I’ve read quite a bit of science fiction.
I’m sure that machines will some day demonstrate the same level of sentience and self-awareness as humans. But that doesn’t clear up issues of morality involving such machines. I’ve mentioned in other threads the issue of machines that could erase all memory of the things we humans call harm. You could cause an intelligent machine great distress in some sense, and then execute a system restore, leaving the machine in a state where it had no knowledge or memory of that distress. I’m not sure that machines would even consider such things harm anyway, in that their ability to reflect and reprogram is likely to exceed that of humans. We consider many things harmful only because they are irrevocable in our minds. Society will arrive at an answer (or several conflicting answers, more likely) when the situation arises.
That’s a concept I haven’t come across before, and I don’t agree. I would say that whether or not you can wipe out the memory of a harm or wrong committed against you doesn’t in any sense negate the negative aspects of that harm; after all, you still suffered it. Doing something harmful to a sentient AI, even if its memory can be wiped later, doesn’t absolve a person of the ‘negative karma’ (for want of a better word) of their action.
We can do that with humans too - I believe there are certain categories of anaesthetic or sedative that allow patients to suffer, but not remember it afterwards.
I’m not advocating any position on this, but I think it’s something to consider. For instance, could a sentient machine volunteer for forbidden experiments knowing that it would suffer no after-effects? It could be greatly beneficial to mankind, and might not involve karmic repercussions (I’m alright with a non-specific definition of karma).
We already have a test for sentience, though it might have to be improved. For a test of rights, how about whether the machine has a true understanding of its own mortality?
Wiping out the memory of a computer is killing it in some sense, just as wiping out the memory of a human would be killing him to all intents and purposes. His friends and loved ones have lost him just as much, even if the body remains functioning.
(As for Data being a “he,” we should ask Tasha Yar.)
I’ve heard of that. Certain drugs often cause short-term amnesia, and when surgeons think they’ve screwed up the anesthesia they’ll administer those drugs in hopes the patient won’t recall what they’ve been through and therefore won’t sue.
I don’t know whether surgeons actually do that, but the mechanism of these drugs is not well understood, and I suspect it is nothing like performing a system restore on a computer. Most people would still consider causing human suffering immoral even if the effects can be erased, but it is not clear that this will carry over to machines.
Why is DNA the standard for conferring gender? In the real world, plenty of critters don’t have gender, despite having DNA. And in the Star Trek universe, there are presumably plenty of gendered species that don’t have DNA. Hell, that’s almost certainly true in our universe, too.
Hell, even the ship has a gender: don’t they usually refer to it as “she?”
For that matter, what about Jean-Luc’s native tongue? Every goddamn noun in that language has a gender.
You’re confusing gender with biological sex. Of course Data is a he. His language, his clothing, and even his sexual behavior (he’s made love and he’s dated women) are all excellent evidence of Data being a he rather than an it. Hell, he’s “fully functional.”
I’m not sure that this should enter the discussion at all… There’s research on the possibility of selective memory erasure in humans, and if one doesn’t want to argue that the availability of such a drug would, say, make rape OK (which I know you’re not arguing), then I don’t see how one can consistently claim that for an AI the same possibility does make a difference.
In any case, I have a problem with judging the morality of an action by its consequences: if a mass murderer accidentally saves somebody (or more than one person) by killing a person who was just about to kill someone else (no time to construct a better example right now…), that doesn’t make his act morally right just because an even greater harm was averted through it; his intention was not to avert harm but to cause it.
Of course, judging the morality of an action by its intent has its own problems… Which is why I tend to favor evolutionary morality/universal moral grammar style approaches, in which morality is more aptly seen as a faculty of judgment, akin to the grammatical faculty of judgment (which decides whether or not a sentence is grammatical), that does not derive from a single unifying principle but rather evolved according to the needs of social animals. From that point of view, we’ll see how we end up treating machine intelligences; however, I think there’s already good evidence that we won’t draw that much of a distinction: already, we tend to personify our little mechanical helpers, like laptops or cars, and develop certain attachments to them. Also, for example, pictures of disembodied eyes put up in stores tend to discourage theft, even though intellectually we know that they don’t actually ‘see’ us; we don’t seem to make the distinction between ‘actual’ and merely ‘virtual’ agents at that level.
In the past I’ve argued that regardless of metaphysical questions about personhood, sentience, etc., we should treat something that acts like it’s sentient as though it were sentient, simply because it doesn’t make sense to treat similarly behaving things differently. (And granting rights is part of how we treat sentient things.)
That’s too simple, though. Behavior isn’t the only criterion for how to treat a thing, and also, one doesn’t know whether a thing “acts like it’s sentient” until it’s finished acting. (For all you know, I’ll suddenly start yelling “processing error! processing error!” and shut down at any second. You won’t know whether I was “acting like I’m sentient” until I’ve died and finished all action.)
So I still have a strong feeling that certain kinds of machines will deserve human rights, but I don’t have a good argument for the view.
Witness the love and attachment to the Weighted Companion Cube in the Portal games, which is nothing more than an inanimate box with hearts painted on it!
Is the question whether or not life must be biologically based to be sentient, or whether such “life” should be granted the same rights as biological life? They are two different questions with different answers, in my opinion. To the former, I think the answer is a categorical “no”. Any machine that is sufficiently advanced should qualify for sentience. The particulars will be sticky to work out, but we will get there. On the second point, I think the answer is also “no”, but for different reasons. Much of our concept of ethics and criminality is based on the concept of irrevocable harm done to another sentient being. A machine cannot truly be subjected to pain, permanent psychological damage, or physical destruction. All you need is a backup of the “life” and new hardware. I would support some sort of limited rights to prevent the demented from deliberately torturing sentient AIs, but not a blanket grant of the same rights as a biological being. Should we develop our technology to the point where everyone keeps a backup of themselves ready to download into a new biological body, I would support a re-evaluation of human rights as well.
There is a fundamental conflict of opinions on this issue, though. I (and others) would argue that a ‘backup’ of a person’s memories is not the same person from the instant the ‘original’ begins to diverge from the ‘backup’. It’s not a matter of a person backing themselves up and then going on a super-extreme adventure holiday safe in the knowledge that if they are killed their backup will be activated and they will be alive again.
A copy of them will be alive, but they will be as irrevocably dead and gone as if they had never had a backup in the first place.
I realise that both sides seem to struggle to see the opposition’s viewpoint on this issue; I certainly do.
And I would argue that the same rules would apply to copying and backing up sentient machines.