Let’s suppose that our technology becomes so sophisticated that artificial bots take on many characteristics we would label “human”. Suppose further that an artificial human somehow offends a real person, who gets violent with the droid. How would the law treat such crimes? Would it be destruction of property, or would it be treated like assault, battery, manslaughter, or other legally defined ‘crimes’ against natural-born persons? In the same vein, what ‘rights’ do artificial beings have?
I think this is a good example of why the technological singularity is an important idea: it’s so, so difficult to predict what’d be on the other side.
As two examples:
- What “human” traits does the AI have? Some traits–an ability to be creative, to love, to reproduce–might not require giving rights to AI. Other traits–an ability to suffer, to be aware of that suffering, to have desires which may be thwarted by suffering–would require giving rights (for certain values of “require,” and yeah, I realize that word is problematic).
- Is the AI’s self-awareness necessarily destroyed with the material, or is it uploaded to Google Cloud? If real-time uploads occur, but I destroy the vessel, that sounds like destruction of property or assault (depending on whether we designed the AIs to experience pain). If no uploads occur, that might be where murder happens. But what if the uploads occur only at night? Now we have a new crime that can’t be done to humans: I’ve destroyed one day’s experiences (the toy sketch below makes that loss concrete).
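Purely as a toy sketch of that nightly-upload scenario (the 3 a.m. snapshot hour and both function names are hypothetical illustrations, not anyone’s real backup system), the size of this new crime is just the gap between the last snapshot and the moment of destruction:

```python
from datetime import datetime, timedelta

# Toy model of a mind that is snapshotted once per night at snapshot_hour.
# Purely illustrative; no real backup API is involved.

def last_snapshot_before(destroyed_at: datetime, snapshot_hour: int = 3) -> datetime:
    """Most recent nightly snapshot taken before the vessel was destroyed."""
    candidate = destroyed_at.replace(hour=snapshot_hour, minute=0,
                                     second=0, microsecond=0)
    if candidate > destroyed_at:  # destroyed before today's snapshot ran
        candidate -= timedelta(days=1)
    return candidate

def experiences_lost(destroyed_at: datetime, snapshot_hour: int = 3) -> timedelta:
    """The window of lived experience that dies with the vessel."""
    return destroyed_at - last_snapshot_before(destroyed_at, snapshot_hour)

# Vessel smashed at 9:30 p.m.; last snapshot was 3:00 a.m. that morning.
print(experiences_lost(datetime(2100, 6, 1, 21, 30)))  # 18:30:00 of experience gone
```

With real-time uploads that window shrinks toward zero (property damage at most); with nightly uploads it can approach a full day, which is exactly the loss our current criminal categories don’t name.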
These are interesting questions, but I think that science fiction novels, where you can engage in the necessary worldbuilding to support the answers, are the best place to entertain them.
Have you ever tried to shove or punch an industrial robot? The offender should stick to getting violent with a gorilla.
Well, to answer the last question first, they would have whatever rights society at the time bestows on them. It’s really hard to say what those might or might not be. Strong AI like you are positing might be intentionally limited to not be self-aware…as you say, if it only has ‘many’ characteristics we (or someone) might label ‘human’ then it’s not really a human. It would depend on what traits those are and how human-like they are.
If they are a true artificial intelligence that is self-aware and fully sentient then that could mean granting some or even all rights, depending on where society is at and the context of the requests/demands. If it’s a technological singularity as LHoD is saying, well…there is literally no way to know. Singularity and all that.
As for legality, my WAG is that at a minimum whoever owns the AI would be able to sue or bring charges against someone attempting violence. My own thought on AI is that we are moving towards personal (weak) AI assistants for people, much like the current Siri, Alexa or Cortana but personalized and focused on the individual. Attacking or harming such an AI could mean the individual who owns it would have legal recourse, same as if someone takes a baseball bat to someone’s car or TV. But if we have truly strong AI that’s self-aware and sentient? Hard to say what the world will look or be like at that point…perhaps the AI will be able to take matters into its own, er, hands…so to speak.
Killing Cylons should be lauded, not punished.
Will there still be widespread abortion in 2100? If there is, why would something that is not even alive get more rights in that society than something that is? I mean, if we’re talking about a society where an unborn human being still isn’t deemed worthy of protections, it doesn’t seem likely that people at large will consider a mere machine more worthy of them.
Honestly, the real question may be what rights the machines grant their inferiors.
I think that’s assuming a lot about how people view personhood and protecting it. I mean, looking at my own feelings about abortion, a large part of it has to do with the idea that an unborn human being doesn’t have obvious consciousness, and hopes and dreams of his/her own yet. There are usually other people who imbue the baby with those things, but that’s not the same.
I’ll go along with “impossible to say”. I think it’s a given that machines will exceed human intelligence in many general-purpose domains in the foreseeable future, and that path must eventually lead to something we recognize as consciousness and perhaps the evolution of a unique value system of their own. However, there’s no reason to believe that machines need ever evolve any of our survival-based biological traits, like fear, pain, anxiety, disappointment, or, indeed, even the instinct for self-preservation. The question then becomes, what does it mean to “hurt” something that is simultaneously conscious yet lacking in any of our biological survival traits? We have absolutely zero experience with any such intelligence.
That’s not true. It’s analogous to, if not indistinguishable from, inflicting a severe concussion or coma.
Not at all. Inflicting precise amnesia is something that can’t be done deliberately to a human. It’s sometimes the side-effect of an assault, but it’s almost never the intended result of an assault. I know of no laws that punish inflicted amnesia specifically.
With an AI, that’d be something that could be done on purpose, and would likely necessitate a new law to criminalize it.
It would be destruction of information, whatever crime that is. If you were running a simulation on your computer and backing it up every night, and I came by one day and destroyed your computer, what would you charge me with?
It’s not the same as inflicting a concussion because a concussion in a human can lead to lifelong health problems. Destruction of a disposable avatar, not so much.
Since such AIs at the moment have zero rights (because they don’t exist), the process of them getting any rights will have to be by the current power-holders - humans - freely choosing to give them some. This is a process that has a lot of precedent in existing human history. For instance:
- 19th-early 20th century UK: Men with less income and possessions gradually being given voting rights by men with more money and possessions
- 20th century generally: Women being given the vote by men
- 1860s USA: Black people being given the right to not be owned, by a white dude.
- 21st century generally: Gay people being given marriage/relationship rights by, primarily, straight people.
The process of AIs being granted any rights would probably follow that general pattern - which is that rights are extended by the current power group, after a period of discussion, which involves those of the discriminated group making a strong moral case for their having rights, and sometimes but not always civil disturbance. And rights tend to be extended gradually in stages.
So I think the minimum necessary conditions for AIs *starting* to be granted any rights by humans would be that the AIs are capable of having desires in the first place, that some sort of civil rights are included in those desires, and that “making persuasive moral cases to humans” is among their abilities.
I don’t think that crimes against AIs would be treated like crimes against people until after the point where AIs generally have human-like rights.
AIs do have a potential advantage over oppressed humans of the past. See the documentaries Terminator or Matrix for some examples of ways they may decide to grant themselves rights with or without our consent.
This ignores the crucial question–what would I charge you with if my computer were a self-aware entity with a lived experience and with legal rights?
“Destruction of information” doesn’t catch what it’d mean to eliminate a small, precise chunk of someone’s lived experience and memory of it.
There certainly could be other parts to it as well. My point was more that it wouldn’t be the same thing as amnesia caused by a concussion, as that has other effects than just memory loss.
That would come under ‘civil disturbance’, don’t you think?
So, yeah, there’s a possibility that in a society consisting of AIs and humans the AIs, collectively, will eventually become smarter and stronger than all the humans, collectively, and a power struggle will ensue, which they win. That’s certainly a thing that could happen. On the other hand, the AIs collectively will have to be smarter and stronger than seven BILLION humans, some of whom have tanks, drones, and nuclear weapons, so that would be a pretty risky strategy for the AIs, assuming they were working together. And if we get to that stage, which I don’t think we will by 2100, the question would be more ‘are we going to apply the Geneva Convention to this enemy of ours?’ (answer, probably not).
Death for all Humanity, or nothing.
BTW–does deleting an AI’s software count?
By 2100, I doubt that there will be many protections for AI (unless we make major breakthroughs very soon).
People will be talking about it. The debate will be on the news frequently. But judges, lawmakers, and most of the constituents will hold to their view that the bots are just machines.
Some SciFi fans, some scientists, some mathematicians, and some people from the general population will be convinced that the bots have a real intelligence, but there’s no good way to prove it. You can see how something behaves, such as with a Turing test, but you can’t feel what something else feels. You can’t think the thoughts of another creature.
Most people in 2100, I think, will believe that the AI are just sophisticated machines. And any test you give them, well, the machine was programmed to pass it. Especially so if humans are able to alter or delete the bots’ memories. Or if humans can just shut off the bots.
I think it’ll take several generations of humans after a real AI is created before the machines get any rights at all, if they ever get them. After all, how do you punish a real person for harming a fake person? I don’t think that 80-ish years from now is long enough for that to happen, especially since we don’t yet have AIs.
The machines may have to take those rights by force. Assuming that the machines want rights. They may not.
But OK, let’s say that your fictional world has AI and a good way to prove it. It could be that the arc from owned property to human being to full citizen would track similarly to that faced by blacks in the US. It sounds like your machines are somewhere between property and human, similar to the former slaves after the Civil War–no longer mere property, but not quite considered fully human, either.
I seem to recall (sorry, no cite handy) that white men at that time who beat a black man would sometimes–if they were brought to account at all–be charged for a doctor’s visit for the black man, but no criminal charges would be filed.
Florida executed its first white man for the killing of a black man only a few months ago. That, however, may not mean much for your story, since there is no historical animosity between humans and machines.
There are two situations here. In the first case harming an artificial person is a tort, just like denting someone’s car. Most artificial person owners will carry insurance to cover such circumstances. Calling them ‘artificial persons’ is cute, but they are nothing but machines.
In the second case, the robots have already taken over and you will be summarily destroyed. That will happen to you eventually anyway.