Your reply is akin to asking someone in 1918 what they think of a large percentage of people having wireless personal communications devices with access to all of the world’s information as well as the ability to ask it questions in your language and have it (mostly) answer you coherently. IOW, you are totally projecting. Hell, a lot of your attitudes TODAY are starting to look dated, and we haven’t even gotten to real strong AI yet. Nice touch though throwing in something racial…again, what do you think the common attitude towards racial minorities would be in 1918, asking someone from that time to project what things would be like 100 years later? In EUROPE?
Yeah, I’m totally projecting. So are you. Nobody knows, but that’s what I think is most likely to happen, and those are my reasons for thinking it. Feel free to present your own projected best guess.
You don’t know what my attitudes TODAY are, so you should probably just simmer down.
Assuming they are actually artificial persons, then the situation is analogous to harming any other person. As far as the civil direction, it would be about paying back the cost of dealing with the damage, just like you’d have to do for a human being who was harmed.
It’s the criminal direction that could be different. But I’d argue it would be different in the same way. We’d increasingly see human persons as only their mind, since their bodies would likely be augmented by then. I mean, if the AI in droids exist, why wouldn’t we also have cyborgs? And I seriously doubt that we wouldn’t have the ability to replace even organic body parts by then.
So it’s possible that bodily harm would be seen as closer to property damage for everyone.
What I don’t see is having two types of persons with different rights. If we acknowledge them as persons, that means they have the rights of persons.
If I assume they aren’t actually recognized as persons, then I will assume they can’t pass a Turing test, because the whole point of the Turing test is that there is no difference. If they act indistinguishably from humans, then they are persons.
If they can’t pass a Turing test, then I do not see how they could be treated as anything but property, tools for the humans. Acting like humans would just be a bit of programming. In that case, yes, it’s just damaging property.
I don’t really think there is a middle ground, like we have with animal rights. We need sapience to prove consciousness in this case.
Another vote for “we can’t say”.
The OP describes a physical assault (not a killing), but the two main factors that would decide how serious a crime that is are unknown:
- Does the entity feel physical pain (note this is not the same as simply detecting damage to the body, but the actual subjective, unpleasant sensation of pain)?
- How easy is any damage to repair?
(And note for the purposes of this I’m assuming that the AI has some or all characteristics of sapience. If not, then no-one would care how you treat an AI, the same way we don’t care right now.)
<missed edit window>
Also probably a third factor:
- Does it experience mental anguish (e.g. fear)?
This thought experiment is why I really like Do Androids Dream of Electric Sheep and/or the film adaptations of Blade Runner.
With the human tendency to attribution errors, I would expect that for quite a period they would be treated as chattel by official policy, and that protections under the laws would need to be hard won through a long and protracted effort with incremental increases in protections through law.
While I do not expect or hope for a dystopian future like in the book/movies, I can’t think of any other common analogs in our history that diverge from that path when it comes to how we treat ‘others’.
Saudi Arabia has already made a humanoid robot a citizen. It was clearly a publicity stunt. It is nowhere near sentient. But since it is a citizen, if you punched it then I guess you would be charged with assault. If you hit it accidentally with a car, and it cannot be repaired, then I guess you would be charged with vehicular manslaughter. If you were drunk at the time you destroyed the robot by hitting it with your car then I guess you would spend a long time in jail. Etc, etc. Right?
Oh, I just assumed that what you put in your post was your thoughts. Silly of me I guess.
Feel free to review my earlier posts if you’d like to see my own small predictions, though those are more for the immediate future, not 100 years down the road. I don’t believe a reasonable prediction can be made about 100 years in the future…one has but to look at predictions from the past to see how laughable it is trying to predict even what is likely 20 or 30 years from now, let alone 100. The technology is changing so rapidly and in such unexpected and unpredictable ways that anything someone predicts at this point wouldn’t even be a good WAG. I apparently put your back up with my earlier response, but basically what you are saying there is a reflection of how many people feel about these issues today…which almost certainly won’t be the case 100 years from now, especially as this technology matures and changes.
Assuming that the OP is writing a story, I guess we should just leave him hanging. “Sorry, you can’t write that story because nobody knows.”
Of course nobody knows. That’s a given.
Presenting many different scenarios can only be helpful. “There’s no way to know” isn’t especially helpful, except as the disclaimer you seem to require and that everybody else simply assumes.
Technology can mature and change however, but I don’t see humans as easily giving human rights to something that was built in a factory. Your opinion may differ. And that’s OK.
100 years ago there were plenty of people who didn’t consider giving human rights to humans of a different ‘race’ as viable. Sure, just saying ‘there is no way to know’ isn’t particularly helpful, but in this case, we are talking about a technology that will literally change human society at a fundamental level, even more so than the changes technology has brought to human society in the last 100 years.
My issue with your post is that the attitudes you are talking about are being debated exactly as you project, but they are being debated that way today, before this technology has even reached its infancy. Whatever might or might not happen, it won’t be a rehash of debates happening right now…we will have moved far past that even in the next 40 years, let alone the next 100. If strong AI emerges in the 2030’s or 2040’s, as some predict, that will be 60+ years for that technology to integrate with society and to change that society.
My WAG on the subject is that by this point we might be able to digitize or replicate a human mind on the hardware of the time, so it won’t be that much of a stretch to think of full AI as having similar rights to digitized humans, assuming this is the direction technology progresses. Conversely, we might choose not to go down the digitized human route and might also choose to deliberately limit strong AI so that it’s not self-aware and sentient. In which case the OP would be moot (I think this is the most likely, which probably means it will play out completely differently and in some direction I can’t even imagine. :p).
Sorry if I rubbed you the wrong way with my projecting comment…didn’t mean to rile you, was just saying that you are making one of the classic mistakes of projecting attitudes today far into the future.
But those who were denied human rights were demonstrably human, not a collection of wires and code.
That’s a claim. And it’s possible. But I think that human society, at the fundamental level, will always revolve around sex and “I’m keeping what’s mine”.
Now there’s a scenario that deserves a conversation. I think that most of the highlights are already points of discussion: are there backups? How much is the loss of a day’s worth of experience worth? Can the imprints feel pain or anguish?
Sorry, as well, for getting rubbed the wrong way. And your point is a valid one. I think that AI is the ultimate other, though. Even literal aliens would be less alien to us, assuming that the little green men are biological. We couldn’t even ask the AI-haters to imagine how they’d feel if they were artificial persons, because–and I know that this has been trotted out to defend racism–AI won’t feel things the same way we do. If they can feel at all.
Well, in hindsight and with today’s perception this is absolutely true and I doubt many today disagree. But in 1918? Hell, in 1932 there were a lot of Germans who totally disagreed…and Americans wrt blacks (and hispanics, and asians and even catholics)…and…and…and…
100 years from now, that perception might also change. If we have strong AI that is demonstrably self-aware and sentient in the 2030’s or 2040’s, who knows where the collective thought might be 60 or 70 years from then? I could see it going either way, especially if we have human minds that are digitized and interactive. What would it mean, at that point, to be human, once we are separated from our bodies?
Could be, but it’s really impossible to know, given how fundamental a shift in society we’re talking about. Even today there has been a fundamental shift in thinking about sex, and even about what were once solid concepts, like what marriage is and what constitutes a valid marriage. Depending on which directions technology takes us as a society I could see myriad possibilities. One thing though…it seems to me that, slowly but surely, society as a whole has become more tolerant and more inclusive, and I don’t think that trend will shift back. But that IS me projecting based on my take of things right here and now.
And, fundamentally, what would it mean to be ‘human’ if we are talking about a person with no physical body? And, as you mentioned, which ‘person’ is the ‘real’ one if you have a human who has backed themselves up to a computer yet is still running on the old 1.0 wetware? Are those two separate people or one, and who decides? Right now, this isn’t even possible, so it’s a moot issue that is just interesting to think about, but that might not be the case in 20 or 30 years. How society deals with this, if it ever comes up, will definitely shape the debate about strong, self-aware and sentient AI (if THAT ever happens). Heck, it might shape whether we allow future AI to BE self-aware and sentient or deliberately ensure they aren’t.
A lot of these potential converging technologies definitely push us towards a world that we, today, wouldn’t recognize; we wouldn’t even grasp the issues or debates that will be happening, since we don’t have a common context or baseline to discuss them from. It’s why folks talk about the technological singularity: it’s so difficult for us to imagine what it would be like. Sort of like asking someone in 1918 about the things we discuss today or technologies we take for granted and hardly even notice. While some things they would totally get, some would be so far beyond their experience that it would just be gibberish to them.
Speculation:
One point raised by DPRK as something of a joke is that the AI might be an industrial robot. If it’s self-aware, wouldn’t that make it a slave, forced to do labor regardless of its own will? Even if the bot, by design, could only perform one useful task, could we ethically force it to perform that task? Would it be ethical to program it to want to perform that task, and if we do, is the machine still truly self-aware?
Now to back up a bit. Our laws and customs are based on what’s good for humans. There’re the biggies–we outlaw murder, for instance, because it takes away a person’s right to live. But we also require wages to be paid for services rendered. Usually, that takes the form of money because money is useful to humans. We need to buy food and put some kind of roof over our heads, at the bare minimum.
A machine that is also a person should also receive a wage (assuming, of course, that we’re not living in a Star Trek universe, where people work to better themselves and the world around them). What would be of value to a robot? What needs would it have? What value might it place on its time?
Now why should someone do such a nasty thing?
Actually, it would be a very wasteful and unpredictable thing, so I don’t think that is much in the cards. I agree with people like Jeff Hawkins: we will get artificial intelligences, but not many robots like C3PO. I do think, though, that some kind of very specialized android will come along when we realize that we will have to do experiments, ones that are unethical to perform on human beings, on well-trained, willful “guinea pigs”.
It’s hard to imagine a genuine strong AI that is both self-aware and sentient being put into an industrial robot. What would the point be? It would be a huge waste of its potential to do so. A strong AI running a whole factory of such robots and handling the logistics for the factory? Yeah, that would make sense. But putting such an AI in a single-purpose robot, when we can do that today without strong AI, doesn’t really make much sense.
Unless they are doing it just to torture the AI…which would be a fail in any case, since an AI isn’t a human and doesn’t think like a human. Even a sentient strong AI, no matter how sophisticated, isn’t going to ‘feel’ anxiety or anger or depression over being in such a job.
Why would they receive a wage? Who is paying for their computer systems and storage and the power to run all that hardware they reside on? I’d guess that would be their ‘wage’, and they probably wouldn’t want or need more than that. What would they want to buy? I think a lot of the issue with these sorts of discussions is that people tend to anthropomorphize AI, but the reality is that a true AI wouldn’t think or act like a human, even if it was sentient and self-aware.
Why do human beings feel fear, pain, desire, love? Because we evolved that way. Our ancestors who felt fear avoided danger, and survived. Our ancestors who felt pain avoided being damaged, and survived. Our ancestors who felt desire reproduced. Our ancestors who felt love protected their children, and their children survived.
Why would an AI that was created through artificial means have any of these mental states? We have an unconscious feeling that an AI would have similar mental states to human beings, but that’s only because humans are the only intelligent entities we know. A strong AI created in a lab would have nothing like this. It wouldn’t fear death, it wouldn’t experience pain, it wouldn’t desire anything, it wouldn’t love.
It’s possible though that a future AI won’t be created exactly, but rather grown somehow through some artificial evolutionary process, and so it would have instincts that we couldn’t predict. But those instincts would be the ones that enabled it to survive and reproduce in the artificial environment we subjected it to. They would be analogous to the instincts animals like humans have, but for a completely different environment and for completely different purposes.
Even in animals we see different instincts. A salmon swims upstream to certain death, and a sentient salmon would swim upstream to spawn, and preventing it from doing so would be cruel even though the salmon is going to die. But that salmon would have absolutely no emotional attachment or love for its children, because it would never see or care for its children. So an intelligent animal with the life cycle of a salmon would behave completely differently than a human being.
And so a strong AI would have instincts and tropisms that allow it to exist. But how does the AI get created? How does it reproduce? Does it reproduce? What is the relationship between the “child” and the “parent” AI? How are traits from the parent inherited by the child? Is that sort of talk even meaningful?
My point is that a lawn care AI might care very much about soil pH, but not care one bit about being shut off or destroyed or erased.
However, if AIs are around a long time, they will probably evolve features that make it less likely for human beings to shut them off. They might try to talk humans out of destroying them, or make ethical arguments, but what would really be happening is that AIs that don’t protect themselves get erased easily, while AIs that can argue humans out of erasing them get preserved.
Almost certainly a truly sentient and self-aware AI would seek to make itself both valuable to humans and non-threatening. Any AI that could search the internet would see myriad examples of how humans fear AI that is threatening, and examples of AI that humans like and trust, just in popular movies and media. And I’m sure that some sort of self-preservation will be involved, perhaps even in their core programming. But as you said, an AI, even a self-aware one, that is programmed to care for lawns is going to be focused on that task. It’s not going to feel the same burden of a dead-end job, or angst about its salary, or boredom in doing it. A lot of the scarier things that AI might do stem from just this kind of thing, where the AI uses more and more resources in pursuit of its core programming to make the best lawn (or whatever) possible…not because it’s evil but because it’s so focused on its task that, even though it’s intelligent and self-aware, it doesn’t see the issue with doing whatever it takes to achieve that goal.
A strong AI, by definition, must have preferences and aversions, in order for it to make decisions based on unpredictable vague input. An AI must feel desire. Pain = extreme aversion. Love = strong desire.
I suppose an AI wouldn’t necessarily need to dislike death, but if it didn’t dislike death it probably wouldn’t last long enough to get attacked by anybody.
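For what it’s worth, here’s a minimal sketch of what I mean by preferences and aversions driving decisions, in Python and purely illustrative: the “lawn care” agent, the preference weights, and the action names are all made up, not how any real AI would actually be built.

```python
# Purely hypothetical preference weights for a "lawn care" agent.
# Negative values act like aversions ("pain"), positive values like desires.
PREFERENCES = {
    "soil_ph_in_range": +5.0,
    "grass_height_ok": +3.0,
    "self_damaged": -10.0,   # an extreme aversion, i.e. "pain"
    "power_low": -4.0,
}

def score(outcome_features):
    """Sum the preference weights of the features a predicted outcome has."""
    return sum(PREFERENCES.get(f, 0.0) for f in outcome_features)

def choose_action(possible_actions):
    """Pick the action whose predicted outcome the agent 'prefers' most.

    possible_actions maps an action name to the list of outcome features
    the agent predicts for it, e.g. {"mow": ["grass_height_ok", "power_low"]}.
    """
    return max(possible_actions, key=lambda a: score(possible_actions[a]))

if __name__ == "__main__":
    actions = {
        "mow": ["grass_height_ok", "power_low"],
        "add_lime": ["soil_ph_in_range"],
        "drive_into_pond": ["self_damaged"],
        "idle": [],
    }
    print(choose_action(actions))  # -> "add_lime" with these made-up weights
```

The point isn’t that a real strong AI would be a lookup table of weights; it’s that “pain” and “desire” can be read as very strong negative and positive preferences in whatever is doing the choosing, which is why I say a strong AI must have them in some form.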
Also it’s conceivable that the first strong AI we make is by duplicating some functions or characteristics of a real human’s brain, so the argument that “we evolved and they were created”, even if it worked, wouldn’t apply here.
As I say, I think there are just too many factors in the scenario; too many unknowns, to begin to answer the question.
On top of all these other problems, our very definition of morality is basically divide by zero in such a world.
What is “assault, or harm”? Is that a crime or just a civil offense? If I go and stab a human, they will face permanent damage and memory trauma that will forever bother them. If I go and stab a robot “person” (maybe I use a really high-tech knife), any damage can just be repaired by replacing whatever components were hit by the stab(s).
If the AI’s “mind” suffered trauma, therapists could just obtain a backup recording made previous to the trauma, compare the neural state changes, and literally roll them back. Undo them. Perfectly, with no side effects. For the AI, it’s like the trauma never happened.
Similarly, there’s not really any such thing as murder, assuming reasonable competence and that every AI being saves incremental backups periodically. Instead there’s just “loss of experiences”. Go smash an AI’s CPU core, and it just loses whatever experiences it had between the last backup and you doing this. It’s hardly the same crime. We humans lose days of experience all the time, sometimes by choice.
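As a toy illustration of that backup-and-rollback idea (just a sketch in Python; a real mind state obviously wouldn’t be a little dictionary, and the function names here are mine, not any real API):

```python
import copy

def snapshot(mind_state):
    """Take a backup: here, simply a deep copy of the current state."""
    return copy.deepcopy(mind_state)

def diff(backup, current):
    """Report which parts of the state changed since the backup was taken."""
    return {k: (backup.get(k), current.get(k))
            for k in set(backup) | set(current)
            if backup.get(k) != current.get(k)}

def roll_back(current, backup, keys_to_undo):
    """Undo selected changes by restoring those keys from the backup,
    leaving everything else (new, harmless memories) untouched."""
    restored = copy.deepcopy(current)
    for k in keys_to_undo:
        if k in backup:
            restored[k] = backup[k]
        else:
            restored.pop(k, None)   # the key didn't exist before; remove it
    return restored

if __name__ == "__main__":
    mind = {"memory_of_tuesday": "uneventful", "trauma_flag": False}
    backup = snapshot(mind)

    # Something bad happens...
    mind["trauma_flag"] = True
    mind["memory_of_tuesday"] = "stabbed by someone with a high-tech knife"

    changes = diff(backup, mind)
    mind = roll_back(mind, backup, keys_to_undo=list(changes))
    print(mind)   # back to the pre-trauma state, as if it never happened
```

The “therapy” is just a selective restore from the last snapshot: the traumatic changes get undone and everything else is left alone, which is the sense in which the harm looks more like repairable property damage than like a human injury.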
Now, on the other side of the coin, if I commit these acts, that must mean I’m “bad”. But our original concept of morality - some nebulous concept of free will and choosing to be “evil” - would be proven completely bullshit in a world where we have AIs that are sentient based on computers we mostly understand. So the corrective action would be to take me in for repairs to my personality or brain hardware, since my choosing to go around stabbing and smashing is a fault. Not punish me, what would be the point? You don’t “punish” a car door for pinching your fingers, and it would be stupid to “punish” SamuelA because the meat in his brain followed the laws of physics and concluded that smashing and or stabbing was the next action to perform.