When computers surpass human intelligence and become sentient, will they be given the same legal rights as people?
Oops, I meant to post this in General Questions… feel free to move it.
Let’s not move it there. GQ is for questions with factual answers. GD or IMHO sound like possible forums for this, in my lowly opinion.
I doubt it, but what worries me is that they will get the same rights as corporations.
And the same limited liability!
A robot will be able to buy a car and drive, but will not be jailed for uninsured DWI accidents.
Of course, if Futurama is any indication, robots will be required to DWI.
I imagine so, eventually. I think that the definition of human will have to expand to encompass human-machine hybrids, and from there it’s only a small step to granting ‘human’ rights to pure machines. As the population ages, there will be more and more research in the direction of prolonging life. Eventually someone will devise a device that can attach to a body and preserve the human consciousness even as necessary biological systems fail in the body. This could happen sooner than most people think, possibly within the next twenty years. As neuroscientists are starting to have some luck decoding which neural signals correspond to which thoughts, it’s only a matter of time before we build machines that can interact with those signals.
I don’t think anyone would reject human rights for a copy of a human consciousness. On a political level, rights for pure machines might still be problematic. I imagine that machine rights would be the province of minor left-wing political groups for a generation or so before the idea went mainstream.
What, like an iron lung? A pacemaker? A drinking straw? We’ve had technology for a long time that can attach to a body and help people stay alive after necessary biological functions have failed; what are you envisioning that would be different?
In none of these cases do we assign rights directly to the machines, though. Even in modern-day cyborgs like Dick Cheney or Stephen Hawking, we don’t assign their machines any rights: destroying the machine might count as murder, but you’d be murdering Cheney or Hawking, not their machine, and if they survived, there would be no murder charge.
If someone figures out how to transfer human thought patterns into a machine such that the machine can think similarly to its creator, at least well enough to pass the Turing test, then I can see folks attributing rights to the machine. Not until.
If you mean a “brain in a jar” type of situation, 20 years is conceivable, but probably too optimistic. But that wouldn’t really be anything new; we already have people with prosthetics, and they aren’t treated as any less human under the law, so a total-body prosthetic wouldn’t either. If, though, you mean transfer of consciousness from a brain to a non-brain thing (or even into another brain), we’re talking more like thousands of years. For that, we’d need to understand how human consciousness works, and on that score, we’re only barely beyond the philosophers of centuries ago. We’re probably much more likely to develop a pure machine consciousness first, since that would only require that we stumble upon some mechanism or another for consciousness, not necessarily the particular one that makes humans tick.
To clarify, I’m envisioning a machine that steps in when brain function fails. As in, if a stroke cuts off blood flow to part of the brain, we’ll put a box on the person’s head that can supply the necessary brain function so that the person’s consciousness continues uninterrupted.
I imagine that such technology would develop in stages. First we might focus on motor control. If a stroke cuts off control of one leg, for instance, we attach a mechanical contraption that interacts with the necessary neurons to allow the patient to move that leg normally. Then we’d expand to purely mental tasks. For instance, we could give Alzheimer’s patients an artificial memory. Eventually we’d advance to the point where the brain systems that constitute self-awareness could be moved to a machine.
Three stages, in order:
- The mechanical contraption that interacts with the necessary neurons to give movement is here already, I think, in prototype forms. I see no reason, though, to confer to it any more rights than we confer to a bicycle, which also gives us movement through a more mediated interaction with neurons.
- The mechanical contraption that houses memory, assuming you mean a complex, human-style memory, seems a pretty long way away: our understanding of how memory physically works is far more basic than our understanding of how motor neurons function. When such a device exists, I see no reason to give it any more rights than we give to my dad’s Palm Pilot, which he sometimes calls his “brain.”
- Eventually we may move to transfer the systems that constitute self-awareness to a machine. Right now we have only the most basic and speculative of theories regarding what those systems are like; we certainly don’t have a handle on how they translate into gray matter (e.g., no scientist can reliably tell you what a desire looks like). That is the point where I can see conferring rights on a machine, but that point is a long way off.
Stapling machine, Mrs Worrell.
I think the whole Boomer/Athena angle on BSG is covering this remarkably well. At what point do you stop considering the meatbag Cylons “toasters”? When do they become “people”?
I’d hazard a guess that it’s already happened in the BSG universe. The meatbags just don’t fully realize it yet: they have become what they hate, humans.
If the wealthy elite can convince everybody it’s all right to grant legal rights to fictional entities like corporations, I’m sure doing the same for certain classes of machines will be a cakewalk.