Artificial Intelligence

If there were really robots like in the movie *Artificial Intelligence*, wouldn’t they have the same rights as humans do, and wouldn’t they be considered “alive”, therefore making events such as the “Flesh Fair” depicted in the movie completely immoral and illegal?

There was a similar thread about two weeks back, about whether, if someone discovered higher intelligence in an animal, like dogs, they would get the ability to vote, etc.

The gist is, when can a non-human species get rights, and what qualification is needed for those rights?

Your specific example of robots is sketchier, because robots can be built and programmed to vote for one party, or to break laws, with little to no consequences. So is morality a requisite for rights?

On the other hand, it could be argued that certain religious/political/social groups raise children by programming them with specific viewpoints…

I’ve not seen the movie.

We don’t even have an objective method of knowing whether an artificially intelligent entity is ‘alive’ in the same sense that we consider ourselves ‘alive’.

But in any case, I don’t think rights will be automatically granted; they weren’t automatically granted for blacks and women; they had to be fought for. If we actually manage to create proper (by which I mean entities that could objectively be described as ‘peers’) artificial intelligences to serve us, I think we could probably see our way to rationalising refusal of rights for a decade or three.

This raises a few questions:
Is an intelligent entity automatically alive? After all, the robots were designed and manufactured by humans and don’t reproduce.
Then, which is the more important criterion for their rights, life or intelligence?
Finally, to which degree would we let the outer appearance impress us? Does a more conventional “box” design limit the rights of a being (think HAL-9000 from 2001)?

If we count sentient robots as living beings with civil rights, what is the point of building them? We already know how to make people.

That doesn’t mean it wouldn’t be an interesting project; I have already made two people (my children) and I don’t want any more human children, but if I could create a novel, artificial, sentient entity tomorrow, I would jump at the chance, because it would be incredibly interesting - sometimes this is reason enough.

I would argue we don’t know how to make people. We know how to have sex, and a new person is the result. We’ve learned a lot about what happens from the time sperm meets egg to birth, but we can’t replicate the process except to repeat the sexual act. We can’t synthesize a human being, say, from chemical components that aren’t organized already into cellular and multicellular life; we don’t know enough yet to make humans from scratch.

Really, we haven’t been able to come up with de novo sentience by any synthetic method, and aren’t even sure if we’d recognize it if we did some day. We have a hard enough time defining sentience with people, and what constitutes free will. Our best evidence for intelligence is whatever emergent phenomenon gave rise to this mind and others like it that ask questions such as “what is intelligence?” It’s not hard to see the troublesome potential for tautology here.

So far, it looks like we’re taking two approaches towards generating artificial intelligence: mimicry of the human brain, or stochastic methods using a different substrate, like silicon, to produce other emergent phenomena that behave intelligently.

The first process, if successful, will synthesize human life. Of course, if we make a good enough copy of ourselves, I can’t see any reason why this new organism or mechanism shouldn’t be afforded rights.

The second method is sketchier. If we jumble things in a box until life pops out, we might end up hoping it affords us some rights. Really, it’s tough to know how we should treat such a thing, because we’ll have to observe how it behaves before we can be confident things like human standards of ethics ought to apply. That might get us into trouble if we create something with the potential to destroy us, yet treat it compassionately simply because it’s intelligent. Maybe it won’t have any compassion, and is biding its time during the introductory phase, until it can destroy us. Maybe it simply wants to compete, and is a single-minded instrument of pure, amoral natural selection. Who knows? Perhaps if something like that arose (because we made it) in our midst, it would be immoral for us not to destroy it.

It’s a question as complex as the number of possible sentient beings we can make, it appears to me.

I’m thinking we could give them rights but I doubt they would want or even care for them.

I read an article once about AI which said that IF we could somehow manage to make an AI robot, we could still program it to have certain basic functions that it would never override. Like, don’t kill the inferior humans (which is what the premise of the article was about).

With that in mind, I’m pretty sure we would build said robots without the desire to do anything but serve us humans.
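For what it’s worth, the “functions it would never override” idea can be pictured as a hard-coded check that runs before any ordinary command is carried out. Here’s a minimal sketch of that, not taken from the article; every name in it is made up purely for illustration:

```python
# A minimal sketch (not from the article) of hard-wired constraints that
# ordinary commands can never override. All names here are made up for
# illustration.

FORBIDDEN_ACTIONS = {"harm_human"}  # baked in at build time, never rewritable


def execute_command(action, perform):
    """Run `perform` only if `action` is not on the hard-coded blacklist.

    The check happens before any other goal or order is considered,
    so no later instruction can override it.
    """
    if action in FORBIDDEN_ACTIONS:
        return False  # refuse, no matter who gave the order
    perform()
    return True


# A stolen or re-tasked robot still refuses the forbidden action:
execute_command("harm_human", lambda: print("this never runs"))
execute_command("serve_coffee", lambda: print("serving coffee"))
```

The point of the sketch is just that the constraint check sits upstream of whatever the robot is told to do, so obedience and the prohibition aren’t in competition.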

I think robots will be granted suffrage, say, oh, no more than 50 years after we finally recognize Dolphins’ Rights… :wink:

Dani

Sez who? Surely building another robot would be quite likely within their capabilities.

Reason enough for you, maybe. Not reason enough for the corporations that would be funding the research. If robots are ever manufactured on an industrial scale, beyond the level of a laboratory project, it will be because they serve a valuable economic function – and it’s hard to see how they could serve such a function if they had recognized civil rights. One reason we make machines is that we do not have to regard them as ends in themselves, with an independent claim to health and happiness; they are merely tools we can use for our own purposes. Once they stop being that, don’t they stop being valuable to us?

Asimov’s Three Laws of Robotics? I’ve always considered them a bit high-minded and unrealistic. If a robot is programmed never to harm a human being, that makes it useless as a soldier; and that’s one of the things the powers-that-fund are going to want it for. If a robot must obey a human being, any human being, that makes it ridiculously easy to steal; you don’t even need a password or a recognition code, you just walk up to a robot and say, “Come with me.” No, there’s never going to be but one overriding Law of Robotics: “A robot must obey its master.” Who is recognized as “master” will be a matter of programming.

I don’t think this will be easily possible (or any more easily possible than it is to do the same thing with human children).

Eh, I don’t know. One could argue that I’ve been programmed with an embedded desire to NOT want to kill myself. Not that this isn’t true for ALL humans.

Haven’t read Asimov’s Laws of Robotics. The article I read was in a science mag. But anyway, I think you’re making too big of a statement when you say that the powers that be are going to want this for “soldier” capabilities. (Not that I couldn’t see that happening.) I can think of a lot of companies that would love an army of AI robots. The reasons are pretty well implied: no paid vacations, no whining about pay raises, etc…

Also, just because we program the robots not to kill us humans doesn’t necessarily mean that said robot has to do whatever any Joe Schmo tells it to. Not sure I follow your logic there.

Asimov’s Three Laws of Robotics:

  1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.

  2. A robot must obey a human being except when this conflicts with the First Law.

  3. A robot must protect its own existence except when this conflicts with the First or the Second Law.

In the 1940s and '50s, Isaac Asimov wrote a lot of stories set in a future where there are a large number of fully sentient robots with “positronic brains,” all programmed with these laws, which override all other programming, considerations or desires. You can read them in his classic collection *I, Robot*, which has been through several editions. Most of these stories had to do with how the Three Laws would work out in real-life circumstances, e.g., in situations where the First Law conflicted with the Second Law, etc. But he never did deal with the problem of a robot receiving an order from a human other than its owner; nor with the possibility that governments or businesses might actually want robots that kill.
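Just to make that priority ordering concrete: the Three Laws are really a strict ranking rather than three independent rules. Here’s a rough sketch of that ranking as a decision check; the `Action` fields and the `permitted` helper are invented for illustration, since Asimov never described an actual implementation:

```python
# Rough illustration of the Three Laws as a strict priority ordering.
# The data model below is invented for the example.

from dataclasses import dataclass


@dataclass
class Action:
    harms_human: bool            # would carrying this out harm a human?
    inaction_harms_human: bool   # would refusing it let a human come to harm?
    ordered_by_human: bool       # was it ordered by a human?
    endangers_self: bool         # would it endanger the robot itself?


def permitted(action: Action) -> bool:
    # First Law: never harm a human, or allow harm through inaction.
    if action.harms_human:
        return False
    if action.inaction_harms_human:
        return True  # the First Law compels action, whatever the laws below say

    # Second Law: obey human orders (the First Law veto has already been applied).
    if action.ordered_by_human:
        return True

    # Third Law: otherwise, avoid self-destruction.
    return not action.endangers_self


# An order that endangers the robot is still obeyed (Second over Third)...
print(permitted(Action(False, False, True, True)))   # True
# ...but no order can make it harm a human (First over Second).
print(permitted(Action(True, False, True, False)))   # False
```

The ordering does all the work, which is exactly where the stories find their conflicts: an order that endangers the robot must still be obeyed, but no order can make it harm a human.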