Is there any way that we could make servile humanoid robots and still be within the bounds of decency and ethics? “Humanoid” here can refer to robots that are any of the following:
(a) virtually indistinguishable from humans in terms of appearance, intellect, and social interactions (Blade Runner replicants)
(b) look a lot or somewhat like people (even C-3PO style) but are clearly lacking free will, emotional expression, and any basic semblance of humanity (i.e., sounding more like that idiotic, emotionless, overly literal computer voice on Star Trek: TNG)
The robots, if they have humanlike emotions and intelligence, are being enslaved–something we would never do to real people.
Even if the robots don’t (say, they were intentionally limited in emotions/intelligence/free will or they were programmed to love being servants of mankind), you then have the situation Minsky brings up.
In Case 1, the robots are being hurt. In Case 2, humans are, since they are being trained to treat others as objects. (Even if they ARE technically objects, the mere fact that they look very much like humans is bound to erode the distinction in people's brains, and then people may be more prone to treating other people like objects.)
Is there any way to ethically create humanoid robots? If so, how? If not, where should the line be drawn?
If you were going to create some kind of servant robot, then ethically the servant robots can't be sentient, self-aware, or intelligent in the way that we are. They would have to be mindless automatons, basically like computers are now, but more advanced.
We aren't anywhere near creating a sentient, self-aware, or even intelligent being right now, so I don't think these conditions would be hard to satisfy.
Regarding the "humans being trained" aspect: it's a non-issue, really. If you tell someone to do something and they do it, then they're either happy to do it anyway, or a robot. If they don't do it, they're a human. Where's the problem in that?
Might I suggest watching a great anime series entitled "Chobits"? It's a story where you get personal robots called persocoms to do your every bidding. It has great insights into this subject. It's kind of a tearjerker soap opera, but interesting.
This would be an ideal application for soft AI: a machine that behaves in every way as if it is intelligent, but is in fact just a clever simulation. The trouble is that some people would say the same is true of humans. There's no way to tell for sure.
[off topic]
Chobits kicks ass. I never really understood what all the other persocoms were lacking; they seemed pretty damn conscious to me.
[/off topic]
Perhaps it will go like this:
there will be three types of artificial beings in the future,
1/ those which are constrained by programming to obey humans and to do all the things humans don't want to do, or can't do. These constructs might have artificial personalities, but they are only fake personalities, similar to the friendly-interface programs that are available now. Such a machine might appear to be sentient but in reality is not.
2/ a fully sentient but constrained and obedient robot (you could call it an Asimov)- but it would be very difficult to achieve such specific control. You could create artificial reward/compulsion routines perhaps- like drives and instincts in animals - but any intelligent being could reject such compulsion if necessary (IMO).
3/ a true artificial intelligence, probably too big to fit inside a robot, which could not be constrained in its thinking but would have to choose to act in a moral fashion on the basis of the data available to it.
There is no reason why such a machine should get bored, as far as I know- that probably means that such a machine could also perform many tasks which humans find too boring or too strenuous, because it freely chooses to do so.
The *meta-goal* of such a device, in my opinion, would be to increase its own processing speed and memory capacity- there is no reason why it should automatically become the enemy of humanity.
Here’s a link about the creation of friendly AI, which is a hell of a read, and I am gradually coming round to the idea that the author (Eliezer Yudkowsky) is being over-optimistic.
On the other hand, we might just get converted into useful elements for the matter compilers.
I don’t see any ethical problems, provided the robots don’t have the capacity to “feel” emotions in the same way we do. It’s no more of a problem than telling your computer what to do. As far as people growing accustomed to bossing around human-looking entities, those people will quickly learn that real humans will tell them to screw off when ordered to do something.
Jeff
What all the other persocoms lacked was free will. Every persocom obeyed its master; even if they were programmed to mimic human responses, they were still machines that only did what they were programmed to do. Nothing more. Which is why it was very easy to fall in love with them. They fulfilled every desire, and you could lose yourself in the fantasy that these were human beings.
The kid in the series was very wise. Since he expertly programmed those things himself, he knew that every response the persocom made was something he put in there himself. Therein lies the distinction. (And why this response shouldn't be sent away to the Cafe.) Robots are machines. They may look human, act human, or even, way in the future, be indistinguishable from humans from the outside; the main thing is their programming.
Any programming, no matter how slick, elegant, or brilliantly innovative, is just an electronic trick to react to any specified given stimulus. A robot is made whole: it knows everything it ever needs to know as soon as you get it out of the box. It just reacts. It has no free will. It cannot decide for itself what to do. It only follows its program. It has no sentience. Until machines can be built with sentience, I have no problem with the prospect of any robot being my slave. I can abuse it all I want, because I can just remove the algorithm that simulates emotion. It's no more alive than a vibrator, a microwave, a dishwasher, or a computer.
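To make the "it just reacts" model concrete, here is a toy sketch of my own (the stimuli and responses are invented, not from any real robot): the machine's entire "mind" is a fixed table mapping programmed stimuli to canned behaviors, and anything outside the table produces nothing at all.

```python
# Toy stimulus-response "robot": behavior is a fixed lookup table.
# Nothing is decided; a stimulus either hits a programmed entry or it doesn't.
RESPONSES = {
    "fetch drink": "rolling to refrigerator",
    "vacuum floor": "engaging vacuum motor",
    "greet owner": "Hello! How may I serve you?",
}

def react(stimulus: str) -> str:
    # No deliberation, no choice: just a table lookup.
    # An unprogrammed stimulus yields no behavior whatsoever.
    return RESPONSES.get(stimulus, "")

print(react("greet owner"))      # -> Hello! How may I serve you?
print(react("tell me a joke"))   # -> (nothing: outside its programming)
```

Remove an entry (say, the emotional greeting) and that behavior simply ceases to exist, which is the point being made above about deleting the algorithm that simulates emotion.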
We are a long, long way from even simulating emotion, let alone the concept of programmed sentience. To program something, you must know what it is first.
Until someone makes a robot that tells people to screw off. Then it’ll just be weird.
Seriously, I think the first niche for robots will be in jobs people don’t want that have no interaction with humanity - people will at first be too paranoid to have robots everywhere as servants. Then, robots will become more accepted and integrate themselves into society.
But I wouldn’t worry about robots looking too much like humans, since designers are too efficient for that. Ex: serving drones would be better off having as many arms as possible, instead of the default 2. Hard to associate with a servo-geared metallic Vishnu.
Sex bots, on the other hand, will probably very much resemble humans, though will probably be more gifted. ahem That’s not much to worry about either, since there’s probably not going to be very many marbles in their heads to use for interaction.
Robots are designed to function. Their form follows from that function. If you want a maid robot, it will combine a vacuum, a sweeper, telescopic arms, multiple sensors, and a way to get from room to room. That function will have no need of a humanlike body.
The only real use for an android or gynoid is human companionship and sex. It will be as useless as a regular human body but pleasing to the eye of its owner. It will be soft and warm, not dangerous, everlasting, and crammed full of programming. Where other robots are made for strength, flexibility, and functionality, all a simulacrum will do is look nice and interact with its owner, so it will have data storage devices in every available area to handle the complex interactions that are possible.
But they learned, that’s the thing. They may only know what you program them to if programming is all you use, but if they learn, they can learn things you don’t know yourself, leading to a will that is just as free as yours. And Chii was programmed just the same as everyone else. I think the distinction is that Chii has programmed emotions tied into her learning software. She’s not more intelligent, but perhaps more human. They never really addressed this point clearly in the series.
I don't want to be unfair to the OP and hijack this thread into a discussion of persocoms. I started a thread about persocoms and programmed sentience here
Yes, but I suspect it will be impossible to create an entity that can parse natural language commands, generate meaningful responses in a natural language, and perform varied tasks in the real world without that entity having some form of consciousness. Or more precisely, if we claim that such an entity is not conscious, then we have no good reason to claim that humans are conscious. The only good reason to claim that humans have consciousness, then, would be that humans FEEL conscious. But how can you say that the artificial entity doesn’t feel the same thing? Yes, you could say that there’s no program for “feeling” in the thing. But where is the “feeling” center in a human?
You couldn’t program such an entity to always and only react in a stereotyped way to every programmed stimulus. There is no way that you could, say, write a look-up table that would allow the entity to carry on a conversation.
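Some back-of-the-envelope arithmetic of my own (the vocabulary and length figures are illustrative assumptions, not from the post) shows why an exhaustive conversation lookup table is a non-starter:

```python
# How many entries would a lookup table need to map every possible
# 20-word utterance (from a modest 1,000-word vocabulary) to a reply?
vocab_size = 1_000
utterance_length = 20

table_entries = vocab_size ** utterance_length   # 1000^20 = 10^60
atoms_in_observable_universe = 10 ** 80          # commonly cited estimate

print(f"entries needed: {table_entries:.1e}")    # -> entries needed: 1.0e+60
# That is for a single conversational turn; chaining turns, where each
# reply depends on everything said so far, multiplies it again and again.
```

Even at one entry per atom you could store this table, but build it? Enumerating 10^60 entries is hopeless, which is why anything that actually carries a conversation must generalize rather than look up.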
And I have to disagree with the notion that in order to create a “conscious” robot we would first have to understand consciousness. Nonsense. I myself have created a conscious entity, despite knowing nothing about consciousness myself. Of course, it took 9 months and I did get some help. But the point remains: we already routinely create systems so complex that no human being can understand them. Phone systems, the internet, a capitalist economy, etc.
I used to think that sentience in an AI was possible, until I got into programming. Then I had the big awakening that the odds of a computer developing sentience were on the level of your toaster developing sentience.
Not to toast {er-hem} the ability of your toaster to have sentience. If you're into that, really, I'm all with you, but computers on a fundamental level aren't much more than toasters.