In this thread: http://boards.straightdope.com/sdmb/showthread.php?s=&threadid=152362 we agreed, for the most part, that it would be immoral to breed a human being that was incapable of experiencing consciousness; it wouldn’t know that it existed and wouldn’t feel love or hate or envy or fear or any other emotions or desires. Most people would agree that it’s immoral to withhold the potential (and deserved) consciousness of a human.
Now, assume that we know, in the future, what consciousness is and how to properly create a conscious robot (not one that just did what it was programmed to do, but one that grew and learned and thought on its own). If one were to build a conscious robot, and then build a second robot exactly like the first, but stopped one step short of ‘turning on’ its consciousness, would we be depriving the second robot of its deserved consciousness? After all, it would be immoral to do the same to a human, so wouldn’t it be immoral to do the same to a robot that has the potential for consciousness?
Where does one draw the line between a conscious robot and a conscious human?
I do not think that the assumption that denying a potentially conscious robot its consciousness is immoral follows from the first two threads. It is immoral to deny humans consciousness, because as humans, we recognize that consciousness is an intrinsic part of our being. We have a responsibility as humans to respect this innate self-awareness because we realize how much of our existence hinges on this basic knowledge of ourselves.
Following this, if we were to come across some other ‘being’ (robot or what have you), we would respect their consciousness as we respect our own. We would not shut off their self-awareness simply because we can. We recognize that we share the common bond of consciousness and would respect these robots’ consciousness as similar to that which is intrinsic in ourselves.
However, robots in themselves do not have consciousness as a prerequisite of their existence as humans do. Even with the extremely slight possibility that one day there might be a conscious robot, this fact would not prevent me from turning on and off my vacuum cleaner at will. A robot is simply a machine which we tell what to do; the fact that we (maybe) will one day be able to tell one to be aware of its own nature does not mean that we are bound by the essence of consciousness to bestow it on all that we possibly can. We are bound to preserving it in humans because we understand that is an undeniable fact of humanity, one which constitutes our existence. We have no responsibility to artificially bestow this fact of self-awareness on all our creations simply because we can.
i don’t think from your previous two threads that one must conclude that it would be immoral to breed unconscious humans to work in factories. i think that in order to consider causing pain immoral, the object receiving the pain must be able to feel the pain, or at least respond in a manner which indicates that it feels it. for example, mowing your lawn is not immoral.
also, the fact that we will know what consciousness is when this comes about does not mean that we can use that knowledge now to make moral judgements.
it seems to me that most of those who consider it immoral to create such factory workers do so because they feel it makes humans seem somehow lessened. consciousness is currently a requirement for being human, and being afforded all the rights and responsibilities that come along with it. to call unconscious beings human may, to some, seem to lower the value of being human. to me, the only reason brought up in the previous thread that explained why we wouldn’t want unconscious humans to work in factories is that we gain nothing from it; in fact we lose.
i think that a robot would be able to see how it was different from the other unconscious robot, so it would not be an offense to that robot to not make the second robot conscious. to consider it immoral is sort of like calling it immoral to deny the world a child by not having sex.
I’m not convinced that this was the case. I think that those who thought it was immoral were in the minority, and in any case tended to deviate from the conditions imposed in the OP.
I don’t follow your line of reasoning here. The people who thought it was immoral to create an unconscious human held that view because they believed human life to be ‘sacred’, for want of a better word. A robot is not a human.
Omitting to activate the potential consciousness of a robot is not wrong in my view. It would be different if its existing consciousness was deactivated (as in your first thread), but I’m inclined to agree with Ramanujan, it’s no more immoral than deciding not to reproduce.
Hmm… let’s change the question slightly then, shall we? Let’s forget about the whole bit of building a robot but stopping one step short of creating its consciousness.
Let’s pretend that:
A) We know what consciousness is.
B) We have the ability to build robots that are conscious.
C) We have the ability to breed humans that are not conscious.
Since we would have the ability to play around with consciousness in both humans and robots, what makes the consciousness of a human different from the consciousness of a robot? What makes a human inherently a human, and what makes a conscious robot not a human?