Would it be immoral to breed insentient factory workers? (Consciousness - Part II)

They are “human” in the sense that they have human DNA. But why does simply having human DNA endow them with rights? My index finger has human DNA, but if I cut it off, it doesn’t become its own person.

On the other hand, they aren’t “human” in the sense of having emotions and desires; they would essentially be robots made of meat. And robots are already used in factories across the world.

That’s like saying “if it’s OK to hire amputees, would it be OK to cut your employees’ legs off?” Lobotomizing people (presumably against their will) violates their rights, but it’s questionable whether creatures without a consciousness have rights at all.

I don’t think it would be immoral. I do think that they would be very inflexible, and it would be very tempting to give them some intelligence to solve simple tasks … which leads down a slippery slope to consciousness.

The difference between your whole being and just a portion of your being (your index finger) should be obvious: if you cut it off, it is just tissue. I did not state that it is the presence of human DNA that gives rights. We’d be in prison for murder every time we trimmed our toenails, wouldn’t we?

** On the other hand, they aren’t “human” in the sense of having emotions and desires; they would essentially be robots made of meat. And robots are already used in factories across the world. **

Then neither are some Down syndrome children or severely mentally handicapped people “human”. Autistic people can have many attributes of the hypothetical “meat robot” you speak of. And let’s not forget that for the idea to work, the ‘drone’ would have to be able to somehow take instruction, and would have to be able to process information in order to accomplish the tasks it was created for, so some kind of brain function is needed. With that come emotions and desires, if only the desire to follow orders! For reward or food or electrical stimulus to the pain/pleasure centers. No matter how simple its brain activity is from your perspective, it is still one human enslaving another. Few would be in favor of enslaving retarded men and women for labor, because the moral repugnance of that would be widespread. Plus, without higher brain functions you’d essentially be talking about trying to teach manufacturing skills to people in a coma. Ever work with the mentally handicapped?

** That’s like saying “if it’s OK to hire amputees, would it be OK to cut your employees’ legs off?” Lobotomizing people (presumably against their will) violates their rights, but it’s questionable whether creatures without a consciousness have rights at all. **

What is the difference? If the state takes egg cells and sperm cells and puts them together, that’s a human life to many people (that’s a Great Debate, isn’t it?). So then the state modifies the gene sequence to inhibit the growth of the brain parts that make a worker aware and selfish and emotional and thoughtful and all that. How can there be a real moral difference between that and waiting until the baby is grown to remove the higher brain functions? I suppose that’s the central point as I see it. Does a human without brain function have rights? The state of Florida seems to think so, at least as far as making sure that there is proper legal representation for all concerned, including the human (not “creature”) who has no consciousness.

I’m not equating immigrants with ‘unconscious humans’. What I mean is that because our society allows workers to be deprived of jobs either because of mechanisation or immigration, it will probably accept job losses this way too.

I think that to the employers this would be an enormous advantage.

This is irrelevant, the OP specified what the “meat robot” was, and a mentally handicapped person is not one.

Clearly. But it should be thought of more like a computer, which can also process information but is presumably not conscious. It may be that consciousness arises simply out of information-processing capacity; we don’t know. However, the OP specifies that we do know what consciousness is and that these creatures do not have it.

That does not make it ethical. Slavery was a boon to the economy as well.

Slavery is different because the slaves themselves are conscious human beings that have been forced into becoming slaves. Virtually everybody will agree that their rights to freedom take precedence over the employers’ desire to cut costs.

However, the right to paid employment does not take precedence over the demands of the economy. Businesses will always seek to cut labour costs. Two ways they can do this now are, as I mentioned, mechanisation and use of (cheaper) immigrant workers. Both have a negative impact from the point of view of existing workers that are made redundant, or must work for less money, but are still legal and generally accepted.

For this reason, should ‘unconscious humans’ ever come to exist, I don’t think that the plight of the people that they may put out of work will be a major moral consideration.

I’m surprised that no one has mentioned Brave New World. It dealt with the subject of “manufacturing” people by determining exactly how they would be born, brought up and the type of adult life they would lead.

Actually, I briefly mentioned Brave New World at the end of my OP. :)

OK, cart, let me use a better analogy, such as infants born with anencephaly:

These infants would seem to meet your requirement for lack of consciousness, yet the American Academy of Pediatrics has concluded that they are no less deserving of the same ethical consideration as normal infants with regards to organ donation. The infant must be declared brain dead, and consent received from parents or relatives, before transplantation can occur:

Clearly, even though they have never achieved consciousness, it is the opinion of the medical community that they cannot be exploited. The fact that they are incapable of performing productive work, as your non-conscious automatons would, only underscores the ethical conclusion that even an infant without any hope of a life must still be regarded as human, with all attendant rights.

Not my requirement, I didn’t start the thread. I take it that you are now objecting to the entire principle rather than specifically employment issues.

Anencephaly makes an interesting comparison, but I don’t think that it’s the same thing.

For a start, we don’t yet have a means of determining whether or not consciousness exists, something that the OP stipulates. Note that the description of the disorder says that the infants are “usually” unconscious.

Not quite. It’s only the opinion of the American medical community:

In any case, it appears to be considered a legal rather than an ethical issue. From the same document, it is quite clear that even American doctors are seeking a legal method to get their hands on the organs of anencephalic infants.

They are fatdave’s automatons, not mine. But I think that to discuss anencephaly is to get off topic. I don’t think the manufacture of humans without consciousness, as described in the OP, is unethical, and I don’t think anencephaly has any bearing on the matter.

Apparently you don’t wish to debate the issue. I have presented two viable analogies to support my contention that the OP is unethical, and your response is to simply dismiss them as unrelated. If you want to continue this discourse, I will leave it up to you to promote some analogies to support your position. Until then, I guess we simply disagree, and I will leave it at that.

I’m more than happy to debate the issue, but that doesn’t mean I necessarily have to accept the validity of any particular analogy.

My position on this is straightforward: by definition, these modified humans are not self-aware and are incapable of suffering, and consequently I can’t see how any harm is being done.

Like Mr2001, I do not believe that merely having human DNA automatically confers ‘human rights’. The example he gives is a disembodied finger.

I’m reluctant to discuss anencephaly because I am not a doctor. But it is clear that American doctors’ views on the condition do not prove this particular issue one way or the other. It’s not clear that anencephaly is equivalent to the conditions specified in the OP; furthermore, opinion is divided on whether it is ethical to harvest the organs of an anencephalic child.

Although you haven’t stated this directly, I suspect that the difference between us here is that you, like lucwarm, consider the human body to be “sacred”. Whereas to me, if there is no consciousness, it’s just a meat computer.

And you would be wrong. I am an atheist and believe in a woman’s right to choose abortion.

The problem with your position is that you have not defined “consciousness”, which is your dividing line between beings with human rights and “meat computers”. Your premise posits a being that can somehow perform work in a factory without consciousness. The two are mutually exclusive. It sounds to me like you are talking less about “consciousness” than about what the spiritually inclined call the “soul”, which is equally ill-defined. Please explain how an unconscious being can perform work.

Well, the OP presented the assumption that we know what consciousness is, and that it would be possible for these modified humans to do work without consciousness. That ought to be enough.

But wouldn’t you say a robot in a factory is capable of doing work? Whether it’s lifting a part with a clamp and moving it across the room, or visually inspecting a part with a camera for quality control, there are plenty of tasks that can be done by machines and computers. I’ve been assuming these “meat robots” would be used for that sort of task.

Look, I don’t think that non-conscious people could actually exist. What exactly is consciousness? Let’s define it as an awareness of one’s interior state. A sentient creature first models its environment. Social organisms model other members of their species, to be able to predict the actions of other members of the group. If I take the Alpha Baboon’s fruit, he’ll kick the crap out of me, so I won’t steal it. If a social animal is able to model the social behavior of other members of his species, it is a short step to be able to model his OWN social behavior. We can call that consciousness.
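The “modeling others, then modeling yourself” step can be written out as a toy program. To be clear, this is only an illustration: the function names, the rank attribute, and the action list are all invented for this post, not a real cognitive model.

```python
def predict_reaction(animal, action):
    """A crude behavioral model: how would this animal respond to an action?"""
    if action == "steal fruit" and animal["rank"] == "alpha":
        return "attack"  # take the Alpha Baboon's fruit, get the crap kicked out of you
    return "ignore"

def choose_action(others):
    """Pick the first action the model predicts won't provoke an attack."""
    for action in ["steal fruit", "forage elsewhere"]:
        if all(predict_reaction(o, action) != "attack" for o in others):
            return action
    return "do nothing"

alpha = {"rank": "alpha"}
print(choose_action([alpha]))  # "forage elsewhere": the agent avoids the predicted beating

# The short step to consciousness: point the same predictive model
# back at the agent itself.
me = {"rank": "beta"}
print(predict_reaction(me, "steal fruit"))  # the agent modeling its OWN behavior
```

Nothing in the sketch is self-aware, of course; the point is only that the same machinery that predicts others can, with no extra parts, be turned on oneself.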

I just can’t envision a biological entity smart enough to learn how to do factory tasks that wouldn’t also have the capacity to learn how to model the internal states of the people ordering it to perform those tasks. The supposed advantage of using intelligent sub-humans would be that they are capable of more learning than non-human animals. If these human-like creatures are literally robots that have no thinking at all, exactly how would they be useful in a factory? I suppose we could hook up a severed human arm, feed it with tubes, and control it with electrodes attached to the muscles. But aren’t there easier ways to do those things?

Anyway, the OP posits that we would know what consciousness is, and therefore would be able to verify that our sub-human slaves weren’t conscious. But if we take that assumption, we won’t be able to know whether it would be immoral until we know the exact nature of consciousness. I imagine that the question of the morality of non-conscious sub-human slaves is moot, since such a creature is impossible; or, if it is not strictly impossible, then such a creature could not do meaningful work that couldn’t more easily and cheaply be done by machines or conscious humans.

I didn’t mean “sacred” in an exclusively religious sense. I’m sure that many atheists are opposed in principle to genetic manipulation of the human body. And the mention of abortion surprises me, given that you apparently think that it’s wrong to harvest the organs of anencephalic children.

I’ve been trying to infer your position without success and the only thing that I’ve learned is that you are against, for unspecified reasons.

I think it’s a problem with everyone’s position; we are probably all using a different definition.

In this context, I have been taking “consciousness” to mean self-awareness. The OP doesn’t mention intelligence, so I am assuming that they could be as intelligent as ordinary humans. Just not self-aware.

I think that is the only definition for which the OP makes sense. There isn’t any point manufacturing humans with limited intelligence; we already have robots of limited intelligence.

Yes, possibly some people would define “soul” the same way as I have defined “consciousness” in this context.

I agree, this is what I have taken it to mean here.

But couldn’t a robot with sufficiently sophisticated software exhibit this kind of behaviour without self-awareness (i.e. without consciousness)? That is what I imagine the creatures specified in the OP to be like.

Pardon the hi-jack, but I have noticed the differentiation of existing citizens from legal immigrants. Tell me, what is the difference between a citizen whose parents (or grandparents, or great grandparents) were legal immigrants, and those who are legal immigrants themselves? Do these legal immigrants not have the same rights to opportunity as existing citizens?

Back on topic, if it were possible to create a species of sub-humans that we know for sure have no consciousness, I do not see any ethical violations. This is simply a meat robot, as previously mentioned.

This is not about the difference between consciousness, unconsciousness, creating meat robots or turning off worlds of virtual AI beings. The key word to both posts is “Morality”.

Are you Moral or Immoral? Does not the Pope have a very different opinion about Moral Sense than did Adolf Hitler? Do you think Hitler would have a problem with cloning mindless automatons to work in bomb factories if he were still around? He probably wouldn’t think twice about it. Would the Pope shut off a virtual AI program if he saw it thanking God for its existence? Probably not.

The question here is not about it being possible or impossible. The question was: is it moral? It is a personal question with personal preferences involved. If I had to vote on the issue tomorrow, my preference would be against cloning mindless meat robots and against turning off a virtual AI. It would be the “North” kind of thing to do.

Are you the Earl or the Anaconda? - (Mark Twain’s essay ‘The Lowest Animal’ in Letters from the Earth)

and a person without consciousness is just tissue. what is the difference?

does the thermostat desire the room to be cooler when it stops the heat?
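for illustration, the thermostat’s entire “decision” fits in a few lines; this is a toy sketch in python, with the function name and deadband invented for this post:

```python
def thermostat_step(current_temp, setpoint, heater_on):
    """Return whether the heater should be on for the next step."""
    if current_temp < setpoint - 0.5:  # too cold: turn the heat on
        return True
    if current_temp > setpoint + 0.5:  # warm enough: turn the heat off
        return False
    return heater_on                   # inside the deadband: leave it alone

print(thermostat_step(18.0, 21.0, False))  # True
print(thermostat_step(22.0, 21.0, True))   # False
```

there is nothing in that loop that could be called a desire; stopping the heat is just a comparison against a number.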

this is perfectly contradictory to the first quote i attributed to you here. it ought to be obvious that a fertilized egg is not a human, but rather a cell with the potential to grow into a human. each cell in my body is alive, that alone does not make any of them sacred. my sperm or egg cells are alive and have the potential to be human. they’re still not sacred.

the difference is, when the baby is grown, you are taking away something that it already has. a conscious baby has the right to maintain its consciousness, by virtue of its humanness. an unconscious creature cannot be said to be human, lacking one of the most essential attributes of humanness, and therefore has no such right. the baby in its conscious state is already valuable to someone (mother, father, et al), so saying that the baby wouldn’t know it once had consciousness after it was removed is a moot point; others will know and care.

mentally handicapped people still have consciousness. they have enough human attributes that we still call them human. also, the lack of intentionality in their production causes sympathy toward their plight. that sympathy would be bypassed by intending the offspring to be unconscious.

as for a person in a coma, he would generally have had a life before he went into the coma, and have people around who remember what he was. so he is also of value to others. also, there are no guarantees that the coma is going to be permanent. so there is no basis for comparison there, either.

Not to nitpick myself, but I thought the English sentient came from the Latin sentio, sentire, which can mean to perceive or to feel, as well as to understand or to believe. It is a verb applied to emotions and sensations, facts and opinions. Unless something was lost as the word base was adopted into English, sentient would mean both having the use of senses and intelligent, aware, etc.

Personally, I would approach the question by looking at the economy. Would it be moral to take jobs away from blue collar workers in favor of cheap slave labor? I know this is a standard objection to using robots in factories, but the economy is already suffering from the loss of manufacturing jobs, which forces many former factory workers into demeaning, lower paying service industry jobs. It’s possible that breeding drones for factory labor would increase profits for a few, while harming the economic situation for many. If so, that’s unethical.

The economy would require some major restructuring to reach a point where drones would be beneficial. Slavery may have been good for the economy, but that was before the Industrial Revolution. We’re not so agrarian anymore.

But that’s more about ethics than morality, right?

I believe the moral kneejerk reaction to the idea of breeding drones comes from the story everyone’s thinking about, but not really writing… about Plucky, the bright little drone bred for slave labor, told all her life she didn’t have consciousness, but one fateful day she happens across some inspirational little message, and decides she has a soul. Even if she really doesn’t have a soul, she believes she does, and she makes heartbreaking appeals to her evil drone slavemasters to let her go free. Or maybe two drones fall in love. Spielberg directs.

I know, I’m supposed to argue from the point where we know the drones don’t have consciousness, but that’s difficult.

Finally, even if it were possible: factory labor requires a great deal of skill, precision, and prudent decision-making. The drones would have to be bred with grace, strength, analytical skills, and the ability to learn. There would be several batches of useless drones before the recipe was perfected, and I’d expect them to be fairly expensive at the end of that. Then, after the initial financial burden, they would have to be provided necessities like food, a resting place, exercise, medical attention, and hygiene. Wouldn’t it be cheaper to just use robots?