Put a value on a human life

Sentience is hard to genuinely program into a robot, and it is not terribly valuable to the robot's owner; if anything, it has negative value.

Watson could beat the Jeopardy champions, but it is not sentient. We could presumably build a Dr. Robot that would outcompete all human doctors with its knowledge and skill, then clone Dr. Robot so there were many Dr. Robots. All without sentience. Dr. Robot isn't going to care when you turn it off. Should it?

I like my sentience. Other sentient beings may be competitors. Highly skilled, intelligent, non-sentient robots aren't competitors; they can be owned and used by me. So they have an advantage in usefulness, TO ME, over other sentient beings with their own interests.

A sentient being in danger of falling outside the robot-owning class may have an interest in "converting" the intelligent robots to sentience, precisely to make them less effective servants. I suppose if such people could create a bug or virus that would do this to the robots, maybe they would. Sentience is not going to be useful to the robots' owners, and the owners will be trying to prevent it.