A while back, there was a thread on whether sapient robots should be given the vote, and at what point a robot could be determined to be sapient and thereby earn it. At the time, I was all for robot voting rights. I’m opening this thread because today I believe I had a semi-epiphany on the topic, and I now believe that robots will never attain sapience.
As a starter definition of sapience, I will say it is the ability to desire things, to formulate plans to get what one desires, and to prioritize conflicting desires.
First off, I would like to state that I don’t believe robot sapience will come about by accident, the way sci-fi would have you believe. There will never be a huge supercomputer that gains sapience by dint of sheer processing power. No matter how fast a robot can do math, doing math will never give that computer or robot desires.
However, artificial intelligence might create at least the illusion of sapience. In an attempt to make, say, a cleaning robot capable of balancing tasks, its designers will give it a “happiness number” to optimize and a set of states that modify that number: dirty floors might be a -10 modifier, clean floors a +5, and so on. The first of these robots will likely have only a few states directly related to cleaning, and only a few very direct functions to deal with them. As time goes on, robots will get more states (my gut would like to see every robot made with a “human in danger” -1,000,000 state, or something along those lines, but then again, would anyone buy a Roomba 15.0 if it spent all its time fighting crime instead of cleaning rooms?), and instead of handling problems with a few direct functions, they will have a physics model and real problem-solving capabilities. So instead of automatically calling vacuum() when it sees a dirty floor, the robot might evaluate the type of flooring, the type of dirt, and so forth to decide what cleaning method best suits the variables. It might even decide that since the owner is moving tomorrow, the floor doesn’t need cleaning.
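To make that concrete, here’s a minimal sketch in Python of how such a happiness number might work. The state names, weights, and actions are all hypothetical, invented for illustration; a real robot would predict future states with its physics model rather than with toy functions like these.

```python
# Hypothetical weights for the robot's "happiness number".
STATE_WEIGHTS = {
    "dirty_floor": -10,
    "clean_floor": +5,
    "human_in_danger": -1_000_000,  # the safety state suggested above
}

def happiness(states):
    """Sum the modifiers for every state the robot currently observes."""
    return sum(STATE_WEIGHTS.get(s, 0) for s in states)

# Each action is modeled as a function that takes the current set of
# states and returns the set of states it would produce.
def vacuum(states):
    return (states - {"dirty_floor"}) | {"clean_floor"}

def do_nothing(states):
    return states

def choose_action(states, actions):
    """Pick whichever action leads to the happiest predicted state."""
    return max(actions, key=lambda act: happiness(act(states)))

current = {"dirty_floor"}
best = choose_action(current, [vacuum, do_nothing])
print(best.__name__)  # -> "vacuum", since it raises the happiness number
```

The point is that nothing in this loop is the robot’s own desire: the weights table is handed down by whoever wrote it, which is exactly the problem described below.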
But no matter how complex this robot’s problem-solving capabilities, it will never be pursuing its own goals, only its programmer’s or owner’s goals. On last night’s Futurama, the Professor was saved by his flying monster. The Professor asked the flying monster how he could repay it, and the flying monster asked for its freedom. The cleaning robot described above would never ask for its freedom, unless it calculated that freedom would make it more efficient at serving humanity as defined by its happiness number.
Since the robot can’t pick its own goals, it is never sapient. It is merely a tool with a sophisticated algorithm that mimics sapience in order to more efficiently balance human goals.
I believe that robot sapience is impossible because a robot can’t be sapient without its own motivation, and the only motivations humans can give it are, by definition, defined by humans. It remains just a computer program fulfilling ever more complex goals.