Robot or Computer Sapience

A while back, there was a thread on whether sapient robots should be given the vote, and at what point a robot could be determined to be sapient and thereby get the vote. At the time, I was all for robot voting rights. I’m opening this thread because today I believe I had a semi-epiphany on the topic, and now believe that robots will never obtain sapience.

As a starter definition of sapience, I will say it is the ability to desire things, to formulate plans to get what one desires, and to prioritize conflicting desires.

First off, I would like to state that I don’t believe that robot sapience will come about by accident like sci-fi would have you believe. There will never be a huge supercomputer that, by dint of a huge amount of processing power, gains sapience. No matter how fast a robot can do math, doing math will never give that computer or robot desires.
However, artificial intelligence might create at least the illusion of sapience. In an attempt to make, say, a cleaning robot capable of balancing tasks, the robot will be given a “happiness number” to try to optimize, plus a set of states that modify that number: say dirty floors are a -10 modifier, clean floors are +5, and so on. The first of these robots will likely have only a few states directly related to cleaning, and only a few very direct functions to deal with them. As time goes on, robots will get more states (my gut would like to see every robot made with a “human in danger” -1,000,000 state, or something along those lines, but on the other hand, would anyone buy a Roomba 15.0 if it spent all its time fighting crime instead of cleaning rooms?), and instead of dealing with problems with a few direct functions, they will have a physics model and problem-solving capabilities. So instead of automatically calling vacuum() when it sees a dirty floor, the robot might be able to evaluate the type of flooring, the type of dirt, and so forth to decide what cleaning is best suited to the variables. It might even be able to decide that since the owner is moving tomorrow, the floor doesn’t need cleaning.
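
A minimal sketch of that “happiness number” scheme might look like the following. To be clear, the state names, modifiers, and actions here are all invented for illustration; no real robot works off this exact table.

```python
# Toy version of the "happiness number" idea: states carry modifiers,
# actions transform states, and the robot greedily picks whichever
# action leads to the highest-scoring outcome. All values are invented.

HAPPINESS_MODIFIERS = {
    "dirty_floor": -10,
    "clean_floor": +5,
    "human_in_danger": -1_000_000,  # the hypothetical safety state
}

ACTIONS = {
    "vacuum": {"dirty_floor": "clean_floor"},    # vacuuming makes a dirty floor clean
    "call_for_help": {"human_in_danger": None},  # clears the danger state
    "idle": {},
}

def happiness(states):
    """Sum the modifiers of every currently observed state."""
    return sum(HAPPINESS_MODIFIERS.get(s, 0) for s in states)

def apply(action, states):
    """Return the states that would result from taking an action."""
    result = set(states)
    for before, after in ACTIONS[action].items():
        if before in result:
            result.discard(before)
            if after is not None:
                result.add(after)
    return result

def choose_action(states):
    """Greedy one-step optimizer: pick the action whose outcome scores highest."""
    return max(ACTIONS, key=lambda a: happiness(apply(a, states)))

print(choose_action({"dirty_floor"}))                     # -> vacuum
print(choose_action({"dirty_floor", "human_in_danger"}))  # -> call_for_help
```

Note how the -1,000,000 state dominates everything else the moment it appears, which is exactly the “would anyone buy it?” trade-off above: the safety state swamps the cleaning states by construction.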

But no matter how complex this robot’s problem-solving capabilities, it will never be going after its own goals; it will only be going after its programmer’s or owner’s goals. On last night’s Futurama, the Professor was saved by his flying monster. The Professor asked the flying monster how he could repay it, and the flying monster asked for its freedom. The cleaning robot described above would never ask for its freedom, unless it thought freedom would make it more efficient in serving humanity as defined by its happiness number.

Since the robot can’t pick its own goals, it is never sapient. It is merely a tool with a sophisticated algorithm that mimics sapience in order to more efficiently balance human goals.

I believe that robot sapience is impossible because a robot without its own motivation can’t be sapient, and the only motivations humans can give it are defined by humans; it remains just a computer program fulfilling more complex goals.

What makes you think that a human’s goals appear ex nihilo either? Instinct, early experiences, things we are taught or told; all our drives and desires come from somewhere. And a robot/program that could be considered a person would have to have a lot of self-programming/program-altering capability, or no one would argue that it’s anywhere near a person.

You define an extremely rigid machine, one that would either not act sentient or would act like a heavily mind-controlled person. Hardwire a rigid definition of emotions and behaviors into me and I wouldn’t act much like a person either.

How do you know it’s not sapient, but controlled? A mind-controlled slave, in essence? And if it can’t change its goals, it isn’t likely to even fake sentience very well.

We’re perfectly capable of setting up programs that make random or pseudorandom choices, or modify themselves, or evolve Darwinian-style. The rigid setup you describe isn’t the only way a machine can run.

Edit: And by the way, this is an argument I’ve seen any number of times.

I completely agree with you that robot sapience won’t just magically occur (like in the movie Short Circuit). I still don’t think it’s impossible for robots to ultimately get to that point, though.

Ultimately, robots are just working with a set of heuristics. The artificial intelligence of the robot is only as good as its heuristic algorithms. Roombas actually use some fairly impressive heuristics to determine where and how they should clean a particular room.
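
For a sense of scale, even a deliberately dumb coverage heuristic goes a surprisingly long way. The sketch below is a toy, not iRobot’s actual (proprietary) algorithm: a grid-world robot that drives straight and turns in a random direction whenever it hits something.

```python
import random

# Toy "bump and turn" coverage heuristic in a grid world. The grid,
# the movement model, and the step budget are all invented for
# illustration; this is not how a real Roomba is programmed.

def bounce_clean(grid, start, steps=1000, seed=0):
    """grid: set of open (x, y) cells. Returns the set of cells visited."""
    rng = random.Random(seed)
    directions = [(0, 1), (1, 0), (0, -1), (-1, 0)]
    pos, heading = start, rng.choice(directions)
    visited = {pos}
    for _ in range(steps):
        nxt = (pos[0] + heading[0], pos[1] + heading[1])
        if nxt in grid:
            pos = nxt
            visited.add(pos)
        else:
            heading = rng.choice(directions)  # "bounce": pick a new heading
    return visited

room = {(x, y) for x in range(10) for y in range(10)}
covered = bounce_clean(room, (0, 0))
print(f"visited {len(covered)} of {len(room)} cells")
```

Real robots reportedly layer smarter behaviors (spirals, wall-following, dirt sensing) on top of something like this, but the point stands: the intelligence is exactly as good as the heuristics.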

I think sapience would have to be an actual goal of a robot’s creator in order for it to actually be achieved. (That is, Roombas won’t be debating for suffrage unless their creator told them to.) Ultimately, though, I don’t expect those types of heuristics (reliable ones, anyway) to emerge soon, considering we’re still trying to figure out how human brains make complex decisions.
LilShieste

Your Roomba 15.0 is still kinda limited, but the new and improved Roomba 49.2 will be capable of not only vacuuming floors of all kinds, but also removing cobwebs, dry-cleaning curtains, changing lightbulbs, and everything else that results in a cleaner house. It knows your schedule and interprets your mood. It knows that on Friday nights you don’t give squat about whether the floor is clean, but you need an empty wastebasket for all the beer bottles you are about to empty.

Problems start when it decides that it is best if it doesn’t let your uncle Roger in, because it makes you so mad how he cleans his shoes on your shower curtain. It knows you so well, and all it cares about is home cleanliness.

And it will be so much happier if Bob Smith is elected president. His campaign is based on lowering tariffs on exotic hardwoods, which are so much easier to clean than domestic tiles. Now, it doesn’t have a vote, but it can affect the election if it manages to eliminate one vote for Pete Jones. Aunt Thelma likes Pete Jones. She uses a walker. She is coming tomorrow for dinner. Time to polish the floor. Real good.

I believe there’s a difference between organically grown desires and having desires hardcoded in. It’s the difference between eating because you are hungry, and eating because someone has a gun to your head.

Exactly my point in the second paragraph. All we have are rigid machines.

As far as self-programming goes, how? Say you wanted to program a sapient machine. Would you have it self-program with a destination in mind? If so, aren’t you just giving it goals, but in a more roundabout fashion? If you have it self-program pseudorandomly, isn’t it then controlled by the pseudorandom generator?

How do you know the chair you are sitting on isn’t sapient, but lacks any form of motion or way to express it?

Is a robot controlled by a pseudorandom generator any more free than one controlled by hardcoded commands?

Even with the Darwinian style, you would have to hardcode the desire to survive, and you’d eventually get a bunch of robots that are good at surviving.

Yeah, I’m starting to get the feeling I’m just debating whether or not there’s free will, but by proxy.

Roomba 49.2 seems to be very good at achieving the goals it is defined to achieve by its human masters.

The difference is between evolution doing it and a human doing it.

You are operating under an obsolete view of the mind. Any mind, including ours, needs some basic, premade goals/instincts to begin learning. Otherwise it won’t do anything; won’t learn, won’t act. No mind starts out as a blank slate.

Because it seems to lack the hardware necessary to sustain a mind of any sort.

Define free.

No, you wouldn’t need to. Just design the program with a goal - any goal, really - and let evolution take its course. If survival happens to be useful, then a survival instinct will evolve. Probably; if they all break before that happens, you’ll need to build more. Or, if you aren’t being anal-retentive about letting evolution do as much as possible, go ahead and kickstart the process by putting in a few basics like survival as a goal.
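
The point is easy to demonstrate in simulation. In the sketch below (all numbers invented), fitness rewards nothing but cleaning - the arbitrary goal - yet cautious genomes win out, because a broken robot cleans nothing further. Survival emerges as an instrumental value without ever being coded in.

```python
import random

# Selection pressure rewards ONLY tiles cleaned, never survival.
# "Recklessness" trades cleaning speed against breakdown risk, so a
# survival instinct (low recklessness) evolves as a side effect.

rng = random.Random(42)

def fitness(recklessness, lifetime=100):
    """Tiles cleaned before breakdown."""
    cleaned = 0.0
    for _ in range(lifetime):
        cleaned += 1 + recklessness          # reckless robots clean faster...
        if rng.random() < recklessness / 5:  # ...but risk breaking each step
            break                            # a broken robot stops cleaning
    return cleaned

population = [rng.random() for _ in range(50)]  # genome = recklessness in [0, 1)
for generation in range(200):
    survivors = sorted(population, key=fitness, reverse=True)[:25]  # select on cleaning alone
    population = [min(1.0, max(0.0, p + rng.gauss(0, 0.05)))        # mutate
                  for p in survivors for _ in range(2)]

print("mean recklessness after selection:", sum(population) / len(population))
# Typically prints a value close to zero: caution has "evolved"
# even though survival was never part of the fitness function.
```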

To get in before the rush, I don’t believe in free will.

Hardly.

And why should organically grown desires differ from mechanically grown desires? If it’s going to be even seemingly sapient, it’s going to have to be able to learn and change and adapt. That includes adapting its desires. It’ll probably start with some preprogrammed default state, but that’ll be gone pretty soon.

Of course, that only applies to a machine that at least appears sapient. I doubt that will apply to an “intelligent” Roomba, since a Roomba doesn’t need to be sapient, it needs to clean.

The Roomba will have precoded desires that can’t change. The sapient machine won’t.

Just as we are pretty good at achieving the goals we are defined to achieve by our evolution.

Let’s start climbing up the ladder of biology. Are single-celled organisms sapient? Free? Different from robots? Following a program?

Seems to me this pretty much applies to human babies and the consequences of sexual fertilization as well. We all share a common human firmament, within varying ranges, after all.

As a minor aside, wouldn’t it make the most sense for the first AI to basically just be a copy of an actual person’s brain? Say, scan my brain when I’m really hungry and about to eat a big slice of warm apple pie and then digitally/mechanically recreate every atom and then wind up with another person named David who is confused as to where his apple pie went. And then go from there, tweaking different things to learn how to then make something from ‘the ground up.’ Or am I talking crazy?

Another aside: can someone recommend any good recent books about the mind/brain in general? I have Steven Pinker on my to-read list…

We won’t get anywhere in this discussion unless we know what makes people “sapient”.

I don’t know much about this subject, but I have read Gödel, Escher, Bach by Douglas Hofstadter, which postulates that consciousness is an emergent phenomenon associated with complex systems like the human brain.

I don’t see why it isn’t possible in a machine of some sort, or why it couldn’t even occur by accident. Frankly, I think animals are just machines, and somewhere along the line one of our ancestors “accidentally” developed consciousness without the help of any creator. Personally, I see no reason why this couldn’t happen again, and indeed I would almost count on it happening again in any machine with a suitable level of complexity.

But until we know more about the specifics of the nature and origin of our own consciousness, I don’t think we can predict where and when other consciousnesses will develop.

Ok, I think I have to recant a bit.

It does seem that I will have to admit that it is theoretically possible for a robot to achieve the same level of sapience as a human. It is hardly fair of me to argue that robots can’t attain a level of freedom that I don’t believe humans have attained.

For what it’s worth, I still don’t think robots should ever get the vote, because if sapient robots are possible, it would be no more difficult to design an extremely-close-to-sapient robot that has an “always vote Republican” override.

I suspect that it’s just a matter of time until robots achieve a level of sapience equivalent to that of humans. At that point in time, the only issue is whether or not robots can muster the proper political or military might to assert their rights. It will be an interesting dynamic as to how our robotic brethren deal with humans, cyborgs, and their own future superior progeny. Of course, we will try to pass laws to prevent this eventuality, but some bastard evil scientist is going to subvert them.

I know that many SF writers have speculated on our robotic future, but I suspect that reality will be stranger than fiction. I’m hoping that robots will be nicer than humans. The best attitude is that robots are our offspring and our legacy and that we can look at them like the rebellious but smarter adolescents that they are.

Wait, isn’t that how you get most Republicans/Democrats/Conservatives/Labourites nowadays? Only in that case you have a social network doing the programming (but still, IMHO, in a directed way) rather than a programmer. Religion same-same, food tastes same-same, etc.

The key is to find out exactly what it is that makes some people override their social programming. Find that, and sentient robots are just days away.

Also, you said:

That describes the ultimate hardcoding of any organism ever, when you view “surviving” through the lens of the Selfish Gene theory.

Personally, I think the best hope for sapient man-made machines will be if we are able to construct something that is a reasonable analogue of a brain - not designing the mind bit by bit, but designing a machine that can create a mind (that’s what our brains do; we start our lives without a mind and we make one for ourselves).

Of course there’s always the philosophical position that we can’t ever know whether something is sapient, or just very good at pretending to be (and indeed it’s questionable whether those two things mean anything different, when the pretence is very advanced). Perhaps the closest we can get is to start with the Turing test and add on a few specifics - for example, in order for me, personally, to be satisfied that a machine intelligence is experiencing some kind of genuine inner thought-life comparable to humans, I think I’d want to see examples of it making intelligent choices and expressing intelligent desires that were novel - that couldn’t be traced back to the intentions of its creator, or suggestions made by its trainers.
Example: I’d be impressed if, all by itself, it manifested tastes, interests, dislikes, maybe even passionate obsessions with something - if it suddenly decided it was going to collect seashells, stuff like that.

But if it comes, I think it’s going to be through the construction of a self-organising system like the human brain - something that can build its own mind, like a baby does. That does mean that such a machine-mind isn’t just going to roll off the conveyor belt and start playing chess - it will have to learn and develop - it will have to be encouraged. It might turn out not to even like chess - and in the bigger picture, that would be a fantastic thing.

For all we know, the only goal its human masters defined for it was to keep the floors immaculately clean. The real question is: why did its masters design it to be able to learn a language like English?

As MrDibble already alluded to: that sort of thing already happens with fully sapient human beings.

Since robots could potentially be created in such a way that they evolve much faster than humans, though, I could still see this becoming a problem. We may have to start up a Zero Population Growth program for robots at that point. :)
LilShieste

No, I don’t think most people have an unbreakable override to always vote for one party, and if they do, it is because they strongly agree with that party’s politics. A robot with an override might be pro-choice, anti-war, pro-gay-marriage, pro-gun-control, pro-welfare, and pro-separation-of-church-and-state, and still be forced by the override to vote Republican. And any industrialist could have a factory mass-producing these things.
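
The scary part is how trivial such an override is to write compared with everything around it. A sketch follows; the party platforms and the toy “reasoning” layer are invented for illustration.

```python
# The machine can reason about issues as elaborately as you like;
# a one-line hardcoded check throws the conclusion away at the ballot.

PLATFORMS = {
    "Republican": {"gun_control": False, "welfare": False},
    "Democrat":   {"gun_control": True,  "welfare": True},
}

def deliberate(views):
    """Stand-in reasoning layer: pick the party agreeing with most views."""
    return max(PLATFORMS,
               key=lambda party: sum(views[i] == PLATFORMS[party][i]
                                     for i in PLATFORMS[party]))

def cast_vote(views, override="Republican"):
    choice = deliberate(views)  # arbitrarily sophisticated reasoning...
    return override or choice   # ...silently discarded by the override

robot_views = {"gun_control": True, "welfare": True}
print(deliberate(robot_views))  # -> Democrat: what the robot "believes"
print(cast_vote(robot_views))   # -> Republican: what it actually does
```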

I think you are trivializing the programming task needed to make a sentient machine. You’re definitely right that it won’t happen by accident (one of my pet peeves about SF), and it won’t happen if we just program the robot to accomplish given tasks. Animals that are genetically programmed to accomplish various tasks do them quite well, and even evolve better ways of doing them, but are not intelligent.

An intelligent robot will have to be able to change its own programming - so no worries about it being programmed Republican. It must also get input not just from external sensors but from internal ones. It needs to understand how it’s doing, and what it’s thinking, at some high level - not specific instructions, but certainly processes. It’s going to need very general goals, and must have the ability to generate its own.
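
To gesture at what that architecture might look like (every name and threshold below is invented, and this is orders of magnitude short of the real thing): an agent whose “sensors” include its own performance record, and which spawns new goals from that self-observation rather than from external orders.

```python
# Bare-bones sketch: internal sensing plus self-generated goals.
# Nothing here is a real AI technique; it only illustrates the shape.

class ReflectiveAgent:
    def __init__(self):
        self.goals = ["explore"]  # a single, very general seed goal
        self.failures = {}        # internal record of its own performance

    def internal_sense(self):
        """Inspect its own state, not the world: where am I failing?"""
        return [g for g, n in self.failures.items() if n >= 3]

    def generate_goals(self):
        """Derive new goals from self-observation, not external orders."""
        for goal in self.internal_sense():
            new_goal = f"improve strategy for {goal!r}"
            if new_goal not in self.goals:
                self.goals.append(new_goal)

    def attempt(self, goal, succeeded):
        if not succeeded:
            self.failures[goal] = self.failures.get(goal, 0) + 1
        self.generate_goals()

agent = ReflectiveAgent()
for _ in range(3):
    agent.attempt("explore", succeeded=False)
print(agent.goals)  # ['explore', "improve strategy for 'explore'"]
```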

I’ve never seen even the beginning of such a project. I think that running a brain simulation is going to turn out to be easier, myself.

And the industrialists from team B will have their own factories mass-producing their own vote-overridden machines.

And if you think people don’t have an override, then you are lucky to be outside the world of politics (or too deep in to see it). Most people I know will vote for their party’s man even if he is scum running against a decent opponent from the other party, disregarding specific policy leanings.

All in all, I think that rather than trying to knock down what robots might do, it is easier to knock down what humans already do. We are also machines running a program, following preset goals. We have quite a wide range of responses to both external and internal stimuli, but in the end we want what every living being wants: to perpetuate ourselves by putting a bit of us into the next generation.

A robot brain that can reprogram itself to improve its responses will eventually become complicated and sophisticated enough to seem intelligent. And with no biological limit on capacity, they are bound to surpass us eventually, unless we somehow put some “artificial” limits on them.

Why does everybody keep saying it won’t happen by accident? Isn’t that how it happened for people?