There’s a story in ‘I, Robot’ (“Runaround”) where an ‘advanced’ robot gets stuck circling a selenium pool on Mercury. The robot’s brain is stuck in an equilibrium where Rule 3 (protect itself) is in conflict with Rule 2 (obey orders). It is a specialized robot in that Rule 3 is given a higher than normal priority and Rule 2 a lower than normal priority.
If the robot moves further in, its desire not to be damaged outweighs its desire to obey the order. If it moves further out, its desires are reversed. So it’s stuck in the middle. And somehow gets ‘drunk’ :rolleyes:
Question 1: Can’t the robot solve this problem with logic? “I am clearly not able to retrieve the selenium, and thus unable to obey the order. So I must concede that the order cannot be obeyed and return to my master.”
Question 2: What would R. Daneel Olivaw have done? Given his seemingly human-like but superior reasoning skills.
As seen in the story, Speedy will continue until he ‘dies.’ Not retrieving the selenium is not an option, per the Second Law:
Therefore, I believe Speedy would keep trying to obey his order, unless the First Law superseded it (as Powell and Donovan found). Logic cannot override the Three Laws. Donovan could have said, “Speedy, go get the selenium. If you have any problems, come back.” As Powell found out, Donovan didn’t place much importance on getting the selenium, thus the conflict. This is all IMHO, of course.
More precisely, he gave the order in a very casual manner, as if he were asking Speedy to go to the fridge to fetch him a beer, thus causing an unusually low Second Law priority.
As for the original post, the problem arose because Speedy was a fairly primitive robot. R. Daneel would probably have been smart enough to do better (ask for clarification of the order, figure out a less dangerous way to get the selenium, etc).
There’s a scene in The Robots of Dawn, I think, where Bailey asks Fastolfe about another of the stories in “I, Robot”, and Fastolfe tells him that modern robots are advanced enough that that sort of logical paradox doesn’t cause a problem anymore, and in fact, one of the plot points in The Robots of Dawn is that it’s impossible to accidentally set up a situation in which a modern robot will be forced to shut down like that.
That’s the bit I was thinking of. I think he was talking about First Law conflicts, and said something like: the more advanced robots would save whoever they were most likely able to save, and pick one randomly if the odds were even. I might expect a similar ‘pick one’ resolution in the selenium case.
For someone like Daneel, I’d expect he’d (1) recognise the conflict wasn’t immediate and get more orders or (2) realise that the selenium was going to become urgent, and let the first law take over.
Later in his career he might even (3) think his ministering to humanity as a whole was so important he couldn’t risk himself.
However, I get the impression that sometimes Asimov thinks of the Laws as more binary, and sometimes more continuous. In this story, I got the impression that the chance he could get closer was enough for the Second Law to keep him there, so even an advanced robot could get caught – it depends on how close “going home and giving backchat first” is to obeying an order.
If I recall correctly, Speedy was designed with a heightened sense of self-preservation (Third Law), because he was a very expensive prototype. This, combined with the sort of lackadaisical instruction from Powell, set up the imbalance that the story, ah, rotates around.
To nitpick, Speedy doesn’t “get drunk”; he simply displays a logic conflict that manifests itself as drunk-like behavior. Once the conflict was broken (no spoilers, if you’ve read the story you know what I mean), he “sobers up” immediately.
I am not a (real) computer scientist, but here are a couple of ways I could see the problem being resolved, alone or in combination:
Take into account the tacit assumptions associated with such an order. Using the beer fetching example, I expect the robot to get the beer in a short amount of time and with some low level of acceptable fuss. If it finds the fridge door closed, I expect it to open it, but if it finds a lock on the door, I don’t expect it to get a blowtorch and cut out a hole in the door instead. I also expect the beer in the next minute or two, so waiting for the lock to open by itself is only an option for so long. Once the time limit is broken, the robot can consider the order incomplete and return, probably reporting the problem.
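To make the beer-fetching idea concrete, here’s a minimal sketch (in Python, with hypothetical function names and a made-up two-minute limit) of treating the order’s tacit time limit as part of the order itself:

```python
import time

def fetch_with_limit(attempt_fetch, time_limit_s=120.0):
    """Keep trying to complete an order, but treat it as failed once
    the tacit time limit implied by the order has passed.

    attempt_fetch is a hypothetical callable that tries one approach
    (e.g. opening the fridge door) and returns True on success."""
    deadline = time.monotonic() + time_limit_s
    while time.monotonic() < deadline:
        if attempt_fetch():
            return "order complete"
    # Time limit broken: concede the order and report back.
    return "order incomplete: reporting problem to master"
```

The key design point is that the deadline lives outside any single attempt, so a robot stuck retrying (a locked fridge, an unreachable selenium pool) eventually concedes instead of looping forever.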
Project the likely outcomes of various actions. Rather than just wait for a requirement to be broken, analyze various options for proceeding and rule out all of them as leading to unsatisfactory results. Probably best accomplished in conjunction with point 1, but even without it the robot could conceivably project that the selenium will not be reachable before its battery life runs out, for instance, and deem the order incomplete that way.
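A sketch of that projection step, using made-up plans and battery numbers purely for illustration:

```python
def choose_plan(plans, project, acceptable):
    """Project the outcome of each candidate plan and discard any whose
    projected result is unsatisfactory; if none survive, the order is
    deemed incomplete rather than attempted blindly."""
    viable = [p for p in plans if acceptable(project(p))]
    return viable[0]["name"] if viable else "order incomplete"

# Hypothetical numbers: hours each approach needs vs. battery remaining.
plans = [
    {"name": "approach pool directly", "hours_needed": 10},
    {"name": "wait for pool to cool", "hours_needed": 50},
]
battery_hours = 6

def project(plan):
    # Projected battery left when the plan would finish.
    return battery_hours - plan["hours_needed"]

result = choose_plan(plans, project, acceptable=lambda left: left >= 0)
# Every plan outruns the battery, so the robot concedes the order.
```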
Infinite recursion detection. The robot at some point returns to a state that is identical to an earlier state and uses the fact that its earlier action did not lead to the desired result, so that action can be ruled out; the tricky part is identifying the identical state. For instance, when the robot first turns away from the selenium and then considers turning back to it again, the robot could detect that it tried that before, got nowhere, and things haven’t appreciably changed, so it can rule out trying to approach the selenium again. The robot then has no option for completing the order and presumably would consider it incomplete and return.
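This is essentially cycle detection with a visited-state set. A toy sketch (the states here are just made-up distances from the pool, standing in for whatever richer state a real robot would compare):

```python
def execute_order(start, step, is_done, max_steps=1000):
    """Run an order until done, but if the robot ever returns to a
    state it has already been in, the current strategy is a loop and
    the order is conceded as incomplete."""
    seen = set()
    state = start
    for _ in range(max_steps):
        if is_done(state):
            return "order complete"
        if state in seen:
            return "order incomplete: loop detected, returning to master"
        seen.add(state)
        state = step(state)
    return "order incomplete: gave up"

# Toy Speedy: at distance 6 he edges in to 5; the heat then drives him
# back out to 6, and so on. Done only if he reaches the pool (distance 0).
outcome = execute_order(
    start=6,
    step=lambda d: 5 if d == 6 else 6,
    is_done=lambda d: d == 0,
)
```

As the post says, the hard part in practice is deciding when two states count as “identical” – real sensor readings never repeat exactly, so some coarsening of the state is needed before the set lookup works.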
Pre-Zeroth Law Daneel would likely go out and realize that the selenium is unobtainable without damaging himself to the point where he would be unable to bring it back anyway. At that point he would probably contact his master for new orders.
Post-Zeroth Law Daneel would do one of two things: either the same as above, replacing getting new orders with finding a way around the problem himself, or not bothering with the order at all, because the effort spent in retrieval would probably be better used serving humanity some other way.