Unethical scientific experiment?

According to Jeremy Clarkson in a recent Top Gear news report :rolleyes: on future accident avoidance: cars in the future will have to choose between preserving your life at the cost of the pedestrians you were about to plow into, or vice versa.

They did a “horrible” experiment with monkeys…specifically, mothers and babies…where they were all in a cage and the scientists would raise the temperature of the floor. The mother monkeys would pick the babies up off the floor, but when the floor got too hot to stand on, the mothers would put the babies down and stand on them.

Did this actually happen?

I’m watching that episode right now and this is all I could find referencing the experiment.

It doesn’t look like it was actually done, it was just a story told by a character in a novel.

I guess they will just have to include Asimov’s Three Laws of Robotics in the program:

*1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.*

And what is the robot supposed to do in the OP’s case, where there is no decision that does not result in harm to a human?

Baby monkey slippers sound kinda cozy.

Many stories by Asimov, and by the many others who incorporated his Three Laws into their own works, have plots built around exactly this kind of ambiguity or loophole inherent in the design of those three laws.

IIRC in most cases in those stories, the robot would choose the path of least harm; if the choice were particularly difficult or the damage serious, the robot would freeze and/or go catatonic - basically, it would do the equivalent of “I can’t deal with this situation”.

I wonder if the car programming would be along the lines of “miss the pedestrian, aim for the brick wall or telephone pole, unless the riders don’t have their seatbelts on, in which case take out the pedestrian”.

I would expect that driverless cars would refuse to start until the passengers all had their seatbelts fastened.

The three laws are implemented as a sort of variable-strength input based on the situation. The robot chooses the action that results in the lowest negative inputs, using the three laws to prioritize signals of different types.

So this situation is no challenge at all to the three laws. The impulse for self-preservation is overridden by a higher-order impulse (to preserve human life). The impulse strength for one death is lower than the impulse strength for three deaths, so the course of action chosen would be the one that results in a single death.

Not that we can implement the three laws using modern technology, mind you.
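For the curious, here’s a toy sketch of that arbitration scheme in Python. The law weights, candidate actions, and harm scores are all invented for illustration; the only point is that a higher-order law swamps a lower one:

```python
# Toy sketch of the "variable-strength input" arbitration described above.
# Weights, actions, and harm scores are invented; nothing here claims real
# robots (or cars) can actually quantify harm this way.

# First Law violations dominate Second, which dominates Third, so a
# higher-order impulse always overrides a lower-order one.
LAW_WEIGHTS = {1: 1_000_000, 2: 1_000, 3: 1}

def action_cost(violations):
    """Total weighted penalty for an action, given (law, severity) pairs.
    Severity scales within a law, so three deaths outweigh one."""
    return sum(LAW_WEIGHTS[law] * severity for law, severity in violations)

# The OP's dilemma: swerve (one death, robot wrecked) vs. plow on (three deaths).
candidates = {
    "swerve into the wall (one death, car destroyed)": [(1, 1), (3, 1)],
    "hold course into the pedestrians (three deaths)": [(1, 3)],
}

best = min(candidates, key=lambda name: action_cost(candidates[name]))
print(best)  # -> the single-death option, as the post argues
```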

People deal with this scenario today.

When the choice is drive off the cliff or hit the pedestrians, most drivers choose the pedestrians and never even consider driving off the cliff as an available option. It’s too “unthinkable” (a word I hate). And nobody calls them bad people for failing to commit suicide as an accident avoidance technique.

Speaking just for me, if I’m ever in an unavoidable T-bone situation and the choice comes down to maneuvering to best protect me and mine or you and yours, well … it sucks to be you.
I would imagine an automated car would include similar logic. I would propose that each car prioritize its occupants over others. Understand that these cars ought to be able to avoid many collisions that human drivers couldn’t, so we’re already starting out with a net safety benefit at the societal level. All we’re debating here is how to share out the inevitable failures to avoid 100% of accidents.
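A minimal sketch of that occupants-first rule, with an arbitrary 2× weight on occupants standing in for whatever a real design might actually use:

```python
# Minimal sketch of the "each car prioritizes its occupants" proposal above.
# The 2x occupant weight and the harm numbers are arbitrary illustrations,
# not a claim about how any real vehicle is tuned.

OCCUPANT_WEIGHT = 2.0  # harm to people inside this car counts double
OTHER_WEIGHT = 1.0     # pedestrians and everyone else

def maneuver_cost(occupant_harm, other_harm):
    return OCCUPANT_WEIGHT * occupant_harm + OTHER_WEIGHT * other_harm

# Two unavoidable-collision options, harm on an arbitrary 0-1 scale:
options = {
    "brake hard, glancing blow to pedestrian": maneuver_cost(0.1, 0.6),
    "swerve into the telephone pole":          maneuver_cost(0.8, 0.0),
}
print(min(options, key=options.get))  # the glancing blow wins under this weighting
```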

I think we’re straying a bit from the topic of monkeys on hot floors.

How old did you say that pedestrian was? Their life expectancy vs. mine and my passengers’? Perhaps we could all have values assigned to us, and a sensor in the robot/car to read them.

Hot monkey babies (band name - preferably all-girl). Self-preservation, I would think, rules. You can always have more babies.

Besides which, all these scenarios about robocars in no-win situations are being discussed here and there in the current thread about the costs/benefits of robocars.

IME something like 70% of discussions on autonomous cars seem to mention this dilemma.
But first of all, it’s actually very difficult to imagine scenarios where this could happen, bearing in mind the car will always be aware of safe directions to swerve (and could ultimately ask other cars to give way). And secondly, no: just as humans don’t make a decision about whose life is more important (because there is no time to), neither will the cars. They will try to avoid collisions or, failing that, just try to make the collision as slow as possible, the same as a human driver would.
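For a feel of why braking always helps even when a full stop is impossible, a back-of-envelope sketch (the speed, deceleration, and distance figures are all assumed examples):

```python
# Back-of-envelope version of "make the collision as slow as possible":
# when a full stop is impossible, maximum braking still minimizes impact
# speed. All figures below are assumed for illustration.
import math

def impact_speed(v0, decel, distance):
    """Speed (m/s) remaining after braking at `decel` (m/s^2) over
    `distance` (m), from basic kinematics: v^2 = v0^2 - 2*a*d."""
    return math.sqrt(max(0.0, v0**2 - 2 * decel * distance))

# 50 km/h (~13.9 m/s), hard braking at 8 m/s^2, obstacle 10 m ahead:
print(f"{impact_speed(13.9, 8.0, 10.0):.1f} m/s")  # ~5.8 m/s instead of 13.9
```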

True, but additionally, how would an autonomous car even get itself into that situation?

As I said in another recent thread when this same question came up: an autonomous car rounding a blind corner on a mountain pass would likely slow down enough that it could stop in time even if it encountered a pile of rubble round the corner. Or slow enough that even if a tyre were to blow out it could control the car such that you don’t go off the cliff.
In driving-near-a-cliff scenarios, most people would prefer the car err on the side of caution.
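Roughly the kinematics behind “slow enough to stop in time”, with assumed sight-distance and braking figures:

```python
# Rough form of the blind-corner logic above: choose a speed from which
# the car can stop within the road it can actually see. The sight distance
# and braking figures are assumed for illustration.
import math

def safe_corner_speed(sight_distance, decel):
    """Max speed (m/s) allowing a full stop within `sight_distance` (m)
    at `decel` (m/s^2); v = sqrt(2*a*d), ignoring reaction time."""
    return math.sqrt(2 * decel * sight_distance)

v = safe_corner_speed(25.0, 5.0)  # 25 m visible, comfortable 5 m/s^2 braking
print(f"{v:.1f} m/s (~{v * 3.6:.0f} km/h)")  # ~15.8 m/s, about 57 km/h
```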

This “choose one: them or you must die” plays out when a pilot is forced to crash.
From any given altitude and speed (plus a bit for wind), any aircraft which loses power WILL come down within a given radius.
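To put a rough number on that radius (the 9:1 glide ratio below is a typical light-aircraft figure, assumed for illustration; wind shifts the whole circle):

```python
# The arithmetic behind "come down within a given radius": gliding reach is
# roughly altitude times glide ratio. The 9:1 ratio is an assumed typical
# light-aircraft figure, not a number from the thread.

def glide_radius_km(altitude_m, glide_ratio=9.0):
    """Approximate still-air gliding reach from the given altitude."""
    return altitude_m * glide_ratio / 1000.0

print(f"{glide_radius_km(1000):.0f} km")  # ~9 km of reach from 1,000 m up
```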

There is a nice road - maybe we could line up on it.
There is a wooded section we probably would not survive landing in.

Which do you choose?

What if the road is next to a playground filled with kids?

The loveable overgrown schoolboy Jeremy Clarkson has on occasion reportedly made remarks with the intention of getting a rise out of people.

This may be a possibility…

I’m dubious even about a scenario like this.

I’m no expert on flying but presumably on making an emergency landing you want to avoid colliding with any large solid objects whether they be trees, cars or buildings.
Roads with traffic, or forest, are both suicide, and you’re looking for open space.
Land in an open field and you may well kill a hiker or jogger, say, but that would likely be accidental, as you’d need to be very low indeed to have seen them.

As for the playground with kids, how would that play out?
So the playground has no solid objects at all that would put you off landing there anyway; no buildings, no parked cars and not even a fence separating it from a perfect, car-less road.
There’s nothing else further along the road, or before the playground, to dissuade you from landing on it, but supposedly no time to touch down significantly ahead of or behind the playground. Yet there’s a dilemma here, so you have time to do something, just not that.
I don’t buy it.

From the NTSB http://www.tailstrike.com/151004.pdf

CAM-1 is the pilot. He just realized he is too low to reach the runway.

(the ‘#’ means “bad word”)

Both he and the co-pilot were killed in the ensuing crash.

And?
There was no dilemma here of sacrificing or saving lives.

They didn’t want to fly into houses because they would have died (as well as potentially killing people who were home).
And what ultimately happened? Judging by the transcript, they couldn’t get control of the aircraft and died when it flew into house(s).

I don’t know about the monkeys, but it sounds like a variation of the trolley problem (https://en.wikipedia.org/wiki/Trolley_problem). Note that article’s final section on “implications for autonomous vehicles” and its two citations.