Autonomous Car - Should it kill you, or two pedestrians?

No disrespect to the OP, but I’ve seen at least 10 threads on this topic on the Dope, and it’s debated widely elsewhere.
You would think this is going to be an everyday occurrence and that self-driving cars will never see the light of day until we tackle this problem.

In reality it’s pretty hard to contrive a scenario like this.
Take the typical mountain road example: how quickly would a self-driving car round a blind corner with a deadly drop on one side? The answer is, of course: slowly enough that it could stop in time if there were a pile of rubble around the corner.
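
To put rough numbers on that intuition, here’s a minimal sketch of capping speed to sight distance. The friction coefficient, the latency figure, and the maxSafeSpeed helper are all assumptions for illustration, not anything out of a real planner.

```java
// Minimal sketch, assuming a friction coefficient, a sensor/actuator latency,
// and the hypothetical maxSafeSpeed helper below; not any real planner's API.
public class SightDistanceSpeedCap {
    static final double FRICTION = 0.7;    // assumed dry-pavement friction coefficient
    static final double G = 9.81;          // gravitational acceleration, m/s^2
    static final double REACTION_S = 0.2;  // assumed sensor/actuator latency, seconds

    // Largest v (m/s) such that v*REACTION_S + v^2 / (2*FRICTION*G) <= sightDistance,
    // i.e. the car can always stop within the road it can actually see.
    static double maxSafeSpeed(double sightDistanceMeters) {
        double a = 1.0 / (2.0 * FRICTION * G);
        double b = REACTION_S;
        double c = -sightDistanceMeters;
        return (-b + Math.sqrt(b * b - 4 * a * c)) / (2 * a); // positive root of the quadratic
    }

    public static void main(String[] args) {
        // 30 m of visible road around the bend -> roughly 19 m/s (about 68 km/h)
        System.out.printf("Max safe speed for a 30 m sight line: %.1f m/s%n", maxSafeSpeed(30));
    }
}
```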

Similarly with the example of people walking out into the road suddenly. The cars have to be able to find a swerve path not just for humans magically appearing in the road, but for any obstacle that may appear. Since they have sensors in every direction and much better calculating ability, they’ll be much better at doing this than humans.

But failing all that, they’ll just seek to stop the vehicle as abruptly as possible. No deliberate choice of “Her life or the driver’s?”

This is the real answer. The robocar wouldn’t get into this circumstance in the first place except for pedestrians acting recklessly. So recklessly that it makes sense to hit them instead of driving the car off a cliff. Cameras in the car would show that circumstance, and the car would be held blameless. The car simply cannot drive at a speed where conditions would allow such a catastrophe unless pedestrians act with extreme recklessness. The real problem occurs when the car fails to prevent those circumstances from arising, and that will happen even without putting the passengers at risk.

As mentioned upthread, such a system (one that makes a deliberate choice to kill/injure the occupant rather than collide with a pedestrian) would not only slow down the adoption of driverless cars (resulting in massively more pedestrian and driver deaths than it would otherwise “save”), but it opens the door to all sorts of complicated ethical decisions that really have no business being left up to a machine’s programming. Should the car aim for the elderly instead of a young child? Target one pedestrian instead of a group of three?

Much better to keep it cool, simple, and emotionless. When a collision is unavoidable, brake hard and minimize damage to the occupant/vehicle. Note that this still favors pedestrians a great deal because if the goal was merely to protect the driver, there would be scenarios where the car would want to accelerate into a group of pedestrians in order to avoid a head-on collision with an oncoming car. By flagging pedestrians as collision objects and prioritizing hard braking whenever a collision is unavoidable, you’ve done what you can.

Assuming machine vision/radar systems are good enough to tell the difference between something like a 300-lb deer and a human, I can see the logic having some natural weighting towards protecting the pedestrian and flagging it in a higher priority for the “Do not hit this object” collision list, even if it means more damage to the vehicle by hitting the deer. But if the choice is between a head-on collision with a solid brick wall and a group of elementary school children crossing the road, I’m sorry, the car is just going to brake and hope for the best.
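
To make that weighting concrete, here’s a rough sketch of the “do not hit this object” list idea. The obstacle classes, the weights, and the choosePath helper are invented for illustration, and note that it only ranks external obstacles; it doesn’t try to trade them off against occupant risk.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Illustration only: the obstacle classes, weights, and path model are invented.
// This ranks external obstacles; it does not trade off risk to the occupants.
public class CollisionPriority {
    enum ObstacleClass { PEDESTRIAN, CYCLIST, VEHICLE, LARGE_ANIMAL }

    // Higher weight = higher priority on the "do not hit this object" list.
    static double weight(ObstacleClass c) {
        switch (c) {
            case PEDESTRIAN: return 100.0;
            case CYCLIST:    return 90.0;
            case VEHICLE:    return 40.0;
            default:         return 20.0;  // e.g. the 300-lb deer
        }
    }

    record Path(String name, Optional<ObstacleClass> unavoidableCollision) {}

    // Prefer any collision-free path; otherwise brake hard along the path whose
    // unavoidable collision carries the lowest weight.
    static Path choosePath(List<Path> candidates) {
        return candidates.stream()
                .filter(p -> p.unavoidableCollision().isEmpty())
                .findFirst()
                .orElseGet(() -> candidates.stream()
                        .min(Comparator.comparingDouble(
                                (Path p) -> weight(p.unavoidableCollision().get())))
                        .orElseThrow());
    }

    public static void main(String[] args) {
        List<Path> options = List.of(
                new Path("stay in lane and brake", Optional.of(ObstacleClass.LARGE_ANIMAL)),
                new Path("swerve right", Optional.of(ObstacleClass.PEDESTRIAN)));
        System.out.println("Chosen: " + choosePath(options).name()); // stay in lane and brake
    }
}
```

With those made-up weights, the deer-vs-pedestrian case comes out the way described above: stay in the lane, brake hard, and accept the vehicle damage.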

Or yeah, put it like this: the more constrained a situation becomes and the fewer swerve paths the car has, the slower it will travel.
So it’s very hard to imagine how the car gets itself in a situation of: “Oh shit, there are no safe swerve paths and I’m going too fast to stop in time!!11!1!”

You don’t work with developers, do you? :wink:

I’m not suggesting anything that complicated. I think any automated car would need an image recognition system to identify potential road hazards. It would be logical to program the car to preferentially avoid things that look like people and provide more clearance when passing them - if only because pedestrians can wander towards the car’s path, while a mailbox likely won’t. I, a potential buyer, would consider that a desirable feature.
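
Something as simple as this sketch would capture that preference; the object classes and clearance numbers are made up, but the point is that “looks like a person” buys a bigger passing margin than “looks like a mailbox.”

```java
// Minimal sketch of the "more clearance for people" idea; the object classes and
// clearance figures are invented for illustration.
public class PassingClearance {
    enum RoadsideObject { PEDESTRIAN, CYCLIST, PARKED_CAR, MAILBOX }

    // Desired lateral clearance (metres) when passing, padded for objects that
    // can wander toward the car's path.
    static double desiredClearanceMeters(RoadsideObject obj, double speedMps) {
        double base = 0.5 + 0.02 * speedMps;     // everything gets more room at speed
        switch (obj) {
            case PEDESTRIAN: return base + 1.0;  // people can step sideways
            case CYCLIST:    return base + 1.2;  // cyclists can swerve or fall
            case PARKED_CAR: return base + 0.5;  // doors can open
            default:         return base;        // mailboxes stay put
        }
    }

    public static void main(String[] args) {
        double v = 13.4; // roughly 30 mph in m/s
        System.out.printf("Pedestrian: %.2f m, mailbox: %.2f m%n",
                desiredClearanceMeters(RoadsideObject.PEDESTRIAN, v),
                desiredClearanceMeters(RoadsideObject.MAILBOX, v));
    }
}
```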

I’m sure that would be the case most of the time. But surely a car would also swerve to avoid collisions if, for example, something suddenly appeared in its path and there is clear space around it.

Though now that I think about it, if the car were programmed to avoid pedestrians at all cost, a malicious pedestrian could jump in front of someone’s car, causing the car to crash into a nearby obstacle. So I may have to modify my position on this.

See step 2 from the logic sequence in post #31 upthread. Collision braking only applies when no maneuvering solution is available that prevents a collision. At that point, you have to hit something.
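
In other words, something shaped roughly like this; the Trajectory type and the planner structure are placeholders I made up, not any real stack, but the ordering is the point: look for an escape path first, and brake into something only when that search comes up empty.

```java
import java.util.List;
import java.util.Optional;

// Rough shape of "maneuver if you can, brake into something only if you must".
// The Trajectory type and the planner structure are placeholders, not a real stack.
public class EmergencyResponse {
    interface Trajectory { boolean collisionFree(); }

    // Step 1: search the candidate trajectories for one that avoids every obstacle.
    static Optional<Trajectory> findEscapePath(List<Trajectory> candidates) {
        return candidates.stream().filter(Trajectory::collisionFree).findFirst();
    }

    // Step 2: collision braking applies only when step 1 comes up empty.
    static String decide(List<Trajectory> candidates) {
        return findEscapePath(candidates)
                .map(t -> "follow escape path")
                .orElse("maximum braking, minimize impact");
    }

    public static void main(String[] args) {
        Trajectory blocked = () -> false;
        Trajectory clear = () -> true;
        System.out.println(decide(List.of(blocked, clear)));   // follow escape path
        System.out.println(decide(List.of(blocked, blocked))); // maximum braking, minimize impact
    }
}
```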

Now I’m curious. DO current autonomous vehicles swerve to avoid collisions? Humans are surprisingly good at pulling over on the shoulder when a car ahead brakes unexpectedly. Do autonomous vehicles now do the same thing (when simply applying the brakes clearly won’t be sufficient)? Or do they rely on maintaining proper spacing and speed to avoid all such situations? I feel sure they have the ability to decide “Hey, I can’t stop in time.”

Several friends and family members of mine have been in situations where they had to swerve and leave the road at fairly high speed, usually because something fell into the road or an oncoming vehicle wandered into their lane. They distinctly remember aiming the car at what they felt was the best “target.” They avoided concrete abutments or going into a river in favor of bushes or even a steep upwards incline (adjacent to an overpass). Does current autonomous vehicle programming actually address these types of situations?

Just wondering as I work on my matter transporter.

This is the computer equivalent of the Trolley Problem, and it’s a serious issue that could be a real impediment to the widespread adoption of autonomous vehicles.

Here’s a situation: A child runs into the road, and the car does not have time to stop. There are children on either side of the road, so if the car swerves there is the possibility of hitting and maybe killing even more children. What do you do? We grant humans a lot of leeway in this situation, because we understand we’re not perfect and snap judgements are often wrong. But what if an algorithm is programmed to say, “Hit the kid in the road, rather than risk hitting two children on the sidewalk”. Now we know there was a deliberate choice made. And if there’s any chance that those kids on the sidewalk wouldn’t have been hurt, you can bet attorneys will enter the mix and we will have a big lawsuit against the car company.

Or we can make it more realistic: A dog runs into the road. Swerving to avoid it risks losing control of the vehicle and harming the passengers inside or others in the street. This scenario occurs all the time, and people are killed every year trying to avoid dogs and cats and other animals that suddenly run into the road. Swerving is a very human thing to do, even though by most moral calculus it would be better to just run the dog down.

When a human swerves to avoid a cat or dog and loses control of the vehicle and kills someone, we tend to understand. We certainly don’t blame the car company, and the human probably won’t be charged either unless it can be shown that he was speeding or driving recklessly. But if autonomous cars are programmed to just run down anything that jumps in front of them, I predict there will be major social implications.

We still have no idea how autonomous cars will be accepted by society, but social acceptance or lack thereof can kill a technology dead. Remember the Segway? Lots of smart people thought it was going to revolutionize transportation. It solved the ‘last mile’ problem, which would open up mass transit to more people. Some thought it was so important that cities would be slowly changed to incorporate them. But the Segway never took off except in niche markets, because social acceptance wasn’t there. Riding one looked geeky, they didn’t pack well into elevators, and pedestrians didn’t like sharing sidewalks with them.

Or look at Google Glass. It met all the technology goals, and was a pretty cool and useful thing. But Google never considered that wearing it made you look like a douchebag. And no one liked the idea that they might be photographed or recorded on video when talking with someone wearing Glass. So the product failed in the social marketplace.

We will have to wait and see if something similar happens with autonomous cars. One early accident that kills a handful of children could doom the entire concept. Or perhaps we won’t have an accident like that until the product is firmly established in the marketplace, and our morals will change and adapt to the product. The future is unknowable, but the kind of problem called out in the OP is certainly a major risk.

Yeah, but an algorithm isn’t going to be programmed to “hit the kid in the road”. You’re not going to look at the source code and see “if (jaywalker instanceof Kid) { runOver(jaywalker); laughMenacingly(); }”. It’ll be programmed to brake hard and stay on the road in the general case of unexpected obstructions. No doubt you’ll get lawsuits, but I’m not sure this will be the deal killer you’re making it out to be. I don’t think society rips apart a human that braked hard but still hit a kid that jumped into the road, especially if he was alert and obeying the speed limit and it was just a tragic accident. If the algorithm does the same, and it actually is shown to have reacted faster than any human driver could have, it may not be the political doom of automated cars. Especially if there are human cases that are worse and automated cases where the faster reaction time actually resulted in minor injuries rather than death.

Perhaps call it “morality neutral”, have it ignore human consequences and behave in a way that minimizes damage to the car?

Part of the problem is that these aren’t black-and-white decisions with known moral outcomes for each choice; they’re decisions made under risk. The car could make the right decision mathematically and still have it go horribly wrong. Perhaps it decides that swerving carries the least amount of risk in terms of lives lost, but the road has oil on it that the car couldn’t detect, and the swerve takes the car into a crowd…

We give humans a lot of moral latitude in these situations because we can empathize with them and understand that snap decisions are not morally the same as calculated choices. But will we be willing to give an autonomous car the same benefit of the doubt? Or the company that makes it?

Are you suggesting that moral equations can be stated mathematically, and have a “right” answer? Can you give an example?

This may or may not be true, but I’d argue that your examples are irrelevant to your claim:

Social acceptance was never the issue. If the convenience of Segways had outweighed the combination of their price, the cheapness of competing alternatives (bicycling, walking), and their inconvenience in other ways, then they’d have caught on even if many people thought they looked silly, just the way Bluetooth headsets have caught on, even though it looks silly as shit for someone to seemingly be talking to nobody with no phone visible.

I’d be willing to bet that 99% of Americans didn’t get any further with Google Glass than wondering if it would catch on with people more into the latest tech than they were. I’ve never knowingly seen a person using Google Glass. I’d bet it didn’t catch on because even people who jump on the Next New Tech Thing didn’t find it to be all that great.

And people like that usually don’t worry much about looking too geeky. Being geeky isn’t exactly stigmatized anymore, and that’s just in the general public, let alone in the circles of people who would have been the early adopters of something like Google Glass.

Anyhow, if autonomous cars ‘work’ in the sense of being 100% self-driving, and getting you from Point A to Point B at least as safely and reliably, and almost as quickly, as you could drive yourself there, then people will buy them. I certainly would, once they get to be, say, only 20% more expensive than cars without self-driving technology.

A two-year-old’s solution to the trolley problem: https://youtu.be/-N_RZJUAQY4

:smiley:

Bolding mine.

Overall I agree with your perspective. But ref the bolded part, you (and several subsequent posters) have made what I think is a defective baseline assumption.

People drive all the time in such a way that they can’t stop short enough if an obstacle suddenly appears. Folks crest hills on the freeway going 80mph only to discover traffic stopped ahead. People drive 40mph on winding roads where, if you’re the first car around the next bend after a boulder fell, you’re gonna hit it. At night in rural areas, people drive far faster than they could stop within the distance their headlights illuminate.

All that’s normal behavior, not reckless. Autonomous cars will have to do the same to fit into traffic.

To be sure, if you’re trapped into an upcoming crash, better to hit whatever going as slowly as possible. So max braking, consistent with still having some residual traction to steer, is going to be a big part of automated cars’ emergency response repertoire.
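
The usual way to picture that trade-off is a friction circle: braking and steering draw from the same pool of tire grip, so holding some grip back for steering costs a little braking. A toy sketch, with an assumed grip limit and reserve fraction:

```java
// Sketch of the friction-circle idea behind "max braking consistent with still
// having some residual traction to steer": total grip is shared between braking
// and steering, so reserving lateral capacity means giving up a little braking.
// The grip limit and reserve fraction below are assumed, not real vehicle data.
public class FrictionBudget {
    static final double MAX_GRIP_G = 0.9;        // assumed total tire grip, in g
    static final double STEERING_RESERVE = 0.30; // fraction of grip held back for steering

    // Longitudinal (braking) deceleration available after reserving lateral grip,
    // from sqrt(ax^2 + ay^2) <= MAX_GRIP_G with ay = STEERING_RESERVE * MAX_GRIP_G.
    static double availableBrakingG() {
        double lateral = STEERING_RESERVE * MAX_GRIP_G;
        return Math.sqrt(MAX_GRIP_G * MAX_GRIP_G - lateral * lateral);
    }

    public static void main(String[] args) {
        // ~0.86 g of braking still available while keeping ~0.27 g for steering
        System.out.printf("Braking budget: %.2f g%n", availableBrakingG());
    }
}
```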

What we cannot practically assume is that the cars will always drive slowly enough to stop prior to any possible surprise.
As a separate matter, there are vast numbers of scenarios where the way to avoid an accident is *not* to stop before you get to it, but rather to speed up or to maneuver around it. Having automated cars performing “panic” stops regularly in mixed auto/manual traffic pretty much guarantees they’ll be unacceptable during the all-important transition period where they represent 5-10% of the vehicles on the road. Screw that up and the tech is dead for decades, if not forever.
There’s also the issue that even if *you* can stop short of the surprise ahead, can the vehicle behind you stop short of you? To be sure, automated cars will always be paying attention, unlike so many drivers today. That will help. A lot. But an automated Tesla or BMW will have better brakes than an automated Toyota Yaris. Or a big truck, automated or not.

So the automated Tesla ought to be “thinking” about avoiding being rear-ended by the Yaris or the Peterbilt in addition to “thinking” about stopping before hitting the stopped traffic ahead. Just like you and I do when we drive.
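
As a toy illustration of that two-sided constraint (the gap, grip figures, and decision rule below are all invented): brake hard enough to stop short of the obstacle ahead, but no harder than needed, so the less capable vehicle behind has the best chance of stopping too.

```java
// Toy illustration only; the grip limits, margin, and decision rule are invented,
// not how any real system behaves.
public class RearAwareBraking {
    // Constant deceleration (m/s^2) needed to stop within 'gap' metres.
    static double requiredDecel(double speedMps, double gapMeters) {
        return (speedMps * speedMps) / (2 * gapMeters);
    }

    // Gentlest deceleration that still stops with a 10% margin before the
    // obstacle, capped at the car's own braking limit. Braking no harder than
    // needed leaves the vehicle behind the best chance of stopping as well.
    static double chooseDecel(double speedMps, double gapMeters, double ownLimitMps2) {
        return Math.min(ownLimitMps2, requiredDecel(speedMps, gapMeters * 0.9));
    }

    public static void main(String[] args) {
        double v = 30;            // m/s, roughly 67 mph
        double gap = 80;          // metres to the stopped traffic ahead
        double teslaLimit = 9.0;  // assumed braking capability, m/s^2
        // Needs ~6.3 m/s^2, so brake at that rate instead of slamming to 9.0,
        // giving the Yaris or the Peterbilt behind more room to react.
        System.out.printf("Chosen deceleration: %.1f m/s^2%n", chooseDecel(v, gap, teslaLimit));
    }
}
```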

Leaving aside whether it is normal, it is definitely reckless. If you hit and kill a pedestrian as a result you should go to jail.

That’s insane.

IMO you’re mistaken if you think you can stop in the distance you can see ahead on an interstate while cresting a hill or even an overpass. You’re also mistaken if you think that, driving at 25mph on a residential street, you can stop before any pedestrian could possibly jump in front of you. People on rural roads smash deer standing in the road all the time, because deer don’t glow in the dark and the people are driving beyond the edge of what their headlights illuminate.
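
Rough back-of-the-envelope numbers make the point. The reaction time, friction, and low-beam range below are assumptions, not measurements; at freeway speed the stopping distance comfortably exceeds what low beams show you.

```java
// Back-of-the-envelope stopping distances vs. an assumed low-beam range. The
// friction, reaction time, and 60 m low-beam figure are assumptions, not data.
public class StoppingDistance {
    static final double FRICTION = 0.7;         // dry pavement, assumed
    static final double G = 9.81;               // m/s^2
    static final double REACTION_S = 1.5;       // assumed human perception-reaction time
    static final double LOW_BEAM_RANGE_M = 60;  // assumed usable low-beam distance

    // Reaction distance plus braking distance for a constant-friction stop.
    static double stoppingDistanceMeters(double speedMps) {
        return speedMps * REACTION_S + (speedMps * speedMps) / (2 * FRICTION * G);
    }

    public static void main(String[] args) {
        for (double mph : new double[] {25, 45, 70}) {
            double mps = mph * 0.44704;
            System.out.printf("%3.0f mph: needs about %5.1f m; low beams show about %.0f m%n",
                    mph, stoppingDistanceMeters(mps), LOW_BEAM_RANGE_M);
        }
    }
}
```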

It is essentially impossible to drive much over 10mph and not be at risk of being unable to stop before an obstacle appears in your path.

That the roads aren’t strewn with roadkill is a testimonial to just how rarely an obstacle or dead-stopped traffic appears from nearly nowhere.

No, I disagree.

People imagine it that way because they imagine e.g. blind bends being an everyday occurrence.
In reality, true blind bends, where you have no visibility around the corner and the road is too narrow to position yourself to see around as you go, are not very common, and where they do exist there’s usually a stop sign.

Or for the examples you gave: I’ve crested hills and found a queue of traffic on the other side plenty of times. I’ve never had any near misses, because if it’s genuinely a blind hill, with the road abruptly ending and then just sky from my POV, then of course I slow down a lot. You’d be a maniac to go over that at 80mph.

I do take your point about night driving though. If someone or something were to run quickly onto the freeway or a country road at night, with no lights or anything reflective on them, they are likely to get hit, which is why roadkill is a thing. This isn’t so relevant to autonomous cars though, which use radar and lidar among other sensors and so don’t depend on visible light.