Watched Werner Herzog’s *Lo and Behold, Reveries of the Connected World* last night, highly recommend.
For now, it seems clear the masses are too busy dicking with their phones to realize that technology is racing far ahead of them, or that they should care.
Anyway, sooner or later an autonomous car will be in a situation where it has to execute a maneuver that either kills the sole passenger or kills more than one person outside the car.
What’s it gonna do? Who’s going to decide how to program it?
You use an algorithm, like the one in US drones. Hopefully the car will be less inclined to repeatedly kill dozens of innocent women, children, and families attending weddings.
It will decide based on what local laws dictate. Legislators don’t care whether it’s “your car” or not; they’re just interested in the greater good.
Ergo, it will choose to take out the fewest people, whichever that happens to be.
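In effect, something like this throwaway sketch, where the candidate maneuvers and casualty numbers are purely hypothetical placeholders, not anything from a real system:

```python
# Minimal sketch of a "fewest casualties" rule.
# The maneuvers and casualty estimates below are hypothetical placeholders.

def pick_maneuver(candidates):
    """Return the candidate maneuver with the lowest expected casualty count."""
    return min(candidates, key=lambda m: m["expected_casualties"])

candidates = [
    {"name": "stay_in_lane",     "expected_casualties": 1},  # kills the sole passenger
    {"name": "swerve_onto_path", "expected_casualties": 3},  # hits the pedestrians
]

print(pick_maneuver(candidates)["name"])  # -> stay_in_lane
```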
This kind of dilemma ignores that a real autonomous car won’t be perfect. The sensors could give bad information, or the algorithm could misinterpret it. So if you program the car to avoid people outside it, an important question is how often it will detect people who aren’t actually there.
An ideal autonomous car wouldn’t get into this dilemma in the first place; it would detect people early enough to avoid any situation where it couldn’t steer clear of them without causing an accident.
Under more realistic conditions with imperfect information, the best action in my opinion is the least complicated one: just brake hard if you can’t avoid a collision, nothing more. The more complicated the action, the more likely it is to have unintended consequences. And while braking hard can still be dangerous, it is probably the least dangerous response to a sensor or detection failure.
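Roughly the logic I have in mind, as a toy sketch; the confidence threshold and the detection fields are invented for illustration, not taken from any real driving stack:

```python
# Rough sketch of the "just brake hard" fallback under imperfect information.
# The detection fields and the 0.9 confidence threshold are invented for
# illustration; real systems fuse many sensors and tune such numbers carefully.

BRAKE_CONFIDENCE_THRESHOLD = 0.9  # how sure we must be before swerving at all

def choose_action(obstacle_detected, detection_confidence, collision_unavoidable):
    if not obstacle_detected:
        return "continue"
    if collision_unavoidable or detection_confidence < BRAKE_CONFIDENCE_THRESHOLD:
        # Least complicated response: shed speed, stay in lane.
        # A phantom detection (false positive) then costs at worst a hard stop,
        # not a swerve into other traffic.
        return "brake_hard"
    return "evade_if_clear"
```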
It will take a while before the technology is good enough not to kill all of them.
I, for one, do not believe truly autonomous driving will happen on regular roads in the next 20 years. It will be 90% ready, then 95%, then… like Zeno’s paradox. Eventually they will embed sensors and markers in highways and allow only smart cars on those.
Society will not accept truly autonomous cars until they drive much better than the average driver, and I don’t see the technology for that.
Should “you” kill “you,” or two pedestrians? And how confident would you be in your ability to resolve that moral and ethical dilemma, and act on it, in the heat of the moment?
It certainly is a question of societal acceptance; however, I think the technology is pretty much there already to be better than 95% of drivers in 95% of situations.
Those pesky last few percent are the problem, though. Google could produce data showing that totally autonomous driving would cut deaths by 50% overall, but it would have to admit that a small fraction of the deaths that do still occur are down to failures in the software.
Now, no one is going to pay attention to a front page that says 50 deaths were avoided today, but they will lap up a story about a single blonde cheerleader who was decapitated due partly to a lidar malfunction and a software misidentification.
Well, that’s a good question. In the simplest examples I’m thinking of, human reflexes and self-preservation probably override any moral or ethical consideration.
You’re driving on a highway at highway speed in, let’s just say, crowded traffic: not heavy enough that you can’t drive the limit, but enough that there are cars all around.
Something car-sized drops off the truck in front of you. There’s no way you can stop in time if you stay in your lane. You’re going to swerve without worrying about whether you’ll bring other people and cars into the equation; at least, I think anybody who didn’t completely freeze would.
With an autonomous car, someone’s already researched the probabilities of the outcomes, someone else has decided which is the least “bad,” and that’s what the car’s gonna do.
Or it should triage the pedestrians first, and then inflict the same injuries on the passenger. And also communicate with all of the other cars, see if anyone in the passenger’s family is in one, and do the same to them. Or maybe the refrigerator or the toaster can take them out.
What should a human driver do when facing the exact same problem? And what does society do after the human driver acts on that decision?
After you’ve figured out your answer to that one get back to us.
While you’re at it, describe the decision process in the real world, where nobody can say with 100% certainty that the soon-to-be accident is necessarily fatal. IOW, your straw man is made of unobtanium. Try making a more realistic one out of real materials.
This is one issue I have with self-driving cars: how much defensive driving will be built into them? How good, exactly, can you make the AI? How well can you get it to mimic the thought process of an expert driver?
For example, we humans have intuition, which means that in a driving situation, if I see an anomalous event up ahead, or just something that feels off, I am going to take some action to avoid any possible complications 5 or 10 seconds later.
In heavy traffic I tend to drive very defensively, and when someone on a side street comes charging out (only to slam his brakes on at the last moment), I am going to make sure I miss him, either by changing lanes or by slowing down enough that I can brake in time if he doesn’t stop. Likewise, if I see an animal or a child near the road 20 seconds ahead, I will likely slow down and keep a sharp eye on them. Will the AI be able to anticipate possible complications like that? The impression I get (reading between the lines) is that it does well at reacting, but isn’t much good at being proactive.
In other words, an expert human driver will be sure to do something far enough ahead of time that said Kobayashi Maru-type choice likely won’t be forced on him.
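To make the reactive-versus-proactive point concrete, here’s a toy sketch of the kind of “slow down early” behavior I mean; the hazard types, distances, and speed factors are all made-up numbers, not anything a real system uses:

```python
# Toy sketch of proactive defensive driving: slow down well before a
# potential hazard, instead of only reacting once it enters the lane.
# Hazard types, distances, and speed factors here are all made up.

PRECAUTION_HORIZON_S = 20        # start caring about hazards roughly 20 seconds ahead
CAUTIOUS_SPEED_FACTOR = 0.7      # shed about 30% of speed near children or animals

def target_speed(current_speed_mps, hazards):
    """Reduce speed early if a child or animal is near the road ahead."""
    for h in hazards:
        seconds_away = h["distance_m"] / max(current_speed_mps, 0.1)
        if h["type"] in ("child", "animal") and seconds_away < PRECAUTION_HORIZON_S:
            return current_speed_mps * CAUTIOUS_SPEED_FACTOR
    return current_speed_mps

# e.g. a child spotted 300 m ahead at 25 m/s (about 12 s away) triggers early slowing
print(target_speed(25.0, [{"type": "child", "distance_m": 300}]))  # -> 17.5
```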
You’ve dodged my questions. The ones that are necessary to answer first in order to have a meaningful discussion of the real situations legitimately surrounding your artificial straw man.
My answer: The autonomous vehicle should preserve its occupants at the expense of other vehicles, structures, and pedestrians. Relative head count is immaterial.
Why?
Because any other option will impede adoption of autonomous vehicles. People will not get into a vehicle designed to kill them to save strangers.
They presently drive with the *de facto* attitude that they’ll preserve themselves first. Although they may not have thought about it specifically, that’s what the overwhelming majority actually do.
Why wouldn’t they demand the same from their automated cars? And why wouldn’t we, as a society, endorse that view? It’s the one we endorse today.