Because people don’t think risk through?
I like the scenario where everybody dies. That’ll learn 'em.
I had completely skipped over your post #9 where you gave a thorough response. :smack: Sorry to have sounded snarky; my “no you” counter came out sounding harsher than I intended.
FWIW, I agree with your reasoning in that post too.
Trouble is, the human does not and cannot face the exact same problem. The human has to make snap decisions; the computer falls back on logic. It would be equivalent to a human training for this exact circumstance, and seeking out great masters of wisdom to pre-sort the ethical implications.
Totally disagree, because no one is going to allow that on public roads. The person may desire that, but since it would be illegal and not available, they will get the cars that save according to some greater good.
Is the GOP in control?
A simple program to detect what the car is (and thus its monetary value):
Let’s see: I’m worth $142K.
I am usually parked in Palo Alto, CA (home of Stanford AND Silicon Valley).
Ergo, my people are in the 90th percentile of the US population.
I am now operating on a surface street in West Oakland.
My occupants are likely worth more to the world than the people on these streets.
I have one occupant. She or he is worth more than 5 residents of West Oakland.
The break-even score is 8 West Oaklanders.
Choose appropriately.
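Spelled out as code, the nightmare reads something like this. To be clear, this is a tongue-in-cheek sketch: every ZIP code, dollar figure, and function name is invented for effect, not drawn from any real system:

```python
# Tongue-in-cheek sketch of the feared "Machiavellian" logic.
# Every ZIP code, dollar figure, and function name is invented for effect.

MEDIAN_INCOME = {"94301": 158_000, "94607": 18_000}  # hypothetical ZIP lookup

def person_value(zip_code: str) -> float:
    """Price a human life off local median income. (That's the horror.)"""
    return MEDIAN_INCOME.get(zip_code, 50_000)

def sacrifice_pedestrians(home_zip: str, street_zip: str,
                          occupants: int, pedestrians: int) -> bool:
    """Swerve into the crowd whenever the occupants 'outscore' it."""
    return occupants * person_value(home_zip) > pedestrians * person_value(street_zip)

# One Palo Alto occupant vs. five West Oakland pedestrians;
# break-even here is 158_000 / 18_000, i.e. about 8 pedestrians.
print(sacrifice_pedestrians("94301", "94607", 1, 5))  # -> True
```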
You sure there will NEVER be such Machiavellian instructions?
I envy your assurance of that.
All autonomous cars will be fitted with an “ethical conflict resolution protocol” for just such occasions, i.e. five sticks of dynamite.
Ummm… I know today’s hipsters love the aesthetic of Victorian technology (“Steampunk”) (my gawd, spell-check knows the term).
But: dynamite is too old-style for this age; we will need a plastic explosive molded into the bumpers, dashboard, and backs of the front seats.
Actually, Claymore mines, or just their basic design: propel hard, sharp bits at great speed into whatever happens to be nearby.
There are autonomous cars driving in public today. Not in great numbers, and only as industrial research projects.
You sound very sure of what they’re actually using for “ethical” programming today. Care to post a cite?
My belief is there will never be the precision needed to evaluate the consequential harms of various scenarios. Folks who think of autonomous vehicles as running a giant list of hyper-specific rules (“If *this* exact scenario happens, do exactly that”) are utterly missing the point of how autonomy is implemented and controlled.
The machines have (or will have) a hierarchy of general goals. “Don’t crash” is one. “If you must crash, aim for something flexible in preference to something stiff” sounds good, but that assumes the computer can tell the difference between a parked car and a bridge abutment of about the same size and shape. Which it may not be able to do.
There (probably) never will be “Count the pedestrians and aim for the fattest, softest-looking ones so the fewest are harmed.”
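To make “hierarchy of general goals” concrete: here is a minimal sketch of a ranked-goals planner, assuming the simplest possible scheme (lexicographic scoring). The goal names and scores are my own invention, not anything any manufacturer has published:

```python
# Minimal sketch of a ranked goal hierarchy; everything here is hypothetical.
from typing import Callable, List

Maneuver = str  # stand-in for a real trajectory type

def pick_maneuver(candidates: List[Maneuver],
                  goals: List[Callable[[Maneuver], float]]) -> Maneuver:
    """Choose the candidate scoring best on the highest-priority goals.

    Scores are compared lexicographically, so an earlier goal like
    "don't crash" always dominates a later one like "hit something soft".
    """
    return max(candidates, key=lambda m: tuple(g(m) for g in goals))

# Toy usage: priority order matters more than score magnitude.
goals = [
    lambda m: 1.0 if "no-crash" in m else 0.0,  # goal 1: avoid any crash
    lambda m: 1.0 if "soft" in m else 0.0,      # goal 2: prefer soft targets
]
print(pick_maneuver(["brake soft-hit", "swerve no-crash"], goals))
# -> "swerve no-crash": goal 1 dominates
```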
As has been discussed repeatedly in the umpteen threads on autonomous vehicles, they will drive differently than humans. The errors they make will not be the errors humans make. The net carnage will be vastly reduced. But it won’t be zero. And we collectively need to accept that, or sentence tens of thousands of Americans to be killed every single year out of fear of these different but vastly fewer mistakes.
It seems pretty damned obvious to me which is *really* the ethical choice.
Of course, if the car was really ethical, it would just refuse to operate so it wouldn’t contribute to global warming.
Turns out Nomad was right. “Find and sterilize imperfection” is a Good Thing. Maybe if we all think only Good thoughts it’ll let us live. Who knew the Harbinger of Doom would look this cute: https://www.google.com/search?hl=en&tbm=isch&q=google+self-driving+car
===========
I expect the autonomous/semi-autonomous logic to be something similar to the below:
Is a collision imminent if present course is maintained? If so:
Is there an emergency maneuvering solution available (using only road pathways that are predictable. No jumping curbs, etc.) that avoids a collision? If not:
Brake immediately while maneuvering toward the object that results in the least total impact energy to the driver at the time of collision(s). If the collision course is unpredictable, just brake while maintaining present course.
I don’t think the logic needs to be programmed to make any further ethical decisions beyond that. Priority should be to protect the driver, always. If the car is suddenly faced with a crowd of people in the middle of a mountain road, and the only alternatives are to drive off the cliff or crash head-on into the side of the mountain, the car should brake and aim for the pedestrian that has the least velocity relative to the car at the expected moment of impact.
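Rendered as code, that cascade might look something like the sketch below. The data types and energy numbers are placeholders for perception machinery I’m not pretending to model:

```python
# Sketch of the decision cascade above; all fields are invented placeholders.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Impact:
    """One reachable collision option."""
    description: str
    energy_to_driver_kj: float  # estimated impact energy delivered to the driver
    predictable: bool           # is the collision course predictable?

def choose_action(collision_imminent: bool,
                  on_road_escape: Optional[str],
                  impacts: List[Impact]) -> str:
    """The proposed cascade, in priority order."""
    if not collision_imminent:
        return "maintain course"
    if on_road_escape is not None:        # predictable road pathways only
        return f"execute escape maneuver: {on_road_escape}"
    predictable = [i for i in impacts if i.predictable]
    if not predictable:                   # unpredictable course: just brake
        return "brake hard, maintain course"
    best = min(predictable, key=lambda i: i.energy_to_driver_kj)
    return f"brake hard, steer toward: {best.description}"

# Toy usage: no escape route, two possible impacts.
options = [Impact("parked car", 40.0, True), Impact("bridge abutment", 120.0, True)]
print(choose_action(True, None, options))  # -> brake hard, steer toward: parked car
```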
There isn’t any such algorithm in U.S. drones, in the sense of some computer program that automatically carries out “signature strikes”. All those U.S. military decisions to kill people–shooting Osama bin Laden in the face, or killing dozens of innocent women, children and families attending weddings–are still made by human beings. Drones are not really “robots” in a Terminator/HAL 9000/C3PO sense, they’re just airplanes that are remotely piloted by a human being who isn’t physically sitting in the cockpit.
The U.S. drone warfare program has very little in the way of lessons (good or bad) for how we should think about self-driving cars. Self-driving cars really would be “robots”.
This is the real answer, IMHO. The car is going to be programmed to behave in specific ways, based on technical considerations, not ethics. This will probably be similar to how a professional driver with a sense of self-preservation would behave.
Less serious answer: let the market decide! Register an amount you’re willing to pay to avoid being chosen as the victim. If the car has to choose in an emergency it can quickly make the calculation to maximize revenue. Didn’t register? Too bad. They left their houses, they knew what they were getting into. I say, let 'em crash.
I would disagree. If some type of accident is inevitable, the algorithm should favor an outcome that would result in fewest fatalities, even if it means endangering the passengers. For example, if the choice is between crashing into a pedestrian or crashing into a concrete wall, it should choose the latter, because the occupant of a car is more likely to survive a crash than a pedestrian.
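As a toy illustration of that rule (the survival probabilities here are numbers I made up for the example, not crash statistics):

```python
# Toy "fewest expected fatalities" rule; all probabilities are invented.

def expected_fatalities(pedestrians_struck: int, p_ped_death: float,
                        occupants: int, p_occ_death: float) -> float:
    return pedestrians_struck * p_ped_death + occupants * p_occ_death

hit_pedestrian = expected_fatalities(1, 0.8, 1, 0.0)  # unprotected pedestrian
hit_wall = expected_fatalities(0, 0.0, 1, 0.1)        # belted occupant, crumple zones

# 0.8 expected deaths vs. 0.1: the wall wins under this rule,
# even though it puts the passenger at risk.
print("hit wall" if hit_wall < hit_pedestrian else "hit pedestrian")
```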
Volvo has already announced that they would accept liability for their self-driving cars (though I believe they don’t sell self-driving cars yet, so the issue is academic). If this becomes the trend, it should provide the right incentive for car manufacturers to program their cars to minimize fatalities, not just protect the passengers.
Are you going to mandate that everyone buy driverless cars? Or just let all of us continue buying human-controlled death traps until you change the robot car’s program to actually protect its owner? Because nobody is going to buy something that will kill them on purpose, no matter how utilitarian the result.
But some of our sophisticated missiles are getting closer to that Terminator/HAL 9000 robot sense.
ETA: The LRASM, for example, may be called upon to distinguish targets: “Is that the Chinese aircraft carrier I’m hunting for, or just a fishing trawler?”
I intend to hack my autonomous vehicle to convince it that I’m a billionaire. “Siri, call me Daddy Warbucks.”
Leaving aside that the meatware who currently make such decisions get it wrong as well, the answer to ‘Who’s going to decide how to program it?’ is that, really, no one person does. You start with baseline code (worked on by a team, then tested in virtual and real environments before being rolled out for more extensive testing in more variable environments), but that’s not what consumers get…or will get when these things are fully rolled out.

What’s basically happening right now is that the software is put out and its (weak) AI adapts and learns from experience and feedback from the users. The more folks use it, the more it adapts and learns. Like the folks who drive and those who program it, however, it’s not infallible. So when some person driving another vehicle puts the AC into a situation where someone will die (swerves into its path, forcing it to calculate an emergency lane change into a lane that happens to have a crosswalk full of puppies and nuns), it will probably swerve, because it won’t have had time to calculate all the permutations of the situation…much like a human driver wouldn’t have that god-like ability.

At that point the survivors, as well as the families of those killed, will probably sue everyone, so in the end there will be no real difference between AC and meatware drivers.
The car isn’t going to be smart enough to analyze whatever you’re going to run into and determine the likelihood of you dying vs someone else dying. The number of variables is insane and the outcome far too random.
The overwhelmingly vast majority of the time, hard braking is the safest thing to do when an accident is imminent. The overwhelmingly vast majority of the time, staying on the road is safer than going offroad and running over something. The overwhelmingly vast majority of the time an autonomous car will have already started braking before a human driver would realize an accident was imminent.
I’ll also run by you the idea that no company is going to commission an algorithm that deliberately instructs a car to run over a pedestrian. The car identifies an object as a human being, then changes course to run that object over? Never going to happen; the lawsuit they’d lose would be epic. It is not necessary to program the car to analyze the potential outcome of an accident and choose a different accident; you simply have to stop the car as quickly as possible and stay on the road.
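In other words, under this view the entire “ethics module” reduces to something like this deliberately trivial sketch:

```python
# The whole "ethics module" under this view: no outcome analysis at all.
def emergency_policy() -> tuple:
    # Brake as hard as possible, stay on the road. That's it.
    return ("brake hard", "stay in lane")
```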
The correct answer is that the car goes into roblock because its First Law imperative to prevent the pedestrians from coming to harm comes into conflict with its Second Law (as weighted by the First Law) imperative to convey its operator safely to his/her destination, and a traffic jam ensues when it stalls in the middle of the intersection.
Rest assured that Traffic Inspectors Olivaw and Baley are hard on the case.