Self-driving cars

You’re designing a self-driving car.

Obviously there are scenarios in which a self-driving car, or even a human driver, would be incapable of avoiding an accident; e.g., a tire blows out, à la Firestone.

So in that scenario, what would you have the car do? The car is on a highway, and it can either steer off a cliff, killing only the driver, or swerve the other way and possibly slam into oncoming traffic.

Is there a right decision?

The car should pull over to the right side of the road and brake. But in reality, there’s a huge gulf between self-driving cars on previously mapped roads and self-driving cars in real conditions. A self-driving car has to work in the most adverse conditions and be substantially safer than normal driving (because people don’t like the idea of not being responsible for their own safety), and we won’t be close to that any time soon.

I understand Paul Moller is getting into them. :smiley: :rolleyes:
Hey, that was post 11,111… cool.

From a practical perspective, if people think a robot car might decide to kill them then they won’t buy the car so the question is moot. It’s a consumer product, so issues like that matter.

And then there’s the problem that asking a modern or near modern computer to make that kind of decision is likely to end in disaster, since we’re a long way from building AIs even close to human in judgement. Give a car-driving computer the freedom to make that kind of choice and you may well find it killing its owners by mistake. Or even worse, many computers all with the same flaw killing hundreds or thousands of their owners.

Which real conditions don’t involve previously mapped roads? As someone who drives to work every day along the same route the Google cars take, I assure you that these roads are real and populated with morons. A self-driving car would do better than a very large number of drivers I see.

I believe the OP is describing the dilemma that’s been hitting the news: how should a car make an ethical decision about what to do when someone is going to get hurt? Should the car’s occupants get priority? Should we minimize damage? If someone outside did something to cause the problem, should they suffer the consequences? That’s an interesting problem.

You got it. Any answers for these questions?

It’s a fun problem, but not one that should hinder the spread of self-driving cars. The perfect is the enemy of the good. Just getting drunk drivers off the road (when all cars are self-driving) will by a HUGE factor make up for the occasional “fielder’s choice” driving decision which might get flubbed by a computer.
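For what it’s worth, the “minimize damage” answer can at least be sketched in code as an expected-harm comparison over the available maneuvers. This is a purely hypothetical illustration (the options, probabilities, and severity weights are all made up), not a claim about how Google’s or anyone else’s cars actually decide:

```python
# Hypothetical sketch of a harm-minimizing "fielder's choice".
# Every number below is invented for illustration.

def expected_harm(option):
    """Sum of (probability of an outcome) x (severity of that outcome)."""
    return sum(p * severity for p, severity in option["outcomes"])

options = [
    {"name": "swerve toward the cliff",
     "outcomes": [(0.9, 10.0)]},                # very likely kills the occupant
    {"name": "swerve into oncoming traffic",
     "outcomes": [(0.5, 10.0), (0.5, 10.0)]},   # may kill occupant and others
    {"name": "brake hard and hold the lane",
     "outcomes": [(0.3, 4.0)]},                 # likely a survivable crash
]

best = min(options, key=expected_harm)
print(best["name"])   # "brake hard and hold the lane"
```

Of course, picking those weights is exactly the ethical question, and the code can’t answer that for you.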

Yes, of course we’d rather die because of an AI’s confusion than a drunk’s.

(“Getting drunks off the road” is not a compelling argument for self-crashing cars.)

I’ve had a tire blow out. My first inclination wasn’t to drive off a cliff or into oncoming traffic. What you do is maintain steering while applying the brakes and get the hell to the side of the road. Why would a programmed car not attempt the same thing?
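That blowout procedure (maintain steering, brake, get to the side of the road) is easy enough to express as code. A rough sketch, with a made-up Vehicle interface and made-up thresholds, just to make the steps concrete:

```python
# Hypothetical sketch of a blowout response: hold the heading, brake
# gently, and pull onto the shoulder once speed is low. The Vehicle
# class and all numbers are invented for illustration.

class Vehicle:
    """Stand-in for a real drive-by-wire interface."""
    def __init__(self, speed_mph):
        self.speed_mph = speed_mph

    def hold_heading(self):
        pass  # steer against the pull of the flat tire

    def steer_to_shoulder(self):
        pass  # gradual move toward the right shoulder

    def brake(self, fraction):
        # Moderate braking so the car doesn't lose grip on the flat tire.
        self.speed_mph = max(0.0, self.speed_mph - 60.0 * fraction)

def handle_blowout(car):
    car.hold_heading()           # 1. keep the car pointed straight
    while car.speed_mph > 20:    # 2. bleed off speed gently in the lane
        car.brake(fraction=0.2)
    car.steer_to_shoulder()      # 3. move right once things are stable
    while car.speed_mph > 0:     # 4. come to a full stop off the road
        car.brake(fraction=0.4)

handle_blowout(Vehicle(speed_mph=65))
```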

Until I left California, I might have agreed with you. Driving across the country, with only brief jaunts off the interstate, I learned that there is a lot of REALLY shitty mapping out there. Living in the northeast, I’ve learned that the mapping that can be done on California’s wholly modern street structure in Google’s back yard is an anomaly, and places where the streets didn’t start with planned grids are literally hit and miss - and the hits between self-guided cars using this data will be more frequent than the misses.

Most map bases have my house a full tenth of a mile away from its actual location. It is not unusual to drive through segments where the GPS shows the road over there and is screaming at you because you seem to be driving through a pond, along a river or down a cliff. It is not too rare to get into a tangled loop of instructions that looks like a side of spaghetti dropped on your nominally straight route. And this is the oldest and one of the most populous regions of the country, not California, where they can barely keep up with the sprawl.

So Google’s self-driving cars are like a lot of lab and shareware stuff: they work great where they were invented, and under the ideal conditions the developers are familiar with. Too bad such products eventually have to work in the real world. Just like flying cars that would work perfectly in some controlled alternate universe…

Much as when we execute prisoners we do it in a way that no one knows who fired the lethal round or which button activated the lethal injection, we may need some way of providing a random decision where there is no objective accountability and no clear choice of what action to take.
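If you wanted that kind of deliberate blindness, the tie-break could literally be a coin flip among the maneuvers the planner scores as equally bad. A hypothetical sketch (the scoring function is assumed to come from elsewhere):

```python
import random

# Hypothetical sketch: when several maneuvers score as equally bad,
# pick one at random so no fixed rule decides in advance who gets hurt.
def choose_maneuver(options, score):
    best_score = min(score(option) for option in options)
    tied = [option for option in options if score(option) == best_score]
    return random.choice(tied)

# e.g. choose_maneuver(["cliff", "oncoming lane"], score=lambda o: 10.0)
```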

It is if accidents due to AI driving errors are significantly less common than accidents due to driving under the influence. Also, the fact that the mapping infrastructure isn’t complete now is not an argument against self-driving cars being deployed in the future. Yeah, it works better in California, because that’s where Google is and that’s where most of the investment money is, but there’s nothing inherent to California’s road system that makes it mappable while making Alabama’s roads impossible to map.

I suspect that there will be a lot fewer deaths due to AI glitches than due to drunks. Plus smart cars can get smarter. Drunks don’t. (Though at least fewer people drive drunk today than used to.)

I’ve lived all over the country, and the pre-Google maps of roads in the middle of Louisiana or the middle of Illinois were pretty good. Even in California roads snaking west to east on the way to Ft. Bragg are hardly grid-like.
Maps are hardly perfect now. The car doesn’t work on GPS or maps alone; it pays attention to where it is going. You’ll need manual driving when you get to private roads though, at least for now. By the time the cars are ready for sale things will be much better. I know they’ve developed ones without steering wheels, but I suspect those won’t sell, to start with at least.

I don’t know where they’ve driven the cars, but an article about them had them go off the grid to someplace rural. I’d suspect they’ve taken them up 17 to see how they do on those twisty mountain roads. Our roads don’t all look like the ones in Mountain View.

And it wouldn’t have to try to remember that class in driver’s ed. But here is a dilemma. Say you are about to zip through an intersection on the green with a car right behind you moving fast. A car runs the red light. Do you hit the red light runner, or do you slam on the brakes and get rear-ended by the car behind you?
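One way a planner might weigh that choice, purely hypothetically, is to check whether the car behind can actually stop in the gap it has left. The formula below is just a textbook reaction-plus-braking stopping-distance estimate with made-up numbers, not anyone’s real algorithm:

```python
# Hypothetical sketch: brake hard only if the follower can plausibly
# stop in the gap behind you; otherwise a rear-end hit is likely.
# Reaction time and deceleration are rough textbook figures.

def stopping_distance_m(speed_mps, reaction_s=1.0, decel_mps2=6.0):
    """Reaction distance plus braking distance."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

def hard_braking_is_safe(gap_behind_m, follower_speed_mps):
    return stopping_distance_m(follower_speed_mps) < gap_behind_m

print(hard_braking_is_safe(gap_behind_m=15, follower_speed_mps=25))  # False
```

Either way the car is choosing between two collisions, which is the whole dilemma.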

Slam on the brakes. It’s not my fault if the jackass behind me hasn’t left enough room to stop.

Well, it’s not his fault that the jackass who ran the light decided to run the light, either. Getting rear-ended might end up with you hitting the guy running the light, anyway.

Sometimes there might not be any way to “win”. In Google’s case, they’ve declared that they’re responsible for what their car does. If that holds weight, driving could be weird in the near future while drivers and driverless cars mix and mingle.

So, we wouldn’t even know which AI to blame? Great.

I’m not knocking the concept. It’s just one of those things stuck in the gosh-wow phase, with seven heads of reality lurking around the next corner waiting to eat it. Making it “work” in a trivial, highly controlled fashion isn’t as big a leap towards real-world implementation as the fans make it sound.

I could see driverless cars working very well. On exclusive driverless highways. Where every vehicle is controlled by computer. Expensive, but one day it could be the norm. Possibly safer, too.

I’d find it so. If the trade-off was 1000 collisions due to drunkenness vs. the possibility of an AI making a split-second less-than-optimal decision… sure, I could see that being worthwhile.

Belatedly, it occurs to me that one could use AI to determine if the driver was drunk and disable the car accordingly, which would offer significant benefit without delving into self-driving problems.
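That interlock idea doesn’t need much intelligence at all. A sketch of the sort of thing you might do, with made-up signals, weights, and cutoff:

```python
# Hypothetical impairment interlock: score a few simple driving signals
# and refuse to drive above a threshold. The signals, weights, and
# cutoff are all invented for illustration.

def impairment_score(lane_weave_m, steering_jerk, reaction_delay_s):
    return 0.5 * lane_weave_m + 0.3 * steering_jerk + 0.2 * reaction_delay_s

def allow_driving(signals, cutoff=1.0):
    return impairment_score(**signals) < cutoff

signals = {"lane_weave_m": 0.8, "steering_jerk": 1.5, "reaction_delay_s": 2.0}
print(allow_driving(signals))  # False: the car stays parked
```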