Self-driving car meets menacing gang

Probably wouldn’t even have to do that (though I know you weren’t serious). I saw a demo a year or so ago of one-way bulletproof glass that was only three layers and was proof against even rifle-caliber rounds (well, up to say 7.62…you are fucked if they have something heavier than that). You could program the driver AI to go into Get Out Of Dodge™ mode if shots are fired at the car, and to take it immediately to the nearest police station (neatly sidestepping the possibility that it’s the police firing at you because you are using the car for a getaway), notifying both the NOC and the police that the incident has happened.
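
In code, that escalation logic is only a few lines. Here’s a minimal sketch of the control flow; everything in it (`EmergencyResponder`, `nav`, `comms`, the event name) is a made-up illustration, not any real AV stack’s API:

```python
# Hypothetical sketch of "Get Out Of Dodge" mode. All names here are
# invented for illustration -- this is the control flow, not a real API.

from dataclasses import dataclass

@dataclass
class Incident:
    kind: str
    location: tuple  # (lat, lon)

class EmergencyResponder:
    def __init__(self, nav, comms):
        self.nav = nav      # hypothetical route-planning interface
        self.comms = comms  # hypothetical uplink to the NOC / police

    def on_shots_detected(self, location):
        incident = Incident(kind="shots_fired", location=location)
        # Get Out Of Dodge: drop the original destination and reroute
        # to the nearest police station.
        station = self.nav.nearest(kind="police_station")
        self.nav.set_destination(station, priority="emergency")
        # Notify both the NOC and the police, so it isn't a surprise
        # when the car rolls up to the station.
        self.comms.notify("NOC", incident)
        self.comms.notify("police", incident)
```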

Of course, I’m sure we can come up with other improbable one-off scenarios that would end up getting the passenger killed in some small-probability event. What happens, say, if aliens invade…how will the car handle THAT, I wonders…

Would you trust your safety to any customer service department you have interacted with? What if it was outsourced far away and some network lag issues delayed the evasive measures? A simple face mask would ensure the perpetrator would face no consequences, despite cameras.

Then they set the car on fire. Or threaten to, until the passengers leave the vehicle.

About as much as I trust my safety to my fellow drivers, a large percentage of whom seem to be idiots when they aren’t drunk or impaired by their cell phones. As for lag, I don’t see that as an issue…you are thinking in old-school terms, and probably about less safety-critical products. You can have your call center in India or Pakistan or wherever when it’s for a relatively unimportant product, but for public safety you are going to have a call center closer to home. Also, you will almost certainly have a higher-level AI involved, as well as local fire, rescue, and police if shots are fired.

As for a face mask, well, that would work today as well, so I don’t see the issue. If someone did what you are positing today, there would be no real consequences for the perpetrator, assuming they left zero forensic evidence at the crime scene. So, shit happens, and this would be no different. Again, I’m not seeing the issue, to be honest.

Today you could run the masked chainsaw-wielder over in self-defense.

Dare I say it? An onboard AK-47 would neutralize most of these issues.

:wink:

In the future you won’t need to, as the AI will rightly figure out that chainsaws are notably ineffective against cars or their passengers. :stuck_out_tongue: More broadly, you could give the vehicle some evasion capabilities as well as networking capabilities, so that if such a situation comes up the authorities are contacted and the incident is escalated, probably to a human who is empowered to make decisions. I don’t know how often you think this could happen, but it isn’t going to happen so frequently that it overwhelms the local NOCs, at least not outside of a Hollywood movie (a really bad one, say from the ’80s).

The self driving car will use its sensors to identify all the members of the crowd, access their various social media profiles and other databases, then use advanced predictive analytics to identify which ones to run over, based on earnings, contributions to society and potential legal repercussions.

IMO there will come a time (or more specifically a court ruling) where that would work about as well as showing CCTV footage of a bartender handing the car keys back to a person about to walk out of his bar after being served 12 beers.

There’s always going to be an override, but it won’t necessarily be an override operated by the occupants of the car, since in the Brave New World of self-driving cars it is entirely possible that none of the occupants of the car will be able to drive.

Forget marauding hordes, and think of a less dramatic, and hopefully more frequent, problem. Say you have a two-lane road with a continuous white line down the middle, so crossing into the other side of the road is forbidden. But ahead of you a heavy lorry is stopped, and its hydraulic lifting gear is engaged in unloading large amounts of something very heavy. It’ll be there for a while. Or it’s broken down. The only way to get by is to cross the white line, which is forbidden.

What happens now, in real life, in such a situation? You cross the white line to pass the lorry, that’s what. You do so with care, and possibly you take advantage of hand signals from someone stationed on the other side of the road who has better lines of sight than you do, or is in a position to signal oncoming traffic to stop if necessary. Informal, but it works.

What will a self-driving car do? Either it will let the occupant take over, or (in the Brave New World) it will alert a remote human operator who will assess the situation and may, e.g., take remote control of the car, or simply give the car a one-time authorisation to break the “don’t cross the white line” rule.
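
That one-time authorisation could be as simple as a short-lived, single-use token. A minimal sketch, with invented names and fields, assuming the planner checks for a token before any rule-breaking manoeuvre:

```python
# Sketch of a one-time rule-override token. The rule names and fields
# are assumptions for illustration, not any real vendor's protocol.

import time
import secrets

class OverrideToken:
    def __init__(self, rule, ttl_seconds=120):
        self.rule = rule                    # e.g. "cross_solid_line"
        self.token = secrets.token_hex(16)  # unforgeable identifier
        self.expires = time.time() + ttl_seconds
        self.used = False

    def authorises(self, rule):
        return (not self.used) and rule == self.rule and time.time() < self.expires

    def consume(self):
        self.used = True

def may_break_rule(rule, token):
    """The planner calls this before any manoeuvre that violates a rule."""
    if token is not None and token.authorises(rule):
        token.consume()  # one-time: a second attempt needs a fresh token
        return True
    return False

# Usage: the remote operator issues a token; the car may cross exactly once.
token = OverrideToken("cross_solid_line")
assert may_break_rule("cross_solid_line", token) is True
assert may_break_rule("cross_solid_line", token) is False
```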

The point is, however good the driving software and the data on which it operates, situations can always arise that require human intervention to resolve, and self-driving cars will have to be designed to allow for human intervention. And the human intervention system can be availed of if the car is surrounded by a marauding mob. Even if the occupants can’t take control of the car, they can trigger the intervention system so that a remote operator can.

Huh…that sounds suspiciously like…

Stranger

UDS, why not assume that the computer would cross the white line on its own initiative? Just because it follows programming doesn’t mean that the programming must be Lawful Stupid.

Yes, you just need more modes. An emergency mode which allows for small infractions of the law where it is not dangerous to do so. A Mad Max mode where its prime purpose is to protect itself and its cargo at the expense of any other life. A Walking Dead mode where it actively tries to run over as many people as possible en route to its destination.

I cannot envision any possible problems with that.

Because I think the self-driving vehicle companies can spot the obvious liability problem with programming cars to break the law. Regulators might spot it also.

They already program them to break the law.

BTW, is it actually against the law to cross a white line to carefully drive around an obstacle? Certainly in this country (Australia) it is lawful to cross unbroken lines for a number of reasons.

You know, once a large number of vehicles are SDCs, this problem of road marauders would shrink.

Not because each individual encounter goes badly for the marauders, but because they can’t possibly get away with it. Those SDCs will have very high-resolution cameras and vast amounts of SSD storage. A call to the police, or damage to the car, may in fact cause the SDC to try to unload its data via 4G as rapidly as possible. And even if the marauders knew where the drives were, they’d probably be buried under the floor beneath the driver’s seat and would take tools and a lift to even access.

And that doesn’t even matter. Other passing SDCs would get a clear image of the attackers, serving as autonomous witnesses. The authorities would be able to subpoena the owners of every SDC that was on that section of road at the time the crime occurred and basically recover video containing an overwhelming amount of evidence proving what happened.

Once the SDC concentration is high enough, it would also not be possible for criminals to escape: even if they try to run away on foot, they are always in view of a street. If they hide in a building, they have to come out later. Eventually they’ll probably add face-recognition subsystems that can identify specific individuals, uploading a hash of each face they see to the cloud in real time. (Hashes are tiny in data terms and wouldn’t break the bank on bandwidth the way video or images would.)
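
For what it’s worth, the hash-upload idea could look something like this. The face-embedding step is the genuinely hard part and is stubbed out here; `embed_face`, the quantisation grid, and the endpoint URL are all assumptions, and matching only works if near-identical faces quantise to the same bucket, which is itself a strong assumption:

```python
# Sketch of uploading a tiny face hash instead of video. Everything
# here is hypothetical illustration, not a real recognition pipeline.

import hashlib
import json
import urllib.request

def embed_face(image_bytes):
    """Hypothetical: return a fixed-length embedding vector for a face."""
    raise NotImplementedError

def face_hash(embedding, grid=0.25):
    # Quantise the embedding so small variations map to the same bucket,
    # then hash the bucket so no raw biometric data leaves the car.
    buckets = tuple(round(x / grid) for x in embedding)
    return hashlib.sha256(repr(buckets).encode()).hexdigest()

def report_sighting(digest, lat, lon,
                    endpoint="https://example.invalid/sightings"):
    # A ~100-byte upload per sighting, versus megabytes of video.
    payload = json.dumps({"hash": digest, "lat": lat, "lon": lon}).encode()
    req = urllib.request.Request(endpoint, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
```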

My point is that even if our hypothetical victims aren’t armed (and everyone toting guns around causes problems of its own, as has been discussed ad nauseam in other threads), the fact that their attackers would almost never get away with it (and the criminals would know they were likely to be caught immediately) would make this crime almost never happen.

I mean, look at it this way: do you expect subway trains to run over would-be hijackers who try to block them? What if the hijackers put a barricade on the tracks, what then? Most subway trains aren’t automated, but exactly when would you expect a train to ignore a track-obstruction sensor?

As an engineer, I’d say the answer is “never”. If an obstruction is detected, you emergency brake. Always. Use triple (or more) redundant sensors to prevent false trips.
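
The voting logic for that is about five lines. A sketch, assuming independent boolean obstruction flags and a strict-majority rule so that a single faulty sensor can neither trigger nor suppress an emergency stop:

```python
# Majority voting over redundant obstruction sensors: brake whenever
# a strict majority agree, with no mode that ignores the result.

def obstruction_detected(sensor_readings):
    """sensor_readings: list of booleans from independent sensors."""
    votes = sum(1 for reading in sensor_readings if reading)
    return votes * 2 > len(sensor_readings)  # strict majority, e.g. 2-of-3

def control_step(sensor_readings, brake):
    if obstruction_detected(sensor_readings):
        brake()  # always; there is no "ignore obstruction" branch

# One faulty sensor reporting an obstruction is outvoted...
assert obstruction_detected([True, False, False]) is False
# ...but two genuine detections always trigger the emergency brake.
assert obstruction_detected([True, True, False]) is True
```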

Sure, bad guys could sneak into a subway tunnel, sabotage a sensor so the train detects an obstruction, and then once the train stops they could rob or kill everyone.

But they aren’t going to get much money (especially as most adults today carry no cash, and phones are themselves tracking devices and difficult to steal), and they’re going to have a tough time escaping. And there are cameras all over the place, so their odds of getting away with it are dismal.

Reliable engineering is careful thought, careful planning, and trying to make a darn system adhere to a series of simple, clearly understandable principles. (With an autonomous car, that principle is, above all else, minimizing the probability of a collision with anything outside the vehicle.)

And as I have found over the last few years, it’s damn hard to make an actual system, no matter how simple, reliably stick to the principles you specify, so you definitely don’t want to handle absurd edge cases like the one in this OP differently from how you handle everything else.

Where I live, it is absolutely against the law. A solid line may never, ever be crossed for any reason.

So your car and my car are never going to be friends.
And your car’s legal department and my car’s legal department are going to face some complicated situations when they stand before the judge in traffic court.

I think you’re missing the point when discussing how any one problem may be solved. That’s not the way AI works. We’re not writing special code to cover every possible set of conditions we can think of. We need the car to be able to respond to numerous situations which we simply cannot foresee, and hope it gets it right based on what it’s learned.

When any specific problem is found you may be able to envision a workaround or a solution. But what if the future is dominated by unknown unknowns, and the cars just won’t work until we solve general intelligence?

Most of the driving that’s been done so far has been in controlled areas that have been carefully mapped; the cars may even have 3D models of the entire terrain down to the little details, which makes it a lot easier to spot things that shouldn’t be there. The testing has also largely been designed to avoid dangerous situations, unknown roads, and other obstacles. And now that these cars are showing up on regular roads, there are some worrying signs that situations can arise that simply confuse them, situations no one could foresee happening.

Imagine something like three autonomous cars from different manufacturers driving in a row, and small differences in their programming regarding following distances, acceleration rates, or whatever suddenly causing an oscillation; or many of them getting onto a road together and producing dangerous emergent patterns, like the standing wave that forms in the aftermath of a traffic jam.
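
You can see the flavour of that failure mode in a toy simulation. Every number below (gains, headways, the braking event) is invented, and there’s no collision handling; the point is just that a mixed fleet of slightly different follower controllers can amplify a lead car’s brief braking as it propagates down the line:

```python
# Toy car-following simulation: a lead car brakes briefly; we watch how
# the speed disturbance propagates down a line of followers with
# slightly different controller tunings. All constants are invented.

DT = 0.1             # simulation step, seconds
DESIRED_GAP = 20.0   # target headway, metres

def follower_accel(gap, closing_speed, k_gap, k_speed):
    """Linear controller: correct headway error, damp relative speed."""
    return k_gap * (gap - DESIRED_GAP) - k_speed * closing_speed

def simulate(gains, steps=1200):
    n = len(gains) + 1                       # lead car plus followers
    pos = [-DESIRED_GAP * i for i in range(n)]
    vel = [25.0] * n                         # everyone starts at 25 m/s
    worst_dip = [0.0] * n
    for step in range(steps):
        t = step * DT
        # Lead car brakes moderately for two seconds, then recovers.
        lead_acc = -3.0 if 5.0 <= t < 7.0 else min(1.0, 25.0 - vel[0])
        accs = [lead_acc]
        for i, (kg, kv) in enumerate(gains, start=1):
            gap = pos[i - 1] - pos[i]
            accs.append(follower_accel(gap, vel[i] - vel[i - 1], kg, kv))
        for i in range(n):                   # synchronous update
            vel[i] = max(0.0, vel[i] + accs[i] * DT)
            pos[i] += vel[i] * DT
            worst_dip[i] = max(worst_dip[i], 25.0 - vel[i])
    return worst_dip

# Three "manufacturers" with slightly different tunings, interleaved.
mixed_fleet = [(0.6, 0.3), (0.9, 0.2), (0.7, 0.25)] * 3
for car, dip in enumerate(simulate(mixed_fleet)):
    print(f"car {car}: worst speed dip {dip:.1f} m/s")
```

If the dips grow from car to car, the platoon is string-unstable: the lead car’s two-second brake tap has turned into a standing wave by the back of the line.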

Whether this specific outcome happens or not is not the point. The point is that I suspect inserting autonomous vehicles into an already very complex traffic system is not going to go as smoothly as you might think, and it might not go at all if it gets off to a rocky start and nervous politicians pull the plug on it, or if the legal issues that arise become complicated and expensive enough to choke the industry.

How do you deal with roadworks? Say your lane is closed and the area is temporarily controlled to allow cars to use the single remaining lane. Do they paint over the solid lines, or is it sometimes legal to cross them?