Self-driving cars for bad drivers vs. self-driving cars for good drivers

Overall I agree with your attitude as stated in your various posts. But ref this one little snip …

This is a very common and IMO largely unexamined contention. Every thread we have on autonomous vehicles ("AV"s) includes somebody (usually several somebodies) saying in effect “the car *must* rigidly obey all traffic laws to the letter.” I’m not specifically disagreeing with you here, just using your sentence as a jumping-off point.
I don’t think this common contention stands up to actual scrutiny. Laws on liability and laws on traffic rules will have to change some as a precondition to AV use. And they will definitely evolve over time as AV experience grows and stuff happens out there.

On virtually all major streets and certainly on freeways and similar, a bunch of vehicles rigidly adhering to the posted speed limit will be a gigantic obstruction to traffic. And will trigger frustrated and dangerous driving in the manually driven cars around them. As well, they will produce slower journey times for AV users vs. what they could expect if they drove themselves.

All of these things are bad for traffic safety, bad for society, and bad for AV adoption. So they should not be encouraged.

The solution is logically and legally trivial. Banish the legal principle, at least as applied to AVs, that a traffic law violation is *prima facie* evidence of fault in an accident. Which is already more the case in human-driven cars than most humans believe it is. Civil fault is a lot more complicated than “Who got a ticket?”

As well, introduce a “speed of traffic” exemption to the basic speed laws, but applicable to AVs only, not to human-driven cars. IOW, make “But Officer, I was just keeping up with the cars around me” a valid legal principle as applied only to AVs. With the car using sound “judgment” to drive a reasonable compromise between following the horde at 70mph into a snow squall vs. slowing to 20 and then being rear ended by a sensible human driver going 40.
Ultimately a properly set speed limit on a stretch of road today is “What’s the highest number we can post that causes the actual higher speed of traffic net of our puny enforcement efforts to result in an acceptable rate of death and destruction given ordinary human drivers using ordinary care in their ordinarily maintained cars? Plus the occasional total dipstick or drunk?”
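The “speed of traffic” compromise above could be sketched in code. This is purely illustrative: the function name, the thresholds, and the visibility factor are all my own made-up values, not anything a real AV uses.

```python
# Hypothetical sketch: pick a target speed as a compromise between the
# posted limit, surrounding traffic, and conditions. All thresholds are
# invented illustration values.
from statistics import median

def target_speed(posted_limit_mph, neighbor_speeds_mph, visibility_factor=1.0):
    """visibility_factor: 1.0 = clear, ~0.5 = heavy snow squall, etc."""
    flow = median(neighbor_speeds_mph) if neighbor_speeds_mph else posted_limit_mph
    # Never exceed the flow by much, and scale down for poor conditions.
    cap = min(flow + 5, posted_limit_mph * 1.2) * visibility_factor
    # Don't crawl so far below the flow that we become a rear-end hazard.
    floor = flow * 0.6
    return max(min(cap, flow), floor)

print(target_speed(65, [70, 72, 68]))        # clear day: keep up with traffic
print(target_speed(65, [70, 72, 68], 0.55))  # snow squall: slow, but not to 20
```

With these toy numbers, the car in the snow squall settles around 42 mph rather than either blindly following the 70 mph horde or slowing to a dangerous 20.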

As we change any of those input parameters the output number can and should be different. Over time as AVs become more commonplace this can be tweaked too. IMO a plausibly competent AV could safely drive 100 mph on modern freeways provided it was insulated from non-AVs going other speeds.

There is plenty of well-documented evidence that what causes accidents on freeways is differential speed. Not absolute speed.

It’s possible for well more than half of drivers to be above-average as long as they don’t agree on what “above-average” really means. Above average in being able to control a car? Or above-average in being able to avoid an accident? I’m fairly sure that there is somewhat of a negative correlation there. I’m not sure how strong that correlation is, but in my experience the rudest and least safe drivers generally are able to do things with their car that I wouldn’t even dream of attempting, because I highly doubt I’d be able to squeeze in between traffic like that without hitting something. Is a person who’s able to do that a better driver than someone who doesn’t but knows not to try?

I don’t see the problem here for anyone except early adopters.

Sitting in an autonomous vehicle? That means you’re going to be cut off, as humans realize they can squeeze in.

Will it take you longer to get to work? Probably, but that’s your problem.
Although since you’re relying on the computer to drive, you’re going to be watching a movie instead of the road, so you won’t really notice.

I doubt it. I suspect the various car companies will share the software and technology in order to reduce possible errors and to reduce costs. Remember that for a 100% autonomous car the human will no longer be responsible in the event of a collision, since they weren’t in control of the car at the time. The way consumer protection law works, any accident caused by the car will be a fault in design, and the car maker will be responsible. Your autonomous car will be insured by the manufacturer rather than by the owner, so they will have a very strong incentive to ensure that the car has the best possible software running it.

Self-driving cars simply aren’t going to happen unless and until they can fit in with all the people-driven cars, in a way that’s not only at least as safe for the occupants of the self-driving car, but also doesn’t behave in ways that frequently confuse the human drivers it’s interacting with in different ways than other human drivers do.

That’s more of a steep initial hurdle than a wild card, but once they can manage that, the main barrier to their taking over will be that cars last a hell of a long time these days. Because if autonomous vehicles can swim in a sea of people-driven cars without major problems, they’ll be even safer as the people-driven cars gradually go away. And if they can be safe when there’s no other self-driving cars out there to communicate with, then they really won’t need inter-car communication to be safe when self-drivers dominate.

Even though I (as you say) think of myself as an above-average driver, regardless of the truth, I’m also 63 years old. And realistically, about 15-20 years down the, um, road, there will come a point when my reflexes and my brain’s ability to keep up with what’s happening around me will result in my being first a below-average driver, and ultimately a danger to myself and those around me if I keep driving.

So I’m rooting for autonomous vehicles to be a reality by the early 2030s, if not sooner, so that I can continue to go places by car without having to drive, or having to rely on someone driving me places.

I think this is just wrong.

Self-driving cars have their failure modes (the most worrisome of which is hacking, which could cause widespread mayhem everywhere), but in normal operation, they’re likely going to be much better than even very good human drivers, both because they won’t space out and get themselves into a situation where an easily-preventable accident becomes inevitable, and because they’re going to be better at reacting to sudden unexpected issues.

In cases where something happens suddenly, a self-driving car is going to be able to take much better evasive action than a human. Reaction time, accurate vehicle models, and 360-degree view are all going to make a huge difference here. Imagine you’re on the freeway and the front axle of the car in front of you snaps, causing a rapid turn into a flip. Best case, if you’re a really attentive driver, is that you’re going to do some combination of braking and turning. Are you going to turn the right way? Are you going to apply maximum stopping power with the brakes? Well, probably not. You’re going to have a surge of adrenaline that causes you to slam on the brakes and jerk the wheel.

The self driving car is going to know the parameters of itself and use the brakes and steering optimally. It’s going to aim for a spot that’s not going to crash into the other cars that you probably would not have kept in mind when the one in front of you suddenly jackknifes. And it’s going to start doing all that a quarter second before even the most-attentive human driver would.
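The quarter-second head start matters more than it sounds. Here’s back-of-envelope arithmetic (my own assumed numbers, not from the thread) showing how far a car travels during the driver’s reaction time at freeway speed:

```python
# Rough sketch: distance covered before anyone even touches the controls.
# Reaction times are assumptions: ~1.5 s for an alert human, ~0.05 s for a
# sensor-to-actuator loop.
def reaction_distance_m(speed_kmh, reaction_s):
    return speed_kmh / 3.6 * reaction_s  # convert km/h to m/s, multiply by time

v = 110  # km/h, roughly 70 mph
human = reaction_distance_m(v, 1.5)
machine = reaction_distance_m(v, 0.05)

print(f"human: {human:.1f} m before acting; machine: {machine:.1f} m")
```

At 110 km/h the human covers roughly 45 m before doing anything at all, while the machine has already started its maneuver within a couple of meters.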

It’s funny when people talk about self-driving cars, because they often say things like “I’m fine with it doing the boring freeway driving, but I’d want to be able to take control in the event of an emergency”, which is exactly wrong. Humans are good at boring freeway driving. They’re really bad at split-second evasive maneuvers.

The precedent is fairly strong. Auto-pilot systems are way better at flying planes than human pilots, have been for a decade or two, and human pilots train for years to be good at their jobs.

Self-driving cars might be worse than good human drivers right now, but by the time they’re rolled out, I’d be surprised if self-driving cars were worse than any but the most elite race car drivers. Unless they’re hacked, of course. Then they’re a distributed weapon of mass destruction.

I think the danger of them being hacked is minimal. A single function computer is much easier to secure than a multifunction computer that most of us are used to. The files will be signed by the car manufacturer and no outside programs will be accepted. They also won’t communicate with vehicles that don’t have signed software. Every module will be specifically written for the hardware in your car so bugs and glitches will be minimal as well.
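The “only accept signed software” idea above can be sketched in a few lines. This is a toy: real code signing uses asymmetric signatures (so the car holds only a public key), and the key here is obviously fake; HMAC is used only because it’s in the standard library and shows the reject-on-mismatch logic.

```python
# Toy sketch of refusing unsigned/tampered updates. NOT real code signing:
# real systems use asymmetric signatures, not a shared key.
import hmac
import hashlib

MANUFACTURER_KEY = b"demo-key-not-real"  # placeholder key for illustration

def sign(firmware: bytes) -> bytes:
    return hmac.new(MANUFACTURER_KEY, firmware, hashlib.sha256).digest()

def install(firmware: bytes, signature: bytes) -> str:
    # compare_digest avoids timing side channels when checking signatures
    if not hmac.compare_digest(sign(firmware), signature):
        return "rejected: bad signature"
    return "installed"

fw = b"brake controller v2.1"
print(install(fw, sign(fw)))                 # installed
print(install(fw + b" tampered", sign(fw)))  # rejected: bad signature
```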

Sent from my SM-G955U using Tapatalk

One thing I anticipate is actually fewer traffic jams, since the machines will be much better at managing traffic. No longer will you have the accordion effect on the roads. Particularly at red lights: imagine if every car steps on the gas at the same time. How about if nobody slows down to look at an accident? Spacing between cars can be reduced since reaction times and responses are improved. Speed limits can be increased since there are fewer accidents.
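The spacing point can be made with simple arithmetic. As a rough model (my assumptions: lane capacity is about one vehicle per time-headway, and safe headway shrinks with reaction time), shorter reaction times translate directly into more vehicles per lane per hour:

```python
# Rough capacity sketch with invented numbers: headway = reaction time plus
# the time the car's own length takes to pass a point.
def lane_capacity_veh_per_hr(reaction_s, car_length_m=4.5, speed_kmh=100):
    speed_ms = speed_kmh / 3.6
    headway_s = reaction_s + car_length_m / speed_ms  # gap + car itself
    return 3600 / headway_s

print(round(lane_capacity_veh_per_hr(1.5)))  # human-ish 1.5 s reaction
print(round(lane_capacity_veh_per_hr(0.3)))  # tighter AV spacing
```

Under these toy assumptions, cutting reaction time from 1.5 s to 0.3 s more than triples the theoretical lane capacity.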


I foresee at some point a government agency saying "all human-driven automobiles will be phased out of production by 2045" and then "all manual licensing permits will no longer be issued on or after 1/1/2040"

which will take care of the human driving problem. Note the actual dates will change …

In emergency situations human drivers have a fraction of a second to do something and do it competently. Self-driving cars can hopefully think and adapt faster than humans, like correcting a skid for example. So I agree with you about sudden maneuvers. Hell, a self-driving car could even apply any/all brakes at different levels if need be.

I’m not sure plane auto-pilot systems are a good comparison, because the sky generally has fewer hazards than busy roads do. And how much maneuvering and avoidance do auto-pilot systems do, or do they just follow a pre-programmed route from point A to point B and depend on ATC to keep the planes apart?

In normal driving I think an attentive human can pick up on many subtle things and cues to avoid trouble that a computer probably won’t. For example, is that shiny spot way down the road a reflection, a piece of tin foil, or water? Is the car in the next lane that keeps edging over towards me wanting to get into my lane? Better watch out for him. Is that car traveling perpendicular going to run the red light? It looks like he isn’t watching the road. Is that stop sign with snow splattered on it still a stop sign? Obviously yes to a human, maybe unidentifiable to a computer.

The danger of them being hacked is very high, and the consequences could be catastrophic.

Here’s an article about guys who discovered a vulnerability in 1.4 million Chryslers that led to a recall. They could control the throttle, steering, and brakes via the internet.

Imagine that these weren’t security researchers, but people intent on harm. What happens when whatever fraction of those 1.4 million late-model vehicles are on the road at rush hour go full throttle, no brakes, then jam the steering to one side?

Hundreds of thousands dead? Millions?

Probably worse than a hurricane hitting every city at once, or a reasonably-sized nuclear war.

That’s one security bug in the cars from one maker. Although it doesn’t actually rely on self-driving cars, just on cars with computer controls connected to the internet.

It’s not a perfect comparison, but, per hour of travel, general aviation (i.e., not professional pilots, not autopilot) results in about 20 times as many deaths as ground vehicles. Most aviation deaths are attributable to pilot error. So, while it’s true that there’s less to run into in the air, there are more things to go wrong, more ways to fuck it up, and people tend to die a lot more when they’re at the controls.

Again, I realize I’m shooting from the hip with these statistics. If you have better ones, please suggest them. I still think that the evidence suggests a reasonable assumption that self-driving cars will be an order of magnitude better than the best humans, unless/until they’re hacked and kill us all.

The idea posed by the OP that self driving cars will be better than bad drivers but worse than good human drivers doesn’t seem to have much support.

Umm. Ok. I have no idea what logical basis you’re using to conclude that a chaotic bouncing tire is unavoidable, or that trajectory-prediction algorithms cannot be adaptive.

I could go on about how, for each object the car is tracking, it has a measured standard deviation. For more chaotic objects, that parameter is going to be higher. But a bouncing tire is still governed by the same physics as any other bouncing object, so it’s not as chaotic as it sounds.

So what the car’s motion planner already does in even early prototype autonomous cars is consider different paths and choose the path with the lowest (estimated) probability of collision.

You can think of an object with a high standard deviation as having a larger cone of uncertainty around it. So yes, current technology is already capable of accurately accounting for the likely trajectory of a chaotic object and choosing a path that has the lowest total chance of collision.
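To make the “cone of uncertainty” idea concrete, here is a minimal sketch of my own construction (not any real planner’s code): each tracked object gets a predicted position and a standard deviation, and candidate paths are scored by their worst-case clearance measured in units of that standard deviation.

```python
# Minimal path-selection sketch. Obstacles are (x, y, sigma); a bigger
# sigma means a more chaotic object with a wider cone of uncertainty.
import math

def clearance_sigmas(path, obstacles):
    """Smallest distance from any path point to any obstacle, in units of
    that obstacle's standard deviation (higher = safer)."""
    worst = math.inf
    for (px, py) in path:
        for (ox, oy, sigma) in obstacles:
            worst = min(worst, math.hypot(px - ox, py - oy) / sigma)
    return worst

# A bouncing tire 20 m ahead, with a large sigma because it's chaotic.
obstacles = [(0.0, 20.0, 4.0)]
paths = {
    "stay in lane": [(0, 5), (0, 10), (0, 15)],
    "swerve left":  [(0, 5), (-2, 10), (-3.5, 15)],
}
best = max(paths, key=lambda name: clearance_sigmas(paths[name], obstacles))
print(best)  # the path keeping the most sigmas of clearance
```

The planner simply prefers the candidate that keeps the most “sigmas” of clearance from every tracked object, which is a crude stand-in for minimizing estimated collision probability.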

Here’s a simple explanation of a tracking algorithm, applied in an academic setting. Be aware that these are capabilities from about 20 years ago; the methods an autonomous car uses are far more sophisticated.
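For flavor, here is a one-dimensional Kalman filter, the textbook tracking algorithm of the sort referenced above. The state is just a position that random-walks, and all the numbers are toy values of my choosing:

```python
# Scalar Kalman filter sketch: fuse noisy position measurements into an
# estimate whose own uncertainty (variance p) shrinks as data comes in.
def kalman_1d(measurements, meas_var=1.0, proc_var=0.1):
    """Return (position estimate, estimate variance)."""
    x, p = measurements[0], 1.0
    for z in measurements[1:]:
        p += proc_var              # predict: uncertainty grows over time
        k = p / (p + meas_var)     # Kalman gain: how much to trust z
        x += k * (z - x)           # update: pull estimate toward measurement
        p *= (1 - k)               # update: uncertainty shrinks
    return x, p

est, var = kalman_1d([10.0, 10.4, 9.7, 10.2, 9.9])
print(round(est, 2), round(var, 2))  # estimate near 10, variance well below 1
```

A real tracker runs a multi-dimensional version of this (position and velocity, per object) and feeds the resulting uncertainty into the motion planner.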

I think what you are thinking is that since the tire’s bouncing is chaotic, in a situation where the choices available to the car are tightly constrained (cars in other lanes boxing the autonomous vehicle in, etc.), there’s a chance it could try to avoid the tire and end up with it going through the windshield and killing the driver.

Of course there’s a chance. It just has to be significantly better than humans.

The flaw is that you are assuming that

a. The automakers today are doing the best they can do

b. It isn’t really feasible to make something essentially impossible to hack without physical access to the vehicle.
Both assumptions are wrong. Automakers today are using quite old technology that they commonly make a ton of mistakes with. Essentially, the CAN bus works on the honor system: nothing physically stops a poorly secured device wired to a vehicle’s CAN bus from doing all kinds of bad stuff.

But, there are future model year plans to secure the CAN bus by subdividing it into different regions protected by gateways.

In addition, there have always been computer systems that have never been hacked.

Examples: nearly all satellites, nuclear missile silos.

Why is this? Different security models, and a lot more care was taken. Essentially, the solution is something on the order of putting a device that has been formally proven in front of any security gateway in the vehicle. Probably that device should be an FPGA, and probably the security keys should use one-time pads, as they are the only encryption algorithm proven to be unbreakable.
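On the one-time pad claim: the scheme really is information-theoretically secure, but only if the pad is truly random, at least message-length, and never reused. A toy demo of the mechanism (just XOR, nothing vehicle-specific):

```python
# One-time pad toy demo: XOR with a random, never-reused, message-length
# key. Secure in theory; the hard part in practice is key distribution.
import secrets

def otp(data: bytes, pad: bytes) -> bytes:
    assert len(pad) >= len(data), "pad must be at least message-length"
    return bytes(a ^ b for a, b in zip(data, pad))

msg = b"unlock doors"
pad = secrets.token_bytes(len(msg))  # fresh random pad, used exactly once
ct = otp(msg, pad)

assert otp(ct, pad) == msg           # XOR is its own inverse
print("roundtrip ok")
```

The practical objection is that both ends must share as much pad material as they will ever send, which is why deployed systems mostly use conventional ciphers instead.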

They can certainly do better, and the problem is solvable. I’m not convinced they’re going to do what’s necessary to avoid a wide-scale catastrophe, but I hope very much to be proven wrong.

A Level 5 (fully autonomous) “self-driving” car will have 360 degree situational awareness, response times in the tens of milliseconds, and will not suffer from fatigue, distraction, frustration, or anger. So it already starts from a position of being superior to any human driver in terms of awareness, reaction time, and attention. There is no question that even a basic set of all around sensors will provide a better response than a driver who has to constantly scan between the windshield, rear view mirrors, instrument panel, and text messages on his or her phone, plus whatever other distractions may present a diversion.

The only question that remains is the appropriateness of the response to an input. This will depend on the heuristics of the algorithm that has “learned” how to deal with any individual circumstance, but consider this: as an individual driver, you’ve probably experienced at most a few dozen potentially catastrophic scenarios such as the one proposed in the post #3. An autonomous system can draw from a database of many millions of near collisions that might be collected by actual experience (and a virtually infinite amount given simulated conditions and response) and filter through the appropriate response for any particular scenario. In fact, many of the methods now being invoked to develop fully autonomous driving systems observe the behavior of human drivers and then try to improve upon and optimize it. Given the nature of the problem–essentially, collecting enough data to construct global ‘rules’ covering virtually any plausible hazard or condition–it is really more a matter of time and effort than fundamental development of generalized synthetic cognition systems to get to the point that an autonomous driving system will make better decisions and response faster than the best trained and most highly experienced human driver could hope to do.

Humans will continue to excel in many ways over artificial intelligence or synthetic cognition, but as an aggregate we’re pretty shitty drivers, and even the best of us are limited by the fundamental speed of our neurological system, which is highly evolved over hundreds of millions of years to do some pretty remarkable things but not to operate a moving vehicle at over a hundred kilometers per hour. It’s coming to the time that we need to admit to that limitation.

Stranger

The thing about AVs is that the right answer to 99.9% of real life “what if this happens” scenarios is to Stop Immediately. ‘Stop Immediately’ is a task that AVs will be able to handle very easily, and very safely, particularly when all the vehicles on a road are self driving.

Not necessarily; in fact, the reflexive action of many drivers to “stop immediately” is what generally leads to multi-vehicle pileups. As it happens, I recently avoided a highway accident by observing a situation several vehicles in front of me and taking an evasive swerve onto the right shoulder while the two cars in front of me collided; I then saw the car behind me suddenly brake and skid into the car that was previously in front of me. Now, I’m a pretty well trained driver, having been through tactical and performance driving courses, and my alertness to the cars well in front of me allowed me to avoid rear-ending someone. But had my attention lapsed for whatever reason, I would have missed the cues for the split-second decision I had to make to avoid the impact, and if I’d just stopped, the car behind me would almost certainly have crashed into me, especially since all of this occurred at about 65 mph.

All of which is another reason that fully automated vehicles will be safer and more capable even without intervehicular communication; they will not have lapses in alertness, have a response time far better than any human driver, and can assess and react to situations by picking the best option for collision avoidance in all quarters rather than the reflexive one just in front of a human driver.

Stranger

Stopping may be the right answer 99.9% of the time when you’re hemmed into one lane on an urban street and a pedestrian suddenly runs out in front of you. Not nearly as high a % in highway situations like the one just mentioned. In some cases it’s not physically possible to stop, and applying the brakes limits the car’s cornering ability. Thus the right choice a non-negligible % of the time is foot off the gas, max steering, no brakes.
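The “braking limits cornering” point is the standard friction-circle idea from vehicle dynamics: the tires have a roughly fixed total grip budget, so braking force directly subtracts from the lateral force available for steering. A sketch with my own assumed numbers:

```python
# Friction-circle sketch. MU is an assumed dry-asphalt friction
# coefficient; the total grip budget is MU * g, split between braking
# and cornering by Pythagoras.
import math

MU, G = 0.9, 9.81  # assumed friction coefficient; gravity (m/s^2)

def max_lateral_accel(brake_accel):
    """Lateral acceleration (m/s^2) still available while braking."""
    total = MU * G                 # total grip budget, about 8.8 m/s^2
    if brake_accel >= total:
        return 0.0                 # all grip spent on braking: no steering
    return math.sqrt(total**2 - brake_accel**2)

print(round(max_lateral_accel(0.0), 2))  # no braking: full cornering grip
print(round(max_lateral_accel(7.0), 2))  # hard braking: little left to steer
```

With these numbers, braking at 7 m/s² leaves only about 5.4 m/s² of cornering grip, which is why “foot off the gas, max steering, no brakes” can be the right call.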

This is why, thinking of the here and now, I like the idea of low-speed automated braking systems, but I’m not sold on high-speed systems (on BMWs these are two different systems, with different sensors, sold as different options on the car).

But of course you can make it a tautology by speaking in terms of a self driving car that can do anything better than any driver. The issue is when that happens, and the tendency IMO towards hype in the meantime.

I don’t have any figures, so I’m going to make some assumptions. Of all the new cars manufactured in the next ten years, more than half will go into service in a traffic environment where 2-wheeled motor vehicles will still outnumber 4-wheeled automobiles. Self-drive cars would be completely useless, and would simply never move, in any city in southeast or south Asia, which is the biggest expansion market for automobiles. In my city of 4 million, motorcycles easily outnumber cars by at least 10:1, probably more like 20:1. There are no traffic lights or stop signs; people just share the road and go when they can. Self-drive cars would never understand that concept, and would just shut off in traffic.

Of the next billion cars being sold, a great majority of them will be destined for the expanding economies characterized by such traffic environments.