Thanks very much.
The problem is that in a complex world there are an infinite number of corner cases. This is why we need an AGI for level 5 driving. If engineers try to identify all the corner cases and code for them, they’ll fail. We need something that will intelligently judge situations as they arise, not rely on prior training.
It’s possible that AI drivers will be better in all the ways that cause human accidents, but have their own failure modes that humans don’t have, which make them just as dangerous.
The next level of complexity, after the cars can safely drive themselves, will be strange system interactions. We won’t even see those until a high enough percentage of self-driving vehicles are on the road. Things like self-driving cars from different manufacturers responding differently to a potential collision and swerving into each other, or doing something else dangerous. Or a road full of self-driving cars doing strange things as they all try to maintain separation. System interactions are the worst kinds of bugs. They only arise sporadically, and always in complex environments where cause and effect are hard to find. These are what give manufacturing engineers bad dreams.
And if we get those straightened out, there are the human/machine interactions to worry about. For example, people might take advantage of an AI’s safety choices to force them out of lanes, tailgate them to force them to speed up, etc.
Even if AI drivers are better than humans at driving, they aren’t humans. So the behavioural incentives around them will change. What will road ragers do to self-driving cars? How much bad behaviour is suppressed every day because another driver might decide to take issue with you? Look at how differently people behave online compared to in person. How are they going to treat empty cars driven by AIs? Or ones with a blacked-out back seat so you can’t tell?
Which brings me to the last point: safety psychology. Traffic engineers can tell you all about it. People seem to have a built-in sense of acceptable risk (the phenomenon is known as risk compensation), and it’s very hard to improve safety statistics. Make the highway divided to reduce head-on collisions, and people will drive faster. Put airbags in the car, and people will drive with more risk.
A modest proposal to improve road safety would therefore be a 12" steel spike in the center of each steering wheel pointing right at the driver’s heart. I guarantee you people will be ultra careful in the way they drive. Of course it would make cars very slow…
The point is that if AI drivers turn out to be very safe, people will probably engage in other risky behaviour on the road to compensate. Maybe it will be popular to hack the car to make the AI drive faster, or something like that.
Or maybe the future will be radically different from anything any of us can imagine. In fact, it’s almost certainly going to be. We’re only a year into the new AI revolution. Predicting what will happen 10 years from now is bleemin’ impossible.