In terms of automation under normal operating conditions, flying is simple and features large margins for error, while driving is extremely complicated and typically features very narrow margins for error.
The autoland feature on commercial aircraft sounds impressive, with its ability to safely bring a widebody aircraft down on a runway in zero-visibility conditions with no aircrew intervention…but it’s really not. It’s flying a straight line, guided by an ILS beam and a radar altimeter. It’s not trying to “see” the flying environment, and it doesn’t have to interpret complex 3-dimensional visual/radar data to identify the positions, speeds, and intents of mobile and stationary objects. It doesn’t have to deal with cross traffic, pedestrians, animals, potholes, curves, road construction, questionable traction, distracted drivers in other vehicles, disabled vehicles in the middle of the road, traffic lights that may or may not be working, half-worn painted lane markings (or lane markings completely obscured by snow/ice), or road debris. And as already noted, even on auto-piloted aircraft there’s still a meat-based pilot keeping an eye on things, ready to intervene if the autopilot starts behaving erratically, and problems are likely to be identified while there’s still adequate time for successful pilot intervention. Autonomous-car advocates are propelling us toward SAE level 4 or level 5 automation, in which vehicle occupants are not expected to maintain any kind of vigilance.
Incidents like this one, in which an Uber test car struck and killed a pedestrian, are the reason I’m wary of riding in an autonomous vehicle. You get lulled into complacency, and then a situation comes up that requires human intervention; margins are so tight that by the time a meat-based driver recognizes there’s a problem the autodriver isn’t dealing with (e.g. we’re about to hit a pedestrian, or we’re about to submarine under a semi trailer at 60 MPH), there’s no longer enough time to act.
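To put rough numbers on how tight those margins are, here’s a minimal back-of-the-envelope sketch. The reaction time and braking deceleration are assumed, illustrative values, not measurements; takeover studies generally find disengaged occupants take noticeably longer to react than attentive drivers, so 2.5 s is just a hypothetical round number:

```python
# Margin-of-intervention arithmetic at highway speed.
# All figures below are assumed, illustrative values.
MPH_TO_MPS = 0.44704

speed_mph = 60
speed = speed_mph * MPH_TO_MPS  # ~26.8 m/s

# Assumed time for a disengaged human to notice the autodriver is
# failing, re-engage, and begin braking.
reaction_time_s = 2.5

# Assumed hard-braking deceleration on dry pavement, in m/s^2.
decel = 7.0

reaction_distance = speed * reaction_time_s  # distance covered before braking starts
braking_distance = speed**2 / (2 * decel)    # distance to stop once braking begins
total = reaction_distance + braking_distance

print(f"{reaction_distance:.0f} m travelled before braking even begins")
print(f"{total:.0f} m total stopping distance at {speed_mph} MPH")
```

Under these assumptions the car covers roughly 67 m before the brakes are even touched, and needs around 118 m in total. Any hazard that materializes inside that first stretch is unavoidable for an occupant who wasn’t already watching, no matter how good the brakes are.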
In theory, if the average autonomous vehicle has a better performance record than the average meat-based driver, then widespread implementation ought to reduce overall motor vehicle fatalities and injuries. I used to know a person who was a below-average driver; I rode with her a couple of times, and made it a point to avoid doing so after that because it was damn scary. She (and the people around her) would probably be safer if she used such a vehicle.
Me? Pretty sure I’m significantly better than average. My safety and the safety of those around me would suffer if I surrendered control to an autonomous car that is only slightly better than the average driver.
Most drivers believe they are better than average, even if it’s demonstrably not true. Such people will not want to surrender control to an autonomous car that is only slightly better than the average driver.
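The trade-off in the last few paragraphs can be sketched with toy numbers. The crash rates below are entirely hypothetical, chosen only to illustrate the shape of the argument, not drawn from any real data:

```python
# Hypothetical crash rates, in crashes per million miles.
# All numbers are assumed for illustration only.
below_avg_driver = 6.0   # the scary driver described above
average_driver = 3.0
good_driver = 1.5        # a genuinely better-than-average driver
autonomous_car = 2.5     # a car only slightly better than the average human

# Handing control to the car helps the below-average driver:
helps_bad_driver = autonomous_car < below_avg_driver   # risk drops 6.0 -> 2.5

# ...but raises risk for the better-than-average driver:
hurts_good_driver = autonomous_car > good_driver       # risk rises 1.5 -> 2.5

print(helps_bad_driver, hurts_good_driver)
```

So whether fleet-wide crashes actually fall depends on who switches: beating the *average* driver is not the same as beating *each* driver, which is exactly why drivers who believe they are above average have no incentive to hand over the wheel.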
If people are going to trust autonomous cars enough for widespread adoption, they’re going to have to be really amazingly astonishingly freakishly good. The only accidents that are likely to be forgiven are the kind where no human could ever have possibly avoided the same accident, e.g. another car popped out of a blind side street at speed and it was physically impossible for your car to avoid a collision. As long as autonomous cars continue to regularly have the kinds of accidents that shitty drivers have - like mowing down pedestrians in plain view on wide boulevards with no other traffic - they won’t be trusted.