An Uber self-driving car killed a pedestrian in Tempe, AZ

Story here:

Had to happen sometime.

So much for the “human safety driver” safety net.

The obvious question is what value the human safety driver actually provides. By the time the human driver realizes the autonomous system isn’t going to react, it is too late.

A human ‘backup driver’ with nothing to do but sit at the wheel will naturally lose attentional focus, probably spending their time reading or looking at their phone. This isn’t a failure of the human driver to act; it is a natural consequence of boredom in the human affective system. Without constant engagement, no one can pay attention to their surroundings for more than a couple of minutes at most.

Even if this turns out to have been the ostensible fault of the pedestrian in violating a crossing, self-driving vehicles should be capable of responding to and avoiding the vast majority of accidents by dint of uninterrupted attention and sensor coverage unimpeded by the limits of forward vision. Hopefully this will be the impetus for the US Department of Transportation to develop and apply some standard of verification testing to supposed “self-driving” automobiles. With the right kind of requirements, autonomously piloted vehicles could easily reduce pedestrian and vehicle accidents (particularly those due to inattention, negligence, and intoxication) by a couple of orders of magnitude, but private corporations have no incentive to spend the time and money to do so except to avoid liability, and corporations have been notoriously short-sighted in that area from time immemorial.


Then what exactly is their purpose? If they just wanted a passenger, a test dummy, a mannequin, or a simple bag of sand could perform that function just fine. The only reason to have a “human safety driver” is so that they can safely drive the vehicle in case the computer cannot. If they choose to spend that time reading or looking at their phone, and not paying attention to the road, the vehicle, and their surroundings, I would submit that person is blatantly neglecting their duty.

Not enough to prevent the car from hitting someone who crosses the street outside the crosswalk, obviously.

Then again, it’s not clear who could do anything in that particular situation, short of a combination of Scotty and MacGyver capable of designing an inertial dampener that locally modifies Newton’s Laws of Motion and immediately building it out of odds and ends found in the glove compartment.

Scapegoat for when the tech fails to not kill pedestrians?

Agreed, and yet I’m also wondering how the fatality rate of autonomous vehicles compares to the fatality rate of human-driven vehicles, over a similar number of hours and under similar conditions.

One would expect an autonomous car to see a pedestrian no matter where they were crossing. However, if there was no time to physically stop the car once the pedestrian appeared, one can blame only the pedestrian. A human driver would have done no better. The NYT did not report enough detail to determine that in this case, although the NTSB’s investigation probably will.

Do those cars record and save the camera data?

The family may want to check her cell phone - Uber may have charged her for a lift.

I really don’t think Uber would charge her for a Lyft (unless perhaps it was trying to establish an alibi).

I agree: you first have to know how avoidable the accident was before passing any judgement on the autonomous system.

And as has been discussed many times, the requirement for the autonomous system to be some given degree more capable than a human (or typical human, or best human) is a matter of subjective public perception. There’s no logical reason it has to be better by any particular amount, so long as it’s better. One judgment call would be to have regulation require that it be as much better as physically possible without regard to cost, but that’s one opinion, not the single right answer.

Here we have no real idea how it performed relative to a human.

But the ‘human safety drivers’ are also there for a combination of public perception and to prevent gross and repeated errors. Of course the human won’t be able to react in time after waiting to see whether the system responds properly to a situation a human could only barely have acted fast enough to deal with even without waiting (or quite possibly a situation no human could have reacted to in time even without waiting). The human can take over if it’s apparent the system is malfunctioning in a major or repeated way. Once it’s shown that virtually never happens, the human safety drivers go away. The idea is obviously not to have those ‘safety drivers’ forever anyway.

I’m not a self-driving car fanatic, BTW. I think it’s overhyped relative to the barriers caused by many people’s natural tendency to distrust technology and overestimate themselves, plus political agendas (of left and right populists, though they have trouble admitting they agree with one another even when they do) related to the job market and social effects. The reaction to incidents like this will tend to illustrate that, I think, even if the full facts don’t indicate a human could have avoided the accident.

Really, this. It is certainly not unheard of for pedestrians to be found at fault for getting themselves run over. Without video or at least witnesses, there isn’t reason to doubt a computer system that’s been doing well otherwise.

Imagine that your duty is to sit and watch a CNC mill operate for eight hours a day, waiting for the milling head to show wear or the tool to get out of alignment, and then immediately stop it. It is an impossible task because no person can pay attention to a repetitive operation for eight hours a day, across days or weeks on end, until something finally goes wrong, which is why we build modern CNC mills with self-diagnostic capability that automatically monitors when the machine goes out of tolerance or malfunctions. Blaming human beings for having the human fallibility of inattentiveness is like complaining that rain is wet.

The purpose of having a human ‘safety driver’ in an autonomously piloted vehicle is, as far as I can tell, one of two things: to operate the car manually after the piloting system engages a failsafe and prompts for driver interrupt, or as ‘safety theater’ to give the appearance that there is a backup system to the autopilot, even though in the case of an impending accident there is probably not enough time for a human driver to recognize a problem and avert it even if they are paying attention.


The video of the aftermath shows a crumpled bicycle, which was presumably being pushed by the person crossing.

From what I can see, it looks like it wasn’t a blind corner or anything. The field of vision seems clear for some distance, with no objects blocking the line of sight. Night driving may be a problem for the vehicle’s cameras.

Serious question: how good are these autonomous vehicles at side scanning vs. just looking up the road, i.e. at seeing & reacting to objects moving perpendicular to them? As a human, I can catch movement out of the corner of my eye & see a runner/cyclist/vehicle/deer on a collision course with me while it’s still 3 (or more) lanes away from my vehicle. I can process this & then take mitigating action(s): slow down, speed up, move over a lane, either towards or away, based on whether I want to pass in front of or behind the crossing object.

As stated above, we don’t have enough details of this accident to say any more, but I’m just curious.
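As a toy illustration of the geometry behind that question (purely illustrative, and nothing to do with how any real AV perception stack actually works): whether a perpendicular-moving object is on a collision course reduces to comparing its time to reach your lane against your time to reach the crossing point.

```python
# Toy collision-course check for an object crossing perpendicular to the
# vehicle's path. Distances in meters, speeds in m/s. All names and
# numbers here are made up for illustration.

def on_collision_course(car_speed, dist_ahead, obj_speed, obj_lateral_dist,
                        margin=1.0):
    """True if the car and the crossing object reach the intersection
    point within `margin` seconds of each other."""
    t_car = dist_ahead / car_speed        # car's time to the crossing point
    t_obj = obj_lateral_dist / obj_speed  # object's time to the car's lane
    return abs(t_car - t_obj) < margin

# Pedestrian 10 m to the side walking at 1.4 m/s; car at 17 m/s (~60 km/h).
print(on_collision_course(17, 40, 1.4, 10))    # False: car passes well before
print(on_collision_course(17, 120, 1.4, 10))   # True: both arrive ~7 s out
```

The point of the sketch is that a side-scanning sensor only needs position and lateral velocity of the object to flag a conflict seconds in advance; the hard part, as the thread notes, is reliably detecting and classifying the object in the first place.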

I haven’t misunderstood your analogy, I just don’t think it’s the same thing. This is not “sit in a chair and watch paint dry, and take notes!”

A “human safety driver”, I would think, is more like taking a dog out in public. The dog can operate independently of a human, but the human in charge of it is still responsible for whatever the dog does. If you let your dog out to run around the neighborhood, and instead of watching it you choose to read a book or stare at your phone and the dog hurts or kills someone, or itself, that’s your fault.

Note I’m not saying there’s necessarily anything the human driver could have done to avoid this particular incident; we clearly don’t have enough information to determine that. My issue with your post was the seeming hand-waving away of the responsibilities of the “human safety driver” because their job is boring.

Another question, raised indirectly by the article:

Let’s say that testing in lab conditions has reached a plateau. It’s hard to make progress except via testing in real-world conditions.

Let’s say, furthermore, that in the first year of testing, autonomous vehicles will cause 10% more pedestrian deaths than human vehicles. Every year of testing for the next twenty years, they’ll cause 2% fewer pedestrian deaths, cumulative, than this first year (so 8%, then 6%, then 4%, etc.) After five years, they’re on par with human drivers; after 20 years, they’re 30% safer than human drivers.

Is it worth it?
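A minimal sketch of the arithmetic in that hypothetical (the +10% starting point and 2-points-per-year improvement are the scenario’s made-up numbers, not real data):

```python
# Hypothetical schedule from the post above: autonomous vehicles cause
# 10% more pedestrian deaths than human drivers in year 1, then improve
# by 2 percentage points each subsequent year. Illustrative only.

def relative_fatalities(year):
    """Pedestrian-death rate relative to human drivers, in percent
    (positive = worse than humans, negative = better)."""
    return 10 - 2 * (year - 1)

for year in (1, 6, 21):
    print(year, relative_fatalities(year))
# Year 1: +10 (10% worse); year 6: 0 (on par after five years of
# improvement); year 21: -30 (30% safer after twenty years).
```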

This is the important question. It is unrealistic to expect driverless cars to have no fatalities. So long as their record is better than that of human drivers, they are responsible for a net decrease in traffic fatalities.

From that article:

*Companies have not been required by the federal government to prove that their robotic driving systems are safe. “We’re not holding them to any standards right now,” Cummings said, arguing that the National Highway Traffic Safety Administration should provide real supervision.*

*Federal transportation officials have relied on voluntary safety reporting to oversee the burgeoning industry, which has emphasized the life-saving potential of the technology in arguing against government mandates.*

*Arizona has aggressively courted driverless tech firms, based largely on its light regulatory touch. That approach has consequences, Cummings said. “If you’re going take that first step out, then you’re also going to be [the] first entity to have to suffer these kinds of issues,” she said.*

In theory, an automated driving system should be able to observe movement and anticipate a potential hazard much faster than a human. In reality, despite all of the press given to the supposedly amazing advances in adaptive machine intelligence and heuristics, the mammalian brain has been evolving for tens of millions of years (and pre-mammalian brains for a few million years before that) specifically to optimize for predictive analytics in response to external sensory information, and despite close to a century of modern neuroscience we don’t really understand how that works. The brain often reacts to events before a full sensory picture could even be transmitted, and in the absence of complete sensory data the brain will synthesize information on a sub-neocortical level in the affective system below the ‘conscious mind’ to produce what we think of as an ‘instinctual’ response long before we can puzzle out what to do.

I’m not going to get drawn into an argument about fault; I’m just pointing out that it is well understood across disciplines that deal with human attentional capabilities, from cognitive science and neurodynamics to human-machine interface design and workload management, that not providing enough engagement and stimulation to a human operator means that their attention will involuntarily wander regardless of their intention to pay attention. This is not like walking a dog, which is a deliberate and conscious activity one engages in for a limited period of time with an active stimulus (in the case of an unruly or challenging canine); it is more akin to watching a goldfish bowl and waiting for a suicidal fish to make a jump for the surface.

One mechanism of the affective mind (or MindBrain, as the late neuroscientist Jaak Panksepp referred to the system) is the SEEKING system, which constantly looks for novelty and produces it internally when there is a lack of external stimulation, e.g. daydreaming. Expecting someone to constantly watch an autonomous piloting system waiting for it to make a rare mistake is like employing someone to sit on a bus and watch the driver, waiting for him to make a mistake and then leap forward and correct it before the bus crashes into something; that job would be so pointless that we do not currently employ anyone to do it. The benefit of an autonomous driving system is that it does not have this affective impulse (or indeed, any affect at all) to distract it from paying attention to driving. However, such systems are not yet mature enough to make fully predictive estimates of adverse pedestrian or vehicle movements, which is an area in which the technology needs to improve before it is ready for a large-scale public rollout, and this puts the lie to the bombast that fully autonomous vehicles will largely populate the roads in the ridiculously short timeframe of a few years.