This part was left out of the NTSB report quote above (the immediately following sentences):
The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.
So … huh?
As LSLGuy points out (and has anyone seen him recently?), the airline industry discovered that people are really bad at going from “monitoring” a situation to suddenly needing to step in and handle an emergency within seconds, or even less than a second.
It really sounds like Uber was pushing the limits on safety, not giving enough consideration to what would be required to prevent accidents. Since Uber’s objective is to create a driverless car, I wouldn’t be surprised if developing a driver interface for danger simply wasn’t given much thought. While that would help prevent accidents, it would detract from the mission of getting the driverless car completed as quickly as possible.
This is a perfect example. The car detected the bicycle six seconds before the accident, but there was no routine for notifying the driver that the computer was experiencing uncertainty.
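Not Uber’s actual software, obviously, and every name and threshold below is invented, but the missing hook is roughly this simple: if the perception system keeps re-classifying an object (the report described the system flipping between unknown object, vehicle, and bicycle) or its confidence stays low while a possible collision is closing in, warn the operator instead of deciding silently.

```python
# Hypothetical sketch only -- not Uber's code; names and thresholds are invented.
# Idea: if the perception stack is uncertain about an object on a possible
# collision course, warn the human operator instead of deciding silently.
from dataclasses import dataclass

@dataclass
class Track:
    confidence: float         # classifier confidence in the current label (0..1)
    label_changes: int        # times the label has flipped (unknown -> vehicle -> bicycle ...)
    time_to_collision: float  # seconds until the predicted paths intersect

def should_alert_operator(track: Track,
                          min_confidence: float = 0.7,
                          horizon_s: float = 6.0) -> bool:
    """True if the operator should be warned about this track right now."""
    uncertain = track.confidence < min_confidence or track.label_changes >= 2
    imminent = track.time_to_collision <= horizon_s
    return uncertain and imminent

# A track first seen six seconds out that the classifier keeps re-labelling:
print(should_alert_operator(Track(confidence=0.4, label_changes=3, time_to_collision=6.0)))  # True
```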
IMHO, this is gross negligence on the part of Uber. Relying on humans to react appropriately and instantaneously in an emergency simply ignores what is well documented about human nature.
Then it’s not ready for road testing yet.
This.
There are lots of things the car can do with uncertain information other than “plow on ahead” and “stop now”. For instance, it could alert the driver to a possible situation. It could slow a bit. Either of those would likely have worked in this case.
The driver needs to stay informed of conditions and potential hazards on the road. I’m not sure how that can be implemented.
Perhaps a graded scale of warning lights?
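It wouldn’t have to be fancy. Here is one sketch of how a graded scale might work, mapping the system’s uncertainty and time-to-collision onto escalating responses (light, chime, gentle pre-emptive slowdown) rather than the all-or-nothing choice between ignoring the object and slamming the brakes. The risk formula and thresholds are made up purely for illustration.

```python
# Purely illustrative -- not any vendor's logic; the risk formula and
# thresholds are invented. Maps uncertainty + time-to-collision onto
# escalating responses instead of a binary ignore / emergency-brake.

def graded_response(time_to_collision: float, path_clear_confidence: float) -> str:
    """Pick an escalating response from time-to-collision (seconds) and the
    system's confidence that the path ahead is clear (0..1)."""
    closeness = max(0.0, min(1.0, (8.0 - time_to_collision) / 8.0))
    risk = (1.0 - path_clear_confidence) * closeness
    if risk < 0.15:
        return "green light: no action"
    if risk < 0.35:
        return "amber light: possible hazard ahead"
    if risk < 0.60:
        return "red light + chime: lift off throttle, begin gentle slowing"
    return "red light + chime + moderate braking: operator, take over now"

print(graded_response(time_to_collision=6.0, path_clear_confidence=0.3))  # amber warning
print(graded_response(time_to_collision=1.5, path_clear_confidence=0.2))  # braking + take-over
```

The middle levels fire well before the situation calls for emergency braking, which is exactly the warning window the posts above say the backup driver never got.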
The driver can’t just sit there daydreaming while the car drives itself. There are too many times when human intervention is needed.
An article on this in the Washington Post this morning says as much:
But that is not the goal of self-driving cars. Uber, and others, want to develop 100% driverless cars. There is the big profit motive (eliminating the expense of having drivers), but there is also the utopian ideal of automobiles that collaborate seamlessly with each other to the point that traffic lights become unnecessary and vehicles flow smoothly past one another.
The driver is a temporary stop-gap. Manual controls are supposed to be superfluous appendages that will eventually go away. And no one goes up Filbert Canyon Road because, well, that is the winding, rutted washboard track leading to icky nature stuff.
Isn’t this the kind of “self-driving” car that people are expecting?
Won’t be anytime soon.
And let’s face it: if there’s nothing for the “driver” to do the vast majority of the time, they just aren’t going to be very alert for the occasional danger. That just goes against human nature.
I hope I’m right to read implicit and severe criticism in those technically neutral descriptions of what happened.
I do actually think we could have a much better and safer future with truly driverless cars (and I see no reason it would make people less interested in seeing natural landscapes), but we can’t put so much importance on a smooth ride. Unbelievable that they didn’t see the risk in taking that approach.
Ahem.
This. In fifty years I have driven over a million miles, but I can guarantee you that if I’d been “monitoring” for a year instead of driving, I’d be really slow if an emergency came up. Now picture kids who’ve driven just enough to get a license before starting their monitoring career, and thanks to improvements it’s been five years since they had to intervene. They would, at best, be useless.
So, as absolutely everyone predicted, the human “backup driver” was completely inadequate to the task of babysitting a computer. So what are the consequences for Uber’s ability to test these autonomous vehicles now that we have solid evidence that their safety measures don’t work?
Enjoy,
Steven
Totally agree with your statement, but am going to pile onto it. Even if the backup driver was paying complete & total attention in the seconds leading up to the accident, by the time they realize, “Oh shit, the ‘carputer’ isn’t going to do anything!” & react by braking &/or swerving, it’s probably too late to avoid the accident.
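Rough numbers back that up (assumed values, not figures from the crash report): at 40 mph the car needs on the order of 50 metres to stop once something goes wrong, and reaction time alone eats more than half of that before the brakes are even touched.

```python
# Back-of-the-envelope only -- assumed values, not figures from the crash report.
speed_mph = 40
speed_ms = speed_mph * 0.44704           # ~17.9 m/s
reaction_s = 1.5                         # typical reaction time for a surprised driver
decel = 7.0                              # m/s^2, hard braking on dry pavement

reaction_dist = speed_ms * reaction_s            # ~26.8 m travelled before braking even starts
braking_dist = speed_ms ** 2 / (2 * decel)       # ~22.8 m to stop once braking
print(f"{reaction_dist + braking_dist:.0f} m needed to stop")   # ~50 m
```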
And it’s also fair to note that what the pedestrian did was really dangerous. It might be good to treat roads (outside congested areas, as here) more like train tracks.
Uber is planning to re-start its self-driving program in Pennsylvania, now with two drivers in each car:
Perhaps they should have self-driving buses with 40 drivers. :rolleyes:
LOL
Uber is off the hook. They’ve already reached a settlement with the family of the victim.
So, this gets swept under the rug and Uber moves forward?
Has anything changed? Is their testing any safer?
https://www.google.com/amp/s/www.washingtonpost.com/amphtml/news/dr-gridlock/wp/2018/03/29/uber-reaches-settlement-with-family-of-victim-killed-after-being-struck-by-one-of-its-self-driving-vehicles/
Updating this thread:
" An Arizona grand jury has indicted Rafaela Vasquez, a former safety driver in Uber’s self-driving car project, for the 2018 death of pedestrian Elaine Herzberg in Tempe, Arizona. Prosecutors decided not to charge Uber criminally last year."
Thanks for the update PastTense; this should be an interesting trial.