The problem is distinguishing between something that is a non-hazard, such as a bag or a tumbleweed, and something that may not quite look like a person or animal but could be a hazard (someone in a floppy coat, say, or someone carrying bags of groceries and walking oddly). It seems to us like this should be really easy because we make those distinctions with ease, and as I noted earlier, if you see someone in the median you can slow down or prepare to evade. But despite calling the technology behind autonomous pilot systems "artificial intelligence", computers are not smart; they follow algorithms and recognize hazards by how well an object fits a certain class of shape or rhythm of movement. The precautionary thing would be to have vehicles just drive slowly and avoid every perceived hazard, but if you have an autonomous vehicle that stops every time a leaf blows across the sensor's field of view, or that cannot distinguish patterns in rain, you have a pretty weak technology.

We've spent millions of years evolving to recognize arbitrary objects and anticipate movement; we've spent a few decades working on the really tough problem of making machine vision more than crudely reliable, and it doesn't work anything like the way our brains do, so we can't even use the neuroscience of vision as much of a guide in improving it.
The problem of liability for autonomous vehicles is a conundrum, and one that could leave any potential litigation in a Mexican standoff between the injured party, the vehicle manufacturer, and whoever owns and operates the vehicle. That is another reason for strong testing standards, which could help indemnify manufacturers and operators and distinguish vehicle pilot error from a truly unavoidable or pedestrian-caused accident.
They’ve released the video from the accident. The woman appears pretty suddenly in the lights, but it doesn’t look like she darted in front of the car. I’m very surprised the car couldn’t detect her sooner.
And it appears the human “operator” was occupied by her phone or something in her lap.
ETA: the video stops right before the woman is actually hit, but use your discretion if you think it will be disturbing to watch.
This is a shocking failure by the Uber car. Lighting doesn’t matter to LIDAR. It should have detected the pedestrian well before the collision and avoided her with ease.
Agreed. The woman crossed over three lanes of open road before being struck in the right-most lane. She wasn’t hiding behind a bush until the last moment, she had to be in plain view to LIDAR for 5-10 seconds before impact.
Assuming the car only had visible-spectrum sensors, that looks for all the world like what I have heard called "outdriving one's headlights". And if it does not have something like IR or some other kind of suitable proximity detector, why not?
Uber AVs are reportedly equipped with both LIDAR and radar. I’d think it either has to be a hardware failure of both (unlikely), or an extremely poor algorithm for recognizing obstacles.
Even if all it had was visible-light cameras, it still had about a half second to begin braking - plenty of time for an AV. But it didn’t slow down at all until impact.
What? No, that’s the easy part. I, an amateur working alone with substandard tools in my free time, wrote software to do something similar, back in the 90s. You’re assuming that the computer would take the written test in the same way that a human is supposed to, while I’m assuming that the computer would take the test in the same way that humans actually take the test: By getting coaching on just what questions are likely to appear on it, and learning to recognize each of those questions and its correct answer. That’s easier for computers to do than for humans.
Alternately, if we want to actually demonstrate that the car has the knowledge that the test is supposed to test for, that’s doable, too, and again more easily than it is for a human: You just check the programming to see if knowledge of all of the relevant traffic laws is implemented.
But the real meat of the test is the practical portion, where you get behind the wheel and have to demonstrate actual driving. That’s the part that humans stress out over, and that’s the part that’s hard for computers. And so that’s the part that, if the computers actually can pass it, they should be allowed on the road just like anyone else who can pass it.
Another thing to keep in mind is that the dynamic range of that dashcam is significantly narrower than that of the human eye. It’s very unlikely that the darkness in the video would have appeared that inky black to a human driver. I was prepared to give the car the benefit of the doubt, but my current assessment is that this is a shocking failure of the driving AI. The “safety driver” wasn’t paying much attention and probably could have at least mitigated things if alert (though I fully agree with Stranger that expecting alertness from people in such situations is unrealistic.)
Pedestrian technically at fault for jaywalking, Uber 100% to blame for turning that into a fatality.
Agreed. I was kind of swayed by the sheriff's statement previously and assumed I was going to see a person take one step from the median out in front of a car and be immediately hit. After seeing the video I'm pretty shocked by the seemingly complete non-response by the system (although not surprised in the slightest that there was an incident, just shocked by the magnitude of non-detection).
Certainly a poor showing by the self-driving system. And not a great showing for the human, either–however, it appears there was only 1.4 s between the first visible parts of the pedestrian and impact. You can’t really expect humans under typical conditions to react faster than about 1.5 s. That sounds absurdly slow, and it is, but humans first have to see the obstacle, figure out what it is, form some hypotheses for dealing with it, choose a plan, and then execute it. Each step takes hundreds of milliseconds.
Obviously Stranger is correct in the claim that you can’t expect a human to stay alert in a situation like this. Nevertheless, it seems the collision would have happened anyway. Even an extremely alert person would take >0.5 s, leaving less than a second to slow the car.
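To put that timeline in rough numbers, here's a back-of-the-envelope sketch. The ~40 mph initial speed and the braking rate are my own assumptions for illustration, not figures from any report:

```python
# Back-of-the-envelope sketch only: the ~40 mph initial speed and the braking
# rate below are assumptions, not figures from any official report.
mph_to_ms = 0.44704

speed = 40 * mph_to_ms        # initial speed, ~17.9 m/s (assumed)
time_visible = 1.4            # s from first visibility to impact (per dashcam timing)
reaction = 0.5                # s, an unusually alert human
decel = 7.0                   # m/s^2, hard braking on dry pavement (assumed)

braking_time = max(time_visible - reaction, 0)        # time actually spent braking
impact_speed = max(speed - decel * braking_time, 0)   # speed remaining at impact

print(f"Braking window: {braking_time:.1f} s")
print(f"Impact speed:   {impact_speed / mph_to_ms:.0f} mph (from an assumed 40 mph)")
```

On those assumed numbers the collision still happens, but at roughly 26 mph instead of 40, which is the "slow the car but not avoid it" point above.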
Maybe, but it will probably take a long time to figure out. There are standards for what kind of fatal negligence gets a human driver put behind bars but they’ve been developed over a long time, the facts are usually pretty simple, and the events are (sadly) frequent so there are a lot of test cases.
LIDAR is a scanning technology. If the laser isn’t pointing in the direction of an object, it isn’t “seen”. Also, mapping an object against a backdrop like a hedge row would complicate object detection… is the bump sticking out a person, or just a bush?
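For a sense of how sparse the returns can get, here's a rough sketch. The 0.2° azimuth step is an assumed figure for a generic spinning unit, not the spec of the hardware on the Uber car:

```python
import math

# Illustrative only: 0.2 degrees is an assumed azimuth step for a generic
# spinning LIDAR; the actual sensor on the Uber car may differ.
azimuth_step_deg = 0.2
target_width = 0.5            # meters, roughly a person seen edge-on

for dist in (20, 40, 60):     # meters
    # width on the ground covered by one azimuth step at this range
    step_width = dist * math.tan(math.radians(azimuth_step_deg))
    returns = target_width / step_width
    print(f"{dist} m: ~{returns:.0f} returns per scan line")
```

A couple of returns per scan line at longer ranges isn't much to separate a person from the hedgerow behind them, which is the clutter problem I'm describing.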
Well, I think that’s debatable. If people demonstrate driving ability, they can obtain a driving license because the law grants that right to human beings. A tool, on the other hand, has no rights. Even if a computerized machine passes the tests, a board of specialists, the public as a whole, or their delegates may still oppose the widespread use of that specific machine, because safety and utility ought to be weighed carefully, especially in an area as sensitive as road traffic, where a person dies in a crash every 12 minutes in the US. If people were banned from driving, all of those fatalities would cease, but that situation is impossible, not only because there is generally no alternative to the personal car but also because people have the right to obtain a license and drive. A tool can’t demand its own use, no matter how stellar its specifications; tools are just a means to an end, and people can choose freely which tool suits them best.
1.4s between the first visible parts of the pedestrian on the dashcam. I seriously doubt the human eye would have underexposed the scene to that extent. With all the streetlamps around, there’s no way that road was as inky black as it appears to be in the video. I strongly suspect the pedestrian would have been visible to an attentive driver from the moment she stepped off the curb into the left lane.
Current-model LIDAR units apparently give a continuous 360-degree view. Moreover, in the present case, the victim was in the middle of an otherwise-empty road, with no objects anywhere near her that might have obscured her identity as a person. TroutMan says Uber cars are fitted with LIDAR, so something here didn’t work the way it was designed to. The car didn’t even appear to brake (I saw no front-end dive before impact) even when the victim was clearly visible on camera, so it’s apparent the car did not recognize at all that there was something directly in its path.
I’m bothered also by the inattentiveness of the car’s human occupant. That’ll be OK when the bugs are all worked out and you’re an end user sitting in a mass-produced AV, but this was a prototype being tested. I could understand being somewhat less vigilant than an actual driver, but his eyes were locked onto the cell phone in his lap. If his eyes had been up, he might have been able to at least slow down a little before impact, turning a fatality into a hospital stay.
I don’t think that you need to recognize that it is a stack of luggage. Anything not moving can be given a low risk profile, whether it’s a tree or a person. Anything moving away from the road can be given an even lower risk profile. Anything moving parallel to the road can get a moderate risk profile, and anything moving towards the road the highest.
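A minimal sketch of that kind of tiering, with made-up thresholds and category names (nothing here reflects how any actual AV stack does it):

```python
def risk_profile(speed_toward_road: float, speed_along_road: float) -> str:
    """Crude risk tiering based only on an object's motion relative to the road.

    speed_toward_road: m/s, positive means moving toward the vehicle's path.
    speed_along_road:  m/s, magnitude of motion parallel to the road.
    Thresholds are invented for illustration.
    """
    if speed_toward_road > 0.3:
        return "high"        # converging with our path
    if speed_toward_road < -0.3:
        return "very low"    # moving away from the road
    if speed_along_road > 0.3:
        return "moderate"    # moving parallel to the road
    return "low"             # effectively stationary: tree, luggage, standing person

# Example: a pedestrian stepping toward the roadway at walking pace
print(risk_profile(speed_toward_road=1.2, speed_along_road=0.0))  # -> "high"
```

The point is that you don't need to know *what* the object is to rank how much attention it deserves; motion relative to your path already carries most of the signal.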
I am not an expert, I admit that, but I think a computer should be able to calculate the trajectories of objects and determine whether the course of any given object will intersect (or come close to) the car’s path.
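Something like a constant-velocity closest-approach check is the textbook version of that idea. This sketch assumes straight-line motion for both the car and the object, which real systems obviously can't rely on:

```python
import numpy as np

def closest_approach(p_car, v_car, p_obj, v_obj):
    """Time and distance of closest approach, assuming constant velocities.

    p_* are 2D positions (m), v_* are 2D velocities (m/s). Purely illustrative.
    """
    dp = np.asarray(p_obj, float) - np.asarray(p_car, float)
    dv = np.asarray(v_obj, float) - np.asarray(v_car, float)
    denom = dv.dot(dv)
    t = 0.0 if denom == 0 else max(-dp.dot(dv) / denom, 0.0)  # never look backward in time
    miss_distance = np.linalg.norm(dp + dv * t)
    return t, miss_distance

# Car heading north at 17 m/s; pedestrian 30 m ahead, crossing at 1.4 m/s
t, miss = closest_approach(p_car=(0, 0), v_car=(0, 17),
                           p_obj=(-3, 30), v_obj=(1.4, 0))
print(f"Closest approach in {t:.1f} s at {miss:.1f} m")  # ~1.8 s, ~0.5 m: likely collision
```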
In NJ, the test is done on a course and takes about 5 minutes. There are no obstacles in the way – just drive a little, make a turn, make a K-turn (3-point U-turn), parallel park, make another turn, stop at a stop sign, and you’re done. It would be fairly easy (for certain values of easy) to design a car that passes that test that would be absolutely deadly on a real road. The test is easy because it assumes that people will stop for objects in the road, won’t drive off cliffs, into walls, etc. The test assumes a human, with all the fears and instincts obtained through a few million years of evolution, will be behind the wheel.
Do I think it’s a good enough test for human drivers? No, there are terrible drivers that can pass that test. Is it good enough for AI drivers? Not even close.
True, but the default should be (and I hope it is) that if the computer can’t tell whether a situation is safe or unsafe, it should stop and ask for human guidance. It shouldn’t keep going on the assumption that the object is probably just a leaf, a bit of trash, or a puff of diesel exhaust.
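In toy form, the policy I'm arguing for looks something like this; the confidence threshold and the action names are made up for illustration:

```python
def plan_action(detection_confidence: float, classified_as_hazard: bool) -> str:
    """Toy decision policy: uncertainty defaults to slowing down, not proceeding.

    Threshold and action names are invented for illustration.
    """
    CONFIDENT = 0.9
    if detection_confidence < CONFIDENT:
        # Can't tell what the object is: don't assume it's a leaf or exhaust.
        return "slow and stop; request human guidance"
    return "brake and yield" if classified_as_hazard else "proceed"

print(plan_action(detection_confidence=0.4, classified_as_hazard=False))
# -> "slow and stop; request human guidance"
```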
I would imagine the computer concludes it is safe to proceed, even in those situations when it is not. The question is, should we demand 100% accuracy from driverless cars?
100%? That’s impossible. However, I have always assumed that the ultimate goal is to get the automation to a safety level that is better than that of humans. We are not there yet. And until we are, we should endeavor to “play it safe” with the software’s safety protocols.