An Uber self-driving car killed a pedestrian in Tempe, AZ

Lidar, radar…

Here’s how Uber’s self-driving cars are supposed to detect pedestrians

This report quotes the president of Velodyne clearly stating that their LIDAR can see perfectly well in the dark.

I don’t know if it’s true, but I’ve heard it said (perhaps here on the SDMB) that the Volvo XC90 car used for this has sufficient collision avoidance technology that it would have stopped in time.

If only there were a thread here somewhere that discussed why this is incorrect.

I don’t disagree with that, but that’s not what I was saying or at least trying to say in my reply.

Arizona has shut down Uber’s test program in that state:

This is the same governor who welcomed Uber with open arms when California refused to tolerate their Wild West mentality.

At the time he also had a sound bite about how California regulations were driving business away.

No surprise. The very definition of LIDAR is that it sends out its own beam of light, and measures the time of flight to/from the target to determine distance. So it has absolutely no reliance on ambient lighting.
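To make the time-of-flight point concrete, here's a minimal sketch in Python (the round-trip time is just an example value, not from any real unit):

```python
# Minimal time-of-flight distance calculation, as LIDAR units do it.
# The round-trip time below is a hypothetical example value.

C = 299_792_458  # speed of light, m/s

def lidar_distance(round_trip_seconds):
    """Distance to target: the pulse travels out and back, so halve it."""
    return C * round_trip_seconds / 2

# A pulse that returns after ~200 nanoseconds came from ~30 m away,
# regardless of whether it's day or night.
print(lidar_distance(200e-9))  # ~29.98 m
```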

Pictures of the Uber car at the scene of the incident clearly show it to be LIDAR-equipped (that big rooftop tumor).

I came across a very interesting comment on a blog, which suggests that the problem with Uber self-driving cars not correctly detecting pedestrians is systemic:

If there is a glitch in the car’s AI that makes it sometimes unpredictably ignore pedestrians, it may be pretty hard to track down and fix.

This is the problem with self-driving cars using complex neural nets. If you don’t know why it’s intermittently doing something weird, how can you retrain or fix the AI?

This is where an on-board observer who is actually paying attention is important: on those rare occasions when close calls occur (e.g. when the car doesn’t leave a suitably wide margin around pedestrians), the observer can tag the data so the engineers can come back later and look at what did or didn’t happen.
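As a purely hypothetical sketch of what that tagging could look like (all names and fields here are invented for illustration, not Uber’s actual system):

```python
# Hypothetical sketch of an observer "tagging" a moment in the drive log
# so engineers can pull the surrounding sensor data later.
import json
import time

def tag_event(log_file, label, note=""):
    """Append a timestamped marker that can be matched against the raw
    sensor recordings afterwards."""
    record = {"timestamp": time.time(), "label": label, "note": note}
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")

# Observer presses a button after a close call:
tag_event("drive_markers.jsonl", "close_call",
          "car passed pedestrian with a narrow margin")
```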

37,461 traffic deaths in 2016, which averages out to 103/day.

Imho a self-driving car poses less danger than a very tired or drunk driver at 2 AM.

Honestly, the last thing in the world I’d expect to see at 2 AM is a woman pushing a bicycle. The roads are deserted, and my focus is on getting home after a long day and getting to bed.

This industry faces a big problem.

They have to test in real-world situations. These cars can easily earn a perfect score on a test track, but there’s no way to simulate all real-world driving conditions.

Finding a way to do that safely, with no risk at all, seems impossible.

Ford recalled 1.4 million vehicles recently because the steering wheel can come off. There have been two accidents and one injury because of this.

But hey, why bother? It’s no big deal. Thousands of car accidents happen every day! What’s a couple of small incidents? Just because a steering wheel comes off now and then is no reason to make a fuss. Let’s just ignore it. It can be fixed in next year’s models.
**iPhone batteries have been known to catch fire.**

Why should Apple bother to do anything about this problem? Millions of people use iPhones every day without them catching fire. Geez, people burn their hands all the time in other ways, and houses burn down all the time too. So let’s not even bother finding the cause or fixing it. It’s just a risk you take if you own an iPhone.
Uber self-driving car kills a pedestrian.

Sure! 103 traffic deaths a day, as Czarcasm says, and pedestrians get run down all the time by DUI drivers, as aceplace57 points out.

So it’s just a drop in the ocean! Absolutely no problem then, if Uber cars kill the occasional pedestrian. How come everyone is making such a fuss about it? As long as Uber cars only kill one or two people a week, I’m sure everyone will be fine with it.

:smiley:

That wasn’t the point that I was making. I’m saying that when it comes to traffic deaths, by focusing on one questionable automated Uber car death and insisting that the system be made perfect before it is put on the road, we are ignoring the far greater number of deaths due to human error that have been occurring for many decades. Airlines aren’t perfect, and many more have died in plane crashes, yet we don’t insist that all airlines be shut down until they achieve some mythological “perfection”; neither do we insist that the rail system be shut down because of the deaths involved. Nobody is “fine” with the death of that bicyclist, but to overreact and insist this next step in transportation be shut down runs counter to the entire history of exploration and advancement.

It comes down to statistics.

Hypothetical: assume 10 serious accidents per day per 10,000 self-driving cars on the road, versus perhaps 100 serious accidents per day per 10,000 human-driven cars.

Significantly fewer people will be hurt or killed.
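Plugging in the made-up rates above (a sketch with the post’s hypothetical figures, not real data):

```python
# Hypothetical comparison using the made-up rates above (not real data).
fleet_size = 10_000

self_driving_rate = 10 / 10_000   # serious accidents per car per day
human_driven_rate = 100 / 10_000

per_day_sd = fleet_size * self_driving_rate     # 10 per day
per_day_human = fleet_size * human_driven_rate  # 100 per day

print(f"Self-driving: {per_day_sd:.0f}/day, human-driven: {per_day_human:.0f}/day")
print(f"Accidents avoided per year: {(per_day_human - per_day_sd) * 365:.0f}")
```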

The catch-22 is gathering those statistics. That can’t happen until there are a lot more self-driving cars on the road; right now there aren’t enough to study and compile meaningful statistics.

But every serious accident will make national headlines. People think computers are perfect: your spreadsheet formulas always calculate correctly.

Cars rely on sensors, and there is the potential for error, just as people can be blinded by the glare of the sun.

We should always strive toward perfection: improving the technology and AI in these cars, making them more and more reliable.

Which self-driving cars are using neural nets?

Sure, no problem, then!

And of course nobody would ever dream of suing the maker of a self-driving car for negligence and damages, just because it killed someone.

In some bizarro world, you might even imagine people suing a coffee chain because their coffee was too hot, but certainly not in real-world America! In America today, self-driving cars will be able to kill people with impunity, without any fear of lawsuits at all!

:wink:

Who the hell are you responding to? Nobody is saying that it is okay for self-driving cars to “kill people with impunity”. :confused:

In this case, however, there are some pretty clear reasons to believe that Uber is being particularly cavalier about a technology that does not seem ready for open-road operation. It is one thing to have an occasional accident that was unavoidable by any practical means; it is quite another to run down a pedestrian, even one who was jaywalking, whom even existing driver-assist systems could likely have avoided. That the Uber culture and experience tend toward reckless indifference is borderline criminal negligence, even before it is revealed just how often human ‘safety’ drivers have to intervene in normal driving scenarios.

Most if not all autonomous vehicle programs intended for open-road operation use heuristic methods to “learn” correct responses to road conditions and identify hazards, rather than relying on some list or database of rules for interpreting phenomena. The only effective method that really exists for this kind of heuristic learning is the artificial neural network, which, despite being named after structures in the brain, is not really like the organic brain in terms of organization and plasticity.
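For anyone wondering what that looks like at the smallest possible scale, here is a toy sketch in Python/numpy; the “sensor features”, data, and network are all invented for illustration and bear no resemblance to a real AV stack:

```python
# Toy illustration of learned-rather-than-rule-based behavior: a tiny
# neural network (numpy only) learns to flag a "hazard" from two
# invented sensor features. Everything here is made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Invented training data: [speed toward car, distance]; hazard = fast AND close.
X = rng.uniform(0, 1, (200, 2))
y = ((X[:, 0] > 0.5) & (X[:, 1] < 0.5)).astype(float)

# One hidden layer; the weights start random and are adjusted by training,
# not written by a programmer as explicit rules.
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(2000):  # plain gradient descent on cross-entropy loss
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()
    dp = (p - y)[:, None] / len(X)    # gradient at the output
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = (dp @ W2.T) * (1 - h**2)     # backpropagate to the hidden layer
    dW1, db1 = X.T @ dh, dh.sum(0)
    W2 -= 0.5 * dW2; b2 -= 0.5 * db2
    W1 -= 0.5 * dW1; b1 -= 0.5 * db1

print("training accuracy:", ((p > 0.5) == (y > 0.5)).mean())
# The learned "rules" are just these numbers; staring at them tells you
# little about *why* any particular input is or isn't flagged:
print(W1.round(2))
```

The point isn’t the toy network itself; it’s that once trained, the behavior lives in opaque weight matrices rather than in rules anyone can read, which is exactly the inspectability problem described next.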

The problem with self-adapting neural networks is that even though you can inspect the network and the connections it has built up in operation, that may not actually give you much insight into how the network is making decisions, owing to the complexity and apparently arbitrary nature of the connections that are formed. Here again is the MIT Technology Review article on the difficulty of understanding what self-adapting artificial neural networks do.

Stranger