An Uber self-driving car killed a pedestrian in Tempe, AZ

If this kind of thing keeps happening with self-driving cars and they end up being pretty dangerous, I can see them being banned from the road. They were a good concept, but if they prove to be dangerous I wouldn’t mind getting them off the roads until they’re fixed.

It would be interesting if the deceased pedestrian turned out to be a vocal critic of autonomous vehicles…

Well, the “facts” available at the time were that the woman was walking her bike across the road, which you chose to disbelieve and go from there. A criminal investigation would wait to view the video before hypothesizing on the cause.

It’s not a biggie; I like speculating as much as the next person (and without it, this board would be pretty damn boring). I was just amused by your “let’s wait for more information” comment when you obviously weren’t.

See, this is the kind of speculation that is useful. :slight_smile:

As of two years ago, the track record for autonomous vehicles was already better than for human drivers. It’s surely improved since then, and will continue to improve.

Funny, one autonomous vehicle hits one pedestrian on one day, and people start to think that AVs might be dangerous. But nobody seems to notice the HUNDREDS of pedestrians across the US who are injured or killed by human drivers EVERY SINGLE day. This is like when the first Tesla Model S caught fire (and the driver calmly exited the smoldering vehicle); it got huge press, and few people mentioned the hundreds of fires that happen every single day in gasoline-fueled vehicles.

But we already employ humans to handle situations they aren’t always smart enough to cope with. How smart does a machine need to be before it’s allowed to do the same tasks?

“Perfect is the enemy of good.”

An investigation needs to occur. If it’s determined that the car should have seen the pedestrian & reacted, then these vehicles need to be banned until changes/updates render them safer. If the video shows the woman was standing on the curb for some time & then stepped into the street a millisecond before the car arrived there, then as good as the car is, it can’t overcome the laws of physics to stop in time, & that’s a different story.

It’s hard to make conclusive statements about their safety with respect to pedestrians at this point. Autonomous vehicles have logged fewer than 10 million miles on public roads, and there’s now one pedestrian fatality. The national pedestrian fatality rate is around 1 per every 480 million miles traveled.

We obviously can’t draw conclusions from this tiny sample. AVs didn’t suddenly jump from infinitely safer than drivers last week to 50 times more dangerous this week. I agree they are probably already safer than drivers when it comes to avoiding accidents with other cars, but it might be premature to think they are safer in avoiding unexpected events like someone entering the road outside a crosswalk.
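To put rough numbers on that, here’s a quick back-of-the-envelope sketch in Python. The mileage and rate figures are the approximate ones mentioned above, and the Poisson model is just an illustration of how wide the uncertainty is after a single event - not a rigorous safety analysis.

```python
# Rough sketch: how much does one pedestrian fatality in ~10 million AV miles
# actually tell us, compared with a national rate of ~1 per 480 million miles?
# Assumed, approximate figures -- not a rigorous analysis.
import math
from scipy.stats import chi2

av_miles = 10e6            # assumed upper bound on AV miles driven to date
av_fatalities = 1
national_rate = 1 / 480e6  # pedestrian fatalities per vehicle mile (approx.)

# Expected fatalities if AVs matched the national rate over the same mileage,
# and the chance of seeing at least one anyway under a Poisson model.
expected = national_rate * av_miles
print(f"Expected at the human-driver rate: {expected:.3f}")
print(f"P(>=1 fatality by chance):        {1 - math.exp(-expected):.1%}")

# Exact 95% Poisson confidence interval for the AV rate, given one event:
# it spans more than two orders of magnitude, i.e. the data are thin.
k = av_fatalities
lower = chi2.ppf(0.025, 2 * k) / 2 / av_miles
upper = chi2.ppf(0.975, 2 * (k + 1)) / 2 / av_miles
print(f"95% CI: roughly 1 per {1/upper:,.0f} to 1 per {1/lower:,.0f} miles")
```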

We also consider AVs as one entity with a single safety factor, but it would be interesting to compare Uber to Waymo. Waymo strikes me as a bit more methodical in their testing and I wouldn’t be surprised if their cars handle some situations better than Uber’s.

You’re assuming it was an accident. But your evil neighbor has reprogrammed your self-driving car to kill those unfriendly to our cause.

Maximum Overdrive! It’s not too late to repent.

“Unexpected” is exactly where AVs do better than people. This AV was smart enough to stop for a duck being chased by a woman in an electric wheelchair. Fast-forward to 11:42, and you’ll see another pedestrian incursion - this time a wayward toddler in a toy car pursued by his father, for whom the AV appropriately stopped short. If the car can see a thing, then it’s tracking that thing, making predictions about what that thing might do, and responding accordingly. Unless there was a hardware failure in the AV that hit the pedestrian earlier this week, it’s quite likely she popped into view at the last second and was seen too late for the car to stop. If that’s the case, a human driver wouldn’t have done better.
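For what it’s worth, here’s a toy sketch of that “track, predict, respond” loop: a constant-velocity prediction plus a check for whether the tracked object is expected to enter the car’s path. Real AV stacks use far more sophisticated trackers and motion models; the names and numbers here are made up purely to illustrate the idea.

```python
# Toy illustration of "track, predict, respond": predict a tracked object's
# position over a short horizon and brake if it is expected to enter the
# car's path while still close ahead. Hypothetical sketch, not a real stack.
from dataclasses import dataclass

@dataclass
class Track:
    x: float   # metres ahead of the car
    y: float   # metres left (+) / right (-) of the car's centreline
    vx: float  # metres/second, relative to the car
    vy: float

def should_brake(track: Track, lane_half_width: float = 1.5,
                 horizon_s: float = 3.0, step_s: float = 0.1) -> bool:
    t = 0.0
    while t <= horizon_s:
        x = track.x + track.vx * t
        y = track.y + track.vy * t
        if 0.0 < x < 30.0 and abs(y) < lane_half_width:
            return True   # predicted to be in our path, close ahead: brake
        t += step_s
    return False

# A pedestrian 20 m ahead and 4 m to the left, walking toward the lane at
# 1.5 m/s while the car closes at 10 m/s: the prediction says brake early.
print(should_brake(Track(x=20.0, y=4.0, vx=-10.0, vy=-1.5)))  # True
```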

I was disturbed to read that there is already legislation in the works to approve practical testing of autonomous cars WITHOUT a human backup driver. I think it’s a bit too soon to say the least.

Yes, I agree with those who say that, if the machine makes an error, it’s unlikely that the human will notice, process, and react to it in time to prevent an accident. What if, however, the computer really malfunctions and the car begins to meander all over the place, hitting people and things? If there is no human at the ready, do they have a kill switch that can shut it down? And even if they do, how much mayhem will ensue before the problem is recognized and dealt with?
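For what it’s worth, the usual engineering answer to the kill-switch question is some kind of watchdog: if the driving software stops responding, or a remote stop is requested, the vehicle commands a minimal-risk stop on its own. Here’s a purely hypothetical sketch of the pattern - not a description of any vendor’s actual system.

```python
# Hypothetical watchdog sketch: the planning/control loop sends heartbeats
# while healthy; if they stop, or a remote stop is requested, the low-level
# controller performs a minimal-risk stop (pull over / brake).
import time

class Watchdog:
    def __init__(self, timeout_s: float = 0.5):
        self.timeout_s = timeout_s
        self.last_heartbeat = time.monotonic()
        self.remote_stop = False

    def heartbeat(self) -> None:
        """Called by the driving software every cycle while it is healthy."""
        self.last_heartbeat = time.monotonic()

    def request_remote_stop(self) -> None:
        """Called by a remote operator or roadside 'kill switch'."""
        self.remote_stop = True

    def should_trigger_safe_stop(self) -> bool:
        stale = time.monotonic() - self.last_heartbeat > self.timeout_s
        return stale or self.remote_stop

# The brake/steering controller polls should_trigger_safe_stop() every cycle
# and executes a pre-planned stop if it ever returns True.
```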

This is an assumption not supported by fact. And without any objective standard or verification testing, public safety is left to the whims of what a particular manufacturer considers to be ‘good enough’ in terms of accident avoidance. As it turns out, manufacturers tend to be focused on their bottom line and often view potential liability as just another net cost against it.

As others have noted, the lack of fatal accidents on the part of autonomously piloted vehicles to date is a pretty meaningless statistic, given the paucity of data and the fact that, for the most part, autonomous vehicles have been restricted to driving at slow speeds in clear conditions, with strong fail-safe measures in place to avoid any potential for an accident. A Google mapping car driving at 25 mph down residential streets in broad daylight, which slows or stops when it detects pedestrians anywhere in its forward view field, will avoid situations where an accident is likely, but isn’t very practical for a commuter needing to get to work on time.

Stranger

Well, here are some new facts.

Police chief: Uber self-driving car “likely” not at fault in fatal crash

Yes and no. AVs can potentially see things better, especially when there’s a lot of data coming in. The cyclist at the red light in the video you linked is a great example of that. But we can’t yet teach machines all the heuristics we use in making decisions. If the “unexpected” event is something we taught the machine, it will likely be better than humans at recognizing it and responding. Otherwise, a human might make a better decision based on experience.

The duck in the road or toddler in a toy car examples aren’t impressive to me. Clearly there is an obstacle in the road, and no complex assessment is needed to decide if avoidance is needed. But what about a homeless woman (as it appears this woman was) at the side of the road? A driver might innately know that such a person is likely to be less aware of their surroundings and therefore more likely to suddenly enter the road, so the driver will show more caution. Is that knowledge taught to the AV? How can an AV be trained to recognize someone who is paying attention to their surroundings versus someone who is not?

As said, we don’t have to get to perfect to be worthwhile. But it’s not as simple as saying an AV will always react better than a driver.

When I see someone walking in the median, I slow down, check my opposite side for clearance, and if possible move over, under the assumption that they might do something stupid or obtuse, particularly if it is a child or someone who may be impaired. Whether that would be sufficient in this case to prevent the accident we cannot say, but I’ll stand by the position that we need objective standards for verifying that autonomous driving systems meet at least minimum criteria for safety performance and accident avoidance. Whether it is imposed from above or (preferably) developed by the people actually working in the field and adopted as an industry standard, without it there will be manufacturers who will try to cut corners to be early to market at the expense of lives and suffering. We have such standards in aviation, civil engineering, medicine, et cetera, and there is no reason to assume that ‘the market’ or precognition regarding product liability will cause most developers to use foresight and caution.

Stranger

I think the comparison is misleading and unfair. Human beings are individuals with rights and obligations. A machine is a tool that is supposed to meet utility and safety criteria.

The case of the self-driving car killing a pedestrian shows, in my opinion, that ranking robots among the biggest potential threats to mankind because of their capacity to revolt is quite silly. Devising inadequate machines and/or using them inappropriately seems the bigger threat.

Here is an interesting possible explanation:

Thaaaaat…is a very good possible explanation.

Why wouldn’t/shouldn’t the car slow or stop if it detects any object in the road? Why does it have to be an object it recognizes?

I would assume the concern is with an object at the side of the road, not already in the road. A person at the side might enter the road so more caution is needed. An immobile stack of bags should not cause the car to slow down.
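In other words, the response depends on both where the object is and what it is. Here’s a crude sketch of that logic, with hypothetical object classes and thresholds just to illustrate the distinction:

```python
# Crude, hypothetical sketch: anything already in the car's path warrants
# braking regardless of what it is; objects at the roadside only warrant
# extra caution if they could plausibly move into the path.
MOBILE_CLASSES = {"pedestrian", "cyclist", "animal", "unknown"}  # err on the side of caution

def plan_response(obj_class: str, in_path: bool, distance_m: float) -> str:
    if in_path:
        return "brake"                    # no recognition needed to avoid it
    if obj_class in MOBILE_CLASSES and distance_m < 30.0:
        return "slow_and_add_margin"      # it might step into the road
    return "proceed"                      # e.g. an immobile stack of bags at the curb

print(plan_response("debris", in_path=True, distance_m=15.0))       # brake
print(plan_response("pedestrian", in_path=False, distance_m=10.0))  # slow_and_add_margin
print(plan_response("debris", in_path=False, distance_m=10.0))      # proceed
```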

There’s an interesting liability difference here.

If a person driving a car hits and kills someone, they’re potentially criminally liable for that conduct. We don’t just ban them from driving until they become a better driver. We put them in a cage for some number of years.

If an autonomous vehicle does the same, should anyone go to jail? I’m not saying for sure that they should, but maybe? If you can be criminally negligent behind the wheel, can’t you also be criminally negligent while writing software or designing camera systems?