An Uber self-driving car killed a pedestrian in Tempe, AZ

I have to wonder how anyone who has watched the video can honestly conclude that.

Guys, don’t confuse Shodan with actual facts. He only likes alternative facts.

Stranger

The issue is that, to the AV, pink bicycles being walked at night are apparently completely invisible.

It’s not that it detected it too late (like a human); it’s that it didn’t detect it at all (based on the vehicle not braking).

Actually, quite easily, look at my response. There’s barely any time to react, IMHO, from watching that video. I don’t think anybody watching that video without any extra context or information can definitively say anything either way.

THAT SAID – I’ve explained my hesitations about making a judgment one way or the other, given that a camera can make a road seem darker or lighter than it really is. I think it is most likely that the road looked far better lit to a human eye than to the camera. It’s not the second video that persuades me (any camera can be set to make a night scene look a lot brighter than it actually is), but the human reports of what that stretch of street looks like at night, plus my experience as a photographer knowing what a night scene with lit buildings in the background looks like at a given exposure. The car camera does look like it’s eliminating a lot of detail that would be visible to people with normal night vision.

That said, this should be moot, given lidar and radar, but I don’t know how that technology works in this particular case.

Does the car use dashcam video for navigation?

My assumption would be not (or at least not purely; I’d assume visual input is part of it), but some here are arguing that a human would not have been better equipped to react, using the dashcam footage as evidence.

A radar based collision-avoidance system should work like this. That’s an automatic braking system already available on production cars. A self-driving car with RADAR + LIDAR should work at least as well as this.

This link makes me wonder if the video was obtained directly by the police from the vehicle or if it was supplied by Uber.

She was homeless, and thus less of (or perhaps not) a human.

Be careful about using video footage to make a determination of the brightness levels of a scene to a human eye. I think the Uber footage understated the illumination; I think that cite overstated the illumination. (But I think the “truth” is probably closer to the brighter footage of the scene.) But, without any context or reference, a dark video and a bright video mean jack shit as far as determining how dark it really was. You can make any scene look darker or brighter depending on your camera settings.

A New York Times article on how poorly Uber self-driving cars were doing in Arizona:

Yes, of course. The dashcam video is so dark that it’s almost like someone jacked up the contrast. Between headlight illumination and the number of overhead street lights, there’s no way it would be perceived that way to the naked eye.

This highlights what I worry about as we transition to autonomous vehicles: drivers become more complacent as the technology improves and start doing other things (or even dozing off) as interventions get farther and farther apart. One of the stories linked to said that Uber was struggling to meet its target of one operator intervention every 13 miles*. The operator in question obviously had no business dinking around with whatever it was in her lap, but what will it be like when AVs need an intervention only every hundred miles? Thousand?

*An oddly unround number – it doesn’t metrify to one either. Selected for a reason?

FYI, according to the New York Times, the driver/operator/person behind the wheel was identified by the police as Rafael Vasquez, so that probably should be his phone or his lap.

From an article in Friday’s Wall Street Journal, the operator was, at first, identified as Rafael. I can’t remember exactly how it was phrased, but later in the article it said something like “identifies as Rafaela” - suggesting someone “in transition”.

Sure we can, we can definitively say that the AV failed in this situation.

Specifically the AV should have done a few things:
1 - Detect the object
2 - Identify it as something to avoid
3 - Take some sort of appropriate avoidance action (e.g. begin braking or veering or both, depending on lots of other parameters)

We don’t know where the failure was, but we do know the end result was no attempted avoidance. It doesn’t matter about lighting or anything like that; their job is to build sensors and a system that works.
If the AV had hit its brakes hard but still hit the person, then we would be having a different conversation, one more focused on fine-tuning sensor range, or the laws of physics, or accepting that some things are unavoidable.
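The three-step failure chain above (detect, identify, avoid) can be sketched as a minimal decision loop. To be clear, everything here is hypothetical for illustration – the class names, the classification list, and the two-second braking threshold are my assumptions, not anything from Uber’s actual software:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A hypothetical fused sensor return (lidar/radar/camera)."""
    distance_m: float        # range to the object, meters
    closing_speed_ms: float  # how fast we are approaching it, m/s
    classification: str      # e.g. "pedestrian", "vehicle", "debris"

# Step 2: which classifications demand avoidance (illustrative set;
# note "unknown" is included deliberately - an unclassified object
# on a collision course should still trigger a response)
AVOID = {"pedestrian", "cyclist", "vehicle", "unknown"}

def plan_action(detections, brake_margin_s=2.0):
    """Steps 1-3: for each detected object, decide whether to brake.

    brake_margin_s is an assumed time-to-collision threshold below
    which the car should already be braking hard.
    """
    for d in detections:                  # step 1: the object was detected at all
        if d.classification not in AVOID: # step 2: identify as something to avoid
            continue
        if d.closing_speed_ms <= 0:
            continue                      # not on a collision course
        time_to_collision = d.distance_m / d.closing_speed_ms
        if time_to_collision < brake_margin_s:
            return "brake_hard"           # step 3: take avoidance action
    return "continue"
```

The point of the sketch is that the no-braking outcome means the chain broke at step 1, 2, or 3 – either `detections` never contained the pedestrian, or she was classified as something ignorable, or the planner failed to act on her.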

I do wonder how that compares to 16-18 year old drivers. I was trying to find stats, but no luck.

I see it as that sort of thing: we are teaching computers to drive, and they are going to make mistakes.

We started in the lab, in simulations, and I’m sure the cars were all over the place and committing virtual genocide upon all and sundry around them. When they got better, we moved them to closed tracks, and threw more complicated stuff at them.

At this point, they are between being in driver’s ed, where the instructor has their hand on the wheel and foot on the brake, and near-solo competence, where you just have to have a licensed adult driver in the car with you. It seems the Uber cars are closer to the early stages of driver’s ed, while the Google cars are getting close to being able to drive solo.

We put up with human ineptness because up until now, only humans could operate cars. Uber has an urgent need to fill a lucrative business space; we have no need for Uber to fill that space until it’s proven superior. That doesn’t need to happen on public roads. For the public benefit, it doesn’t matter if it takes 20 years of simulations with robot vehicles and robot pedestrians. This incident proves that Uber isn’t ready, and that Uber AVs should not be allowed on public roads until proven otherwise.

The Times article referenced above had a picture of the accident site in daylight. In the video it looks like a road far from anything, but there are actually lots of buildings around, which explains her crossing the street a lot better. And it would seem to imply that there was indeed a reasonable amount of light at the scene.

Unless the car’s vision operates at frequencies other than those visible to the camera, there was NO CHANCE it could have avoided the collision.

You had a pedestrian, pushing a bike, wearing black clothing, crossing nowhere near any street lighting. Available reaction time well under half a second.

See for yourself: https://www.youtube.com/watch?v=ywydalBYhic
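For scale, that half-second figure can be put into meters with basic kinematics. The numbers below are assumptions for illustration only – the 40 mph speed and the 0.7 g braking rate are typical values I’ve plugged in, not facts established in this case:

```python
# Illustrative stopping-distance arithmetic. The 40 mph speed and
# 0.7 g deceleration are assumed values, not data from the incident.
speed_mph = 40.0
speed_ms = speed_mph * 0.44704   # 1 mph = 0.44704 m/s exactly
reaction_s = 0.5                 # the "under half a second" claimed above
decel = 0.7 * 9.81               # ~0.7 g, a typical hard stop on dry asphalt

reaction_dist = speed_ms * reaction_s      # distance covered before braking starts
braking_dist = speed_ms**2 / (2 * decel)   # v^2 / (2a), constant deceleration
total = reaction_dist + braking_dist

print(f"{speed_ms:.1f} m/s; reaction {reaction_dist:.1f} m; "
      f"braking {braking_dist:.1f} m; total {total:.1f} m")
```

Under those assumptions the car covers roughly 9 m during a half-second of reaction and needs about another 23 m to stop, around 32 m in all – which is why the argument keeps coming back to how far out the sensors should have registered her, not just how fast the brakes could act.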