An Uber self-driving car killed a pedestrian in Tempe, AZ

Absolutely, because by year 56, they’ll be reincarnating people.

Poor quality of reporting these days - it says the car was “northbound on Curry Road”. Curry is east-west, and ends (or begins, depending on your perspective) at Mill. Judging by the footage, the car had to be northbound on Mill.
The accident appears to have been here. The blue back of the sign in the map image is visible in the report footage. You can see the lights of the bridge in the background of the news footage.

Every year FOR THE NEXT TWENTY YEARS, smartass!

Except it is not “hard to make progress except via testing in real-world conditions”, because you can simulate those conditions on a closed course using animal or robotic subjects behaving in an uncontrolled fashion, and then use the heuristics to improve predictive algorithms until they are demonstrably superior to those of the average human driver, which really shouldn’t be difficult at the hardware level; commercial off-the-shelf hardware systems can be made with input and response times measured in a few milliseconds, compared to the 200+ millisecond response time of the best human operator. But, again, humans have a long-evolved predictive sense that allows us to anticipate problems long before they are logically evident, and the best machine intelligence doesn’t do that kind of advanced prediction very well at all.
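To put rough numbers on that reaction-time gap, here’s a back-of-the-envelope sketch in Python (the 45 mph speed and the specific latency figures are illustrative assumptions, not data from this incident):

```python
# Rough comparison of distance covered during reaction time alone.
# Speed and latency figures are illustrative assumptions, not incident data.

speed_mph = 45.0
speed_mps = speed_mph * 0.44704      # mph -> metres per second (~20.1 m/s)

human_reaction_s = 0.200             # ~200 ms for an alert human driver
machine_reaction_s = 0.005           # a few milliseconds for a sense/compute loop

print(f"Human covers   {speed_mps * human_reaction_s:.2f} m before reacting")
print(f"Machine covers {speed_mps * machine_reaction_s:.2f} m before reacting")
```

Of course, shaving those few metres of reaction distance only matters if the system anticipates the hazard in the first place, which is exactly where the prediction problem comes in.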

It may be ‘unrealistic’ to expect automated vehicles to have no fatalities because no matter how foolproof you make a system humanity will always build a superior class of idiot, but autonomous vehicles should be able to avoid ‘normal’ accidents of inattentiveness on the part of both operators and pedestrians to a degree of at least an order of magnitude or better. If they aren’t, the technology is not sufficiently mature, especially at the current demonstration level where there are only a few dozen vehicles operating in a handful of metropolitan areas. Using open streets and unwitting pedestrians as expendable test subjects to improve machine driving algorithms is an unacceptable risk by any calculus.

There is also another issue that no one has yet addressed in a very public way: unless automated driving systems are designed with built-in security that prevents them from being hijacked by devious intruders for nefarious purposes, they could be used as deliberate weapons against individuals or to disrupt public commerce and safety writ large. There are currently exactly zero standards for automated vehicles (or road vehicles in general) regarding cybersecurity protections against unauthorized remote operation, and as has been aptly demonstrated, it is possible for people to hack into peripheral systems of modern vehicles and access fundamental controls including brakes, acceleration, and other vital functions. If there is no verifiable standard for autonomously piloted vehicles then someone will most certainly find and use this exploit for their own purposes. We need a set of design and verification testing standards to assure public safety and security before autonomous vehicles are rolled out en masse.

Stranger

IMO, no, but because I’m going to be a jerk and fight the hypothetical. I reject the idea that there’s a situation where the choice is either lab conditions or kill 10% more pedestrians. For example, put in all the sensing controls but don’t actually control the car; let the human driver do it and compare their responses (and outcomes) to what the computer thought it should do in every case. Continue that until its success rate is better than a human driver’s. Tesla is using a similar approach, although they let the car be in control, which can result in loss of attention by the driver (with fatal results).
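A minimal sketch of what that shadow-mode comparison might look like (all names and the disagreement thresholds here are hypothetical; the point is that the planner’s output is only logged, never actuated, while the human drives):

```python
import time
from dataclasses import dataclass

@dataclass
class Decision:
    steering: float   # requested steering angle, in degrees
    braking: float    # requested brake pressure, 0.0 to 1.0

def shadow_compare(human: Decision, planner: Decision, log: list) -> None:
    """Log cases where the (non-actuating) planner disagrees with the human driver."""
    steering_gap = abs(human.steering - planner.steering)
    braking_gap = abs(human.braking - planner.braking)
    # Disagreement thresholds are arbitrary illustrative values.
    if steering_gap > 5.0 or braking_gap > 0.2:
        log.append({"time": time.time(), "human": human, "planner": planner})

# One simulated tick where the planner wanted to brake much harder than the human did.
events = []
shadow_compare(Decision(steering=0.0, braking=0.1),
               Decision(steering=0.0, braking=0.6),
               events)
print(f"{len(events)} disagreement(s) logged for later analysis")
```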

Your question is an interesting one and I’m not sure how I’d answer if forced to go with that premise.

…I’m going to fight the hypothetical as well. If you take a cursory look at Google’s self-driving car testing regime, their metrics, their compliance with local authorities and local laws, and compare that to Uber: well, there really is no comparison. Keep the cowboys like Uber off the road, and you’re going to have fewer deaths. I have every confidence that Google (for a variety of reasons) are doing their best to keep people safe. Uber? Not so much.

I would quibble with that hypothetical (driverless cars being 10% less safe to begin with) from a slightly different angle. The chance it’s actually less rather than more safe in favorable climates and conditions to begin with is far-fetched, IMO. Except for hacking, which is a significant if/but, but a somewhat separate issue. I don’t think it’s closely related to the quasi-political reputations of particular companies either.

The real issue is related to that kind of subjective opinion. If people who don’t really know much about it form subjective opinions that the accidents they hear about in the media are ‘too much’, they will oppose these systems. The systems wouldn’t have to actually be more likely to hit pedestrians than the real cross-section of often atrocious human drivers. Again, it seems far-fetched to me that they would be.

I mean climate in part literally. The reason these pilot programs have been instituted mainly in places like AZ is in part the prevailing weather. Also, autonomous systems are less likely to reach their limits in places with relatively nicer, smoother, more logically laid-out roads than in places with bad roads and constant construction. IOW, it’s quite plausible these systems are already fully practical and superior to human drivers, but only in limited domains. They aren’t as tied to particular routes as a streetcar system, but they don’t need to be able to deal with every conceivable condition to be practical in ride-hailing services in particular localities, or to be superior in safety to the human drivers of ride-hailing services, taxis, or average people driving their own cars in those same places.

The further off vision or dream of all personal cars driving themselves, everywhere, is a taller hurdle.

Here’s that area from the car’s point of view:

Link

Maybe she blended into those bushes and the car didn’t see her moving into the road until too late. It will be interesting to see what the root cause of this was. Self-driving cars are pretty far beyond the “don’t run over the person in the road” level. It would be very surprising if this were a failure to detect an object in the car’s path.

AVs are not limited by human anatomy. I know* that early Google cars used 360° cameras and laser rangefinders.

*I did an online Udacity class on AV software by Sebastian Thrun who was at the time the lead researcher on Google’s programme.

Thanks for that Google street view.

I see that there is a bicycle lane which moves away from the side of the road at that point. There is a cross-over point of bikes going straight, and cars turning right - just about at the place where the accident happened.

Since a bike was involved, I wonder whether the person killed was not a ‘pedestrian’ as all the media are reporting, but was rather riding a bike in the bike lane.

That would make a lot more sense. It’s a place where you have to use careful judgment and common sense, and react to a bike moving away from the sidewalk and further out into the road. This is exactly the kind of place where you would expect a self-driving vehicle to have problems.

If you look at my link, that area is also the start of the right turn lane. It’s possible that the car was turning right and hit a bike going straight. That would be a massive failure and, again, these self driving cars are way beyond that.

It should be kept in mind that although self driving cars have been involved in several major accidents, AFAIK, they have never been at fault.

We already have a set of standards for whether a driver is safe enough to be allowed on the roads, and a testing procedure to determine whether a driver meets those standards. Why can’t we just apply those standards to AI drivers? Take your self-driving car down to the DMV, and see if it can pass the driver’s license test. If it can, then it’s safe enough. Or rather, if it can but it’s still not safe enough, then the license test is too loose, and we need to tighten it up for human drivers, too.

Yes, that’s what I said.

I don’t believe for a second that ‘these self driving cars are way beyond that’. I’d be willing to bet that the car was at fault.

From the article I linked:

Missy Cummings, a robotics expert at Duke University who has been critical of the swift rollout of driverless technology across the country, said the computer-vision systems for self-driving cars are “deeply flawed” and can be “incredibly brittle,” particularly in unfamiliar circumstances.

Probably it didn’t ‘notice’ the bike lane moving away from the sidewalk, or its logic didn’t deal correctly with it, or both.

No, she was not riding the bike:

I’ll wait for further information, and hopefully the car’s video of the incident. First reports of accidents are notoriously inaccurate.

What happens in cases like these is that a journalist asks a police officer what happened, and he makes some kind of snap judgment: “It looks like it might have been such-and-such, but I’m not sure”. Next thing, all the papers are reporting “It was such-and-such.”

But you’re not really waiting for further information. You’re making a lot of guesses on what happened, what failed, and who or what is at fault.

Here’s the latest at this time:

ETA: Based on that, it sounds like the woman entered the road from the median, which in that area has a lot of trees and bushes.

Because the primitive machine intelligence that operates a vehicle literally cannot answer natural-language questions on a driver’s licensing test. It isn’t literally conscious, or even sentient; it follows a set of largely heuristically-defined rules about interacting with the real world as delivered through the sensor suite on the vehicle, and does so with about the same level of awareness as a game console.

The dirty truth about “artificial intelligence” research is that while we’ve made pretty good strides in terms of making machine cognition systems adapt to new information through the use of neural networks and other heuristic methods, they don’t work anything like the way brains function and don’t follow rules that we’d expect even toddlers to learn. What is needed is a set of verification requirements that assesses a hypothetical system’s ability to recognize a road hazard and respond accordingly for a variety of common situations. This won’t prevent people from deliberately throwing themselves in front of vehicles, or deer from running into the middle of the road and toward headlights (something I’ve personally experienced), nor the unseen road defect or ‘invisible’ black ice, but it at least assures that when an autonomous vehicle sees a pedestrian lean toward the road as if to try to run across it, it anticipates and avoids an accident, slowing or otherwise evading impact just as an experienced human driver learns to do. And these kinds of scenarios can be simulated with high fidelity either on a closed track or within a hardware-in-the-loop (HITL) simulation, just as we do with space launch vehicles and strategic weapon systems.
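As a toy illustration of that kind of scenario-based verification, here is a hypothetical test that replays a “pedestrian leaning toward the roadway” scenario against a stub planner and checks that braking begins at a safe distance (every function, distance, and threshold is invented for the sketch; a real HITL rig would drive actual vehicle hardware with simulated sensor feeds):

```python
# Hypothetical scenario-based test: does the planner start braking when a
# pedestrian at the roadside leans toward the travel lane? All functions,
# distances, and thresholds below are invented for illustration.

def plan_speed(current_speed_mps, pedestrian_distance_m, pedestrian_leaning):
    """Stub planner: shed speed early if a nearby pedestrian looks likely to cross."""
    if pedestrian_leaning and pedestrian_distance_m < 40.0:
        return max(current_speed_mps - 3.0, 0.0)   # begin slowing
    return current_speed_mps

def test_leaning_pedestrian_scenario():
    speed = 17.0            # ~38 mph
    distance = 60.0         # metres to the pedestrian
    braked_at = None
    # Step the scenario forward in 0.1 s ticks.
    for _ in range(200):
        leaning = distance < 45.0              # pedestrian starts leaning at 45 m
        new_speed = plan_speed(speed, distance, leaning)
        if new_speed < speed and braked_at is None:
            braked_at = distance               # record where braking began
        speed = new_speed
        distance -= speed * 0.1
        if distance <= 0:
            break
    assert braked_at is not None and braked_at > 20.0, \
        "Vehicle must begin slowing well before reaching the pedestrian"

test_leaning_pedestrian_scenario()
print("Scenario passed: braking began at a safe distance.")
```

A verification standard would presumably demand a large library of such scenarios (occluded pedestrians, cyclists merging, sensor dropouts) rather than a single toy case, but the pass/fail structure is the same.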

Stranger

The human driver is there to stop the autonomous car afterwards and call 911.

Seems unlikely the car would stop otherwise. It didn’t see the pedestrian.

The public needs to realize autonomous cars will have lower accident rates. They won’t be perfect. Accidents will happen and people will get hurt or die.

The hope is accident rates will decrease sharply compared to human drivers.

I made a guess at what happened, based on the facts available, and then waited for further facts to prove or disprove the guess.

**This is how all criminal investigation proceeds, and this is how all science works.**

In this case, it seems likely that my first hypothesis was wrong, based on Sergeant Elcock’s account. As more evidence emerges, other theories put forward here will be either confirmed or not. That doesn’t mean we shouldn’t put forward theories based on limited evidence - because in real-world events there’s never a 100% clear-cut case or 100% evidence available. It means that we must be willing to change our opinions as the evidence available changes.

“When my information changes, I alter my conclusions. What do you do, sir?”
    - John Maynard Keynes

These days I keep hearing that machines think, and if that’s true, in this case the machine seems to have caused the death of a human being because it ‘thought’ there was no human being there.

In my opinion, the threat artificially intelligent robots could pose in the future is not that they will become too smart to control, but that they will be employed to handle situations they aren’t smart enough to cope with.