IMHO the two safety systems should work together. Turning off the Volvo system is reckless.
https://m.sfgate.com/business/article/Uber-Disabled-Volvo-SUV-s-Standard-Safety-System-12782878.php
The point isn’t that we shouldn’t care, but let’s look at the bigger picture. 37,000 die on the roads and nobody says a word but one autonomous car kills a pedestrian (who was crossing illegally) and everyone loses their minds. There were three times as many traffic deaths as gunshot deaths last year in the U.S. but there has been no March For Our Lives to protest traffic deaths.
The point isn’t that we shouldn’t care, but let’s look at the bigger picture. 37,000 die on the roads and nobody says a word but one autonomous car kills a pedestrian (who was crossing illegally) and everyone loses their minds. There were three times as many traffic deaths as gunshot deaths last year in the U.S. but there has been no March For Our Lives to protest traffic deaths.
I’m not sure how many times I have to post this, but the death rate for pedestrians is currently 50 times higher for autonomous vehicles than regular ones. Yes, it is a single data point so the statistics don’t necessarily mean AVs are much more dangerous. But continuing to argue that AVs are safer because regular vehicles kill more people simply isn’t true.
(And people do say many words about reducing traffic deaths. And comparing vehicle deaths to gun deaths is a pointless rabbit hole that serves no purpose.)
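For a rough sense of where a figure like “50 times” could come from, here’s a back-of-envelope per-mile comparison. Every input below is an assumed round number for illustration, not an official statistic; the real totals vary by year and source.

```python
# Rough per-mile pedestrian fatality rates (all figures assumed/illustrative).
human_ped_deaths = 6_000   # approx. annual US pedestrian deaths (assumed)
human_miles = 3.2e12       # approx. annual US vehicle-miles traveled (assumed)

av_ped_deaths = 1          # the single Uber fatality
av_miles = 10e6            # rough cumulative AV test miles to date (assumed)

human_rate = human_ped_deaths / human_miles   # deaths per mile
av_rate = av_ped_deaths / av_miles

print(f"human: {human_rate:.2e} deaths/mile")
print(f"AV:    {av_rate:.2e} deaths/mile")
print(f"ratio: {av_rate / human_rate:.0f}x")
```

With these assumed inputs the ratio works out to roughly 50x, but it is exquisitely sensitive to the AV mileage denominator, which nobody outside the companies knows precisely.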
As TroutMan points out, AVs certainly haven’t yet improved on the vehicle fatality rate.
But I don’t think it’s unfair to ask exactly why so many experts are so confident that (at least in the short term) fatalities will decrease. We’re hearing that there will be an XX% reduction in vehicle accidents, but that’s based primarily on the causes of current accidents. Introducing new systems and technologies invariably leads to new ways for them to fail. Perhaps we will see an overall increase in fatalities, but attributable more to equipment failure than human error.
I have yet to see an analysis that allows for any increase in fatalities due to system errors and failures and compares that to the decrease due to human error. Not many people want to hear, “You can expect a 40% decrease in fatal accidents due to human error, while there is a 30% increase due to equipment failure. You’ll be 10% ahead…which is great!”
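The arithmetic in that hypothetical pitch is just a net-percentage calculation. A minimal sketch, treating both hypothetical percentages as fractions of the same baseline (a simplification) and using the 37,000 figure from earlier in the thread:

```python
baseline = 37_000              # current annual US traffic deaths (figure from this thread)
human_error_cut = 0.40         # hypothetical: 40% fewer deaths from human error
equipment_failure_add = 0.30   # hypothetical: 30% more deaths from equipment failure

net = baseline * (1 - human_error_cut + equipment_failure_add)
print(f"net deaths: {net:,.0f} ({1 - net / baseline:.0%} improvement)")
```

Which yields 33,300 deaths, the “10% ahead” of the quote: a real improvement, but a much harder sell than “40% fewer accidents.”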
I’m not sure how many times I have to post this, but the death rate for pedestrians is currently 50 times higher for autonomous vehicles than regular ones.
First off, have any other AVs killed any pedestrians, or is it an Uber problem (which would make the statistic much uglier)?
One should also consider that a large fraction of the mileage for human-operated vehicles is in areas where the risk of pedestrian fatalities is low, whereas AVs are deliberately being tested in higher-risk areas. If you could make an adjustment to the statistics based on risk factors, AVs (but probably not Uber) would likely fare better. Except in rain, snow and ice, where, I am given to understand, they still suck a bit, or majorly.
I’m not sure how many times I have to post this, but the death rate for pedestrians is currently 50 times higher for autonomous vehicles than regular ones. Yes, it is a single data point so the statistics don’t necessarily mean AVs are much more dangerous. But continuing to argue that AVs are safer because regular vehicles kill more people simply isn’t true.
(And people do say many words about reducing traffic deaths. And comparing vehicle deaths to gun deaths is a pointless rabbit hole that serves no purpose.)
It’s not true that AVs are safer wrt pedestrian fatalities, but that’s only looking at a tiny subset of vehicle fatalities. It’s very possible, and actually true, that AVs are safer overall even if they cause more death in a subset of circumstances.
It’s not true that AVs are safer wrt pedestrian fatalities, but that’s only looking at a tiny subset of vehicle fatalities. It’s very possible, and actually true, that AVs are safer overall even if they cause more death in a subset of circumstances.
Hmm, seems that pedestrian deaths are a higher percentage of total motor vehicle deaths than I thought. So yes, if AVs are actually 50 times more deadly to pedestrians, then it wouldn’t matter how much savings, if any, you get in vehicle to vehicle crashes.
There are two issues with this: first, one crash is a very small sample size, so “50 times more deadly” is an extremely noisy estimate; second, Uber seems to be a particularly poor AV company, given that their cars need driver intervention about 430 times more often than Waymo’s.
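To put a number on just how noisy a single-event estimate is, here is a sketch (pure Python, no external libraries) of the exact Garwood confidence interval for a Poisson count, found by bisection on the tail probabilities:

```python
import math

def poisson_ci(k, conf=0.95):
    """Exact (Garwood) confidence interval for a Poisson mean,
    given k observed events, found by bisection on the tail CDFs."""
    alpha = (1 - conf) / 2

    def cdf(lam, n):
        # P(X <= n) for X ~ Poisson(lam)
        return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(n + 1))

    def solve(f, target, lo, hi, increasing):
        # bisection: find lam with f(lam) == target
        for _ in range(200):
            mid = (lo + hi) / 2
            if (f(mid) < target) == increasing:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # lower bound: P(X >= k | lam) = alpha   (increasing in lam)
    lam_lo = 0.0 if k == 0 else solve(
        lambda lam: 1 - cdf(lam, k - 1), alpha, 0.0, 10.0 * k + 10, True)
    # upper bound: P(X <= k | lam) = alpha   (decreasing in lam)
    lam_hi = solve(lambda lam: cdf(lam, k), alpha, 0.0, 10.0 * k + 20, False)
    return lam_lo, lam_hi

lo, hi = poisson_ci(1)
print(f"95% CI for 1 observed fatality: {lo:.3f} to {hi:.3f} expected events")
```

With one observed death, the 95% interval for the underlying expected count runs from about 0.025 to 5.6, so a “50 times” point estimate is statistically compatible with anything from near parity with human drivers to hundreds of times worse.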
There are two issues with this: first, one crash is a very small sample size, so “50 times more deadly” is an extremely noisy estimate; second, Uber seems to be a particularly poor AV company, given that their cars need driver intervention about 430 times more often than Waymo’s.
We don’t need more statistics to reject the null hypothesis that autonomous vehicles are not safer than human drivers; we need a protocol of objective performance tests which demonstrate that a particular autonomous vehicle design is capable of avoiding accident scenarios at least as well as a human driver. (In reality it should translate into far better safety numbers because the vast majority of accidents are caused by inattentiveness and slow or incorrect response, which are problems that autonomous pilot systems fundamentally shouldn’t have.)

And this is not some novel concept; we currently test all motor vehicles which operate on public roads against crashworthiness and occupant protection standards (NHTSA and NCAP), and an independent agency (IIHS) actually provides occupant safety and survivability ratings to the public based upon simulated crashes using actual vehicles. Letting autonomous vehicles of unknown and unassessed capability run “test learn” trials on public roads with inattentive ‘safety drivers’ is pretty much the definition of negligence on the part of both the state governments and the companies performing the trials.
Stranger
(In reality it should translate into far better safety numbers because the vast majority of accidents are caused by inattentiveness and slow or incorrect response, which are problems that autonomous pilot systems fundamentally shouldn’t have.)
I think this is a somewhat underappreciated point; or at least the consequences are underappreciated.
The first AVs that are significantly better than humans safety-wise will be somewhat below average in terms of basic skills. They will be awkward merging into traffic; they will fail to follow basic etiquette at a 4-way stop; they will fail to pick up on cues like “the basketball rolling out from behind a car”. As such they will–initially–be on par with a teenage driver.
But they will never get drunk, never get angry, never fall asleep, never text, never turn around to yell at the kids, never have to divert attention from one area to focus on another. They’ll react in tens of milliseconds to threats from all angles.
Since this latter category is the cause of far more deaths than basic skill deficiencies, AVs will still be better. But they’ll be better in a different way and will sometimes fail on things that would be trivial for an average human driver to handle. It will be interesting to see how society and the law deals with this mismatch.
I mentioned in another thread that part of my job involves monitoring AV trends and potential impacts. Some of us had an impromptu lunch discussion today, and one person wondered whether (somewhat to Stranger’s comments) the heuristics Uber’s vehicles had arrived at had been too dependent on daytime testing, and were thus biased toward using the visual cameras for guiding the car instead of the lidar and radar.
That seems plausible, given how sloppy they’ve been.
I think this is a somewhat underappreciated point; or at least the consequences are underappreciated.
The first AVs that are significantly better than humans safety-wise will be somewhat below average in terms of basic skills. They will be awkward merging into traffic; they will fail to follow basic etiquette at a 4-way stop; they will fail to pick up on cues like “the basketball rolling out from behind a car”. As such they will–initially–be on par with a teenage driver.
But they will never get drunk, never get angry, never fall asleep, never text, never turn around to yell at the kids, never have to divert attention from one area to focus on another. They’ll react in tens of milliseconds to threats from all angles.
Since this latter category is the cause of far more deaths than basic skill deficiencies, AVs will still be better. But they’ll be better in a different way and will sometimes fail on things that would be trivial for an average human driver to handle. It will be interesting to see how society and the law deals with this mismatch.
The only thing I’d say is, I’m less interested in comparing AV to human drivers, and more interested in the comparison between AV and (human drivers in vehicles with ADAS), in particular collision detection AND automatic braking. ADAS is now plentifully available in cars under $30K, and isn’t particularly controversial…so the question is whether AV can significantly improve on ADAS’s benefits.
We’ve set a very difficult task for self-driving cars. They are expected to work within our existing infrastructure: a complex system of roadways built up since the 1920s.
I’m a programmer and rely heavily on adding keys to a file system or database. It helps me uniquely identify records and produce accurate reports. That often means the data entry staff have to enter that code that I use as a key.
The pedestrian problem could easily be solved by wearing a unique bracelet or arm band that the AI sensors could easily recognize.
That won’t happen because the public expects this technology to work seamlessly without inconveniencing them in the slightest way.
That’s a daunting task for any programming team. It lengthens development time and increases likelihood of error. I wouldn’t want to work on this project.
They will eventually get this tech working. It may push implementation back several more years.
The pedestrian problem could easily be solved by wearing a unique bracelet or arm band that the AI sensors could easily recognize.
That won’t happen because the public expects this technology to work seamlessly without inconveniencing them in the slightest way.
There’s the inconvenience, and there’s also the fact that requiring every pedestrian to wear something like a bracelet all the time (or be plastered by a self-driving car) is completely unworkable. What if, for instance, I walk out of the house without this bracelet because I don’t expect to be a pedestrian that day, but end up being one for some reason?
And remember that the collision avoidance system available in many non-self-driving cars (including the Volvo used in this case) would probably have been able to stop the car in time. And those systems don’t rely on people wearing some bracelet/beacon to identify themselves.
There’s the inconvenience, and there’s also the fact that requiring every pedestrian to wear something like a bracelet all the time (or be plastered by a self-driving car) is completely unworkable. What if, for instance, I walk out of the house without this bracelet because I don’t expect to be a pedestrian that day, but end up being one for some reason?
And it won’t avoid other moving objects that don’t happen to be wearing this magical bracelet, like dogs, cows, or deer. Trying to solve the problem of collision avoidance by putting the obligation on the rest of the world to stay out of the way or identify itself to an oncoming vehicle is the classic case of an engineer or programmer attempting to address a challenging problem in one module by redesigning the rest of the system, or in this case, the entire world. Autonomous pilots are by definition supposed to be able to recognize and deal with real-world hazards in the “complex system of roadways built since the 1920’s”; otherwise they are worthless as a replacement for human drivers, because we are not literally going to rebuild reality to accommodate a deficiency in the technology.
And to be clear, this isn’t a problem with autonomous piloted vehicles as a concept; it is a problem with one apparently inadequate system deployed in haste by a company with the attitude that being first to market trumps all other concerns, especially that of public safety. There is no reason to take this specific incident as evidence that collision avoidance is any kind of insurmountable problem, particularly given that technology already exists to perform this on human-piloted vehicles without driver intervention.
Stranger
I only mentioned a bracelet as an example that can’t be used. It’s an easy fix for programmers, but not practical in implementation.
Self-driving cars have to operate in the world as it already is. Roads aren’t going to be modified, and the public’s habits won’t change.
I’m impressed by how well these cars operate. There are still more years of tweaking and testing ahead before the technology can be fully rolled out.
And to be clear, this isn’t a problem with autonomous piloted vehicles as a concept; it is a problem with one apparently inadequate system deployed in haste by a company with the attitude that being first to market trumps all other concerns, especially that of public safety. There is no reason to take this specific incident as evidence that collision avoidance is any kind of insurmountable problem, particularly given that technology already exists to perform this on human-piloted vehicles without driver intervention.
Stranger
And whose lead developer had to be fired for stealing technology.
New electronic products always have more failures at introduction. And counting the death rate now is like refusing to fly because lots of test pilots crash. In this case, alas, the “test flights” are happening above populated areas.
Honestly, the last thing in the world I’d expect to see at 2AM is a woman pushing a bicycle. The roads are deserted and my focus is getting home after a long day and to bed.
How about at 10pm on a street with bus service, around the corner from a light-rail station, when this incident actually happened? Still not expecting her? :rolleyes:
They may roll the sidewalks up where you are, but I’ve been the person crossing the street at 10pm (or as late as 1am from the last train) walking or riding my bike from the suburban train station, and I didn’t think I was forfeiting my life by doing so. Also, I often see people walking bikes across main streets, because some bike safety & training programs tell riders to do so.
If you can’t focus on driving as well as getting home because it’s late, leave your car where it’s parked and take a (human-driven) Uber home.
But they will never get drunk, never get angry, never fall asleep, never text, never turn around to yell at the kids, never have to divert attention from one area to focus on another. They’ll react in tens of milliseconds to threats from all angles.
And they absolutely will not stop, EVER, until you are… oh, sorry. Wrong thread.
But the real meat of the test is the practical portion, where you get behind the wheel and have to demonstrate actual driving. That’s the part that humans stress out over, and that’s the part that’s hard for computers. And so that’s the part that, if the computers actually can pass it, they should be allowed on the road just like anyone else who can pass it.
The practical portion is the one where my first attempt in Florida was a fail for not knowing what a “three-point turn” was. Tests in Spain don’t involve telling you what specific maneuver to perform, just what to do (“turn around”).
In order to subject a car to “the same tests as a human”, the car would need to be able to recognize “three-point turn” and “turn around” and a myriad other expressions, in different languages, accents and voices. That’s a problem IT folks have been working on for a while but it’s a completely different one from being able to get from A to B legally and without getting into any avoidable accidents.
Given that Uber has demonstrated time and again that they don’t give a shit about legality, I don’t see any reason to trust anything they develop.
I mentioned in another thread that part of my job involves monitoring AV trends and potential impacts. Some of us had an impromptu lunch discussion today, and one person wondered whether (somewhat to Stranger’s comments) the heuristics Uber’s vehicles had arrived at had been too dependent on daytime testing, and were thus biased toward using the visual cameras for guiding the car instead of the lidar and radar.
A lead engineer at Uber wants to do away with lidar and rely on visual cameras for cost reasons, so your suspicion of bias seems very possible.
But that can’t be entirely it in this case. I’d expect the car to have at least slowed before impact even if relying entirely on visual cameras, but there was no reaction.