Self driving cars are still decades away

Thanks! Nice story!

The New York Times takes a ride in Chuck’s Tesla Model Y (Chuck is famous for some left-hand-turn videos from his neighborhood, which may have been posted somewhere upthread).

In fairness we know it has in-parking lot issues, but a lot of other things go wrong on this drive (the turn down a one-way street is particularly bad, and it makes them both laugh nervously).

Probably the most interesting thing out of this is Tesla engineers going to Chuck’s neighborhood to try to fix things. Good for the optics of his videos, but probably not much use for edge-case generalization.

There’s a Tesla driver who regularly posts here, can’t remember who, who I think has the best outlook on all of this – very good cruise control/simple neighborhood drive kind of technology, but be cautious otherwise.

A pair of studies from a government auto-industry partnership (PARTS) and the insurance industry (IIHS) show that advanced driver assist systems (ADAS) reduce crashes. ADAS includes things such as forward collision warning (FCW), automatic emergency braking (AEB), and lane keeping.

The results from the PARTS study are that cars with FCW and AEB have

  • 53% reduction in crash injuries
  • 49% reduction in number of crashes
  • 42% reduction in serious crashes

The IIHS study finds about the same.

The point of posting it here is to say that full self driving is not an all-or-nothing technology. There are increments along the way which are useful, and worth adopting in their own right.

I’m not sure that anyone in this thread has proposed the opposite. It’s just that the end point is very, very far away from those stepping stones.

Yeah, I don’t think anyone would argue that. I do feel like I’ve talked a lot about ADAS in this thread, though, with the point that I see it in itself as a desirable end, and perhaps the most we’ll be able to create for many decades.

It’s not really an end, though. There is a smooth path between incrementally better ADAS and self-driving. The “levels” of self-driving don’t actually represent quantized functionality (plus, it’s multidimensional–it can’t just be said that a system is at level 3.14). We may see no great leap in functionality, just ongoing progress until we go “oh, I guess this is pretty much self-driving”.

It’s an end if it is decided to be one. The goal of ADAS is different from the goal of self driving and there are functions a self driving car needs that ADAS, as it’s currently conceptualised, never will. You could make ADAS incrementally better for a thousand years but if you never implement the ability for the car to start driving from a standstill then you will never have a self driving car.

Is that the threshold? If I get in a car and the only required action is to press a single button, is that ADAS or self driving?

One of the main functions of a truly robust ADAS would be to take over in case the driver is disabled (medical issue, falling asleep). Safely getting the car from being in traffic to a place away from the road has exactly the same requirements as self-driving.

Hmm, looking further into ADAS I think you’re right. I was thinking it was clearly safety focussed but it seems that purely convenience functions, such as automatic valet, are also regarded as ADAS.

Edit: I suspect what @Maserschmidt meant was the subset of ADAS that involve safety interventions should be the end.

Maybe, but even strictly within the realm of safety I think the point stands. What about a car that intervenes on behalf of a drunk driver? Even if the human driver is ostensibly in control, there may come a point where the car intervenes totally, taking over all driving tasks when the human appears to lack control. It’s essentially self-driving at that point.

Yes and no.

I’ll liken it to the recently invented and approved Garmin Autonomí™ | Autonomous Flight Solutions.

This is an adjunct to their aircraft autopilot system that can & will land the airplane with the push of one button. The use case is Bob & his non-pilot family are flying along when Bob has a heart attack. Mom pushes the big red button and everyone arrives safely on the ground 20 minutes later. If all goes well.

In one very real sense the system is totally flying the airplane, from selecting the landing field to braking to a halt. At the same time in another very real sense it lacks a vast amount of capability that would be required for it to autonomously fly entire flights with the usual level of safety. By design.

IOW, it’s a huge safety step up from the family watching helplessly as the unguided airplane spins in, but it’s a huge safety step down versus normal manned flight. The fact it’s less than 100% foolproof is OK (or at least OK enough) since a) the alternative is so dire and b) for all the flights of all the planes equipped with the system, only a tiny, tiny percentage of them will ever use it. So even significant unreliability in that sub-system would not materially affect the total system safety experience of the whole fleet.

Back to cars.

An ADAS that brakes to a halt at the roadside when the driver shows evidence of unconsciousness / drunkenness and rarely hits something while doing so would be plenty good enough to be an increase in total road user safety. Or one that merely stomps on the brakes in the event of high closure rate on an obstacle close ahead, but still hits the obstacle albeit at a much lower speed.

But that same system’s inputs and logic may be far, far below the quality and reliability standards necessary to handle (nearly) everything (nearly) everywhere (nearly) all the time for (nearly) everyone.

Which is what “self-driving” is all about. There’s a vast gulf between “Sanding off the worst of my most egregious driving errors a few times in my lifetime” versus “Drive me to and from work every day while I read a book.”

There’s always going to be a next set of safety interventions. Now it prevents running into things, and drifting out of a lane. Maybe next-gen systems will prevent the driver from deliberately moving out of the lane and sideswiping an object. Then systems that enforce compliance with some traffic laws, for example preventing running red lights.

Those are both examples that current self driving tech handles, but typical ADAS doesn’t. At some point the systems may be reliable enough, and the interventions provide enough safety that they become a standard part of ADAS.

Oh, absolutely. My point is just that many of the steps necessary to cross that gulf are worthy ends in their own right. I seem to recall posts here, but maybe another thread, or just imagined, about the billions being wasted on self driving, because it isn’t here yet.

Yes, it seems like it is the ‘Fuzzy Wuzzy was a bear’ explanation I heard for fuzzy logic: you give Fuzzy one hair, he’s still not fuzzy. Two? Nope. 1,000? Maybe, if he’s a small bear?

Similarly, if you keep adding safety features to ADAS, at some point it effectively becomes self-driving.

I saved myself from an accident once by running a red light. I was waiting at the light, there was no cross traffic, and I saw the car behind me coming up fast, not braking. I accelerated through the red light and he skidded through it - but I avoided getting hit.

One of my best friends in school was killed on a motorcycle when he was rear-ended while waiting at a red light.

These edge cases are the real problem with autonomous self driving. It’s not hard to build a system that will keep a car between the lines or maintain a safe following distance from the car ahead. It’s the weird stuff that happens rarely to an individual (but happens all the time in a country of 330 million) that is going to cause the problems.

Defensive driving has saved my ass many times. Moving out of the way when I notice the person in the car beside me is not paying attention to the road. Noticing the person coming down the merge lane is arguing with someone in his car and not paying attention to the merging traffic. Getting ready to back out of a parking spot but stopping because the car behind me has its reverse lights on - but hasn’t moved. And like I mentioned, watching for drivers barreling down on you while you are stopped.

AI is terrible at this kind of fuzzy situation and you can count on them doing very dumb things (from our perspective) on occasion. And that’s all it takes. Even if self-driving is safer in terms of annual lives lost, the spectacular edge cases will be all over the news. If we go from 35,000 traffic deaths per year caused by fallible humans to 1,000 per year caused by AI, we’ll never tolerate the AI.

In the same way, we tolerate far more auto deaths than we ever would airplane crash deaths, for the same reason: We have the illusion that auto safety is within our control, but in an airplane you have to surrender control to others. When we do that, we want assurances of absolute safety.

Yes, this is pretty much exactly my experience with self driving. I always (and I’m sure in this thread) describe the AI as a teenage driver. Perfectly competent at many car control and rule following aspects of driving, but terrible at decision making and strategic planning.

(Aside, and related to the recent dash cam thread: In a number of dash cam videos the driver’s verbalization or horn shows they see the problem ahead, but they then behave in a way that makes it worse. Humans may be good at prediction, but are also frequently bad at deciding on the correct response.)

I think this has been discussed in this thread and others, but that is also something I’m worried about. I’m hoping that the recent confirmation of the safety of ADAS will let the safety systems creep in and reduce auto casualties, without scaring people away from them.

I have read lots of “Apple is working on a self-driving car” posts but no “Here is an Apple test car” description videos. Could someone refer me?

I’m a few days behind you but I wrote almost the exact same post less elegantly. The increment between layers of advanced safety systems that have proven themselves better and full self driving may eventually be so small, no one will notice the difference when it comes.

I haven’t seen any photographs, videos or descriptions of the supposed “Apple car”. And I’m skeptical of the idea that Apple is even going to sell a car. For one thing, they can’t just sell one model; some people want a sedan or a coupe or a small SUV or a large SUV or a pickup. And selling cars is a vastly different business than selling small electronic items.

I’m pretty skeptical too, but Tesla gets away with effectively just one model: the same platform with a sedan or SUV body plopped on top. The S/X are so low volume that they can be ignored.

The lineup of Android devices is way more diverse than what Apple has, whether size or shape or color or performance or whatever. But somehow Apple has captured an enormous chunk of the market with a very small number of models.