Self driving cars are still decades away

These accidents are highlighting a point I made in this thread a couple of years ago: the more cars do for us, the less skilled the average person is as a driver.

It’s extremely unlikely that most of these cars were actively using Autopilot. One thing that Autopilot is very good at is staying within lanes. Almost to its detriment, at times–often centering itself in the wide part of a merge lane, for instance–but it’s never going to just sweep across lanes, even when they’re faded and worn.

Lane changes must always be “approved” by the human driver. Its ability to sense overtaking traffic is limited at the moment, so the driver should always look. It can cut people off, but only if the human in charge tells it to. The fact that steering and speed are being handled makes it easier and safer to look over your shoulder.

Sudden braking (generally called phantom braking) is the only one of these that’s a known, persistent issue with Autopilot. It happens to me fairly frequently, though it’s generally not severe. Anyone following at a safe distance would not have any issue other than annoyance. It does need to be improved but it doesn’t seem like an actual safety issue.

Well, a couple of things to consider. First, Americans drive about 3.3 trillion miles a year, and it’s hard to know in comparison how many Tesla miles are driven using Autopilot (or at least a casual search didn’t turn up that number for me).

But second, and perhaps more important, Autopilot is mostly being used in conditions where crashes are generally less frequent for drivers anyway, i.e., highways and sedate roadways. No snowstorms or rainy switchbacks for Tesla, at least not for the most part.

We have lines on I-10 that have been repainted in different positions over and over as Caltrans does work on different lanes, and while they paint over the ‘old’ lines they are still distinct from the virgin asphalt. Human drivers mostly don’t seem to have a problem with this but it plays mad hell on driver assistance systems (most of which alert the driver and shut off) and Teslas seem particularly confused, darting back and forth between lanes unpredictably. I’ve had this happen next to me so many times that when I see any Tesla near me I back off and maintain a healthy distance because odds are good the driver is using Autopilot and the cars around it end up playing Skip To My Lou to avoid it.

There have been, of course, several notable instances of Teslas moving at high speed hitting parked or otherwise stationary objects because the computer vision system doesn’t recognize them, and Tesla decided to remove radar and not implement lidar due to cost. If you’ve ever dealt with computer vision systems the challenges are obvious: computers, even those with sophisticated ‘deep learning’ algorithms, just don’t experience the world the way our brains do, and they commonly make basic errors. More problematic are the demonstrated failures of Autopilot to drive the vehicle through city streets without herky-jerky navigation, and its near failures to recognize adult pedestrians crossing the street at normal walking speed.

The fundamental issue isn’t that the Autopilot system isn’t impressive for a purely visual autonomous piloting system (it certainly is), but that it isn’t really mature enough for what it is advertised to be, and the Tesla response to problems is to push yet another beta firmware build out to the world instead of doing thorough safety and functional testing. There is the larger problem with partially autonomous piloting systems in general: by relieving the driver of having to be attentive to normal driving tasks, they leave the driver unprepared to respond to emergency conditions when the system fails. Astute drivers understand the limitations and maintain alertness, but many drivers are manifestly not astute or attentive and allow the system to drive so they can read or watch a movie on their phone, leaving them vulnerable to common failures of the system.

This isn’t just a Tesla problem; it applies to basically any Level 3 or Level 4 autonomous system that isn’t robustly designed to fail gently, and at these levels there are so many ways in which it is difficult to manage a failure and transfer control to an alerted driver without increasing the hazard. This kind of quantum leap of autonomy that looks so easy in science fiction movies is incredibly hard in reality, and car manufacturers should not be using the general public as their open test group at manifest hazard to both drivers and pedestrians.

Stranger

This is one of the reasons I’m very glad that NHTSA is investigating phantom braking. This is a fundamental problem that Tesla needs to devote large amounts of resources to solving. In the nearly 4 years I’ve had my 3, the situation has improved, but phantom braking is still a routine occurrence.

For people who haven’t experienced it, phantom braking is very, very rarely something like panic braking. It is almost always a brief deceleration or brake application. Perhaps going from 70 to 68 or 65, and then whatever caused the issue passes and the car resumes speed. In most cases, I press the skinny pedal to override the deceleration.

Tesla releases quarterly vehicle safety reports, but none for 2022, so I don’t know what’s up with that.

Q4, 2021:

In the 4th quarter, we recorded one crash for every 4.31 million miles driven in which drivers were using Autopilot technology (Autosteer and active safety features). For drivers who were not using Autopilot technology (no Autosteer and active safety features), we recorded one crash for every 1.59 million miles driven. By comparison, NHTSA’s most recent data shows that in the United States there is an automobile crash every 484,000 miles.
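Just doing the arithmetic on the numbers in that quote (nothing beyond the ratios; take it for what it’s worth):

```python
# Miles per reported crash, straight from the quote above.
autopilot = 4_310_000      # Tesla, Autopilot technology in use
tesla_manual = 1_590_000   # Tesla, no Autopilot technology
us_fleet = 484_000         # NHTSA figure cited in the same report

print(f"Autopilot vs Tesla manual: {autopilot / tesla_manual:.1f}x")  # ~2.7x
print(f"Autopilot vs US fleet:     {autopilot / us_fleet:.1f}x")      # ~8.9x
```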

It would be real nice if they had a direct comparison of freeway+autopilot to freeway+human.

Yeah, I understand that. Hence the “per passenger mile” comment, although the “what type of passenger mile” bit you mentioned would be important, too.

Oops, yes you did say that, sorry about that.

Well, I can’t speak toward that section of I-10 since I don’t live in the area, but my experience is that as long as there was even minimal effort to cover the old lines (grinding down, painting black), Autopilot does not get confused.

Computer vision systems are my job (though I personally work on the inverse problem, while others in the company work on what we’re talking about here).

First, radar is useless for stationary objects. Several of the infamous cases happened with cars that still had radar.

Radar simply does not have the spatial resolution to distinguish a car from an overhead road sign. What it does have is accurate velocity information (via Doppler), and so it simply filters out stationary objects, assuming the remaining ones are vehicles. Even with this, phantom braking was an issue (not sure why, but I suspect that because the velocity information is radial, there is always some uncertainty in the ground-relative velocity). Nor is Tesla alone here; I can’t speak to the rates, but as far as I can tell all ADAS-equipped vehicles are prone to phantom braking at times.
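To make that concrete, here’s a toy sketch of the idea (nobody’s actual code, field names invented): any return whose Doppler velocity is consistent with standing still relative to the ground gets thrown out, whether it’s an overhead sign or a fire truck parked across your lane.

```python
import math

def keep_moving_targets(detections, ego_speed_mps, tolerance_mps=1.0):
    """Drop radar returns that look stationary relative to the ground.

    Each detection has 'azimuth_rad' (bearing to the target) and
    'radial_velocity_mps' (Doppler closing speed, negative = approaching).
    A stopped object dead ahead closes at exactly our own speed, so its
    estimated ground speed is ~0 and it gets filtered out.
    """
    kept = []
    for d in detections:
        # Part of our own motion projected along the line of sight to the target.
        expected_closing = -ego_speed_mps * math.cos(d["azimuth_rad"])
        ground_speed = d["radial_velocity_mps"] - expected_closing
        if abs(ground_speed) > tolerance_mps:
            kept.append(d)
    return kept

detections = [
    {"name": "parked fire truck", "azimuth_rad": 0.0, "radial_velocity_mps": -30.0},
    {"name": "slower lead car",   "azimuth_rad": 0.1, "radial_velocity_mps": -5.0},
]
print(keep_moving_targets(detections, ego_speed_mps=30.0))  # only the lead car survives
```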

AFAIK, the only consumer vehicle with LIDAR is a Mercedes; aside from that, it’s Waymo and some other groups that aren’t actually selling vehicles to the public.

It’s not apparent that LIDAR actually gets you anything. It is, at best, akin to a high-resolution radar. It’s less prone to false positives since it can at least distinguish cars from road signs, but it still requires vision to determine what it’s looking at. And interpreting the LIDAR signal itself is a vision problem. The resolution is so low that important objects (like pedestrians on the other side of an intersection) may only have a few pixels representing them.
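A rough back-of-envelope, assuming a generic spinning unit with 0.2° horizontal resolution and 32 beams spread over ~30° vertically (illustrative numbers, not any particular sensor):

```python
import math

distance_m = 40.0                 # far side of a wide intersection
ped_width_m, ped_height_m = 0.5, 1.7

horiz_spacing_m = distance_m * math.radians(0.2)        # ~0.14 m between points
vert_spacing_m = distance_m * math.radians(30.0 / 32)   # ~0.65 m between beams

print(round(ped_width_m / horiz_spacing_m), "points across,",
      round(ped_height_m / vert_spacing_m), "points tall")   # roughly 4 across, 3 tall
```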

Where LIDAR helps most is when you have “hi def” maps, enabling the car to accurately position itself relative to the environment and distinguish important objects from background clutter. But this is a dead-end approach outside of geofenced robotaxis (i.e., Waymo).

If you haven’t solved the vision problem, you don’t have anything. But if you have, you don’t need LIDAR. LIDAR could be an advantage in some narrow situations, like detecting a deer about to run onto the road at night, but we’re still far away from the point where these situations dominate the failure modes.

Incidentally, the FSD beta actually does do a remarkable job at night. I don’t use it for actual driving, but I do have visualizations enabled, and it sometimes detects pedestrians that I didn’t notice myself.

What does a Tesla do when visibility drops? Slow down? Tell the human to take over?

Yes, it tells the human to take over. If using navigate on autopilot, it will drop to just speed aware cruise control and lane keeping, and if visibility drops more, it will disengage all self driving.

I’ve had it do it in snowstorms and occasionally hard rain. There is warning; it’s not a sudden emergency disengagement.

Just to keep it clear (I’m not sure if these are official Tesla names), here is a glossary of Tesla features and levels:

  • automated safety features – emergency braking and accident avoidance
  • traffic aware cruise control – maintain speed, don’t get too close to the car in front
  • autopilot – traffic aware cruise control + lane keeping
  • navigate on autopilot – freeway only; autopilot, but it will also (optionally) change lanes and take exits
  • full self driving – autopilot on city streets, with stop/go at stop lights and stop signs
  • full self driving beta – the limited access beta feature that does turns, changes lanes, and all other stuff in traffic

Broadcasting location, speed, direction, & a foolproof way of knowing your vehicle’s overall length & that it hasn’t changed since your last drive.
Think: you’ve put your bike rack in the hitch mount, vs. towing your 4x6’ trailer, vs. towing your longer boat trailer, vs. something like a sculling trailer where the boats typically hang off the back end, or an 18-wheeler deadheading vs. carrying a trailer.

Oh, it’ll still need sensors for regular road obstacles like wildlife & occasional things like a fallen tree or a flooded road. The theory behind the transponders is that you won’t need right-of-way control devices (stop signs & lights) anymore, as all vehicles will be able to seamlessly cross at intersections w/o colliding. I didn’t say it was coming any time soon, though.

However, there needs to be some manual override to let the human park in certain conditions. Most summer weekends I’m at some event or festival, more often than not parking in some big grass field. I know how to park there even when there isn’t a human (or 10) directing you: first row against the edge, leave a driving lane, two cars deep, another driving lane, repeat, skip a space due to a large rock/shrub/plant. Oh wait, due to the shape of the field, I can’t head in against that car as it would take away the driving lane. I’d also think you’d need precision at the mechanic’s garage, & there aren’t sensors under the car to know where to stop on the lifting rack.

Some of that isn’t necessary. For the emergency vehicle transponder I don’t care about too much detail. “I’m a fire truck moving east on 2nd St at 38 MPH, and am 1200 feet from the intersection of 2nd and Main, where I will be turning north.” Now all of the cars listening know that if they are on Main south of 2nd, they can ignore it; if they are on Main approaching the intersection, they know whether to proceed through or stop; and the cars on 2nd in front of the fire truck know to pull over, while those on 2nd past Main don’t have to worry about it. As a more advanced behavior, some cars may reroute to avoid the destination, because if that’s where an emergency vehicle is going, it might be a bad place to drive right now.
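Purely as illustration, the broadcast and a listener’s first-pass decision could be as simple as this (field names and units made up for the example):

```python
broadcast = {
    "vehicle_type": "fire_truck",
    "street": "2nd St",
    "heading": "east",
    "speed_mph": 38,
    "next_intersection": "2nd & Main",
    "distance_to_intersection_ft": 1200,
    "turning_onto": ("Main St", "north"),
}

def react(my_street, my_position, msg):
    """Crude triage: pull over, clear the intersection, or ignore."""
    if my_street == msg["street"] and my_position == "ahead_of_ev":
        return "pull over to the right"
    if my_street == msg["turning_onto"][0] and my_position == "approaching_intersection":
        eta_s = msg["distance_to_intersection_ft"] / (msg["speed_mph"] * 1.467)  # mph -> ft/s
        return f"stop short of the intersection (EV arrives in ~{eta_s:.0f} s)"
    return "ignore"  # e.g. on Main south of 2nd, or on 2nd past Main

print(react("Main St", "approaching_intersection", broadcast))
print(react("2nd St", "past_intersection", broadcast))
```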

In the future, when only self driving vehicles are allowed on the road, then they will all communicate with each other, and say if they have a trailer or a mattress on the roof. Even now new trucks have all kinds of systems where you put in the trailer information so the computer knows how to handle trailer braking and sway. Of course, that’s all decades away.

Ahhh, I thought you were referring to the end game, not just for emergency vehicles.
Inputting the call address will get you most of the way there, but not necessarily the last couple of blocks. The issue is that not all fire equipment is going to the exact address of the call; sometimes the orders are to stage on a main street or in a parking lot & send manpower up, especially for subsequent alarms, or perhaps to hit the hydrant a block away.

We already have some of that in Opticom traffic lights. Have you ever seen a white light on the cross pole? That’s the feedback to the EV that the signal was activated. The EV sends out a visual signal, a fast white strobe pattern, that activates the light to turn red in every direction other than the one the EV is coming from, so the EV doesn’t need to stop or even slow down. By turning the light green in that one direction, even regular cars can go, freeing up the ‘traffic jam’ created by the red light.
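The preemption logic itself is about this simple; a toy sketch of the behavior described above (not the actual Opticom protocol):

```python
APPROACHES = ["north", "south", "east", "west"]

def preempt(ev_approach):
    """Give the EV's approach green, everything else red, and light the confirmation lamp."""
    signal = {a: "red" for a in APPROACHES}
    signal[ev_approach] = "green"            # EV (and regular traffic with it) flows freely
    signal["confirmation_light"] = "white"   # the white light on the cross pole
    return signal

print(preempt("east"))   # east gets green, the other three directions go red
```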

Maintaining a level of awareness high enough to be able to take over instantly while not actually driving is a nearly impossible task.

Are you going to put them on dogs? Children? Deer? A load accidentally dropped on the road in front of you? Debris from a blown semi tire?

Also, GPS is only guaranteed accurate to within 4 meters. If you are using GPS and thinking measuring the size of your vehicle matters, you are using the wrong tech. You couldn’t even reliably use GPS to tell what lane a car is in.

We are never going to be at a place where every possible threat to a car on the road is electronically tagged. If that’s where we need to be to get to level 5, it will never happen.

I’ve done this for thousands of miles. I also manage to drive without texting or watching TikTok, so maybe I’m special.

No, right now I want them on emergency vehicles so everybody knows where they are, and where they’re going, so we know ahead of time whether to pull over or ignore. Current GPS resolution is fine. Current push technology used to send messages to phones is fine. Interested receivers subscribe to notifications in the area. All of this stuff is already done on a huge scale, just not for this particular application.
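For the “subscribe to notifications in the area” part, something as crude as a topic per coarse grid cell would do. A hypothetical sketch (cell size and names made up):

```python
def grid_cell(lat, lon, cell_deg=0.01):
    """Quantize a position to a ~1 km grid cell; ~4 m of GPS error is irrelevant at this scale."""
    return (round(lat / cell_deg), round(lon / cell_deg))

def cells_around(lat, lon, radius=1, cell_deg=0.01):
    """The cell I'm in plus its neighbors -- the topics my car would subscribe to."""
    cx, cy = grid_cell(lat, lon, cell_deg)
    return {(cx + dx, cy + dy) for dx in range(-radius, radius + 1)
                               for dy in range(-radius, radius + 1)}

# The fire truck publishes its position to the topic for its current cell;
# my car is subscribed to the cells around it, so the alert gets delivered.
ev_cell = grid_cell(34.0522, -118.2437)
my_cells = cells_around(34.0548, -118.2400)
print("alert me" if ev_cell in my_cells else "ignore")
```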

In the imaginary “decades away” of the thread title, where self driving cars have become a reality, the cars will talk to each other. So a car up ahead can say “deer on the side of the road” and the cars behind it can adjust their speed accordingly. Or even better stuff, like “I want to turn left ahead”, and then oncoming cars might let an appropriate gap open up. It’s not a game of Marco Polo where the cars only have one sense. Use all the things.

A major problem with all the “transponders and cooperative communications” approaches is how to deal with malevolent actors. Rest assured a software tweak to turn your car into a “fire truck on a call” will be available very quickly.

Plus the fun with haxxors. Vandalizing the real world of traffic signage, signals, etc., is risky, time consuming, and local in scope. Vandalizing the same thing over the internet is good clean fun when armed with enough cloud computing resources.

Not easy problems to solve.

I’d argue that the whole approach is intractable, at least outside an L4/geofenced world. It’s not even clear to me, when I read these ideas, whether it’s vehicle-to-vehicle (V2V) with the cars coordinating amongst themselves, V2X (everything), or V2N (some network making the decisions). In any case, the amount of coordination/computing and communication power necessary, and the consequences of a sudden outage in a given location (which will happen), make it seem like a cute idea to me, but nothing that will be widescale.

A bit of searching will turn up stories of people using Opticom strobes to change lights. In one case nearby, the authorities noticed reports of lights changing when no emergency vehicle was on call. A bit of looking at logs (or something) and they were able to stake out a particular light and catch the guy with the illicit strobe.

You’ll have lots of dashcam videos of some random car forcing everybody to pull over, so impersonators will probably be caught even faster.

The same vulnerability exists for flashing lights and sirens. There is nothing physically preventing somebody from putting those on their car. Of course they’ll be in big trouble when caught.

A digital device may even be more secure. In order to talk to the central server, the transponder is going to need an OAuth token, or whatever. Sure there will be bugs and hacks, but patching a single central server (really a cluster, etc.) is easier than confiscating all blue LEDs. When taken out of service, transponders would be deauthenticated.
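A toy sketch of what the server side of that could look like (not a real auth stack; a production version would presumably use signed, expiring tokens):

```python
import secrets

class TransponderRegistry:
    """Issue a token per transponder, check it on every broadcast, revoke it on retirement."""

    def __init__(self):
        self._tokens = {}  # token -> vehicle id

    def enroll(self, vehicle_id):
        token = secrets.token_hex(16)
        self._tokens[token] = vehicle_id
        return token

    def is_authorized(self, token):
        return token in self._tokens

    def revoke(self, token):
        self._tokens.pop(token, None)  # "deauthenticated" when taken out of service

registry = TransponderRegistry()
token = registry.enroll("Engine 51")
print(registry.is_authorized(token))   # True: its broadcasts are trusted
registry.revoke(token)
print(registry.is_authorized(token))   # False: a stolen or retired unit is ignored
```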

The trick is to make sure things fail gracefully. Cars can’t require a transponder signal to believe something is an emergency vehicle, so the transponder is providing lots of extra information which is currently not available. No need for the existing warning mechanisms to go away.

Autopilot on a long straight boring highway… hell yes. Autopilot in a city? I’d be a nervous wreck.

Small quibble: cars talking to each other is a stupid way of doing things. You want them talking to the road.
This solves all kinds of authentication problems, gives a nice interface to traffic lights, a clear point where to direct information about road works, a clear point of contact for emergency services in case of the inevitable accident or “normal” incident.

IMHO stand-alone “self driving” is a technological dead end: building out infrastructure that supports “self driving” is much closer to something we can actually accomplish. It puts the intelligence outside the car, making for much more predictable behaviour of the cars, and it is not dependent on some kind of “AI” that hinges on firmware upgrades and different manufacturers’ ideas about how to interpret a stop sign. That will get us to a situation where you can merge onto the freeway and open the paper (or take a nap) much quicker.
The problem is that nobody is working on this. All development in the car space is toward stand-alone self driving, something that is simply not possible with the current state of technology. Reliable* sensors are nowhere near the level we need for full self driving.

*reliable: as in approved for safety equipment. I do not want to walk on a street where vehicles with some cobbled-together “pedestrian detection” system drive around. Cars right now are using systems absolutely not fit for this purpose. In an industrial setting those systems would never be approved. It boggles the mind that people are seriously considering allowing cars onto public roads that are considerably less safe than an automated forklift in a warehouse. The equipment used there is actually quite sophisticated, but is much too fussy to use on a public road.