Self driving cars are still decades away

Turning it off 1 second before the crash seems reasonable. It’s already too late. Who knows what it would try to do after the crash if left engaged.

Agreed, but subsequently trying to spin that situation as “the car wasn’t on autopilot when the accident happened”, as other posters have noted, is not reasonable.

Well, it may be “reasonable” from the point of view of the car company trying to cover its ass, but not in terms of trying to accurately describe the factual circumstances of the accident.

If an autopilot disconnects one second from a crash, it shouldn’t be on in the first place. Your chances of realizing that the autopilot is off, gaining situational awareness, and taking over to avoid a crash are approximately zero. It might as well just drive you into the crash.

Any autopilot that takes you to within one second of a crash and disconnects is a failed system, not ready for roads. Autopilots have to be able to autonomously avoid the same hazards as people do, without forcing the driver to take control that quickly. Or, they need to recognize an upcoming problem area and alert the driver to take over many seconds before there is a potential threat.

They already have that, in terms of visual (flashing lights) & audible (siren) ‘announcements’. Eventually, all vehicles, along with cyclists & pedestrians, will have two-way transponders; there won’t be any more traffic lights or stop signs because all vehicles will be able to seamlessly cross within inches w/o striking each other. We’re still many years from that happening, though.
There’s a lot of apparatus out there, much of it owned by rural VFDs; there will be significant cost to retrofit all of it with what you’re talking about.

My attention level to the road in front of me is very different when I’m in the right (pax) seat vs. left (driver’s) seat. I might be looking at something as we pass it. Although I’ll use it for a few mins/hour on a long drive to rest my leg, I find even using cruise control makes me less connected to driving & therefore I don’t use it much. I’d think it’s harder to keep the same level of attention & vigilance when you don’t need to do anything than when you do, & having to take over at a fraction of a second’s notice, when you should have already recognized the issue & started to take action, would be tougher.
The states I do the most driving in have ‘move over’ laws - you must move over, or, if that’s not possible, slow down when passing vehicles with flashing lights (including tow trucks that aren’t true emergency vehicles). I can sometimes see those lights, many seconds ahead, before I know if they’re even in the lane or on the shoulder. I can also determine that they’re on my road, not on the overpass I’m about to drive under, even though that may look to be straight ahead, & can start taking action (check the left lane to see if I can merge) well before it’s an issue. IF the autopilot could somehow alert you before it turns off so that you can reengage focus, determine what is going on & what action you could take, that would be wonderful; however, a system that gives up so late that even an engaged person can’t reasonably get out of said situation when it shits itself is not ready for prime time, IMO.

The driver never should have lost situational awareness, at least with current systems.

Yes, all of the in-car warnings and click-through agreements from Tesla loudly proclaim that it is in beta and must be supervised, that the driver must pay attention all of the time and be prepared to take control at a moment’s notice. The problem is that Musk’s tweeting, and other hype, suggests it is better than that, and people misuse what is there.

Yeah, but that’s only good for line of sight, reflections, and echoes, with the last two often causing confusion.

I do think that we won’t see real level 5 (or whatever) self driving until every vehicle is self driving with positional transponders, and they all talk to each other.

All that’s really necessary for a positional transponder is a device with a GPS and an always-on data connection. Fortunately, economies of scale have made devices like that pretty cheap. Yes, this will be an extra expense across the board, but probably on the order of low thousands per vehicle, if that, not tens of thousands.
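To make that concrete, here’s a rough sketch of about all a transponder would have to do: read a position fix and broadcast it with an identity on a regular schedule. Everything below (field names, the 10 Hz rate, the stubbed GPS and data link) is made up for illustration, not any real V2X standard:

```python
import json
import time
import uuid

# Hypothetical beacon: the field names and 10 Hz rate are illustrative only.
VEHICLE_ID = str(uuid.uuid4())

def read_gps():
    # Stand-in for whatever GPS receiver the device ships with.
    return {"lat": 0.0, "lon": 0.0, "speed_mps": 0.0, "heading_deg": 0.0}

def broadcast(payload: dict) -> None:
    # Stand-in for the always-on data link (cellular, DSRC, whatever).
    print(json.dumps(payload))

while True:
    fix = read_gps()
    broadcast({"id": VEHICLE_ID, "t": time.time(), **fix})
    time.sleep(0.1)  # ~10 position updates per second
```

The hardware for that is commodity stuff, which is why a low per-vehicle cost seems plausible; the hard and expensive part is agreeing on the protocol and retrofitting everything already on the road.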

It is, and that’s one of the reasons driving with full self driving in the city is really a chore, but that’s where the technology is at the moment. It’s different on the freeway, or even just on city streets where you’ll go straight for a long time.

Anywhere there are intersections, turns, and such, it really does take more effort than just driving myself. Normally, I pay attention to what my car needs to do, and then I make it do that. With full self driving beta, I need to pay attention to what my car should do, what it actually is doing, reconcile the two, and make corrections as necessary.

(TL;DR: see the thread title)

For me, it’s Autopilot that enables me to recognize issues earlier. I don’t have to spend as much cognitive load in keeping my car in the lane and maintaining my speed. I can spend much more time looking far ahead for potential issues, like construction or emergency vehicles.

So for difficult areas, my cognitive load does not decrease, but Autopilot is making me safer by giving me more time to look for problems. In easy highway situations, with lots of visibility and space between cars, looking far ahead doesn’t take much effort, so Autopilot is a stress reducer there.

I’m aware that many people are prone to risk homeostasis–i.e., they take any safety improvement as permission to engage in riskier behaviors–but I’ve always made an explicit effort to not fall into that trap.

If it could do that, it could react to it normally and it wouldn’t need to shut off. That’s the unfortunate problem with all driving aids as they stand now.

Yeah, one saving grace of car “autopilot” is that since it is already sitting on wheels on the surface and you can’t send it into an irrecoverable stall if you read the instruments wrong, there should be no disincentive for an engaged driver to proactively override it and hand-drive if they notice a situation up ahead that may need some active management. Of course that requires someone who even with “autopilot” is in a defensive driving mindset.

But sure, mentioned many times before: part of the problem is Tesla’s puffery that pretends we have true autonomous driving when we don’t.

I don’t find parking to be a challenge and I would pay very little for this feature. I would pay a lot for a car (let’s say, a $20,000 premium) that could travel to destinations hundreds of miles away while I slept in reasonable comfort.

I’ve been wondering about this for a long time and I suspected this was the case in some collisions.

Or it could remain on and brake hard. Perhaps it brakes hard anyway under automatic emergency braking protocols rather than self-driving protocols, so there is no effective difference in the car’s response, and the only thing that remains is a regulatory and public disclosure gap about what accidents occurred “when self driving was engaged.”

“Autopilot” is a terrible name for the system because the public misunderstands what autopilot in planes does. If Tesla were willing to limit the marketing and sales of Autopilot equipped cars to licensed commercial pilots who are type certified for planes that have autopilot systems, then I would say it’s a good enough name.

Not within my lifetime. There will still be toddlers and there will be people who refuse to carry transponders for privacy reasons, which I fully understand.

Well, this is nice:

Given the ‘behavior’ I’ve seen on the highways of Teslas that were clearly on Autopilot (difficulty lane tracking, sudden radical adjustments) and videos of Autopilot navigating urban streets plus actually having been hit by an Auto Park-ing Tesla, I’m surprised it is this low.

Stranger

Not to mention wildlife and natural hazards that an ‘autopilot’ system will have to navigate around like a human driver does. The notion that we’ll just put an IFF beacon on anything the autonomous vehicle has to avoid is a systems engineer’s solution to a product that can’t actually meet the requirements of operating in the real world.

Stranger

Low? High? It’s a useless standalone number without some sort of normalization, like miles driven. It sounds extremely low, given that there are >1M Teslas in the US and something like 6 million total crashes a year, though without more data it’s impossible to know. Likewise, the claim that 70% of the ASDS crashes were from Tesla is not useful without knowing how many miles, and in what situations, the Teslas were using ASDS.
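For what it’s worth, the normalization itself is trivial; the missing piece is the denominator. A quick illustration (the US-wide figures below are rough approximations, and the ASDS mileage is exactly the number we don’t have):

```python
# A crash count means little without a mileage denominator.
def crashes_per_100m_miles(crashes: int, miles: float) -> float:
    return crashes / miles * 100_000_000

# Very rough US-wide figures: ~6M crashes over ~3.2 trillion vehicle-miles a year.
print(crashes_per_100m_miles(6_000_000, 3_200_000_000_000))  # roughly 190

# For the 273 ASDS crashes we'd need the miles driven with the system engaged,
# which the report doesn't give; plug in the real figure if it's ever published:
# crashes_per_100m_miles(273, asds_miles)
```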

That’s certainly the model used in jet operations.

The autopilot frees us up to pay attention to the bigger picture farther into the future while HAL fusses with the moment-to-moment details. Which can, of course, be misused into us paying no attention at all. Though not napping (that I’ve seen).

If we’re in cruise and both pilots are paying less attention than we should be, and the autopilot does something stupid, we figure 5 to 15 seconds to fully orient, take over, and recover a semblance of normalcy.

Fortunately, in all but the most extreme circumstances in cruise, those 15 seconds are available before disaster strikes. So being inappropriately inattentive at the controls is almost always a recoverable error.

Conversely, in a car you’re never, or almost never, 15 seconds away from a collision if the car does something dumb. Including something subtle like it just going straight at constant speed when some other maneuver is called for. In a car your available recovery time is more like 3 seconds at best, and often less than that. IOW, there is substantially never a time in a moving car when you can be autopiloting inattentively and not be running a major uncontrolled risk.

When we in jets are down near the ground, where the time available to safely recover from an autopilot screwup is on the order of the same very few seconds a car affords all day every day, both of us are on the edge of our seats glued to the gauges & the view outside and concentrating fully and undistractedly on the task at hand. With both hands of whoever is actively flying glued to the primary controls and an itchy finger poised on the disconnect button.

Conclusion: if cars can’t be built so the “autopilot” never malfunctions, it can’t safely be used at all. Nobody, and I mean nobody, can maintain that level of attention and required reaction time indefinitely while being one step removed from being the actor in the control loop themselves.

Note that throughout this essay “malfunction” includes any time the system’s model of the world differs materially from the real world itself, whether or not the system recognizes it’s in a discrepant condition. Since the human has no insight into that internal quasi-AI model, this means you cannot effectively monitor its “thinking” in advance of its acting on that “thinking”.

IMO Tesla’s system should be disabled by regulation. As should most of the others. When the manufacturers are confident enough to build a car whose driver controls only operate below 5mph for parking and otherwise the driver is 100% a passenger from start to finish, then and only then, will they have built a system reliable enough that people, pros or amateurs, can be safely allowed to have it.

The alternative is exactly what we’ve gotten: A system that cannot possibly be good enough to be trusted, and cannot possibly give enough warning of impending disaster to the user to give them time to take over safely. So it’ll just keep killing people. Both those trying and mostly failing to pay close enough attention, and those not paying attention at all.

I believe General Motors’ Super Cruise self-driving technology includes eye tracking, and if the driver isn’t paying attention to the road, it first warns the driver and then, I think, shuts down the auto-drive feature.

I’d argue though that the same is true without the automation in the control loop. Humans simply can’t pay 100% attention to a task for hours at a time. There will be gaps. When these gaps in attention overlap with an unexpected event, a crash happens. By any reasonable standard, people shouldn’t be allowed to drive at all, but it’s a little too useful.

The baseline for driving is a bloodbath. It’s a 9/11… just about every month, in the US. Nearly 40,000 people per year. The recent dataset only has 5 fatalities tied to Teslas using Autopilot.

Automation in driving does have different failure modes than without, and probably a good portion of that is related to the points you raised; i.e., the short timeframe in which to recover from an error. But if between errors it is covering for significant deficiencies in pure human driving, that may still be a big win (and seems to be, based on the numbers, but the data we have so far is of poor quality).

I readily grant both points.

Earlier in this very thread I offered the thought that once we do get it working well, it’ll be a godsend. It’s just the uncanny valley (or really “unsafe valley”) in the middle that’ll be the problem. And we’re just starting to descend the close side of the valley. Getting the rest of the way down, across the floor, then up the other side will be a slog. A bloody & litigious slog.

Others made the points I’d make on your big post, but I’ll add one more explicitly: people are bad drivers. As the saying goes, perfect, good, enemy, etc. If self driving handles 95% of cases, and people handle 80% of cases, self driving may be a big win even when it’s not perfect. The comparison that should be made is not to a perfect human driver, but a human driver holding a phone.

It’s the “litigious” part that’s also a problem. No way do I want to indemnify self driving companies from accidents, but I also don’t think they should be held responsible for mistakes made by the driver in command. I don’t know what the balance is.

For now, it really is similar to the airplane autopilot. On surface streets (“near the ground”), where things are very complex, self driving requires tons of hand holding and driver attention.
When the car is using self driving, I have to babysit it constantly. In most cases it’s easier and less stressful to just drive myself.

On freeways (“cruising altitude”) self driving does a great job, and frees me up from speed and lane keeping, to devote more attention to my surroundings.

From what I’ve seen, GM’s Super Cruise and Ford’s Blue Cruise both do eye tracking.

The Tesla full self driving beta also uses the interior camera on the 3/Y to track the driver. I experimented with it by tilting my head down but looking up so I could still see the road, like how you would hold your head if looking over your glasses. After a few seconds the car did warn me to pay attention.
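The general pattern these driver monitors seem to follow is a simple warn-then-disengage timeout on gaze. A minimal sketch, with made-up thresholds and stubbed inputs (this isn’t anyone’s actual implementation):

```python
import time

# Hypothetical thresholds; the real systems' values aren't public.
WARN_AFTER_S = 3.0
DISENGAGE_AFTER_S = 8.0

def driver_monitor(eyes_on_road, warn, disengage, poll_s=0.1):
    """Warn when the driver's gaze leaves the road too long, then hand control back."""
    inattentive_since = None
    while True:
        if eyes_on_road():                     # e.g. gaze estimate from the cabin camera
            inattentive_since = None
        else:
            inattentive_since = inattentive_since or time.monotonic()
            elapsed = time.monotonic() - inattentive_since
            if elapsed >= DISENGAGE_AFTER_S:
                disengage()                    # give the wheel back to the driver
                return
            if elapsed >= WARN_AFTER_S:
                warn()                         # chime / flash the cluster
        time.sleep(poll_s)
```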

I have to say, that sounds extremely encouraging and good to me. I’d like to see the numbers broken down per passenger-mile vs. human driving to get a better sense, though. But 273? That’s it? That feels far better than I would have expected driverless technology to be at today, and it seems better than a human.

I agree that the current situation is not great. And it may well be that the current regulatory framework and litigation environment may not allow these ASDS systems to remain enabled, even if they can be shown to be a net improvement.

There’s another issue. We don’t yet know how to build systems without massive training sets. The Waymo approach is, IMO, a failure–they’ve been around for 13 years and although they have true driver-less cars, I would argue that they are not self-driving in a meaningful sense. They are geofenced to a tiny location that has been fully mapped in immense detail. It’s more like how cars navigate in video games, where they are essentially on rails.

Autopilot and the FSD beta don’t operate that way. They need only the image input and some coarse-grained map data for navigation (Autopilot doesn’t even need that). For all its limitations, it’s “driving” in the same sense that you can plop an experienced human driver in a car in just about any environment and they’ll probably manage to drive around without issue.

But this method requires an enormous amount of data–data that can only reasonably be collected by the public. Some can be simulated, and some can be collected without the ASDS being engaged… but the really useful data comes from where it was enabled already, and especially when the driver took over or it disabled itself.

It may not be possible to cross this uncanny valley of driving without it being in the public’s hands, at least to some extent. I think the FSD beta was handled reasonably well, where they have a (somewhat primitive, but seemingly effective) “safety score” that you have to meet to join the beta. Maybe baseline Autopilot could require maintenance of a similar score to continue using.
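As a toy example of how a gate like that could work (this is not Tesla’s actual Safety Score formula; the metrics, weights, and threshold below are invented purely for illustration):

```python
# Purely illustrative gating logic, not any manufacturer's real scoring formula.
WEIGHTS = {
    "hard_braking_per_1000mi": -2.0,
    "forward_collision_warnings_per_1000mi": -3.0,
    "forced_disengagements_per_1000mi": -5.0,
}

def safety_score(stats: dict) -> float:
    """Start from 100 and subtract weighted penalties for risky-driving events."""
    score = 100.0
    for metric, weight in WEIGHTS.items():
        score += weight * stats.get(metric, 0.0)
    return max(0.0, min(100.0, score))

def may_use_assist(stats: dict, threshold: float = 95.0) -> bool:
    return safety_score(stats) >= threshold

print(may_use_assist({"hard_braking_per_1000mi": 1.0}))  # 98.0 >= 95.0 -> True
```

Tying continued access to a rolling score like that would at least keep the riskiest drivers out of the data-collection pool.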

This is key to me. My Chevy Bolt has simple cruise control that maintains the set speed for as long as it’s active. Its mental state is simple and understandable, and I trust it. I love driving with it because the car maintains speed while I handle everything else.

My Tesla Model 3 has cruise control that maintains speed but also looks for obstacles which require it to brake. Its “mental” state is unpredictable because it reacts to phantoms. So while the car maintains speed, I still have to handle everything else plus anticipate when the car will unexpectedly stop. It’s a net increase in workload for me.
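The contrast in “mental state”, in toy form (nothing here is from either car’s actual software; the perception input is just a stub with a small false-positive rate):

```python
import random

def simple_cruise(set_speed, current_speed):
    """Plain cruise control: the only state that matters is the setpoint."""
    return set_speed - current_speed      # positive = speed up, negative = ease off

def adaptive_cruise(set_speed, current_speed, obstacle_ahead):
    """Obstacle-aware cruise: same setpoint, plus a perception input the driver
    can't see into, including the occasional phantom detection."""
    if obstacle_ahead():
        return -5.0                       # brake, whether the obstacle is real or not
    return set_speed - current_speed

# Stub sensor with a 1% false-positive rate, purely for illustration.
phantom_prone_sensor = lambda: random.random() < 0.01
print(adaptive_cruise(70, 68, phantom_prone_sensor))
```

The first controller is predictable from the dashboard alone; the second one’s output depends on a perception state you can’t observe, which is exactly the extra thing you end up having to anticipate.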

Hence why I am dubious about accepting that figure as is. Anecdotally, I’ve seen enough Tesla vehicles with drivers clearly using “Autopilot” demonstrating unsafe and adverse driving characteristics (e.g. sweeping across lanes, suddenly braking or accelerating, switching lanes in front of an overtaking vehicle, et cetera), on top of the large variety of issues that various users have complained about, problems that other manufacturers providing conventional Level 2 driver assistance features seem to have adequately worked out. Of course, most conventional automakers understand the liability of fielding quasi-autonomous features and perform thorough testing on closed courses before releasing them to the market, rather than “pushing an update through the aether” that is at best a minimally tested beta release, because car accidents are not memory leaks, and a catastrophic injury has much greater implications than just having to reboot your computer.

Stranger