Self driving cars are still decades away

I was thinking about the accident some more, and the control interface for the cruise control and autopilot seems like something interesting that hasn’t been mentioned. I have no idea how they got the car moving, or kept it moving, with no driver. Perhaps they started driving, engaged autospeed or autosteer, and then left the driver’s seat. As I said, if no butt in the seat doesn’t disable everything as soon as possible, then it’s a serious oversight. No butt in the seat of a stopped car will put it in park.

Anyway, some background. In the Model S, cruise control and autosteer are managed with a stalk on the left side of the steering wheel, below the turn signal lever. Pulling the stalk towards the driver once engages cruise control, and a double pull engages autosteer/autopilot/FSD (whatever is available on that particular car and at that particular location). Pushing the stalk away from the driver disables all automatic control.

Pushing up on the stalk a small amount adds 1 MPH to the cruise control set speed. Pushing up all the way adds 5 MPH to the set speed. Pushing down decreases the set speed in the same increments.

In Autopilot on a freeway, the speed can be increased up to 90 MPH or something. In autosteer or FSD on a surface street, the speed can only be increased to 5 MPH more than what the car thinks the speed limit is. If only cruise control is enabled (not autosteer or FSD), then the cruise control speed can be increased to 90 MPH or whatever. This, for my scenario, is the key point.
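Roughly, as I understand it, the stalk and set-speed logic works something like the sketch below. The mode names and constants are my guesses, not anything pulled from Tesla.

```python
# A sketch of the set-speed rules as I understand them; the mode names and
# constants here are guesses, not anything pulled from Tesla.
CRUISE_MAX_MPH = 90          # plain cruise control, or Autopilot on a freeway
SURFACE_STREET_MARGIN = 5    # autosteer/FSD on surface streets: limit + 5

def max_set_speed(mode, road_type, detected_limit_mph):
    """Highest speed the driver can dial in with the stalk in this mode."""
    if mode in ("autosteer", "fsd") and road_type == "surface_street":
        # Capped relative to whatever the car *thinks* the speed limit is
        return detected_limit_mph + SURFACE_STREET_MARGIN
    return CRUISE_MAX_MPH

def bump_set_speed(set_speed, stalk_push, mode, road_type, detected_limit_mph):
    """Small push = +/-1 MPH, full push = +/-5 MPH, clamped to the mode's cap."""
    increments = {"up_small": 1, "up_full": 5, "down_small": -1, "down_full": -5}
    new_speed = set_speed + increments[stalk_push]
    return min(new_speed, max_set_speed(mode, road_type, detected_limit_mph))
```

The point for my scenario is that last clamp: with plain cruise control engaged, it effectively isn’t there.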

So, the guy gets the car moving, puts it in autosomething mode, and jumps into the backseat. Then, perhaps because the car doesn’t have FSD, it doesn’t bother to follow the road, so somebody reaches over to turn things off with the stalk. From the awkward position in the back seat or the passenger seat, they manage to instead increase the set speed. Multiple times. So with each additional grab, the car is going 5 MPH faster, and is out of control. Perhaps the speed was increased on purpose.

Normally autoeverything can be disengaged by pressing the brake, but that’s not possible in this case.

What makes me suspect the car was never in autosteer is the 5 MPH over the speed limit restriction in that mode. It is absolutely possible that the car thought they were on a 65 MPH road, but I’ve never seen that error. I have seen the car think the speed limit is lower than posted.

Some positive self driving things to report. Twice in the last month I let my Tesla Model 3 drive up and down twisting canyon roads, and it did a perfect job. Left Hand Canyon and Boulder Canyon, for anybody familiar with the area. These are mostly two lane roads with speeds between 30 and 45 MPH. The car is limited to not exceed the speed limit +5 MPH.

This is something the car could not do several years ago. Back then, the car was fine on curves, but would bail out on switchbacks. Recently it handled the switchbacks with no issue. The car drives slower on the curves than I do when I’m in control. It is very timid, but no worse than a Prius with out of state plates.

This is a long way from true door to door self driving (which I still think is a long time away), but it does show substantive improvements in capability.

Eh. I can see a few use cases (reaching to grab something from the backseat quickly, getting a wallet/cellphone out from a back pocket) that would naturally remove a driver’s weight from the seat, and where having the weight sensor trigger an immediate autopilot abort while driving would make it more dangerous.

There’s little risk in erring on the side of caution for a stopped vehicle and automatically shifting to park when no butt is present in the seat.

Yeah, I was thinking more along the lines of the “no hands” warning. Immediately start the warning process, but give the driver some time to get back in position before aborting. Of course, if just like the “no hands” warning it can be dismissed by pulling or pushing down the autopilot stalk, then it would be easy enough to keep the car driving from the passenger seat.
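What I have in mind is roughly this sketch; the grace period and the hooks are made up, just to show the escalation.

```python
import time

# My own sketch of the escalation I mean; the timing and hooks are invented.
WARN_GRACE_SECONDS = 10  # how long the driver gets to get back into position

def monitor_driver_seat(seat_occupied, vehicle_stopped,
                        warn, abort_to_safe_stop, shift_to_park):
    """Poll the seat sensor and escalate when the driver's weight disappears."""
    empty_since = None
    while True:
        if seat_occupied():
            empty_since = None                # all good, reset the clock
        elif vehicle_stopped():
            shift_to_park()                   # stopped + empty seat: just park
        elif empty_since is None:
            empty_since = time.monotonic()
            warn()                            # start nagging immediately
        elif time.monotonic() - empty_since > WARN_GRACE_SECONDS:
            abort_to_safe_stop()              # grace period over: pull over
        time.sleep(0.5)
```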

I’d rather the car be driving as the driver fumbles with something behind the seat than nobody driving.

I saw a report on Axios about the technology General Motors was developing for its self-driving cars. To make sure that the person behind the wheel is ready to take over in case the car isn’t able to drive itself, it was monitoring the driver’s eyes, and when the driver turned to talk to the passenger, it first warned via beeps and then brought the car to a halt. Tesla could do something similar but they don’t seem to.

When we visited MIT’s autonomous driving lab in 2016(ish), they were working on this - basically driving around eastern Massachusetts with cameras watching the driver, and a big old CPU in the back doing real-time processing. Their success rate at identifying inattention was ~97% at the time…my guess is that it’s gotten much better since then.

You can also just detect if the person’s hands are on the steering wheel, and start warning a few seconds after their hands leave. Or ideally, use both eye tracking and hand detection.

But that still won’t guarantee that the car can release control when it’s confused and expect the human to just step in. Human factors aren’t like that. If the car is driving, and it’s pretty good at being able to drive for fairly long stretches, humans will not have their heads in the game. They’ll be daydreaming, or thinking about work, and the computer could never detect that.

If you want to maintain strict focus on the road, it’s easier to do that while actually driving. If you need people to remain focused on the road because the autopilot can’t always handle what’s coming, then there’s no use having an autopilot.

So long as cars can self drive for some period of time in the right conditions, we are going to continue to see accidents resulting from driver inattention at critical moments.

This is exactly right.

The problem is that people don’t maintain focus on the road while they are actually driving, which results in collisions. People are already daydreaming, playing on their phone, or nodding off now.

Will some level of self driving make it worse? I don’t think so. The limited self driving available now is much better at preventing rear end collisions than a human not paying attention.

Self driving is very loud when it knows a takeover is necessary: for example, when it doesn’t think the human is paying attention, when a giant bug splats in front of the camera, or something.

That leaves the instances when self driving is wrong, but the car doesn’t know it, so the human has to intervene. This has definitely caused fatal accidents, but it needs to be taken in the context of fatal accidents caused by humans not paying attention when there is no self driving at all.

I can think of at least one case where somebody was killed because their Tesla didn’t properly navigate a lane split. However, I can guarantee the high incidence of smashed crash cushions (I just found out that’s what they’re called) around here was not caused by Teslas on autopilot, but by regular human drivers unable to avoid a bridge pylon that jumped in front of them.

Yup. The trouble is that autopilot encourages that rather than creating some new problem. I suppose you could just have the car prompt you once in a while on long stretches. “You still paying attention, John?”

Technologically, can researchers now SEE a technological path to a car “that self-drives essentially as a human would, except never getting tired or inattentive?” Or is that still considered to be in the realm of fantasy, and at the moment researchers can only countenance cars that can self-drive in a “compromised” fashion, very differently than humans?

This is what I question. I’m certainly open to evidence about it. And not simple stuff where they make somebody do an extremely boring task over and over, and hey, they lose focus! But actual studies of drivers. I find that when I’m using autopilot more of my attention is available to focus on what is around me, because I’m not also devoting brain power to the minutiae of handling the car.

Most people think they’re above average drivers, but of course some people actually are above average drivers, so perhaps I’m just the exception.

That’s how they work now. Tesla does it with the “torque on the steering wheel” and GM’s Super Cruise does it with the eye camera. It’s not doing quizzes like, “hey, what color was that car we just passed”; it’s just making sure your eyes are up. I do agree that both of those are far from perfect. The Tesla method is certainly easier to fool, unless GM’s can be tricked by novelty glasses.
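The checks themselves are trivial once you have the sensing. Something like this sketch, where the thresholds and signal names are just invented for illustration:

```python
# Sketch of the two attention checks; thresholds and signal names are invented.
TORQUE_THRESHOLD_NM = 0.2      # minimum steering torque that counts as "hands on"
MAX_EYES_OFF_ROAD_S = 4.0      # how long the gaze can wander before a nag

def hands_on_wheel(steering_torque_nm):
    """Tesla-style: infer hands from torque (easy to fool with a wheel weight)."""
    return abs(steering_torque_nm) >= TORQUE_THRESHOLD_NM

def eyes_on_road(seconds_eyes_off_road):
    """Super Cruise-style: infer attention from a driver-facing camera."""
    return seconds_eyes_off_road < MAX_EYES_OFF_ROAD_S

def driver_attentive(steering_torque_nm, seconds_eyes_off_road):
    """Ideally require both signals rather than either one alone."""
    return hands_on_wheel(steering_torque_nm) and eyes_on_road(seconds_eyes_off_road)
```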

I’m talking about things like an emergency situation the car can’t handle. Say, a bicycle covered in shopping bags swerves into the road, and the car can’t figure out if it’s a hazard or not. So, it suddenly beeps an emergency sound and tells the driver to take over.

In human factors it’s generally accepted that human reaction time to an unexpected obstacle in the road is about 1.5 seconds if the driver has been paying attention. But the perception time to see the threat is small, maybe 0.3 seconds.
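Just to put back-of-the-envelope numbers on that gap, using those 1.5 second and 0.3 second figures:

```python
# Back-of-the-envelope: distance covered during those windows at various speeds.
MPH_TO_FT_PER_S = 5280 / 3600   # 1 MPH is about 1.47 ft/s

for mph in (30, 45, 65):
    ft_per_s = mph * MPH_TO_FT_PER_S
    print(f"{mph} MPH: {0.3 * ft_per_s:.0f} ft to perceive, "
          f"{1.5 * ft_per_s:.0f} ft to react")
# 65 MPH: ~29 ft to perceive, ~143 ft to react
```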

But the reason perception is fast is because while you are driving your brain is building a model of what is ‘normal’ and what is an exception. The road is okay, the wheels aren’t sliding, there are no turns or potholes coming up, etc. So when something breaks the pattern of ‘normal’, it jumps right out at you.

If instead of driving you are reading your phone, or chatting with the person beside you, you have no situational awareness. If the car says, ‘take control!’ you then have to evaluate the entire scene to figure out what’s wrong. Is it lane lines ending? A red light ahead that the car can’t determine is a brake light or just a red light? Something in the road? Something about to cross into the road? This is a much longer process, and if it’s truly an emergency situation by the time you are aware of the problem and react, it’s very possible that it will be too late.

Have you ever driven with a bad backseat driver? Someone who likes to yell, ‘look out!’ when they see something they think is threatening? Even if you are watching the road, a generic ‘look out!’ will often cause you to look all over the place trying to see what the problem is. It actually makes you a more dangerous driver, because it messes with your situational awareness. Now imagine having a car say in effect, ‘Look out!’ when you haven’t been paying attention at all and have to take in the entire situation before you can act. That’s a recipe for failure.

It should be clear that the reaction time of an electronic system is orders of magnitude faster than that of a human. Therefore, I cannot conceive of a situation in which it would be safer to alert a human in response to an unexpected obstacle or kid or animal running onto the road instead of stopping and avoiding a collision. If the AI can’t stop in time, no way you can do better.

To drive just like a human requires general intelligence, and the AI development we have today is not on a path to a general intelligence.

Here’s an example of where cars fail without a general intelligence: Defensive driving. The car isn’t going to look at the eyes of drivers next to you to see if they see you and are paying attention. A car isn’t going to know that you shouldn’t back out of a parking lot because the car behind you has someone about to do the same thing, and you know it because you saw them get in their car and start it just before you did. The car isn’t going to understand that while there is no obstacle in front of you, the car beside you is coming up on a lane end and might veer into you unless you back off and let him in.

We do far more of this than people realize. We see a traffic jam up ahead and instinctively slow down and leave space between us and the car ahead because we know the driver in front will be braking soon.

Good defensive drivers know not to drive in people’s blind spots, and to look to see if a driver is paying attention before passing them or driving through an intersection when someone is turning left. We also go to heightened alert in difficult situations and our reaction time drops. If we see a car ahead that’s swerving around in the lane we assume it’s a drunk or someone not paying attention, and we are extra cautious passing them or might stay further back from them. If we see children playing near the road, we are more careful. And on it goes. We make little human judgements constantly that the computer just can’t do.

Some of that the self-driving car will be aware of, because the cars may communicate with each other (and not just the self-driving cars but also the human-driven ones). So if there is a traffic jam up ahead, this can be communicated to your self-driving car. If your car is about to back out, it may have received info from the car behind it that it also intends to back out. (I don’t know if that’s the way things will work, but I imagine this is how it will work.)

The automated system is good at dodging a car that comes into your lane, regardless of why it did that. Having said that, I absolutely agree about the defensive driving. This is 100% the reason that I don’t let the Tesla decide when to change lanes, but I’ll often let it do the lane change on my command. It isn’t looking ahead to see that the other lane, which appears faster now, is stopped up ahead. It doesn’t know to get over to the left to let in entrance ramp traffic.

It does however have a heuristic to slow down to let traffic merge. This seems to have gotten better in the last few software updates, because when it was first introduced it meant the Tesla had a very hard time passing cars. Frequently it would slow down to let somebody in, even when there was no reason to. I haven’t noticed that happening lately.

Except when people don’t. Back when I commuted to work, I would see the aftermath of rear end collisions several times per week on my 10 mile freeway commute. The self driving also wouldn’t be tailgating in the first place.

This is another big fault in Tesla’s self driving.

Some of these limitations are programmatically correctable. It does not seem too difficult to program the car to avoid blind spots at highway speeds, use GPS to hold the left lane a bit longer to get past an entrance ramp, or model the traffic even further down the road to inform lane choice.
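For example, the lane-choice piece could be as simple as a scoring rule like this sketch; all the inputs and weights here are hypothetical.

```python
# Sketch of a lane-choice scoring rule; the inputs and weights are hypothetical.
def score_lane(avg_speed_mph, cars_per_mile, lane_ends_within_miles=None):
    """Higher is better: favor lanes that are still moving well down the road,
    not just the one that looks open for the next quarter mile."""
    score = avg_speed_mph - 0.5 * cars_per_mile
    if lane_ends_within_miles is not None and lane_ends_within_miles < 1.0:
        score -= 50.0   # heavily penalize a lane that is about to end
    return score

def pick_lane(lanes):
    """lanes: {name: (avg_speed_mph, cars_per_mile, lane_ends_within_miles)}."""
    return max(lanes, key=lambda name: score_lane(*lanes[name]))

# e.g. pick_lane({"left": (62, 40, None),
#                 "middle": (55, 60, None),
#                 "right": (20, 90, 0.5)})   -> "left"
```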

Anecdotal, but I remember reading two separate forum accounts from people who had gotten into (relatively minor) rear-end collisions while driving a Tesla on autopilot. Both drivers had realized something was wrong as the car continued forward, but their last thoughts prior to the impact were something to the effect of:

I wonder when Autopilot is going to brake

Now, a normal driver without self-driving features is used to being the sole person responsible for the brake, so when something seems wrong, the brake is applied right away. It’s muscle memory. But these two drivers wasted valuable milliseconds as they rapidly approached an object in front of them, wondering and speculating whether the machine had noticed the problem yet.

Yes, when I first started driving with autopilot I would frequently pre-brake the autopilot. As I got more used to the autopilot’s style I stopped doing that as much, because I knew when to expect it to react. I can’t think of any instances when I had to intervene because the autopilot didn’t start braking when it should have.

There have been times when I’ve been cut off, or whatever, where it is unclear if it was the autopilot or me that hit the brake first.

The big problem with this line of thinking is that stopping is often the wrong thing to do. Cars randomly slowing or stopping for no apparent reason on roads are often causes of chain reaction rear-enders.

Having self-driving cars suddenly slowing or stopping for mirages only they think are obstacles, which all the human drivers know are just blowing trash, will be a severe problem. So the machines cannot be programmed with a simple rule like “Stop using your superhuman reaction time any time you’re confused about what you perceive.”

So what should be the computer’s reaction to “I’m not sure what this object in my scene is, and expect I won’t have figured it out until after we’re through/past it”?
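I don’t think there’s a clean answer, but presumably it ends up being something graded rather than a binary stop/ignore rule. Purely as an illustration of the kind of trade-off I mean (not how any real system works):

```python
# Purely an illustration of the shape of the trade-off, not how any real system works.
def respond_to_unclassified_object(collision_probability, seconds_until_reached):
    """Grade the response by confidence instead of a binary stop/ignore rule."""
    if collision_probability > 0.8:
        return "brake_hard"           # confident it's solid: use the fast reflexes
    if collision_probability > 0.3:
        return "ease_off_and_alert"   # hedge: shed some speed, wake up the driver
    if seconds_until_reached > 1.0:
        return "keep_watching"        # low confidence, time left: keep classifying
    return "maintain"                 # probably just blowing trash; don't phantom-brake
```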