Self-driving cars are still decades away

In the couple of demo videos I saw, the car managed to avoid trash bags, pedestrians, and trucks going the wrong way without causing any kind of accident (as you would expect from a published video), and it is clear that they are not programmed to immediately slam on the brakes every time an object appears. Yet they did stop when some guy ran into the road without looking. What they would do in an extreme situation, even one ludicrously contrived right out of a Hollywood movie, is an excellent question, and I wonder what the current regulatory requirement is, let’s say in California?

Darn good question and not one I’m equipped to answer.

The @Sam_Stone post you replied to, which I then quoted, was asking exactly that: “What to do when the AI classifier can’t classify the {whatever}?”

As Sam correctly says, the one response that does not work is for the computer to say “Hey human, I have no clue here; you need to start driving and do it right now, despite having paid negligible attention while I was doing the driving.”

In my biz the nature of autopilot ops and the crew’s monitoring duties are different, but the rule of thumb is that a crew that isn’t paying close attention takes 8 seconds to react to an unexpected autopilot disconnect, fully restore situational awareness, and assert effective, correct control to restore normalcy. And that’s with pros being paid to sit there watching the paint dry.

8 seconds is an eternity on a freeway at speed, on a tree-lined country road, on an urban boulevard, or on a suburban residential street. Oddly enough, the whole point of speed limits is less about the cornering ability of cars and more about limiting the flow rate of novelty to below what a sorta-focused human driver of indifferent skills and motivation can handle.
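To put that 8 seconds in concrete terms, here’s a quick back-of-the-envelope calculation (my own illustration in Python; the road speeds are ones I picked to match the road types above, not figures from any cited source):

```python
# Rough numbers, purely illustrative: how far the car travels while a
# disengaged driver takes the ~8 seconds above to get back in the loop.

MPH_TO_MPS = 0.44704  # metres per second per mph

def distance_covered(speed_mph: float, reaction_s: float) -> float:
    """Distance travelled (metres) at constant speed during the reaction window."""
    return speed_mph * MPH_TO_MPS * reaction_s

for speed in (25, 45, 70):  # residential street, boulevard, freeway (mph, my picks)
    d = distance_covered(speed, 8.0)
    print(f"{speed} mph for 8 s -> {d:.0f} m ({d * 3.28:.0f} ft)")

# Roughly 89 m, 161 m, and 250 m respectively -- at freeway speed the better
# part of three city blocks go by before the human is actually driving again.
```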

Out of curiosity, I tested some things on an otherwise empty residential street.

As mentioned before, the car will not go into autopilot when there are no lane markers.

Adaptive cruise control does work in all situations.

The car does not care if I jam my left foot into the dead pedal and raise my butt off the seat.

If I disconnect my seatbelt, the car immediately reduces the cruise control set speed to 0, and begins to slow down. This is not a panic stop, but a deceleration similar to slowing for a stop sign.

A block over I tried autopilot on a residential street with a centerline.

As soon as I undid my seatbelt the car started screaming to take over immediately. It did not disable the autopilot, but did slow down, as described above. I deliberately did it before a curve, and it did steer into the curve, or at least as far into the curve as it could coasting down from 25 MPH.

Not about drivers, but I seem to remember a few plane crashes that were suspected to have been caused by over-reliance on automated systems.

https://www.reuters.com/article/us-asiana-crash-hearing-idUSBRE9B905D20131210

@LSLGuy perhaps knows how big a deal that was in the industry?

ETA: thanks for the test-drive report!

Furthermore, as I argued earlier in this thread, and @Stranger_On_A_Train expanded on, when the car decides to let the human take over, drivers who learn to drive with partial self-driving systems will not have the experience to handle emergency situations that more experienced drivers might be able to cope with. As I said back then, smart cars make drivers dumber.

According to this article, a group of insurance companies in the UK did some research last year and found that it takes about 15 seconds for drivers to gain situational awareness on a highway and react to avoid a hazard.

15 seconds is an eternity in a highway emergency situation. They say a reaction time of about 3 seconds is required.
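Same back-of-the-envelope arithmetic as the 8-second figure upthread, assuming a 70 mph motorway speed (my assumption; the article’s test speed isn’t quoted here):

```python
# Illustrative only: distance covered at an assumed 70 mph motorway speed
# during the 15 s awareness gap vs. the ~3 s reaction time the article says is required.
speed_mps = 70 * 0.44704  # 70 mph in metres per second (assumed speed)
for label, t in (("15 s to regain awareness", 15.0), ("3 s required", 3.0)):
    print(f"{label}: {speed_mps * t:.0f} m")
# ~469 m vs ~94 m: about five times the distance the hazard actually allows for.
```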

Oops, thanks for highlighting that part; now I realize what I asked about was already touched upon.

There are several overlapping issues, and the Asiana 214 accident at SFO highlights some, but not all, of them.

  1. Loss of skill. A pilot who used to fly well but has spent too many years letting the automation do 99% of the flying gets rusty enough that they cannot perform well by hand when needed, even in a normal situation. Add in the time, attention, and adrenaline burdens of a crisis situation and they are very likely to fail, and fail badly. Instead of being able to juggle 12 balls smoothly, they can only juggle 3 balls badly and the rest get dropped.

  2. Lack of skill. A pilot trained from the git-go in highly automated flight is worse off than the pilot in situation 1 above. They can’t perform well without automation at all and have no real skills, rusty or otherwise, to fall back on. They can hardly juggle even 1 ball.

  3. Failure of focus. Human minds are ill-suited to actively monitoring nothing much happening. Boredom and inattention are the 100% predictable natural result. Just like we will get sleepy after 8+ hours on task, or 14+ hours awake, the mind will wander when there’s nothing much to see or do.

  4. Complacency. This is failure of focus’s chronic cousin. After umpteen repetitions where everything goes normally, it becomes increasingly difficult to really believe things can go wrong instantly at any moment and to behave accordingly every moment. A related issue is “surprise blindness”, where unexpected stimuli are actively ignored by the subconscious mind because they don’t fit the scenario the mind is predicting; instead your conscious mind sees or hears what it expects to see or hear. “Expectation bias” is another term of art.

  5. Mode proliferation / Mode confusion. Very simple devices do one thing clearly and obviously. On or off is obvious and if broken, the failure mode(s) are also obvious. Complex machines, and especially ones controlled by software, can be simultaneously highly reliable and highly inscrutable / unpredictable. They are paying attention to dozens or even hundreds of factors as they decide what to do now and next.

    If your mental model of how it works/“thinks” doesn’t match the actuality, the machine will behave in ways that surprise you. In complex systems interacting with complex environments, the 80/20 rule applies. You’ll become familiar by osmosis with how the 80% stuff behaves in actual use in typical situations. And you will forget, or never have known, how the oddball situations or modes you rarely or never encounter will work when triggered. In fact, odds are you won’t even know what those triggers are, either as academic knowledge or as practical recognition that “we’re in situation X with attributes a, b, and c, so machine response Y followed by Z is what it is doing / will do”. So when the machine suddenly does do Y or Z, or malfunctions by failing to do Y then Z, you won’t be any the wiser.


IMO the Asiana accident was an example of 1 + 4 leading to 5, and by the time they woke up from 4 and recognized 5 they were committed to impact.

All the above is stuff they pound into our heads continuously through bulletins, classroom work, simulators, etc., to try to maximize awareness of the threats and minimize their effects. And we always have two of us, so ideally we won’t both be in the exact same mental rut at the exact same time.

Unfortunately all of these issues apply to partly self-driving cars. And none of that training effort or professionalism is or can be made to apply to ordinary car drivers. The inscrutability of a true self-driving car will be orders of magnitude greater than what we deal with. And the available response and recovery time to mistakes or disconnects will be far shorter.

IMO the “halfway house” of driver assistance tech such as the current Tesla “autopilot” and “FSD” is almost the worst of all worlds.

I used to ride motocross and have only been on horseback a couple of times. My comment there/then was that

In the back country I preferred a motorcycle since a horse was just smart enough to get me into trouble and not nearly smart enough to get me back out. A motorcycle is dumb, but reliably so.

That is exactly my attitude to semi-self-driving cars. They need to be 100% full-time with no driver controls at all. Absent that, they need to be no “smarter” than current cars. The “uncanny valley” in the middle is really the “monstrously unsafe valley of false expectations”.

We already are in the worst of all worlds. Distracted, inexperienced, poorly trained, sleepy, and intoxicated drivers are already on the road in large numbers. Automation that can reduce their harm is a good thing, not to be feared because it can’t reduce all harm.

Take it back to the airplane analogy. For all of the reasons listed above (which I certainly agree with) automated systems on airplanes have downsides. Would commercial air travel be as safe as it is if we said that all automated systems are banned, and pilots must be hands on flying at all times and in all conditions?

I’ve been saying over and over that I don’t think we’re within even a decade of true door-to-door, all-condition, full self-driving. But we already have Tesla’s Navigate on Autopilot and GM’s Super Cruise, which both do a fantastic job of automated driving on limited-access freeways.

There were over 2 million rear-end collisions in the US in 2018, which was 32% of crashes that year. Automated driving technology would have avoided most of them. Those accidents tend to happen in good conditions, on straight roads. They are mostly caused by following too close, inattentiveness, excessive speed, intoxication, and other things that don’t impede automated systems.

If all cars upon entering the freeway went into autopilot, then thousands of lives would be saved, though freeway accidents certainly wouldn’t be reduced to zero.

The two people in the Tesla accident could have done something like this:

https://www.consumerreports.org/autonomous-driving/cr-engineers-show-tesla-will-drive-with-no-one-in-drivers-seat/

I was just coming here to post that exact article.

From my experience, CR left out the part about having to buckle the seatbelt, but it’s pretty easy to reach over and buckle it, then sit down in the driver’s seat on top of the buckled belt. I’ll read harder next time!

If the car in the Houston accident had only the traffic-aware cruise control engaged, not Autopilot, that would make the accident completely consistent with Musk’s tweet that Autopilot was not engaged, and with the fact that Autopilot is not supposed to engage on streets without lane lines.

Engaging only the cruise control would have worked exactly the way CR described, except the car wouldn’t steer; it would just go straight until the forward collision warning put on the brakes (unless that was disabled).

With Tesla’s over-the-air updates, they should be able to add the driver’s seat weight sensor to the list of things that cancel automation fairly quickly. I mean, the weight sensor is already used to initiate Bluetooth pairing (so your phone doesn’t pair to the car just because you open a door and wake it up).
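To be clear about what that change amounts to, here’s a purely hypothetical sketch of the kind of check being suggested; the names and signals are mine for illustration and have nothing to do with Tesla’s actual software:

```python
# Purely hypothetical sketch of the kind of check being suggested here.
# None of these names or signals correspond to Tesla's actual software;
# it just illustrates "add the seat weight sensor to the cancel list".

from dataclasses import dataclass

@dataclass
class CabinState:
    seatbelt_buckled: bool
    driver_seat_occupied: bool    # the same weight sensor already used for Bluetooth pairing
    steering_torque_recent: bool  # the periodic "hands on wheel" nag has been satisfied

def should_cancel_automation(state: CabinState) -> bool:
    """Cancel (or wind down) driver assistance if any disqualifying condition holds."""
    cancel_conditions = [
        not state.seatbelt_buckled,      # already enforced today, per the test drive upthread
        not state.driver_seat_occupied,  # the proposed over-the-air addition
        not state.steering_torque_recent,
    ]
    return any(cancel_conditions)
```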

Except for where they said, “Funkhouser sat in the rear seat, and Fisher sat in the driver seat on top of a buckled seat belt.”

What did you think of that video, posted earlier, that seemed to show that Autopilot will switch on without lane lines?

Perhaps it’s only in certain circumstances. In the video the road was wet, and I have a feeling the Autopilot was mistaking different reflectivity on different parts of the road for lane lines.

It’s not supposed to turn on without lane lines. I think you are right about the wet road. Just as he’s making the left turn onto the unmarked road, you can see a line of water down the center of the road which he crosses over, and then when he pans down to the car’s screen you can see the left lane line’s position on the screen matches where that reflective line of water was as he turns across it.

I think the following errors, where the car doesn’t properly follow the road, are all due to there not actually being lane lines.

There simply might not be speed limits on the map for those roads, in which case the car will default to 45 MPH, which is what it did.

These errors are some of the reasons that I think true self driving is still a very long way off.

I do agree with one of the things CR said, which is that the terms “autopilot” and “full self driving” are very poor, even deadly, choices. “DrivingAssist™”, “DrivingAssist™ with navigation”, and “DrivingAssist™ for surface streets” would have been far superior. “Autopilot” and “full self driving” as names for driver assistance features are as much corporate lies as when a mobile phone company calls a plan “unlimited.”

(I do think “autopilot” isn’t technically inaccurate when compared to an autopilot in an airplane, as it is a pilot/driver assistance feature which takes over certain vehicle control functions, freeing the pilot/driver to handle other vehicle-related tasks while also monitoring the autopilot’s activities. That is too fine a hair to split, though.)

I’m sure they could, but someone already determined enough to:

  • buckle their seatbelt underneath them
  • strap something to the wheel
  • turn on autopilot, and then:
  • climb into the back seat

is probably equally capable of bringing along a dumbbell to place in the driver’s seat too.

We’re sorta mostly in agreement. Although it probably doesn’t sound like it to you.

Something I’ve said elsewhere is that the kinds of accidents that semi-self-driving cars (SSDCs) will have will be different from the kinds of accidents human drivers have.

I predict SSDCs will have far fewer fender benders (or stop-and-go rear-enders such as I see almost every time on my commute) and far more batshit insane WTF??!?! crashes. Many of which will be operator mode confusion such as thinking “autopilot” was installed when it wasn’t, or was active when it wasn’t, or … Or outright operator error/malfeasance like activating “autopilot” then hopping in the back seat for a nap or a shag or a smoke or … while they fool the system with a connected seatbelt and a dumbbell.

As these systems become plentiful, people will be massively incentivized to use them beyond both their own skill/knowledge and the skill of the system itself. That is problem #1 I foresee. Problem #2 is the car companies thinking they can get away with systems that refer the hard problems to utterly unprepared human drivers.

My narrow personal concern is how I protect myself from an SSDC that may go insane at any moment for no discernible reason. I already drive on the insane cocaine-fueled highways of Miami. I know how to spot that behavior. But I can’t spot the car that’s about to become confused while its driver is in no position/condition to reassert control in time.

My larger collective public safety concern is how do we all protect ourselves from legions of car/driver combinations that are brittle failures waiting to happen.

I’ve probably said this half a dozen times in this thread, but I’ll say it again: we don’t need automated driving to get the benefits you’re talking about. All we need is for every car (not just a small minority of cars) to be equipped with front-end collision avoidance systems that include auto-braking, and we’ll get a massive benefit.

The rest of the FSD tech that Tesla & other OEMs keep talking about is, by and large, a lovely convenience. It’s also massively harder than front-end ADAS to make work.
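For what it’s worth, the core of that front-end ADAS logic is conceptually simple, which is part of why it’s so much more tractable than FSD. Here’s a minimal time-to-collision sketch; the thresholds and names are made up for illustration and aren’t any manufacturer’s actual values:

```python
# Minimal illustration of the time-to-collision (TTC) idea behind forward-collision
# warning and automatic emergency braking. The thresholds are made up for
# illustration; real systems tune them per vehicle and layer on much more logic.

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed; inf if not closing."""
    if closing_speed_mps <= 0:
        return float("inf")
    return gap_m / closing_speed_mps

def aeb_action(gap_m: float, own_speed_mps: float, lead_speed_mps: float) -> str:
    ttc = time_to_collision(gap_m, own_speed_mps - lead_speed_mps)
    if ttc < 1.5:   # assumed threshold: brake hard without waiting for the driver
        return "automatic emergency braking"
    if ttc < 3.0:   # assumed threshold: alert the (possibly inattentive) driver
        return "forward collision warning"
    return "no action"

# Example: closing on a stopped truck at 65 mph (~29 m/s) from 40 m away.
print(aeb_action(gap_m=40.0, own_speed_mps=29.0, lead_speed_mps=0.0))
# -> automatic emergency braking (TTC is under 1.4 seconds)
```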

I wouldn’t be able to use driver assist and still keep my concentration on the road. I find myself zoning out on trips with cruise control. I will occasionally turn it off. It wakes me up and gets me refocused.

I’d be a nervous wreck using driver assist in traffic. Always wondering if the car will slow down before rear ending a truck.

Yes, this gets to something encountered in IT. Sometimes a technical solution is not appropriate for a human problem—an employee viewing porn at work is not a problem for IT, it is a problem for HR (at least to my thinking, though in all seriousness IT might have a place in reporting such computer misuse).

All of these driver-assist auto-driving things boil down to: tell the driver they have to pay attention, put in a few technical hurdles to stop the most obvious casual or accidental violations, and then hope for the best. As you know, when talking about Autopilot and FSD in official documentation and on the in-car interface, Tesla is extremely clear about the driver’s responsibilities. In Musk’s tweets, not so much.

Yes, I absolutely agree with you on this paragraph. There will be new kinds of accidents, but just like the blood clots, I think that on the whole the risks of SSDCs are lower than those of human drivers.

It’s been a few decades since my grandparents were alive and I drove in South Florida, but at that time I was always behind a set of knuckles driving 15-20 below the speed limit, and the cocaine-fueled person behind me who wanted to drive 15-20 over. In both cases I would much prefer that the set of knuckles was relying on self driving features with better perception and reaction times, and the cocaine-fueled person was relying on self driving with better self discipline.

I really appreciate your horse analogy (a hobby more dangerous than motorcycles!), but to me, it’s the human drivers that are the unpredictable animals that might spook at a blowing leaf. One of the most white-knuckle moments on a motorcycle is looking in your rear-view mirror and seeing only the top of a head in the car behind, because they’re staring down at their phone; I’d much rather know it’s a computer that’s already started slowing down.