Self-driving cars are still decades away

I keep coming back to the thread once in a while to see how many people have been killed by self-driving cars lately. It’s very informative.

Can’t kill anyone if they’re still decades away.

Good link.

I’d love to see the same experiment with several variations. 1) no obstacle at all. 2) adult-sized mannequin as obstacle. 3) traffic cone as obstacle. And then repeat them all with the lane markers continuing past the obstacle, to check if the change in road conditions (that is, the abrupt change from cones to no cones) is affecting the results.

But there’s a problem there; more publicity is needed.

Far fewer people have been killed by self-driving cars than by human-driven ones, but that still doesn’t mean self-driving cars are a good idea right now.

When watching the video, if you know what you’re looking at, you can see that self-driving was not engaged when they hit the dummy. Electrek does a good takedown of this.

The test was not done competently. Even if O’Dowd’s intentions are completely unbiased, the tests were not capable of answering the question they set out to answer: Will FSD stop for children? Unknown; FSD was not used. The car was enabled for the FSD beta, but FSD was not turned on during the tests.

As somebody who has driven thousands of miles on FSD, I am very familiar with its shortcomings, and you can also be sure that if a child jumps in front of my car, my foot will be hitting the brake as quickly as I’m able, regardless of what FSD does. For the moment, my attitude is the only way that FSD can be safely used.

Isn’t collision detection and avoidance standard functionality no matter what mode you’re driving in? Does FSD use some different method for detecting an obstacle?

While I’m convinced that this isn’t a valid test of FSD, I’m not convinced that FSD would have done better if it had been properly engaged.

Automatic emergency braking is enabled on at least the Model 3, and you have to turn it off to make it not work (though I think, alternatively, you could defeat it by tapping the accelerator during the warning/braking).

What I’d really like to see is someone else independently duplicating this (or not). As @echoreply points out, the current FSD is really just high-end ADAS, but to your point, it should handle this easily.

Interesting. As Electrek notes in their update, raw footage shows that FSD was engaged, but that footage doesn’t match what was in the ad. This link delves a little deeper, raising questions about how many tests were run and wondering why The Dawn Project is being cagey with the data. If it failed as spectacularly as they’re claiming it did, it’d be a no-brainer to be completely transparent.

Yes, definitely this. FSD doesn’t identify geese, other animals, or debris in the road. I think much of it comes down to the visual labeler. Are objects viewed by the camera correctly identified? If they are, is the object’s path correctly predicted to intercept the car? Incorrect labeling or predictions will result in incorrect behavior: either phantom braking or failure to brake.
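
Something like this toy sketch (Python, with names I made up; it’s obviously not Tesla’s actual pipeline) is how I picture the two failure modes:

```python
# Toy sketch of perception -> brake decision; hypothetical names, not Tesla's pipeline.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    label: str            # what the visual labeler thinks it saw ("pedestrian", "goose", ...)
    will_intercept: bool  # does the predicted path cross the car's path?

def should_brake(detection: Optional[Detection]) -> bool:
    """Brake only if something was detected AND its path is predicted to intercept us."""
    if detection is None:
        return False                 # missed detection -> failure to brake
    return detection.will_intercept  # bad path prediction -> phantom brake or failure to brake

# Failure to brake: the labeler never produced a detection for the child.
print(should_brake(None))                                          # False
# Phantom braking: a harmless object mislabeled, with a bogus intercept prediction.
print(should_brake(Detection("pedestrian", will_intercept=True)))  # True
```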

When my car and child-sized object are both home, I will have to do some tests: have the child-sized object self-propel around the parked car and see what the visualization shows. If the car accurately labels it as a human, then FSD should stop, provided it correctly predicts the object’s path will intersect with the car’s path.

Yes, this also makes me very suspicious. I have very little confidence in FSD doing the right thing in many circumstances. It is excellent at holding the lane and speed appropriately. It is very poor at making judgement calls. They could probably have reported exactly what was done, and it would still be a strong critique of FSD.

As described in the Electrek article, and by me possibly in this very thread, FSD is like a motivated teenager learning to drive. Awareness is excellent, and reaction times are extremely fast, but the judgement connecting awareness and reaction is lacking.

Good description of teenagers in general.

It’s much more complex than just identifying something. Let’s say you realize a goose is in the road. Should you emergency brake? That’s a complex decision that includes considering speed, road conditions, whether there are cars driving closely behind you, etc. A car that blindly brakes for small animals could cause a freeway pileup, or lose control and crash.
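
As a toy illustration (all the categories, weights, and thresholds here are invented by me, just to show it’s a trade-off rather than a reflex):

```python
# Invented numbers throughout; the point is that braking is a trade-off, not a reflex.
def emergency_brake(obstacle: str, speed_mps: float,
                    road_friction: float, tailgater_gap_m: float) -> bool:
    harm_if_hit = {"child": 1.0, "adult": 1.0, "deer": 0.7, "goose": 0.1}.get(obstacle, 0.3)
    # Hard braking is riskier on a slick road or with someone close behind...
    risk_of_braking = (1.0 - road_friction) * 0.3 + (0.5 if tailgater_gap_m < 5 else 0.1)
    # ...and riskier at higher speed (pileups, loss of control).
    risk_of_braking *= speed_mps / 30.0
    return harm_if_hit > risk_of_braking

print(emergency_brake("goose", speed_mps=30, road_friction=0.4, tailgater_gap_m=4))  # False: don't cause a pileup for a goose
print(emergency_brake("child", speed_mps=30, road_friction=0.4, tailgater_gap_m=4))  # True: brake anyway
```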

I had a situation yesterday that I’m sure a self-driving car would have failed. On a residential street, a van was driving in front of me. Then it veered right to park in front of a house. As I was about to pass, I noticed the van was still creeping forward, and the driver was looking intently at the houses, probably searching for an address. So to be safe, I moved way over to the left side of the road to pass.

Sure enough, just as I was passing, the van veered out into the road again. If I had not moved way over, he would have hit me.

There are far too many situations that require situational awareness and general intelligence to safely drive on public roads. Here’s another: there is a parked car ahead, and suddenly a ball bounces out from behind it and across the street. It’s no threat at all, but a human will assume that there might be a child running into the road after it, and be extra cautious. A self-driving car will just note a passing object that is no threat, and continue on.

I think you have unwarranted faith in humans.

> Daddy, I don’t want to be a visualization test object!
> Shut up and get in front of the car! We’re doing highway speeds tonight.

And unwarranted skepticism of automated systems. There’s no reason AIs can’t be trained for just that kind of situation.

I expect that eventually, if they aren’t already, FSD systems will be trained to experience a kind of “emotion” of caution, where uncertain situations cause it to slow and behave more deliberately.
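
Purely my speculation, but the mechanism could be as simple as scaling the target speed by how confused the perception stack is:

```python
# Speculative sketch: slow down as perception uncertainty rises. Numbers are made up.
def cautious_speed(target_mps: float, uncertainty: float) -> float:
    """uncertainty in [0, 1]: 0 = confident scene understanding, 1 = no idea what's going on."""
    return target_mps * max(0.2, 1.0 - uncertainty)

print(cautious_speed(25.0, 0.1))  # 22.5 m/s: clear, well-understood road
print(cautious_speed(25.0, 0.8))  # 5.0 m/s: occluded, crowded, or confusing scene
```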

Focusing on narrow situations where a human may possibly perform slightly better than an AI is missing the point, anyway. FSD is competing with all those drivers that are drunk or underslept or yelling at their kids in the back or texting or any number of other things. Those people would plow through a group of children too under the right circumstances. Avoiding those situations doesn’t require a good driver, or even an average driver–just one that’s paying attention all the time.

This is actually something where I think self-driving can do pretty well, because it has eyes in the back of its head. Even as an attentive driver scanning my mirrors and aware of my environment, I still might not realize that the car that was distant in my mirror on my last scan is now right behind me, or whether the lanes next to me are still clear. If I want to take evasive action, I have to take time to check my sides, or risk assuming that my picture of the environment from a few seconds ago is still accurate. An FSD system should just “know” all of that stuff, always.

The old driver’s ed line of “always have an out” still applies to FSD, but unlike a human, FSD can update that out hundreds of times a second.
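
In pseudocode-ish Python, the loop I’m imagining (the two callbacks are stand-ins for whatever the planner actually does):

```python
import time

def escape_path_updates(get_surround_view, evaluate_escape_routes, hz=100.0):
    """Yield a freshly recomputed 'out' every control cycle, ~hz times per second."""
    period = 1.0 / hz
    while True:
        scene = get_surround_view()           # full 360-degree state; no mirror-check lag
        yield evaluate_escape_routes(scene)   # recompute the current best escape path
        time.sleep(period)
```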

Tesla FSD would be trying to decide whether the van was still part of traffic or was parked. At some point it would decide the van was parked, and would pass. Before that decision was made, it would wait behind the van. If people park too close to stop signs, sometimes my FSD will wait for the parked car to go before moving up to the intersection.

The advantage FSD would have in your situation is that when it came time to take evasive action, it would know whether the street to the left was clear, and swerve, brake, or both. Of course, that’s all assuming it has correctly judged the situation.

Reading through the Tesla FSD release notes, they talk about improving prediction of how objects will behave: things like deciding whether a pedestrian is going to step out into the street, how fast pedestrians move, or remembering that a motorcycle still exists even though it is out of view at the moment.
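
I’d guess “remembering a motorcycle exists” is something like keeping a track alive for a grace period after it was last seen; a minimal sketch of that idea (mine, not theirs):

```python
# Minimal sketch of object permanence: keep a track alive after it leaves view.
from dataclasses import dataclass

@dataclass
class Track:
    label: str
    last_seen_s: float

def still_exists(track: Track, now_s: float, memory_s: float = 3.0) -> bool:
    """Treat an object as present if it was seen within the last memory_s seconds."""
    return (now_s - track.last_seen_s) <= memory_s

moto = Track("motorcycle", last_seen_s=10.0)
print(still_exists(moto, now_s=11.5))  # True: briefly occluded, still remembered
print(still_exists(moto, now_s=20.0))  # False: stale, drop the track
```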

Legitimate testing and criticism of FSD systems are useful, but hyperbolic and biased criticism is damaging, just as hyperbolic promotion of it is damaging. Both can set up unrealistic and dangerous expectations. The O’Dowd crowd may prevent adoption of life-saving automation features, and the Musk crowd may encourage unsafe use of, and reliance on, those same automation features.

I demand you stop being reasonable and pick a side.

If a self-driving car is following another self-driving car, I imagine the one in front should be able to communicate to the following car that it’s about to brake. And in theory, the faster response time of a self-driving car should mean fewer rear-end accidents.

Given the microsecond-long timeline between the leading car’s computer deciding to brake, the brakes actually engaging, the brake lights coming on, and the following automated vehicle “seeing” those lights, I hardly think developing a separate and deeply secured communication channel is worth it.

Just rely on the fact computers are fast compared to humans and we’re done.

To be fair, cameras do have some inherent latency. It’s going to be a few tens of milliseconds between when the brake light actually illuminates and when the camera has captured and transmitted the full frame and the computer has processed the result. But humans are in the hundreds of milliseconds at best, so it’s still doing much better.
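
Back-of-the-envelope, with numbers that are plausible guesses rather than measurements:

```python
# Rough latency comparison; the individual numbers are plausible guesses, not measurements.
camera_pipeline_ms = {
    "frame_exposure_and_readout": 20,  # worst case, about one frame at 50 fps
    "transfer_and_decode": 5,
    "neural_net_inference": 25,
}
automated_ms = sum(camera_pipeline_ms.values())
human_reaction_ms = 250                # alert driver, commonly quoted as 200-300 ms

print(f"camera pipeline: ~{automated_ms} ms")       # ~50 ms
print(f"alert human:     ~{human_reaction_ms} ms")  # several times slower
```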

It might be interesting at some point to encode data in the brake light itself. The LEDs can certainly be pulsed at a high enough frequency to encode a small amount of data (like that the car is about to perform an emergency stop and not just a normal one). You’d need a high-speed camera (even just a low-resolution one) to receive the info, though.
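
Just to make the idea concrete, a toy on-off-keying scheme (not any real standard, and hand-waving away synchronization):

```python
# Toy on-off keying over a brake light; not any real standard. Assumes perfect sync.
EMERGENCY_STOP = [1, 0, 1, 1, 0, 1, 0, 1]  # made-up 8-bit message
NORMAL_BRAKE   = [1, 1, 1, 1, 1, 1, 1, 1]  # steady-on looks like an ordinary brake light

def transmit(bits, symbol_rate_hz=1000):
    """Return (time_s, led_on) samples; at 1 kHz the flicker is invisible to the eye."""
    return [(i / symbol_rate_hz, bool(b)) for i, b in enumerate(bits)]

def receive(samples):
    """Decode one camera sample per symbol back into bits."""
    return [1 if led_on else 0 for _, led_on in samples]

assert receive(transmit(EMERGENCY_STOP)) == EMERGENCY_STOP
assert receive(transmit(NORMAL_BRAKE)) == NORMAL_BRAKE
```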

You can imagine it, but current ones absolutely do not. I have heard of RFCs for such things, but I haven’t seen any tests that would lead me to believe car-to-car communication is likely in the near term. It seems slightly more doable than an intelligent AI car, but it doesn’t seem likely anytime soon.