Self-driving cars are still decades away

Yes. The term I use in aviation is “brittle failure modes”. Stuff doesn’t bend; it just snaps. The crew is keeping up with the flow of events, then is suddenly and utterly overwhelmed. Stupidity ensues, followed quickly by a mass-casualty event.

good thoughts and analogies … @Dr.Strangelove and @LSLGuy

I guess we will see more of those types of events in our future lives…

And I can completely understand the stupor (and failure to act) of going from “relaxing with an audiobook” to “urgent and very significant human intervention needed within 3, 2, 1 seconds”

We can acknowledge the potential of automated cars while still being cautious about introducing the technology too soon. Allowing more vehicles on the road in one location doesn’t do very much to develop the global technology base needed to successfully implement driverless cars.
We might need to accept our technology level isn’t quite there yet.

First, I didn’t even see the sign in the video the first time (that may just be a resolution issue).

Second, there wasn’t that much water on the road. It’s really hard to tell from the video, but to me it looks like the car crossed the wet part, hit a dry patch and steered off the road. The voiceover claims it “slid off the road” but that’s not what it looks like to me.

If you go fast enough, a car can hydroplane on 1/2" of water. At that point you’re basically driving on something as slick as wet ice and where the car goes is more or less random. Maybe the wind pushed it left. Maybe slightly shallower water on the left provided slightly less effective hydroplaning and hence more drag. Maybe as the water got deeper it got deeper quicker on the left, and the car started drifting leftward from the asymmetrical drag before full-bore hydroplaning kicked in.
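For a rough sense of how little speed it takes: the classic Horne/NASA rule of thumb puts the dynamic hydroplaning onset speed at roughly 10.35 times the square root of the tire pressure in psi. A quick sketch, where the 35 psi figure is just an assumed typical passenger-car pressure:

```python
import math

def hydroplane_onset_mph(tire_pressure_psi: float) -> float:
    """Rough dynamic-hydroplaning onset speed (mph) from the classic
    Horne/NASA rule of thumb: V ~= 10.35 * sqrt(tire pressure in psi).
    Ignores tread depth, water depth, and tire width, so treat it as a
    ballpark figure only."""
    return 10.35 * math.sqrt(tire_pressure_psi)

# Assumed example: a passenger-car tire at 35 psi.
print(f"~{hydroplane_onset_mph(35):.0f} mph")  # roughly 61 mph
```

In other words, ordinary highway speed is already past the point where standing water can float the tires.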

The underlying mistake was trying to drive fast in any water, even shallow water. But that’s a mistake a human can make too.

Or maybe you’re 100% correct, and once the car was in 3" of water extending from the road to well off the road to the left, it just had no idea where the road boundaries were and aimed for the middle of the homogeneous flat brown expanse in front of it. Oops. Now that’s a mistake a human would be unlikely to make.

(I haven’t watched the video)

Based on my experiences with FSD, that is entirely possible. When lane markings vanish, FSD will generally make one of several reasonable decisions: stay to the left, stay to the right, or center itself between whatever lane markers remain. Exactly why it makes a particular decision is unclear to me.

For example, on my typical commute there are places where the right lane merges into the center lane, so for a stretch there is a double wide lane, with no center dashed line. Sometimes the car will stay to the left, on the existing dashed line between the center and left lanes, sometimes it will center in between the dashed line and the right edge of the road, and sometimes it will move all the way to the right edge. What exactly it does seems to depend on speed, visibility, and traffic density, but I’ve not figured out any pattern.
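For what it’s worth, here’s a purely illustrative Python sketch of the kind of fallback the behavior above suggests: when one or both lane lines disappear, pick a lateral target from whatever references remain. This is not Tesla’s actual logic; the inputs and the 1.8 m half-lane offset are made-up assumptions.

```python
from typing import Optional

def pick_lateral_target(left_line: Optional[float],
                        right_line: Optional[float],
                        road_edge_right: float) -> float:
    """Toy lane-keeping fallback.  Inputs are lateral positions in meters
    relative to the car's centerline (left negative, right positive).
    This is NOT Tesla's logic, just an illustration of the 'hug the left
    line / center between what's left / drift to the edge' behavior
    described above."""
    if left_line is not None and right_line is not None:
        return (left_line + right_line) / 2   # normal case: center in the lane
    if left_line is not None:
        return left_line + 1.8                # half-lane offset from the surviving left line
    if right_line is not None:
        return right_line - 1.8               # half-lane offset from the surviving right line
    return road_edge_right - 1.8              # last resort: offset from the visible road edge
```

Any real system presumably weighs speed, visibility, and traffic as well, which would explain why the choice looks inconsistent from the driver’s seat.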

On residential roads without lines, it will behave similarly to how most people drive. It will tend to stick toward the left side of the center of the street, and move left or right to avoid oncoming traffic and parked cars.

Anyway, that is a long way of saying, once the lane markers are gone, FSD will make a guess about where to go, and that may be a swerve off to the side. I have had FSD behave erratically on I-70 over Vail Pass, where the lane markers are worn off, and the road has lots of curves.

I suspect, having watched the video, the car was in fact hydroplaning and had also steered toward the left. Once it got to the dry bit of road the front wheels got some grip and the left steering input took effect.

I frequently avoid watching cited videos on whatever topic because the mega noise overwhelms the micro signal. And I don’t care enough to spend 20 minutes watching drivel waiting to find the info that could have been imparted in two written sentences had the vid producer (or vid citer) cared enough to do so.

In this case I did watch the vid. Or at least the first ~2 minutes. You might enjoy doing so too.

Once we get past their first ~30 seconds of advertising for their “channel”, the meat only takes another minute or so to see.

That is actually a channel I watch occasionally; I just hadn’t bothered with that one yet.

I don’t think it is a lane marker thing, as the lines are visible through the water. The car does a very good job of seeing lane markings, even in wet and glare where it is difficult for me to see them.

Like others, I expect that hydroplaning and then hitting the dry spot sent the car off in the wrong direction, with no time for FSD to recover.

One thing to consider is that when initially hitting the water, the car would have slowed down, causing FSD to apply power to maintain speed (it will slow for curves). So the car is hydroplaning, under power, and then the front gets traction, causing it to veer off the road.
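To illustrate that mechanism (this is not FSD’s actual controller, just a toy proportional cruise loop with made-up numbers): a controller that only tracks speed error will add throttle when water drag slows the car, which is exactly the wrong response while hydroplaning.

```python
def throttle_command(target_mph: float, measured_mph: float,
                     gain: float = 0.05) -> float:
    """Toy proportional cruise controller: throttle grows with speed error.
    Purely illustrative: it shows why a speed-only loop adds power when
    water drag slows the car, the wrong response while hydroplaning."""
    error = target_mph - measured_mph
    return max(0.0, min(1.0, gain * error))

# Assumed numbers: set speed 65 mph, water drag pulls the car down to 58 mph.
print(throttle_command(65, 58))  # 0.35 -> the controller adds power into the water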

“The car did not decelerate appropriately when crossing a stream of water.” No, FSD currently does not do anything to avoid road hazards. The car did not decelerate because that is currently not a function of FSD. This is a limitation of FSD, but the accident is entirely the fault of the driver for not taking over to avoid the road hazard.

On the other hand, I can easily see a non-FSD-equipped car having a similar accident. According to the video, the flood signs are frequently inaccurate, and it looks like the type of road where heat mirages could appear. So even if the driver sees the water from a distance, they will have to register that it is not just a mirage, and then react. The sign should have been a clue, and it seemed to me there were about 4 seconds from the sign to the water, which ought to be enough time to react. At least I (as a perfect driver who never makes mistakes) would have been slowing down pretty hard about two seconds past the sign, when the water becomes visible. It might be visible earlier, but I’m just going on what I can see in the video.
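A rough back-of-the-envelope check of that 4-second figure, with an assumed speed, reaction time, and braking level, since the video gives none of these:

```python
# Back-of-the-envelope check of the "4 seconds from sign to water" estimate.
# All inputs are assumptions for illustration; the video doesn't give the speed.
speed_mph = 65
speed_fps = speed_mph * 5280 / 3600            # ~95 ft/s
warning_s = 4.0                                # sign-to-water time estimated above
reaction_s = 1.5                               # typical perception-reaction time
decel_g = 0.7                                  # firm braking on dry pavement

distance_to_water = speed_fps * warning_s                  # ~381 ft
distance_lost_reacting = speed_fps * reaction_s            # ~143 ft
braking_distance = speed_fps**2 / (2 * decel_g * 32.2)     # ~202 ft

print(distance_to_water, distance_lost_reacting + braking_distance)
# ~381 ft available vs ~345 ft needed: tight, but enough to at least shed
# most of the speed before reaching the water.
```

So with those assumed numbers, an attentive driver has just about enough room, which matches the intuition that 4 seconds “ought to be enough time to react.”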

My favorite part is when the guy gets off the bus, walks over to the car, and bangs fruitlessly on the windows. I feel ya, brother. :sweat_smile:

I wonder what the cop was about to say to the driver? “Stop blowing your horn, it won’t help” or “let me give them a ticket”. I mean, blocking the bus lane should be a ticketable offense, right?

What purpose are we working towards in developing self-driving cars, if not to take the mental load of driving off of drivers?

I mean capitalism wants them so they can avoid paying human drivers, but what would motivate passenger car owners to buy a self-driving car if you are still at fault when there are accidents?

Fewer accidents. And easier driving. It’s effectively advanced cruise control.

I think this is the key part. We’re not there yet. Without some level of testing and experimentation, we’ll never get there. For better or worse, some car companies (or maybe just one?) have decided that letting their limited self-driving systems run on public roads with untrained drivers is a valid way to test. I think forbidding public use until the system is perfect means the systems will never be perfect. However, I think there is a place for reasonable people to pick the criteria that must be met for systems to be tested on public roads.

In certain situations, such as on interstate freeways, the current level of self-driving does greatly reduce the mental load on the driver. Many companies have systems that can do this. These still aren’t reliable enough to allow naps, game playing, video watching, or reading, so the driver still has to pay attention, but automating the car-control aspects of driving does remove a mental load from the driver.

With the FSD version of spicy cruise control, I can drive 9 or 10 hours without feeling worn out. After about 6 hours with basic cruise control I need a nap.

Correct.

But I feel the size of the current wave of driverless vehicles does nothing but fill some checkboxes in a middle manager’s Excel spreadsheet.
It doesn’t benefit development, and it certainly doesn’t benefit the public.

What data are you getting from 20 cars that you cannot get from 2?
The number of reported errors (and their repetitive nature) suggests there is already far more test data than the developers can handle.

“May stop quickly.”

Right. Or at least, maybe.

The argument being made, that self-driving cars are inevitably going to save a lot of lives and therefore are worth a good deal of risk, could have been made at any point after the concept of a self-driving car was hypothesized. Autonomous prototypes that don’t rely on embedded road hardware have existed since the 1970s. Proponents could have argued at the time (and maybe did, I wasn’t much alive) that it was worth the risk to put these on public roads to further develop the technology.

Those researchers probably never imagined the hardware required to get to where we are today with autonomous vehicles, and in 50 years we may feel the same way about 2023 technology. I’m not so much worried about autonomous vehicles wiping out large numbers of civilians, as I think society will pump the brakes (heh) on real-world testing after 1 or 2 tragic accidents. What I’m worried about, and how I felt when I started this thread, was that we might not be anywhere close to cracking this thing. We don’t even know what a self-driving car will look like, as we’ve already gone from cars with radar/lidar/cameras being the standard to cars being able to do basically just as well with cameras alone. The first autonomous car may just be a regular car with a generalized AI in a humanoid body in the driver’s seat operating the controls, and that may be 50 years away.

And if that happens, will we look back on our efforts right now and roll our collective eyes at the risks we took given how far away we really were?

What risks? As to the risk in lives, the evidence is that self-driving cars have a lower accident rate than human-driven cars. As to the risk in money invested, it is normal for many technologies to attract investment while only a few succeed.

40,000 people die every year in car crashes in the US alone. That’s more than 100 people every single day. The number of people killed by years of autonomous vehicle testing is what, a couple dozen?

If anything the “risks we took” should apply to not pushing harder to get humans out of the car driving business.

10x the rate of weird corner cases that pop up.

20 isn’t nearly enough, though. Even 20,000 isn’t enough.
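A toy illustration of both points, with entirely made-up numbers: corner-case counts scale with fleet miles, so 20 cars do see ten times as many weird events as 2, but still almost none in absolute terms.

```python
# Illustrative only: how often a "1 in N miles" corner case shows up,
# as a function of fleet size.  Every number here is an assumption.
miles_per_car_per_year = 15_000
corner_case_per_miles = 1_000_000   # assume one weird event per million miles

for fleet in (2, 20, 20_000):
    events = fleet * miles_per_car_per_year / corner_case_per_miles
    print(f"{fleet:>6} cars -> ~{events:.2f} corner cases per year")
# 2 cars see ~0.03 per year, 20 see ~0.3; it takes thousands of cars before
# rare failure modes show up often enough to learn anything from them.
```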

Or wonder why we didn’t start sooner. It’s not like there’s some fixed date at which self-driving becomes possible. If we wait now, that future date just gets pushed further out.