Self-driving cars are still decades away

In the winter, my self-steering tends to fail in ice and slush, and the adaptive speed control tends to want to go into non-adaptive mode. The result is that, on bad days, my nearly self-driving car won’t nearly self-drive. The good thing is, I’ve got a steering wheel and a set of pedals to do it myself when necessary.

If the front windscreen is fogged, the automatic wipers won’t stop, and if the rear camera is dirty I tend to run over children when parking. (Actually, I shouldn’t joke about that.) I still physically look and use my mirrors, though.

There’s still a lot of simple, practical stuff to overcome for full autonomy.

I can’t because I think you’re right. Good driving is an extremely complex task. It involves a lot more than just staying in your lane and going the speed limit.

What frightens me is the ramifications of any kind of failure. Things happen so quickly when something goes sideways. Even if there is a human driver in the car as a backup, he’ll probably be half asleep or reading a book. Can he go from that state to a state of instant, correct response in the blink of an eye? I doubt it.

I didn’t know she had a phone in her hand. It’s been a few months since the video came out. It’s almost impossible for any passenger to stay actively engaged in a car. You can’t help but zone out if you aren’t driving.

I don’t know of a way to prevent that.

It’s been a couple of months, so I’m bumping this up due to an article: the Insurance Institute for Highway Safety took a look at the lane guidance systems of five vehicles to see how they handled lane continuity, especially on curves. Not so great, it seems.

This latter problem is likely what led that Tesla into the highway divider.

They also tested collision avoidance systems; I won’t post the long quotes, but the second page of the article is definitely worth a read.

I am shocked that these cars are programmed to be guided by the car ahead rather than the lane markings. I am not an engineer, but it sounds like something a bad human driver would do, not something an engineered software algorithm should do.

Where I live (Montreal), following the car ahead of you is probably the best way to stay in the lane given the lack of lane paint — or incorrect lane paint due to construction.
But you’d want to be looking several cars ahead to ensure you don’t ram a car due to sudden braking.

I assume it gives some sort of preference first to its ongoing estimate of the path ahead, second to lane markings, and third to the behavior of the vehicle in front of it. I do know the Tesla reads lane markers, because after one update earlier this year, a friend with a T3 found himself bouncing a bit back and forth between the left and right markers. A week later, another update corrected that.
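
If that guess about the preference order is right, the selection could be as simple as a confidence-based fallback. Here’s a minimal sketch; all the names and thresholds are hypothetical, not from any real driver-assistance stack:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical sketch only: illustrates a "path estimate first, lane paint
# second, lead car last" fallback order. Names/thresholds are invented.

@dataclass
class Cue:
    center_offset: float   # metres left/right of the desired lane centre
    confidence: float      # 0.0 .. 1.0

def choose_lane_center(path_estimate: Optional[Cue],
                       lane_markings: Optional[Cue],
                       lead_vehicle: Optional[Cue],
                       min_confidence: float = 0.7) -> Tuple[Optional[float], str]:
    """Pick a lateral target from the most trusted cue that is available."""
    # 1. Prefer the car's own ongoing estimate of the road ahead.
    if path_estimate and path_estimate.confidence >= min_confidence:
        return path_estimate.center_offset, "path_estimate"
    # 2. Fall back to whatever lane paint the cameras can see right now.
    if lane_markings and lane_markings.confidence >= min_confidence:
        return lane_markings.center_offset, "lane_markings"
    # 3. Last resort: play follow-the-leader behind the car ahead.
    if lead_vehicle:
        return lead_vehicle.center_offset, "lead_vehicle"
    # 4. Nothing trustworthy: alert the driver.
    return None, "disengage"

# Example: faded paint (low confidence), but a solid lock on the car ahead.
print(choose_lane_center(None, Cue(0.1, 0.3), Cue(-0.2, 0.9)))
# (-0.2, 'lead_vehicle')
```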

lol. Upbeat version: it kept him from crashing!

So perhaps in a few months there will be completely self-driving cars with no safety drivers.

“So, Billy, if all your friends drove off the bridge, would you too?”

Not for a long while. This announcement didn’t change anything.

Yeah, it’s going to take a long, long time.
The Waymo cars can drive on certain specific routes, and stop alongside a few specific curbs, like bus stops.
But they can’t handle going into a parking lot.

Not surprising. I hate driving in parking lots and I’m mostly human!

People just walk around without looking at all. Drivers cut across the parking lines at full speed. They back out of parking spots blindly.

The algorithm likely prioritizes the car ahead because:
it’s easy to track via radar (Hey! Large signature in front of me moving 60 mph; while it could be a flying soda can, I’m pretty sure it’s a car!)
it has to track it anyway, because it’s the most likely thing on the road it’s going to run into.
it allows the algorithm to cheat a little in the absence of other reliable information (spotty/absent lane lines, road cresting, poor weather, etc.) and just play follow-the-leader. This isn’t necessarily only something a bad human driver would do: if you’re behind a semi (blocking your view of the road ahead) and cresting a hill, for all you know the road could have completely collapsed 30 feet beyond; you’re just trusting the semi in front of you to brake if it has.

This prioritization struggles when approaching a lane split with a gore point and the car ahead suddenly changes lanes. The AI has to rapidly figure out:
whether it’s just a widening road (not actually a lane split), and whether it’s OK to track the middle
which lane is the correct one for the route selected
whether the car ahead is moving into the correct lane
whether that radar signature for a stationary object ahead is just a discarded soda can (full speed ahead) or something that requires IMMEDIATE EMERGENCY BRAKING NOW (a toy sketch of that last call is below).
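
For what it’s worth, that last judgment call is the one radar alone is notoriously bad at: a stationary return could be an overhead sign, a soda can, or a stopped vehicle. Here’s a toy version of the decision; every threshold and name is invented for illustration, and real systems fuse radar with camera classification and are far more involved:

```python
# Toy sketch of the "ignore it or brake hard?" call on a stationary radar return.
# All thresholds and names are made up; this is not any real AEB logic.

def stationary_object_response(radar_cross_section_dbsm: float,
                               camera_thinks_vehicle: bool,
                               distance_m: float,
                               speed_mps: float) -> str:
    time_to_collision = distance_m / speed_mps if speed_mps > 0 else float("inf")

    # Small radar signature and no visual confirmation: probably debris.
    if radar_cross_section_dbsm < 0.0 and not camera_thinks_vehicle:
        return "ignore"           # full speed ahead

    # Big signature, or the camera agrees it's a vehicle-sized obstacle.
    if time_to_collision < 2.0:
        return "emergency_brake"  # IMMEDIATE EMERGENCY BRAKING NOW
    return "slow_and_reassess"

print(stationary_object_response(-5.0, False, 80.0, 27.0))  # 'ignore'
print(stationary_object_response(10.0, True, 40.0, 27.0))   # 'emergency_brake'
```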

This strikes me as one of those “the map is not the territory” things.

It’s all well and good to know where the lines on the road are, but ultimately you have to drive the same way that the cars around you drive.

This is really obvious in countries that have a different driving culture, but it also handles all sorts of cases where the lane markers aren’t the full information.

One such example would be: imagine you’re on a freeway, and you see the car 8 cars ahead of you make a quick swerve to the right, then a correction. Then you see the car right behind it do a quick swerve and correction. Then the next…

The obvious conclusion is that there’s something on the road and you’re going to take the same path as all those other cars when you get there.

Add up all the cases like that, or where local custom does something different than the road markings, and it wouldn’t surprise me if the neural network has figured out that “follow the car in front of me” is more important than “follow the lines on the road”.

Another reason this might be a solid plan is that, assuming the car is following far enough back to stop before hitting the car in front of it, following the car in front has a very high chance of being safe. You know there aren’t any obstructions in that path, because another car just drove safely through it.

And if the drunken driver in front of you goes off the road into the ditch–you follow him there.

Well, obviously it’s not the only thing to pay attention to. But good drivers, both human and computer, pay attention to what the driver ahead of them is doing.

Presumably, when the driver ahead of you does something unexpected and you decide to follow him instead of the lane markers, you should also slow down, so that if he runs into a ditch you have room to stop before following him.
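
To put a rough number on “room to stop”: with an assumed 1.5-second reaction time and roughly 7 m/s² of braking (both made-up round figures, not measurements), the gap you need grows quickly with speed. A quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope stopping distance: reaction distance plus braking
# distance v^2 / (2a). The 1.5 s and 7 m/s^2 are assumed round numbers.

def stopping_distance_m(speed_kmh: float,
                        reaction_s: float = 1.5,
                        decel_mps2: float = 7.0) -> float:
    v = speed_kmh / 3.6                      # convert km/h to m/s
    return v * reaction_s + v**2 / (2 * decel_mps2)

for kmh in (50, 100, 130):
    print(kmh, round(stopping_distance_m(kmh), 1))
# 50 -> ~34.6 m, 100 -> ~96.8 m, 130 -> ~147.3 m
```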

I remembered why I originally came into this thread: Waymo announced a paid self-driving service starting soon (limited customers and areas to start). So, it appears that practical commercial use of self-driving cars is going to come a lot sooner than decades.

The Waymos in Phoenix basically seem like a shuttle bus service, with tiny shuttle buses. So the question becomes: why not scale the shuttle up to an actual bus? I think part of the answer is this: at least in LA and New York, buses pull out aggressively into traffic, because they have to get going, and they know that if they don’t, no one will let them in, since everyone hates being stuck behind a bus. An AI, you would think/hope, would be programmed to be extremely crash-avoidant. Human drivers, knowing the AI won’t merge aggressively, would blithely stream by, pinning the bus to the curb and slowing the route way down.

That might be part of the reason.

Another reason might be that buses are the size that they are because driver labor is expensive. It’s more economical to move lots of people around with a single driver in a large vehicle.

When that economic constraint is lifted, is the most efficient bus size smaller?

To me, a lot of time and money is being wasted on trying to make vehicles self-driving. Any self-driving model that requires constant driver attention, or people standing by at remote locations to take over, is inefficient and a waste of time, IMO.

The only thing that’s going to make self-driving vehicles work is inventing an actual thinking AI and giving it sensors as good as or better than human senses, or making the roads smart and confining self-driving vehicles to those roads. The roads would need to feed information to the vehicles: lane constraints, speed limits, traffic-control indications, etc.
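
If roads did broadcast that sort of information, the payload wouldn’t need to be exotic. Here’s a purely illustrative, made-up message format; it isn’t based on any real V2I/V2X standard, just the kinds of fields a smart road would plausibly need to send:

```python
from dataclasses import dataclass
from typing import Optional

# Made-up roadside-to-vehicle broadcast for illustration only; not any
# real V2I/V2X message format.

@dataclass
class RoadSegmentBroadcast:
    segment_id: str                 # which stretch of road this applies to
    lane_count: int                 # current number of usable lanes
    closed_lanes: tuple             # e.g. (2,) during construction
    speed_limit_kmh: int            # posted or temporary limit
    signal_state: Optional[str]     # "red" / "yellow" / "green" / None
    hazard: Optional[str]           # e.g. "ice", "stalled vehicle", None

msg = RoadSegmentBroadcast(
    segment_id="A-40-km-23.5",
    lane_count=3,
    closed_lanes=(2,),
    speed_limit_kmh=70,
    signal_state=None,
    hazard="construction",
)
print(msg)
```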

The actual thinking AI is going to come with other problems, like how much free will it can develop and what the moral ramifications are.

The smart roads are going to be expensive and require maintenance.