Self-driving cars are still decades away

Why exactly did the Cruise stop?

Yet the problems that pop up aren’t getting solved in a timely manner.
This doesn’t look like collecting data.
This looks like throwing shit at the wall and hoping something sticks.

Developing a new technology isn’t just a matter of doing more testing with the same prototypes.

My theory is that when it went into that turn, it clocked the guy approaching the crosswalk on the left (though he never crosses) and paused, which would certainly be valid behavior in an actual left lane.

Obviously the number of miles driven by self-driving cars is much smaller than the number driven by people. What is the accident rate per mile, or per hour on the road?
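To make the comparison concrete, here’s a back-of-the-envelope sketch in Python. Every number in it is a made-up placeholder, not a real AV or human-driving statistic:

```python
# Illustrative only: the mileage and incident counts below are invented
# placeholders, not real Waymo/Cruise or NHTSA figures.
def incidents_per_million_miles(incidents: int, miles: float) -> float:
    """Normalize raw incident counts by exposure, so a small AV fleet
    can be compared against the much larger human-driven baseline."""
    return incidents / (miles / 1_000_000)

# Hypothetical AV fleet: 5 incidents over 2 million driven miles.
av_rate = incidents_per_million_miles(5, 2_000_000)

# Hypothetical human baseline: 600 incidents over 100 million miles.
human_rate = incidents_per_million_miles(600, 100_000_000)

print(f"AV:    {av_rate:.2f} incidents per million miles")
print(f"Human: {human_rate:.2f} incidents per million miles")
```

The point is just that raw incident counts mean nothing without the exposure denominator.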

And I will say it again until I’m blue in the face: the appropriate comparison isn’t to current traffic accident/fatality statistics; it’s to what those statistics would look like if everybody had ADAS, in particular front-end collision-avoidance systems including autobraking. That’s a technology that’s already here and getting better, and it should be the minimum standard.

I don’t have great things to say about how Waymo and Cruise are running their projects. Nevertheless, in a general sense, there is a very “long tail” of corner cases that the cars will have to solve. And they’ll need a fleet of cars to collect these corner cases.

Some errors are hard to fix. What should the car do around an emergency vehicle? “Get out of the way” is the answer, but that’s so open-ended that it can’t just be programmed into the car. Either the different situations are handled manually (in which case a bunch of examples are needed), or they train an AI black box to handle it (in which case even more examples are needed).
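As a toy illustration of the two options (all of the classes, rules, and the `policy` model here are hypothetical):

```python
# Hypothetical sketch of the two approaches described above: hand-written
# rules vs. a learned black box. None of this reflects any real AV stack.
from dataclasses import dataclass

@dataclass
class Scene:
    siren_detected: bool
    ev_behind_us: bool
    shoulder_clear: bool

def rule_based_response(scene: Scene) -> str:
    # Manual handling: every new variation needs another explicit rule,
    # and each rule needs real-world examples to validate it.
    if scene.siren_detected and scene.ev_behind_us:
        return "pull_right_and_stop" if scene.shoulder_clear else "slow_and_hold_lane"
    return "proceed"

def learned_response(scene: Scene, policy) -> str:
    # Black-box alternative: a model trained on many labeled examples.
    # Needs far more data, but can generalize past the enumerated rules.
    return policy.predict(scene)
```

Either way, the fleet exists to gather the examples.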

Maybe these companies could be fixing things at a higher rate; I couldn’t say. But the fact that some mistakes come up repeatedly isn’t a sign that they’re stuck. It may just mean that the particular problem is a tricky one with lots of variations.

I agree with you on that.

The risk of not focusing on other problems that could have a bigger impact, or the risk of not focusing on other ways to reduce driving miles.

Well, seat belts saved lives, and all it took was for someone to realize they might be a good idea and then work out the engineering. You can make a seat belt with ancient technology; we didn’t need hardened stainless steel and nylon webbing. This is something we can look back on and ask: why didn’t we start the research sooner?

Traction control saves lives every day but it wouldn’t have mattered if someone in 1925 realized that having a computer with a bunch of sensors able to apply brake pressure to individual wheels could keep cars on the road. The underlying technology was 70 years away.
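For concreteness, the control logic itself is trivial; here’s a toy version (the threshold and interface are made-up placeholders). What was missing in 1925 was the hardware to run it:

```python
# Toy sketch of the idea described above: compare each wheel's speed to
# the vehicle's reference speed and brake any wheel that is spinning up.
SLIP_THRESHOLD = 0.15  # fraction above reference speed treated as wheel spin

def traction_control_step(wheel_speeds: list[float], vehicle_speed: float) -> list[bool]:
    """Return, per wheel, whether to apply brake pressure this cycle."""
    if vehicle_speed <= 0:
        return [False] * len(wheel_speeds)
    return [(w - vehicle_speed) / vehicle_speed > SLIP_THRESHOLD for w in wheel_speeds]

# Rear-left wheel (index 2) spinning at 25 m/s while the car moves at 20 m/s.
print(traction_control_step([20.1, 19.9, 25.0, 20.0], 20.0))
```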

If the underlying technology for autonomous vehicles is still 50 years away, nobody will look back on us now and say “well they should have started sooner.”

The cost of trying something now, and failing, is much lower than the cost of waiting 50 years just in case it actually does take that long for the underlying tech to develop.

Pretending that the “underlying tech” is wholly disconnected from the problem is silly as well. Much of the underlying tech will only develop when the demand is there. Yes, there’s some dependence on things like semiconductor process technology, but if there’s a need for an “AI self driving chip”, then it will only be made if people are trying to build self-driving cars. A significant amount of AI research comes from self-driving; that just wouldn’t happen otherwise.

I live here, and I don’t appreciate being the rat in an experimental cage, no matter how much it may benefit future humanity.

You’re already a rat in an experiment, where we thought it would be a great idea to let meatbags that evolved to handle maximum speeds of ~15 mph operate machines with 20x the mass at 70 mph. It turned out to be a huge disaster that kills tens of thousands a year. Hopefully, that experiment is about to come to a close.

Yes, but it’s a familiar experiment.

Said another way, and harkening back to a political pundit of a few years ago:

It’s a friggin disaster, but it’s a friggin disaster within normal parameters.

What happens to the range & accuracy of the sensors on a self-driving car, when they are dirty, or covered with mud?

I suspect the car becomes “nearsighted”.

Ideally, they’d be self-cleaning somehow. The front sensors on Teslas get cleaned by the ordinary wipers.

But I just saw this video by a Tesla FSD enthusiast. Now, I think the guy was pretty irresponsible in what he did (particularly later when he’s eating in the car). But it does show an impressive amount of resiliency: somehow he had disabled the wipers for the first several minutes of driving, and the windshield is getting pretty seriously covered by rain. It’s pretty hard to see out of. And yet the car actually does fine.

So I think in terms of basic sensing, we can expect good resiliency. In part because there are redundant sensors that can partially cover for each other. But also because the vision model may actually be superior to humans at this point.
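A rough sketch of what “partially cover for each other” could look like, assuming simple confidence-weighted fusion; the numbers and interface are made up:

```python
# Hypothetical redundancy sketch: fuse per-sensor estimates weighted by
# each sensor's self-reported confidence, so a mud-covered (low-confidence)
# sensor counts for less. Not any real AV stack's fusion method.
def fuse_estimates(readings: list[tuple[float, float]]) -> float | None:
    """readings: (estimated_distance_m, confidence in [0, 1]) per sensor."""
    total_weight = sum(conf for _, conf in readings)
    if total_weight == 0:
        return None  # every sensor blinded: no usable estimate
    return sum(dist * conf for dist, conf in readings) / total_weight

# Camera degraded by rain, radar unaffected, another camera partly occluded.
print(fuse_estimates([(41.0, 0.2), (39.5, 0.9), (40.2, 0.6)]))
```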

Where Tesla FSD still fails is in the high-level decision making, like deciding which lane it needs to be in or what speed it should be at. Vision-wise, it’s close to perfect.

Oh hey, we got another. At this point they’re not even exploring edge cases, they’re just trying to learn how to make left-hand turns. Absolutely take them off the street at this point.

Officers’ initial investigation found that the Cruise car attempted to make a left turn from Gough Street’s middle lane onto Geary Boulevard and collided with the vehicle driving in Gough Street’s left lane, Winters said.

https://www.sfgate.com/tech/article/cruise-backhoe-crash-san-francisco-intersection-18331066.php

Interesting. That is a seriously complex intersection:

A one-way street running southbound with 3 lanes of traffic plus two parking lanes, crossing a divided east/west boulevard with 4 lanes each way, which further diverges just east of the intersection.

There are no discernible lane markings indicating which lanes may, or may not, turn left. Bing’s imagery is equally blank that way. Street View also shows no restrictive or instructive signage.

So the Cruise car tried to turn left from the middle of three through lanes, then slowed or stopped partway around the turn for reasons unclear, and then was struck from behind by a backhoe (!) driving straight in the left lane. Did the car get confused? Did it see a pedestrian or bicyclist or ??? Did it even know the backhoe was a moving vehicle and part of traffic versus being a stationary part of a construction zone?

Hmm. Lotta corner cases there. But not the sort of accident we ought to be seeing this late in the (still early overall) game.

Really helpful analysis in your post.

I won’t say much good about Waymo, but spending years operating in a mostly-grid and slowly expanding from that certainly made more sense than this.

more lol (at least until someone gets injured or killed)

Elon takes a Tesla for a drive using FSD:

Interesting article – there are definitely gaps in the data, but arguably Waymo is already safer than human drivers (at least in California). (I do not plan to defend this article – I just found it interesting.)

Brian