Self driving cars are still decades away

“When does the software exit beta and become production? I think it’s pretty obvious that when robots are statistically safer than human drivers, we are in production.”

Not when injury and death occur in ordinary circumstances that would in no way strike a human as “an unpredictable corner case” - it just happened to be one the developers hadn’t constructed in their lot-based simulations, or the software had a hiccup or a bug.

Remember, we are not talking about replacing one production system (human drivers) with another (automatic drivers). We are talking about INTEGRATING automatic drivers into a world predominantly occupied by human everything. Drivers, cars, pedestrians, bicyclists, motorcyclists, etc.

It’s not like the end goal is “all driving will be fully automated and we’ll have no more accidents”, even in theory.

These systems are, by nature, designed around CARS. They look for cars, and for the traffic lights and signs that regulate car traffic, because that’s what they themselves are. It’s much harder to encompass all the other things people do with roads all the time that have nothing to do with cars.

It’d be much more sensible to restrict fully automatic driving to cars-only roads. Highways. Not urban or rural driving.

I am also fully in favor of automatic safety features that detect hazards and automatically slow, brake, or swerve the car (or give haptic feedback to that effect) - the LKAS- and ACC-type technology in two cars that I have today.

Most examples of “this tech is better than a human driver, on average, at noticing oncoming traffic at an intersection, or a car in the next lane while changing lanes” are not arguments for “fully letting the tech drive the car”. They’re arguments for “deploy this as a driver aid, and even an override, in this specific scenario”.

I think you’re insulting the engineers here. Why would they omit pedestrians from their considerations?

Change brings new failures, to be sure. When electric vehicles first started cruising the streets, pedestrians had to listen for new sounds, and manufacturers learned to add fake noises to the vehicles to help fix that.

Yes, it’s hard. But as you point out, we’re not expecting “no more accidents,” we’re eager to reduce them.

Cat fight!

Or more specifically, two Waymos arrive at a stop sign, and one cuts the other off, turning right from what looks to be a middle lane. Definitely an edge case, but it won’t be in the magical world where all cars are self driving.

Hehehe, as a person who is firmly in the camp of the thread title, at least the Waymo car that got cut off by the other Waymo car didn’t get angry about the other Waymo car driving like a complete jackass. So, there are silver linings everywhere, I suppose.

LOL.

Sure, auto-cars are better drivers than humans.

That is why all the companies involved are obfuscating the data, and one of them (Elmo) is plainly lying about their (deliberately ungendered pronoun, chosen to annoy them) capabilities - all while removing sensors (lidar/radar/ultrasonic) to save a buck. Clearly their “FSD” is so far from working that removing a bunch of sensors doesn’t matter either way. The fanboys will lap it up regardless.

The slow walk-back from all the regulators involved should make it obvious that the tech is nowhere near ready.

Apple Dials Back Car’s Self-Driving Features and Delays Launch to 2028

After board meetings, car downgraded to Level 2+ autonomy
Company pushes back launch from 2026 to 2028 at the earliest

https://www.bloomberg.com/news/articles/2024-01-23/apple-car-ev-set-to-debut-in-2028-with-limited-autonomous-driving

I don’t see why Apple would bother with a Level 2+ car. And by 2028 you would think these features would be widely available from multiple suppliers.

I find it amazing that Apple is still trying to build a car of its own.

Well one of the car companies would make most of the car. Here are some Hacker News comments:

Due to this post, I went down the rabbit hole of Rivian repair costs for “minor” fender benders. The example below is an extreme case, but there are some really big estimates out there on the Rivian forums for what look to be modest, bumper or panel dents.

FSDv12 is starting to be deployed to non-employees:

Looking pretty good. For the record, the YouTuber here is something of a fanboy and tends to post only the videos where FSD does well. But the videos aren’t fabricated, either. Just a bit of selection bias.

Also, he’s driving in San Francisco, which probably gets the most training attention. On the other hand, it is a city with all the various quirky things to deal with, like traffic and road construction, and he’s driving at night. Not Level 5 yet, but then, neither is Waymo.

The steering and speed control are massively better than v11. That’s honestly the main reason I only use FSD on highways today; steering around corners feels clumsy, and it’s not very smooth when, say, there’s someone ahead slowing down to make a turn.

Here, there are a number of very smooth maneuvers, where it slows just enough to let a car ahead turn right or into a parking spot, without slamming on the brakes and coming to a halt. Good human drivers naturally figure out how to slow down moderately in advance so that the car ahead has moved out of the way by the time the car is there.

Another impressive maneuver is where a car is stopped in its lane ahead (to let out a passenger or something), and FSD moves into the other lane to pass (since there’s no oncoming traffic), but then the stopped car starts moving, and FSD smoothly cancels the maneuver and moves back into its lane. Just as an experienced human would have done.

There are undoubtedly tons more edge cases, but none of this stuff would be remotely doable without the end-to-end training. I’d guess we’re still at least a year off from a non-beta release, and a few years for actual hands-free driving, and maybe several years for a true robotaxi, but clearly they have the right strategy now.

Here’s the part of the calculation that bothers me regarding self-driving cars. Let’s say they get to a point where they can prove that self-driving cars cause accidents at 75% of the rate of human-driven cars. When this happens, tech and government will start pushing to make the switch. BUT in the current situation, I’d estimate that 50% of accidents/deaths impact the drivers who drove badly and caused the accident. The other 50% are innocent victims.

Since I am a good driver who doesn’t cause accidents, I am in the 50% innocent pool of potential victims. When self-driving becomes an enforced reality, I’ve been moved into the 75% pool along with all the bad drivers. As a good driver, my chance of being in an accident has increased, quite a bit.
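The arithmetic behind this worry can be sketched in a few lines. All the numbers below are the commenter’s hypothetical assumptions (a 75% relative accident rate, a 50/50 split between at-fault drivers and innocent victims), not real statistics:

```python
# Sketch of the commenter's risk model. Every rate here is a
# made-up assumption for illustration, not a real statistic.

human_rate = 1.0                 # average accident-involvement rate, normalized
sdc_rate = 0.75 * human_rate     # self-driving fleet: 75% of the human rate

# Assume half of today's accident involvements are at-fault drivers
# and half are innocent victims; a "good driver" who never causes
# accidents only faces the innocent-victim half.
good_driver_now = 0.5 * human_rate

# With mandated self-driving, everyone shares the fleet-average rate.
good_driver_sdc = sdc_rate

print(good_driver_now)   # 0.5
print(good_driver_sdc)   # 0.75
# Under these assumptions the good driver's risk rises by 50%.
print(good_driver_sdc / good_driver_now - 1)   # 0.5
```

Under this model, the fleet-wide rate would have to fall below 50% of the human rate before the self-described good driver comes out ahead, which is the threshold a later reply picks up on.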

I don’t see this argument being supported by the evidence.

Humans have terrible instincts for probability. In this model, how did they prove the 75% rate? Sounds like they’re already on the road. Between that magical 75% moment and “when self-driving becomes an enforced reality,” did that 75% figure stay the same? Those engineers really suck!

I think what you’re really saying is you think you’re better at avoiding accidents than a robot and don’t want to be forced out of the driver’s seat because your risk would go up. Understandable, you’re not alone there. If you’ve ever been in line at the DMV, looking around, you’re likely to feel that way. But wouldn’t you like to get some of those other drivers off the road? Wouldn’t that reduce your risks, too?

Apparently it wouldn’t reduce it enough if the software is still crashing at 75% of the normal rate. Once it gets below the 50% threshold, then it would be worth considering.

What they could do is make licensing a lot more challenging, so you’d have to pass a test like they do in Germany to get a license. If you can’t pass the higher level test, your car does your driving for you.

San Francisco is suing the state of California for allowing Cruise and Waymo to expand their services.

The man is correct.

“I don’t think Level 5 is ever going to happen,” Former General Motors R&D Chief Larry Burns was quoted as stating. “That should never have been the goal. Look at the aerospace industry. There are times when an aircraft shouldn’t fly and times when a car shouldn’t drive. Level 4 is what the goal is.”

It’s worth noting that GM Authority recently reported that The General will merge Ultra Cruise technology with Super Cruise, the Detroit-based automaker’s semi-autonomous driver assistance system. This comes as General Motors has faced increased pushback regarding its development of the technology through Cruise AV units.

Level 5 Autonomous Vehicles Won’t Happen, Says Former GM R&D Chief (gmauthority.com)

A year ago I would have agreed with him. With the rise of LLMs, I’m not too sure.

We are very close to a new world where machines with embedded LLMs get much, much smarter. Humanoid robots acting human are just around the corner.

Maybe driverless cars are an interim technology, and at some point we’ll just have our robot companion get in a normal car and drive us.

LLMs aren’t going to be driving cars, unless you imagine one emitting a stream of written instructions and reacting in real time through the car’s control mechanisms.

The second L stands for Language.

I’m very aware of what the letters mean. You apparently aren’t aware that the top LLMs are now multi-modal, processing sound, images, video…

Also, embedded LLMs are already a thing. Google’s LLM is available in a ‘nano’ version designed to run on devices. And Samsung’s new phone has an LLM embedded on a chip.

There are also efforts underway to add real-time processing to LLMs. Here’s some fun reading:

LLMs driving cars is a long way off. We need order-of-magnitude improvements in inference speed, and we’ll need all kinds of other ancillary tools to be developed. But it’s no longer unthinkable that in the medium-distance future (years to decades, not centuries) our cars will talk to us and drive for us.