Self-driving cars are still decades away

It’s not just AIs with that problem.

I thought of exactly that cartoon as I was writing my post. Thanks for hunting up the cite.

I also thought of the apocryphal collection of “Funny stories gleaned from motorists’ reports to their insurance companies after accidents” which was a regular feature of Reader’s Digest and other “literary” rags of a certain era. Which collection almost always included something like

I had to swerve three times before I hit him.

and

The tree just jumped out and hit my car.

Let me just drop this here:

FSD v12.4.1 has just been released to employees with software v2024.15.5, so it also includes the Spring UI update and all branches will be aligned.

According to Elmo it’s a significant improvement, with 10x fewer interventions, and they have eliminated the need to keep your hand on the steering wheel. They will still monitor that you are facing forward. The rollout to the public could be as soon as this weekend. This of course assumes no big problems are found during employee testing; v12.4 had issues and was pulled.

My wife and I met some friends at a restaurant (Doughbird) last night. As we were talking, I noticed a Waymo in the parking lot. It must have been very confused, because I watched it circle the same parking lanes at least six times. I was thinking, “Oh, it’s in an infinite loop!” It eventually bailed out and turned the other direction, but while it was “stuck”, it was amusing.

v12.4.1 made it from employees to the early beta testers, and it didn’t pass muster. They gave up and are pushing the new AI out to FSD users en masse as I type this. No telling when the next FSD release will happen.

Pathetic: (Imgur link)

How can Waymo expect to scale beyond a few areas if their cars depend on hi-def maps telling them whether a damn telephone pole is a hazard or not? What if someone installs a new telephone pole and Waymo hasn’t updated their maps yet?

Tesla FSD is not perfect, but it doesn’t depend on hi-def maps. It runs purely on camera input and low-def maps for navigation. It doesn’t even need the low-def maps in many cases.
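A toy sketch of the failure mode being described, under the (strong) assumption that the map annotation is the only thing consulted; the function names and map structure here are my inventions, not anything from either company:

```python
# Toy sketch of the two approaches described above. This is NOT real Waymo
# or Tesla code; both functions are hypothetical stand-ins.

HD_MAP = {("5th", "Main"): "telephone_pole"}  # hazards surveyed at map-build time

def map_prior_is_hazard(location, live_detection):
    # Map-first style: a pole installed after the last survey isn't in the
    # map, so it is invisible to this check.
    return location in HD_MAP

def perception_is_hazard(location, live_detection):
    # Perception-first style: decide from live sensor input alone.
    return live_detection == "obstacle"

# A newly installed pole: not in the map, but the cameras see it.
print(map_prior_is_hazard(("6th", "Main"), "obstacle"))   # False -> missed hazard
print(perception_is_hazard(("6th", "Main"), "obstacle"))  # True
```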

The more I think about this the more laughable it gets.

Anyone who’s played a car racing video game knows there are two types of objects: things that you can crash through easily and barely scratch your paint; and things that are totally indestructible, where you can crash into them at 200 mph and they won’t budge an inch.

The only difference between these two is that the developer “assigned a low damage score” to the first one. Could be a telephone pole or a small shrubbery. Doesn’t matter. It’s just a number in the game object database.
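A minimal sketch of that kind of game-object lookup (all names and values invented for illustration):

```python
# Hypothetical game object database: "destructibility" is just a number
# the developer typed in, with no physics behind it.
DAMAGE_SCORE = {
    "shrubbery": 1,        # crash through, barely scratch the paint
    "telephone_pole": 1,   # same bucket, if the developer says so
    "concrete_wall": 100,  # indestructible at 200 mph
}

def can_crash_through(obj_type, threshold=50):
    """True if the car plows through this object type unharmed."""
    return DAMAGE_SCORE.get(obj_type, 100) < threshold
```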

It’s absolutely embarrassing that Waymo uses the same defective method to determine whether it’s ok to collide with something or not.

If they are using an old-fashioned if/then decision tree, then they need something akin to damage scores to make the least-bad decision once a collision with something becomes inevitable.

Assigning very high damage scores to things assessed to be pedestrians or bicycles, with lesser scores for big trucks, moving cars, parked cars, roadside signage, and shrubbery makes sense. Where exactly non-frangible fixed infrastructure sits on that hierarchy is debatable. But it fits someplace.
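As a sketch of that least-bad logic, with an invented hierarchy (none of these numbers are Waymo’s):

```python
# Hypothetical least-bad-collision chooser: if every escape path ends in a
# collision, hit the object with the lowest assigned damage score.
# The scores below are illustrative only.
DAMAGE_SCORE = {
    "pedestrian": 1000,
    "bicycle": 900,
    "moving_car": 400,
    "big_truck": 350,
    "parked_car": 200,
    "telephone_pole": 120,  # non-frangible infrastructure fits someplace
    "roadside_sign": 50,
    "shrubbery": 10,
}

def least_bad_target(unavoidable):
    """Given the objects blocking every path, choose the cheapest hit."""
    return min(unavoidable, key=lambda obj: DAMAGE_SCORE.get(obj, 500))

print(least_bad_target(["pedestrian", "telephone_pole"]))  # telephone_pole
```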

In the Waymo example, is there any evidence that a collision with something was inevitable? If so, that seems scarier than the fact that the car chose to hit the pole. The car is supposed to be safe enough that it doesn’t put itself in a position where a crash is inevitable.
Sure, a third party could do something that leaves the Waymo car facing an inevitable collision (say, a kid running out into the road, or another car swerving into Waymo’s lane). But in that case, why would the Waymo company recall its cars? The decision to hit the pole would be praised as the right thing to do, and as a fine example of how AI is safer than a human driver who might have hit the kid, or whatever.

I expect the police officer had his flashing lights on, and they are just too dim to notice, aren’t they?

Not sure what happened there, but Teslas in FSD mode absolutely recognize emergency lights and are programmed to slow down or stop appropriately. I’ve personally experienced it. Of course, the system could fail.

Regardless, the driver of the vehicle was, by his own admission, on his phone and not paying attention. This is not the fault of FSD. It was a driver not using it appropriately.

Words have meaning. “Full self driving”: what do those words mean to you?

Tesla could have said “partial self driving” or “advanced cruise control” but instead chose to deliberately mislead the public.

Not using the exact name confuses the meaning. It’s called “Full Self-Driving (Supervised)”. It’s made abundantly clear that you need to pay attention when you sign up for it. The salespeople emphasize it, in my limited experience. It’s fully explained in the manual and, again, very clearly. It’s not buried in a bunch of small print.

Here is the page from the manual:

https://www.tesla.com/ownersmanual/modely/en_us/GUID-2CB60804-9CEA-4F4B-8B04-09B991368DC5.html

And yet if I go to tesla.com and pretend I want to order a Model Y, one of the available features is called Full Self-Driving Capability; only the fine print mentions the need for the driver to be engaged.

I guess we’ll disagree on what counts as fine print. There’s one short paragraph below the bullet list, and its first sentence is

"The currently enabled features require active driver supervision and do not make the vehicle autonomous. "

When you first activate it you get a dire warning in big letters that should scare anyone out of posting to message boards while it’s operational.

They’ve had a history of this system failing and striking emergency vehicles.

Indeed. That article is from two years ago. There have been significant improvements since then, across several software upgrades.

Yet it doesn’t seem to have fixed this issue.

According to that article, there were sixteen crashes over four years. In fifteen of them the car warned the driver, and in eight of them the car at least slowed significantly. How does that compare to non-FSD cars over the same period, per mile driven?
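For what it’s worth, that comparison is just a rate normalization. A back-of-envelope sketch, where the crash count comes from the article but every mileage figure is a placeholder I made up purely to show the arithmetic:

```python
# Back-of-envelope rate comparison. The FSD crash count is from the article
# cited above; ALL mileage and human-crash figures are PLACEHOLDERS, not
# real fleet statistics.
def crashes_per_million_miles(crashes, miles_driven):
    return crashes / (miles_driven / 1_000_000)

fsd_crashes = 16                # emergency-vehicle crashes, per the article
fsd_miles = 500_000_000         # hypothetical FSD/Autopilot miles over 4 years
human_crashes = 2_000           # hypothetical comparable human-driver crashes
human_miles = 100_000_000_000   # hypothetical human miles over the same period

print(crashes_per_million_miles(fsd_crashes, fsd_miles))      # 0.032
print(crashes_per_million_miles(human_crashes, human_miles))  # 0.02
```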