Self-driving cars are still decades away

Guilty. I thought this was about a different lawsuit that is going on at the moment.

I believe so. The release notes for 12.3.4 say it adds support for “legacy” Model S cars:

I haven’t followed every single point revision either, but I believe the 2017 Model S could get the HW3 upgrade. They only replaced the cameras on some models. Presumably, they only did the replacement in cases where it was actually required.

I guess it’s hard to notice a train…

It wouldn’t surprise me if trains are underrepresented in the training (heh) set. The odds of a crossing being occupied by a train are pretty low, on average, so probably there aren’t as many examples in their data as they need. That’s obviously something they need to improve.
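For what it’s worth, class imbalance like that is a well-known training problem, and one common mitigation is to oversample the rare cases. Here’s a minimal, hypothetical PyTorch sketch (made-up labels and counts, nothing to do with Tesla’s actual pipeline) of what that kind of reweighting looks like:

```python
# Minimal sketch of one common fix for a rare class: oversample it during training.
# The labels and counts here are invented for illustration only.
import torch
from torch.utils.data import TensorDataset, WeightedRandomSampler, DataLoader

# Toy dataset: 10,000 "ordinary road" frames (label 0) and 50 "train at crossing" frames (label 1).
features = torch.randn(10_050, 8)
labels = torch.cat([torch.zeros(10_000, dtype=torch.long),
                    torch.ones(50, dtype=torch.long)])
dataset = TensorDataset(features, labels)

# Weight each sample inversely to its class frequency so rare frames get drawn far more often.
class_counts = torch.bincount(labels).float()      # tensor([10000., 50.])
sample_weights = (1.0 / class_counts)[labels]      # per-sample weight
sampler = WeightedRandomSampler(sample_weights, num_samples=len(dataset), replacement=True)

loader = DataLoader(dataset, batch_size=64, sampler=sampler)

# Each batch now contains roughly equal numbers of both classes,
# instead of a train showing up once every few hundred batches.
xb, yb = next(iter(loader))
print(yb.float().mean())   # ~0.5 rather than ~0.005
```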

Still, what the hell was that driver thinking? There was plenty of time both to notice that the car wasn’t slowing by itself and to stop the car safely. People like that should not be driving at all, let alone using FSD. Apparently it’s just as hard for some humans to notice a train as it is for FSD.

How would FSD handle it if the RR crossing is clear, but the train is really close? Would it be smart enough not to chance it?

Depends on whether they used Dukes of Hazzard episodes in their training data.

FSD is good at judging approaching traffic. If I’m making a right turn, it’ll wait for quickly moving traffic to pass but go if a car has slowed to turn right, and it’ll stop again if the other car changes its mind and doesn’t turn. There’s no reason it couldn’t figure out what to do about trains. In this particular case, though, it obviously should have stopped for the barriers.

And if you noticed, the driver complained that the car had done this before. It had done it before, they still crashed, and then they took it out to the public. A simple tap of the brake and it would’ve stopped.

Fool me once, shame on Tesla, fool me twice, I’m the bad driver.

Never mind the train, the car should have automatically stopped for the guard arm with the flashing red lights. That is the actual error. The fact that there was an actual train is the penalty phase of ignoring the guard.

I thought this was going to be an incident at one of the unguarded level crossings that are common in rural areas.

I thought exactly the same.

I like how the FSD eventually realizes that there’s an obstacle ahead, and decides to bail out at the last second.

That’s not the FSD. That’s the driver finally taking over. The FSD was going to drive right into the train.

Nearly a decade after the OP, and I think we might need another decade before it can be mass-produced.

Also, I don’t think Tesla’s FSD is ever going to be legally FSD. I think their sensors are lacking. They’re going to be stuck at Level 2 for a very long time, whatever lies they tell.

And here come the trucks…

Aurora has said it plans to deploy 20 fully autonomous trucks this year, with an eye on expanding to about 100 trucks in 2025 and eventually selling to other companies. The company also is working with German auto supplier Continental to deploy driverless trucks at scale in 2027.

That’s a cool-looking truck.

Do you remember how FSD used to brake for any random light-looking thing on the side of the road? Flashing yellow warning lights, school zone signs (lit or not), and such were all mislabeled as traffic lights, and it would try to stop at them. At some point that was fixed, but perhaps it was fixed too hard, and now it ignores those signs completely.

I would have thought it would stop just because there is an obstacle. The other day it maneuvered to avoid a load of gravel that had been dumped in the street, and I’m sure that was detected as “obstacle” not “gravel pile.”

I could understand the driver’s mistake if this had been the first time it happened. FSD doesn’t slow down as soon as I think it should, so I often cover the brake while letting it zoom ahead towards stopped traffic. It does start slowing eventually, just later than I would. At some point I’d have to decide that FSD is not going to stop, and hit the brake hard. I’ve never actually reached the point of thinking FSD wouldn’t have stopped on its own, but if there’s someone close behind me I’ll intervene and start slowing gently sooner than FSD would.

I suppose it’s hard to notice a telephone or electric pole too:

Waymo vehicle involved in Phoenix crash

https://www.12news.com/video/tech/waymo-vehicle-involved-in-phoenix-crash/75-d328c00a-c6e2-43fa-a3e5-0ad66ddc4f62

“That’s just a failure of the sample set, once they hit enough poles they’ll know better.”

I don’t think Waymo uses end-to-end training, so sample set size isn’t likely to be relevant.

It does demonstrate that lidar is no panacea, even for large, fixed objects. What matters is interpreting the data.

It’s kind of a fluffy piece; I think that’s necessary with this subject matter, because nobody in the self-driving game is really being clear with their data, but it’s still a pretty good article. Long and short: the author thinks that Waymo generally has less data, but it’s of higher quality because it’s submitted and documented by a trained staffer. Both are using AI, and both still have to intervene relatively frequently. The Waymo machines do appear to have a safer fail state, due to a lot of different factors.

From it, I’d still estimate that we’re at least a decade away from a car that’s actually self-driving in the conditions a human can handle, no matter which approach you favor.

Hey, what’s this pole doing out here on the roadway?