As a side note, while I’m not going to bother going through the Verge article, many of the cases blamed on Autopilot or FSD turn out not to involve either one at all. One recent example:
Driver plows into a woman and leaves the scene. Blames Autopilot. Turns out she’s just a piece of shit who committed a hit-and-run. She’ll be spending time in jail, thankfully, but all the people who read the original news item probably still think “gee, Autopilot is really dangerous with all these accident reports”.
That’s all fair, but it leaves out a factor: while an operator may not have awareness of their own ignorance (“unknown unknowns”), it’s not as difficult to have awareness of being in a situation where they may be ignorant (“known unknowns”). In flying terms, this is like me (who only flies VFR) finding myself in IFR conditions. I know that I’m in a situation where my mental model is probably wrong, even if I can’t identify where. So the goal is to never enter those conditions in the first place. And to have at least enough training that I have a reasonable chance at leaving the conditions if I find myself in them.
I was thinking a bit more about how Tesla might achieve that, and came up with this:
There’s a high density of Teslas in many urban/suburban areas. Enough that you could build a disengagement map: places where drivers who had been using FSD in supervised mode took over for some reason.
If you can identify a large enough area with a negligible disengagement rate, then it might be suitable for unsupervised driving. Especially when combined with other factors like weather and time of day. It only works for that area, but it’s entirely predictable when a driver will enter/exit and so the driver can be given a warning to take over (otherwise, the car pulls over and stops). The car can even navigate in a way that prefers these areas.
There is a problem, which is that once everyone is driving unsupervised, then you mostly lose the disengagement data. Or at least you lose data from the minor disengagements that might be an indicator for worse ones. But you could solve this by occasionally requiring supervised drives, say 5% of the time. Drivers get most of the benefit while still contributing to the database.
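To make the idea concrete, here’s a rough sketch of what a grid-based disengagement map might look like. Everything in it (the cell size, the thresholds, the 5% audit fraction, all the names) is invented for illustration; it’s just one way the aggregation could work, not anything Tesla actually does.

```python
# Hypothetical sketch of the disengagement-map idea above. All names and
# thresholds are made up for illustration, not anything Tesla actually runs.
import math
import random
from collections import defaultdict
from dataclasses import dataclass

CELL_DEG = 0.01          # grid cell size in degrees (~1 km at mid latitudes)
MAX_RATE = 0.001         # max disengagements per supervised mile to qualify
MIN_MILES = 10_000.0     # require this much supervised data before trusting a cell
AUDIT_FRACTION = 0.05    # fraction of drives kept supervised to keep data flowing

@dataclass
class CellStats:
    supervised_miles: float = 0.0
    disengagements: int = 0

def cell_key(lat: float, lon: float) -> tuple[int, int]:
    """Bucket a GPS point into a coarse grid cell."""
    return (math.floor(lat / CELL_DEG), math.floor(lon / CELL_DEG))

class DisengagementMap:
    def __init__(self) -> None:
        self.cells: dict[tuple[int, int], CellStats] = defaultdict(CellStats)

    def record(self, lat: float, lon: float, miles: float, disengaged: bool) -> None:
        """Log a supervised segment: distance driven and whether the driver took over."""
        stats = self.cells[cell_key(lat, lon)]
        stats.supervised_miles += miles
        stats.disengagements += int(disengaged)

    def unsupervised_ok(self, lat: float, lon: float) -> bool:
        """A cell qualifies only with enough data and a negligible disengagement rate."""
        stats = self.cells[cell_key(lat, lon)]
        if stats.supervised_miles < MIN_MILES:
            return False
        return stats.disengagements / stats.supervised_miles <= MAX_RATE

def require_supervision(route_points, dmap: DisengagementMap) -> bool:
    """Supervise if any point on the route falls in an unqualified cell,
    or on a random audit drive (the occasional supervised drive that keeps
    the disengagement data coming in once most driving is unsupervised)."""
    if random.random() < AUDIT_FRACTION:
        return True
    return not all(dmap.unsupervised_ok(lat, lon) for lat, lon in route_points)
```

Weather and time of day could be folded in the same way, by keying the stats on (cell, conditions) rather than the cell alone.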
In other Cruise news, GM reached an $8 million settlement with the pedestrian who was dragged under one of its vehicles. The human hit-and-run driver who caused the incident has not been found.
More federal investigations into autonomous vehicles: Waymo and Zoox. Related to the article directly above, Cruise is coming back to the streets. Watch out!
I took my longest drive yet, Santa Barbara to North San Diego County and back over two days, and used FSD the whole time. It mostly performed admirably, but there were a couple of screw-ups.
There’s a four-way stop in Culver City near where my stepmom lives. Instead of stop signs, there are flashing red lights. Poor Tessie stuttered there, stopping when the light was on and trying to start when it was briefly off.
The surprising one was near my exit in San Diego. I was in the carpool lane, maybe three miles from my exit. The car needed to get over, but the carpool lane exit wasn’t until past the freeway exit. It wasn’t a walled-off carpool lane; you just had to drive over a small hump marked with diagonal stripes. I let it play out, and it exited two exits late and made its way back to my stop. This is probably more a failure of the map than of FSD.
Elmo said that v12.4 will start rolling out early next week. (This is distinct from the major UI release happening now.) Supposedly it brings a 5x to 10x improvement in interventions per mile. v12.5 comes in late June with more improvements, particularly on the highway.
v12.5 is exciting since it extends the end-to-end training to the highway. Highway use is OK on v11 (under good conditions, it can go hours without an intervention), but it doesn’t handle construction well, and lane changes are sooooo slooooow. It’s also very hesitant if there are any cars in the vicinity. v12 lane changes on surface streets are very snappy in comparison, and it will happily slot itself into any gap that’s large enough. I actually feel that v12 does lane changes better than I do (due to better visibility and being able to look forward and back at the same time). The decision of when to make a lane change still needs improvement, though.
Yeah, same. Obviously the claims should be taken with a grain of salt, but Musk’s statements about FSD 12 have been pretty good so far, so I think there’s a reasonable likelihood that these come to pass (plus or minus a few weeks). A 5-10x improvement would be great if it proves accurate.
I was pretty skeptical about FSD, and figured that even if it were great, supervising a self-driving vehicle wouldn’t be any less stressful than driving.
After the free trial, I still feel that way, but I was really shocked at how good FSD actually was, mainly how human-like the driving style is. For example, I was approaching a green light that turned yellow during that phase where you could either speed up a tad to get through or slam on the brakes to stop. I was expecting the car to stop, but it decelerated for a split second and then sped up to go through the light.
I’ll confess that I knew little about Baidu before mentioning them in this thread, since they operate in China. So I’ve been interested in learning more and saw these stories.
That would seem like a big deal, since I’m pretty sure none of the other robotaxi outfits are anywhere near profitable. But then I saw this video…
It sounds like they are greatly expanding their fleet in Wuhan, but how much of their fleet is really self-driving if it takes four hours of requesting rides before you get a car without a driver? I don’t like to put too much trust in these journalistic fluff pieces, but maybe Baidu isn’t as advanced as it seems.
Separately, Baidu has been getting cozy with Tesla, so maybe they’ll join forces.
With some small changes, the current FSD 12 could do very well in a robotaxi role. Robotaxis are geofenced, so they only have to be verified for a small area. They don’t need to be L5 to be useful when driverless, since they never have to pull into a parking spot or the like (Tesla FSD already does fine with taxi-like pickups and dropoffs). And if Tesla added a remote-control system, operators could get a car out of a jam manually if required (just like Waymo, etc.).
Musk did say they were in early talks with an OEM about FSD licensing, though I don’t think of Baidu as an OEM, so he was probably talking about something else.
Some of you Tesla owners out there should probably pay attention to this lawsuit.
LoSavio alleges that for years after buying his car, he relied on “Tesla’s repeated claims that the car’s software was the source of delay, and that software fixes were perpetually forthcoming,” yesterday’s ruling said. But when Tesla declined to update his car’s cameras in April 2022, “LoSavio allegedly discovered that he had been misled by Tesla’s claim that his car had all the hardware needed for full automation.”
Lin rejected Tesla’s argument that LoSavio should have known earlier. “Although Tesla contends that it should have been obvious to LoSavio that his car needed lidar to self-drive and that his car did not have it, LoSavio plausibly alleges that he reasonably believed Tesla’s claims that it could achieve self-driving with the car’s existing hardware and that, if he diligently brought his car in for the required updates, the car would soon achieve the promised results,” Lin wrote.
Lots of interesting legal wrangling going on there, but this probably isn’t the thread for it.
Why should we pay attention to this nonsense? Tesla makes it abundantly clear that the driver has to pay attention and that the occasional unexpected thing can happen. There are checks in place to make sure the driver is aware: a camera near the rear-view mirror confirms that you are facing forward, and you have to keep your hand on the steering wheel at all times, with a warning if it doesn’t sense that. This lawsuit is bullshit.
The lawsuit might have had more legs a year or so ago, before FSD 12.
The lawyers on one or both sides are confused, though. Specifically, the allegation that:
Although Tesla contends that it should have been obvious to LoSavio that his car needed lidar to self-drive
Tesla has never contended that. Musk and Tesla have always said that LIDAR is unnecessary, and time has borne out that claim. While the cars are obviously not L4 yet, the remaining problems are unrelated to anything LIDAR would fix. They have solved the vision problem. What’s left are the high-level decisions, like picking an appropriate speed, deciding when a left turn is safe, or whose turn it is at a four-way stop. (Tesla does use LIDAR to gather training data, but those sensors aren’t on the consumer cars.)
Tesla has also always said that the cars would be upgraded as necessary, which they have in fact done for people with the early computer systems. It seems like, to claim fraud, they’d have to show that it was both necessary to upgrade the cars and impossible to do so. Maybe it would be impossible to retrofit something invasive like LIDAR, but the computers can be upgraded, and it seems that in this case Tesla wouldn’t upgrade the cameras, not that it couldn’t. (I have the first-gen cameras as well, and they don’t seem to be an obstacle.)
I feel like you didn’t read the article or even the snippet in the post. The false-advertising claim isn’t just that his car can’t drive itself; he can’t even install what Tesla calls FSD, because his car doesn’t have the necessary hardware and Tesla won’t upgrade it.
I guess they’re promising they have a solution “coming soon” for older models to get to FSD 12, but I believe Tesla’s promises about nil. And honestly, what’s the lifespan of this car?