Driving along 395 through Owens Valley, the towns all have a series of speed limit signs. My Tesla does not slow down appropriately. It does slow down, but very gradually, and it stays more than 20 mph over the speed limit. It’s begging for a speeding ticket or worse.
Imagine the conversation:
CHP: “Do you know what the speed limit is?”
Me: “Yes, I saw the signs, but it wasn’t me driving, it was my car.”
CHP: “Does your car know what the speed limit is?”
Me: “Yes, its display showed the correct speed limit.”
CHP: “And did the car slow down?”
Me: “Yes, it lowered its cruising speed to the speed limit.”
CHP: “Then why did I measure you going 20 mph over the speed limit?”
Me: “My car decided to ignore its own cruise control.”
CHP: “Please sign this ticket.”
Me: “Yep.”
But I think I’ve figured out why Tesla FSD does this: errors in the navigation system interacting with the highway/surface-street transition code. There are a couple of spots along freeways in Los Angeles where my car’s nav often gets confused about whether we’re on the freeway or on a nearby surface street. The car will switch its speed limit indicator from 65 mph to 25 mph. I ignore it, because I’m smart that way. And FSD ignores it too. I’m thinking the refusal to slow down is its “least bad”* way to handle discrepancies between its visual sensors and its nav/map system.
*“Least bad” meaning slow down gradually while evaluating further, rather than taking extreme action.
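If that theory is right, the arbitration might look something like this. This is a purely hypothetical Python sketch of my guess; none of the names, thresholds, or structure come from Tesla:

```python
GENTLE_DECEL_MPH_PER_S = 1.0  # ease off rather than brake hard (invented number)

def target_speed(current_mph, displayed_limit_mph, prev_limit_mph, dt_s):
    """Hypothetical 'least bad' arbitration between old and new speed limits."""
    drop = prev_limit_mph - displayed_limit_mph
    if drop > 15:
        # A sudden big drop looks just like the nav glitch (65 -> 25 while
        # still on the freeway), so the planner hedges: it eases off while
        # it keeps evaluating, instead of trusting the new number outright.
        return current_mph - GENTLE_DECEL_MPH_PER_S * dt_s
    # Small changes, or sources in agreement: just obey the displayed limit.
    return min(current_mph, displayed_limit_mph)
```

In a legitimate 65-to-25 town transition, that hedge is exactly what would keep the car 20 mph over for far too long.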
Glad to see I’m not the only one experiencing the car not slowing down even though it acknowledges the lower speed limit. Well, not really glad, but it is nice to know it’s not unique to our car.
I tested 12.3.6 for about an hour today. (If you had told me last year that I’d take a drive just for fun, I would have laughed in your face.) The street driving was less herky-jerky, it exited the freeway less abruptly, and it learned how to correctly make that one left turn out of my neighborhood. Of course, it could all be in my mind.
Hyundai’s robotaxi partner, Motional, is slowing things down and restructuring. “Slowing things down” includes a second round of layoffs, stopping their pseudo-robotaxi Lyft/Uber testing in Vegas, and halting UberEats deliveries.
Employees were told that the stubbornly high cost of running commercial operations coupled with pricey autonomous vehicle technology components makes the business case challenging today.
And in more news, the NHTSA is demanding all data and documentation on Full Self-Driving. The article is written as though the NHTSA getting the information is a foregone conclusion, but I wouldn’t be surprised if Tesla ties it up in the courts. As far as I’m concerned, if Tesla is testing this on public roads, and selling it to the public, then the information should be public. Same for every single other bit of software that runs in cars.
I don’t entirely disagree… but what’s the limit here?
Every new car sold today has “black box” type software running that tracks the control inputs and state of the car continuously, and stores it permanently in case of accident. Should that be public information?
Ironically, a giant data dump from Tesla may be the least useful among all automakers, because FSD 12 is contained within a blob of neural net weights, and our ability to understand those is very limited. Human-written software is easy to audit by comparison, but at least for the surface-street stack, what’s left is just a simple program that executes the NN.
But that’s not what the NHTSA is asking for. They want a crapload of data, but a data dump of neural net blob info isn’t part of it. A lot of the requests are of the nature of “explain and describe in detail the process, engineering and safety explanation and evidence for design decisions regarding ____,” plus details on all recorded crashes, communications within the company, stuff like that.
I was just responding to echoreply’s thought that every bit of software in cars should be made public. I think that’s too much to ask for, but at the same time I think we do deserve more transparency across the board.
No limit. If your software gets to control dangerous machines, produce evidence for courts, run medical equipment, etc., then it should be open to public inspection, along with information about its testing and validation. This isn’t completely unprecedented; breathalyzer source code has been examined in some DWI cases, for example.
I know this will never happen, with companies lobbying hard about trade secrets and stuff[1]. I’m sure my views are rooted in my 90s era Free Software and techno-optimism upbringing, but it’s still what I think should happen.
I think the control software should be public. The actual saved data should be private, except in court cases and the like, where the public interest overrides privacy. What really should be public is validation that the saved states represent the actual states.
I think the useful bit will be data collected from FSD. The raw numbers of what Tesla knows about FSD’s decision making and behavior.
boo hoo, suffer the stock hit from competition, and let everyone else benefit ↩︎
I’m not sure how this could be done aside from self-consistency checks. For example, that the wheel speed sensors match the GPS speed data, and roughly match the steering data (outer wheels spin faster in a turn, etc.). What validation could be done beyond that?
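For concreteness, by self-consistency I mean checks along these lines. A rough Python sketch; the thresholds, geometry, and sign conventions are all invented for illustration:

```python
import math

WHEELBASE_M = 2.9    # assumed vehicle geometry
TRACK_WIDTH_M = 1.6

def check_log_entry(wheel_speeds_mps, gps_speed_mps, steering_angle_rad):
    """Flag a recorded sample whose sensors don't agree with each other.
    wheel_speeds_mps is assumed to be [left, right]; positive steering = left turn."""
    problems = []
    avg = sum(wheel_speeds_mps) / len(wheel_speeds_mps)
    # 1. Average wheel speed should roughly match GPS speed.
    if abs(avg - gps_speed_mps) > 0.1 * max(gps_speed_mps, 1.0):
        problems.append("wheel/GPS speed mismatch")
    # 2. In a turn the outer wheel should spin faster, by an amount fixed by
    #    the steering geometry (simple bicycle-model approximation).
    if abs(steering_angle_rad) > 0.05 and avg > 1.0:
        yaw_rate = avg * math.tan(steering_angle_rad) / WHEELBASE_M
        expected_diff = yaw_rate * TRACK_WIDTH_M   # outer minus inner
        left, right = wheel_speeds_mps
        if abs((right - left) - expected_diff) > 0.5:
            problems.append("wheel speeds inconsistent with steering angle")
    return problems
```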
That doesn’t seem too interesting without the input data. Which is the full set of video feeds from the past N seconds and all other sensor data.
Even metrics like the disengagement rate aren’t too interesting without knowing the conditions under which they occurred and how they change over time. For example, it might not actually improve for long periods, not because FSD isn’t getting better, but because it does get better and people use it in more difficult situations.
I really hope that Bosch, etc., test that their software properly knows the car’s state, and that when it saves that state the recorded information is accurate. I mean basic debug kind of stuff; “does the software do what it claims to do” type validation. That is the information I think should be available. So, for example, we’d know if it sometimes didn’t properly adjust fueling when undergoing emissions testing, or didn’t engage ABS despite a wheel speed mismatch.
Yes, for much of it context is going to be important. Risk of misinterpretation is always going to be a problem, such as looking at disengagements per mile or some other overly simplistic metric. Right now Tesla (and the rest of them?) are keeping all of the data secret, which is a big problem.
I don’t understand. The sensor transmits what it thinks is the correct data. How would it know if the data is wrong? Obviously there are issues like transmission errors and I’m sure there are checksums and the like to ensure there was no corruption. But beyond that, what sort of checks are you thinking of?
Sure, there are some checks like “did ABS kick in when it was supposed to?” that could be done, but that kind of stuff is very reliable already.
I eagerly anticipate another Reuters hit piece where they totally misinterpret the data and strip it of all context.
One thing I’d like is to convince FSD to get into the required lane earlier than it does. Today it needed to be in the leftmost lane to enter a dedicated left-turn lane, and it waited until the very last moment. It squeezed through a gap in traffic with no problem, but it could have avoided that if it had just gotten over, say, 1/4 mile in advance.
In initial validation: external checks. We all write software, and sometimes it does the wrong thing. All I’m talking about is records of the testing done. If recorded logs are admitted in court, then you sure want some answer better than “trade secret” when asked whether the software does what it is supposed to.
No, that would be Biden administration harassment.
I guess. Well, as you say, it’s never going to happen. Or rather, the only way the code will ever be shared is under subpoena, and that will only happen under a class-action lawsuit or the like. And still be under NDA.
There was an Airbus accident (I think it was AA587 but may have been AF447) where one of the findings was that the flight data recorder was receiving Kalman-filtered data (so smoothed out over time), not the absolute raw data required by regulation.
It was determined this wasn’t nefarious; the filtered feed was simply the most convenient source for the data the recorder required, and nobody with enough domain knowledge had noticed it wasn’t the right kind of data. But the problem still had to be fixed, and that ended up adding some totally new data paths among all the various black boxes and software modules, to bubble the no-kidding raw data a lot farther up the abstraction chain than the system had been architected for. Ouch.
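For anyone who hasn’t run into it, here’s a toy illustration of why raw vs. filtered matters to an investigator. A 1-D Kalman-style smoother with made-up constants; a brief transient that’s obvious in the raw feed nearly vanishes in the filtered one:

```python
import random

def kalman_1d(raw, process_var=1e-3, meas_var=0.5):
    """Minimal 1-D Kalman filter; returns the smoothed estimate per sample."""
    est, var = raw[0], 1.0
    out = []
    for z in raw:
        var += process_var            # predict step
        k = var / (var + meas_var)    # Kalman gain
        est += k * (z - est)          # update toward the measurement
        var *= (1 - k)
        out.append(est)
    return out

random.seed(0)
raw = [0.0] * 50 + [5.0] * 5 + [0.0] * 50          # a brief 5-unit spike
raw = [x + random.gauss(0, 0.3) for x in raw]       # plus sensor noise
smooth = kalman_1d(raw)
# The spike survives in the raw data but is heavily attenuated when filtered.
print(max(raw[50:55]), max(smooth[50:55]))
```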
Turning back to cars …
I could see lots of ways for the equivalent of “dieselgate” code to exist in the crash recorder software so that particularly damning scenarios don’t get recorded. Maybe really important things like accelerator pedal position ought to have dual sensors, with the output of both recorded, etc. Something like the sketch below.
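A hypothetical sketch of the dual-sensor idea; every name and tolerance here is made up. The point is that the disagreement flag is set at capture time, so selectively “losing” one channel later would be visible in the record:

```python
from dataclasses import dataclass

@dataclass
class PedalRecord:
    t_s: float
    sensor_a_pct: float   # primary pedal position sensor
    sensor_b_pct: float   # independent redundant sensor
    disagree: bool        # set at capture time, not post-processed

def capture(t_s: float, a_pct: float, b_pct: float, tol_pct: float = 2.0) -> PedalRecord:
    return PedalRecord(t_s, a_pct, b_pct, abs(a_pct - b_pct) > tol_pct)
```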
Ultimately, @echoreply’s idea of all software and its development, QA, and management environment being public is a pipe dream. Heck, why aren’t the detailed building plans and structural analysis for every tall building required to be public knowledge too? What about the maintenance records of every moving object, from building elevators to cars to ocean-going freighters?
If every business and every person were required to play the entire game of life with all the cards face up, we’d live in a very different world. Which sounds great in a utopian sense, until you recognize that in that world, the ability to cloak your data and cheat on, well, everything would be enormously more valuable than it is now. Criminals would be gods in that world.
Interesting. I wouldn’t expect that to be nefarious since the manufacturer has an interest in storing the unfiltered data. They would want more than anybody to be able to diagnose where a problem arises (bug in the Kalman filter, etc.).
An excellent point, and one I hadn’t considered before in a broad sense. It’s obvious in specific scenarios like “why don’t we share our nuclear secrets with Iran?”, but the same thing would be applicable even in mundane situations.
To keep this marginally on topic, I totally expect to someday be part of an FSD class action. Definitely not as a lead plaintiff. Possibly because it’s never been delivered as promised, or because hardware 3 can’t actually run it.
Really, I’d much rather have level 3 (or 4!) self driving than my money back, but I still don’t expect it this decade.