Self-driving cars are still decades away

I’ve been in software development for over 25 years. I know all about missed schedules and plans that change. That’s no excuse for not coming up with a plan.

When an engineer gives me a timeline with some form of plan, I can look at the individual steps and evaluate which estimates are realistic and which are wildly optimistic. If one came to me with an 18-month estimate and, when I asked for his reasoning, told me he made it up, I’d tell him to get the fuck out and come back when he actually had a plan. I wouldn’t say “oh well, estimates are often wrong, that’s fine.”

If you don’t know how long something will take (which is totally fair), the proper answer is “I don’t know,” not “let me make up a number for you.”

Approaching 30 here…

I can reliably identify the wildly optimistic ones. I can’t identify the reliable ones, and I’ve never seen evidence that anyone can. And I really can’t identify the steps that should be there but aren’t.

So the actual complaint is that he made a prediction, not that his predictions are bad.

Compiled elsewhere on the web…

December 2015: “We’re going to end up with complete autonomy, and I think we will have complete autonomy in approximately two years.”

January 2016: “In ~2 years, summon should work anywhere connected by land & not blocked by borders, eg you’re in LA and the car is in NY”

June 2016: “I really consider autonomous driving a solved problem, I think we are less than two years away from complete autonomy, safer than humans, but regulations should take at least another year,” Musk said.

March 2017: “I think that [you will be able to fall asleep in a Tesla] is about two years”

March 2018: “I think probably by end of next year [end of 2019] self-driving will encompass essentially all modes of driving and be at least 100% to 200% safer than a person.”

Nov 15, 2018: “Probably technically be able to [self-deliver Teslas to customers’ doors] in about a year then its up to the regulators”

Jan 30 2019: “We need to be at 99.9999..% We need to be extremely reliable. When do we think it is safe for FSD, probably towards the end of this year then its up to the regulators when they will decide to approve that.”

Feb 19 2019: “We will be feature complete full self driving this year. The car will be able to find you in a parking lot, pick you up, take you all the way to your destination without an intervention this year. I’m certain of that. That is not a question mark. It will be essentially safe to fall asleep and wake up at their destination towards the end of next year”

April 12th 2019: “I think it will require detecting hands on wheel for at least six months… I think this was all really going to be swept, I mean, the system is improving so much, so fast, that this is going to be a moot point very soon. No, in fact, I think it will become very, very quickly, maybe and towards the end this year, but I say, I’d be shocked if not next year, at the latest that having the person, having human intervene will decrease safety. DECREASE! (in response to human supervision and adding driver monitoring system)”

April 22nd 2019: “We expect to be feature complete in self driving this year, and we expect to be confident enough from our standpoint to say that we think people do not need to touch the wheel and can look out the window sometime probably around the second quarter of next year.”

April 22nd 2019: “We expect to have the first operating robot taxi next year with no one in them! One million robot taxis!”
“I feel very confident predicting autonomous robotaxis for Tesla next year,”
“Level 5 autonomy with no geofence”

May 9th 2019: “We could have gamed an LA/NY Autopilot journey last year, but when we do it this year, everyone with Tesla Full Self-Driving will be able to do it too”

April 12th 2020: How long for the first robotaxi release/deployment? 2023?
“Functionality still looking good for this year. Regulatory approval is the big unknown.”

April 29th 2020: “we could see robotaxis in operation with the network fleet next year, not in all markets but in some.”

July 08, 2020: “I’m extremely confident that level five or essentially complete autonomy will happen, and I think, will happen very quickly, I think at Tesla, I feel like we are very close to level five autonomy. I think—I remain confident that we will have the basic functionality for level five autonomy complete this year, There are no fundamental challenges remaining. There are many small problems. And then there’s the challenge of solving all those small problems and putting the whole system together.”

Dec 1, 2020: “I am extremely confident of achieving full autonomy and releasing it to the Tesla customer base next year. But I think at least some jurisdictions are going to allow full self-driving next year.”

Jan 1, 2021: “Tesla Full Self-Driving will work at a safety level well above that of the average driver this year, of that I am confident. Can’t speak for regulators though.”

Jan 27, 2021: “at least 100% safer than a human driver”

My favorite of the above:

We will be feature complete full self driving this year. The car will be able to find you in a parking lot, pick you up, take you all the way to your destination without an intervention this year. I’m certain of that. That is not a question mark. It will be essentially safe to fall asleep and wake up at their destination towards the end of next year

That was early 2019. Elon is full of horse shit. That is not a question mark.

No, that is not the complaint. I think my post spelled it out much better than your selective clip.

But everything else is conditioned on that. If he repeatedly refused to make any prediction, you would apparently be completely satisfied.

I’m not sure what you mean, or what my satisfaction has to do with it.

If someone refuses to make any prediction, I will think they don’t really know what they’re doing. If they say something like “here are the steps I do know which should take ___, and here’s what I don’t yet know, which could be anywhere from ___ to ___, and we should be able to narrow that down after figuring out ___,” then I’ll have some confidence in their abilities.

And if they give me a made up number despite not knowing what’s needed, I’ll think they’re a liar and anything that comes out of their mouth is worthless.

I don’t know how you don’t know what I mean, so… I guess I’ll drop it.

Anyway, it’s all so tedious. I don’t even disagree with the claim that he’s really bad at predictions! It’s just that the conversations always end up in the same dreary loop.

I’ve owned my Model 3 for about 7 years now. In the past year-ish, I went from the car doing some degree of autonomy for about 50% of my miles while having to hold the wheel 100% of the time, to handling 95% of my driving miles and not having to touch the wheel. That’s a dramatic advance, and regardless of what was promised, it’s past the threshold where I’d pay for FSD now if I didn’t already have it. I don’t need to look at anyone’s predictions to see that there’s been a step change.

If you find it tedious, then there’s no need to defend his made-up estimates with excuses like “all estimates are bad” or “but there are unknowns” or “you’d only be happy with no estimates.” It’s not that he’s making bad estimates; the issue is he doesn’t know what is needed and he’s lying about it. If you agree with that, then the excuses are unnecessary.

I agree, it is much better than it was 5 years ago, or 2 years ago. The improvements are impressive. We can have productive conversations about that without giving any credence to any of Musk’s “estimates.” Which was the point I made way back at the start of this exchange.

It’s even more frustrating in retrospect, because it’s clear (now) that without advancements in both machine learning hardware and software, as well as novel AI model training techniques, Tesla had no viable path to a self-driving car when Elon started making his predictions. And the same people defending these nonsensical timelines back then will now say things like “Back in 2018, how could anyone come up with an accurate timeline at all?”

Here’s the thing – in 2025, how can anyone come up with an accurate timeline at all? NOBODY KNOWS HOW AI IS DOING ANYTHING! We’re just feeding it data and watching its progress, but there’s no timeline out there where we can even predict that X amount more data or X amount more processors or X amount of new training methods will cause this thing to turn the corner. Could it be 3 more weeks or 3 more decades? Nobody has a frickin’ clue.

Anyway, speaking of turning corners…

Per the comments on r/TeslaFSD, the latest version is too aggressive at avoiding mismatched pavement splotches; the linked videos are of a rather extreme example, but it seems that this is just Tesla engineers adjusting various weights in their models, pushing out updates, and letting users beta test with their lives. It’s bonkers.

Surely, in making such an accusation, you can point to the 2018 era posts where I said those things.

I did find this ancient post of mine talking about Autopilot, though… well before it was a shipping feature!

Looks like I was more skeptical then about timelines! 2013+20=2033.

As a counter-example:

That’s the same video that steronz linked to. I’ll withhold judgment until there’s some kind of verification. About all I can tell from the video is that it actually was a Model 3.

The driver shared the data from Tesla. Here’s the report:
(four Imgur screenshots of the report pages)

To me this reads pretty plainly: the driver jerked the wheel hard to the left, FSD disengaged, and the obvious happened.

The crash happened at 20:40:31.7. FSD shut off at 20:40:29.1, about 2.6 seconds before impact. Just before 20:40:29 (see the purple graph on the fourth page), the steering wheel shows torque to the left, which peaks at a fairly high value a few hundred milliseconds later.
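Just to lay the sequence out explicitly, here’s a throwaway sketch (the torque-onset time is my eyeball estimate from the purple graph; the other two timestamps are the ones quoted from the report):

```python
from datetime import datetime

# Timestamps as read off the Tesla report; the torque onset is approximate.
events = [
    ("steering torque ramps left", "20:40:28.9"),  # "just before 20:40:29" (estimated)
    ("FSD disengages",             "20:40:29.1"),
    ("impact",                     "20:40:31.7"),
]

def t(stamp: str) -> datetime:
    return datetime.strptime(stamp, "%H:%M:%S.%f")

t0 = t(events[0][1])
for name, stamp in events:
    print(f"{name}: +{(t(stamp) - t0).total_seconds():.1f} s")
# Torque precedes disengagement by ~0.2 s; impact comes ~2.6 s after disengagement.
```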

Why would the driver do this? I dunno, but the data seems pretty clear.

Tesla has access to the cabin camera data as well. Apparently they won’t share that, but they could certainly show it to insurance or whoever.

Just read through the thread on r/TeslaFSD, and I would say the data is clear as mud. Plenty of armchair analysts are confidently stating that this data shows driver torque and not FSD-applied torque, while offering nothing but speculation about why that might be true.

Couldn’t this data also be showing FSD applying torque to the steering before shutting itself off? Many comments seem to think so.

Also, lots of threads over there describing the same behavior with their own cars.

I do not know about Teslas. But this issue is managed correctly in the flight data recorders in big airplanes. There is a different sensor channel for crew inputs to steering versus autopilot inputs to steering.

In airplanes, steering position sensing is of necessity ambiguous as to whether the position is the result of human inputs, computer inputs, or both (whether reinforcing or offsetting).

It would be a dumb design to have torque be equally ambiguous when it need not be.
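To make that concrete, here’s a toy sketch of what separately-channeled logging could look like (field names and structure are invented for illustration; this is not Tesla’s or any avionics vendor’s actual format):

```python
from dataclasses import dataclass

@dataclass
class SteeringSample:
    timestamp_s: float
    wheel_angle_deg: float       # resulting position: inherently ambiguous about who caused it
    driver_torque_nm: float      # measured by the torque sensor on the steering column
    autopilot_torque_nm: float   # torque commanded by the driving computer

def who_was_steering(s: SteeringSample, threshold_nm: float = 0.5) -> str:
    """With separate channels, attribution is a lookup rather than a guess."""
    driver = abs(s.driver_torque_nm) > threshold_nm
    computer = abs(s.autopilot_torque_nm) > threshold_nm
    if driver and computer:
        return "both"
    return "driver" if driver else ("autopilot" if computer else "neither")
```

With a record like that, a report could state outright which torque came from the human and which from the computer, instead of leaving readers to argue over a single combined trace.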

I’m sure Tesla is capturing the data separately; the issue is that this particular report (conveniently generated by Tesla themselves) doesn’t seem to clarify it.

Here is Tesla FSD 13.2.9 mowing down a fake child that steps into the road from in front of a school bus with the lights flashing and the STOP arm out.

To be fair, the bus was pulled off the road, but I think most human drivers would have A) not passed the bus and B) seen the kid and been ready for them to run out into the road.

Driverless Teslas are driving around Austin in anticipation of their competition with Waymo starting on June 12th.

Because that’s what the FSD system uses to determine disengagement. And, historically, what it used to ensure the driver was paying attention (they use the cabin camera now).

It’s also the number that gives maximum ass-covering to Tesla and minimum insight into their internals. They undoubtedly have many more data channels available for internal use. But user-input torque is the only one that’s useful for determining if the user screwed up.

There was one minor point of ambiguity, which is that the chart shows the “Autopilot” status going from FSD (supervised) to “unavailable”. That seemed kinda weird, but greenthonly (a known Tesla hacker) confirmed that this is expected behavior when FSD is disengaged via steering input.

There is one other odd thing about this, which is that there is zero brake pedal application. I’m not saying the accident could have been avoided. But there was approximately 1.5 seconds between when the car started steering left and when it hit the tree. Surely if the driver was even minimally paying attention, they could have started hitting the brake.