Self-driving cars are still decades away

Surely self-driving cars follow the verbal instructions of duly authorized law enforcement?

(insert ‘agile’ joke here) Yeah, I know, different automaker, but same outcome.

It seems to me that supervising someone else’s driving would require more attention and concentration than actually driving myself, and thus be more stressful and tiring. I’d much rather drive myself than have to constantly second-guess a machine.

Yep, someplace upthread I have a message saying pretty much exactly that.

In reality, what happens is that I’ve become familiar with FSD’s quirks and problems, so I will often preemptively disable it when I know I’m approaching an area where it has trouble. That means I use it in situations where it makes driving easier, and don’t use it in situations where it makes driving harder.

All your posts on this topic, whether in this thread or others, make you seem like an example of the ideal user in terms of skills, attitudes, and actual actions out there in your day-to-day driving. And I mean that in a genuinely complimentary way, not suggesting that you’re a BSer full of your own coolitude.

I wonder whether your whole routine is typical of FSD users or an extreme outlier. I have to suspect the latter, but I have no way to know. How do “ordinary” FSD users use the product today? Turn it on, ignore it, and have a rude-awakening close-call event every 4th commute? Is that changing as more and more Teslas are on the road, driven by fewer self-proclaimed early adopters / enthusiasts and more ordinary Joe & Jane Commuters?

I also wonder how much data Tesla’s telemetry can give them, and how much insight they can glean from that telemetry. Do they recognize that you regularly disconnect FSD between this intersection and that one, or is that signal lost in the noise? etc.

Is each instance of a disconnection considered a “vote” that FSD is struggling, so that any cluster of votes in space or time deserves further scrutiny? And once they’ve decided location or scenario X deserves a closer look, do they even have the telemetry to do that scrutiny? etc.

Yes. Well, I’m not sure about the clustering, but they absolutely look at the disablements. Currently, when you disable it, the car asks you to leave a small voice message as to why you did so. I use this pretty frequently, with messages like “complicated construction work ahead”, “wasn’t comfortable with this left turn”, “the car misdetected the speed limit as 55 mph”, etc.

They undoubtedly also send a packet of information like the current speed, position, some camera information, whether the disconnection happened by wrangling the wheel or via the stalk, etc.
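If I had to guess at the shape of it, something like this toy sketch in Python. Every field name, threshold, and bit of logic here is my own invention; Tesla has published nothing about how this actually works:

```python
# Speculative sketch of a disengagement "vote" pipeline.
# All field names, thresholds, and logic are invented for illustration;
# nothing here reflects Tesla's actual telemetry.
from dataclasses import dataclass
from collections import Counter

@dataclass
class DisengagementEvent:
    timestamp: float        # unix seconds
    lat: float              # degrees
    lon: float              # degrees
    speed_mph: float
    method: str             # e.g. "wheel_torque" or "stalk"
    voice_note: str | None  # optional driver comment

def grid_key(ev: DisengagementEvent, cell_deg: float = 0.001) -> tuple[int, int]:
    """Bucket events into grid cells (0.001 deg is roughly 100 m of latitude)."""
    return (int(ev.lat / cell_deg), int(ev.lon / cell_deg))

def hot_spots(events: list[DisengagementEvent], min_votes: int = 25) -> list[tuple[int, int]]:
    """Return grid cells where disengagements cluster enough to deserve scrutiny."""
    votes = Counter(grid_key(ev) for ev in events)
    return [cell for cell, n in votes.items() if n >= min_votes]
```

A real pipeline would presumably use proper geospatial clustering and weight by how many fleet miles pass through each cell, but “count the votes in a bucket” is the basic idea behind the upthread question.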

Hey, it’s going to be hard for them to learn to manage edge cases if y’all keep turning it off!

They can visit these places and fix the problems on their own time :slight_smile:. I do live right next to them.

They’ve actually done this before in the case of “Chuck’s unprotected left turn.”

I’ve been one of the biggest skeptics of fully autonomous cars, and as of a year ago I would have said that truly autonomous driving was still decades away. In fact, I’ve been saying that since the topic started on this board more than a decade ago.

After what we’ve seen with ChatGPT and its ilk, I’m no longer sure of that. I think we might see actual safe FSD this decade. AI has come a long way in a very short time.

An LLM isn’t going to drive your car.

No one is saying that. The point is that the rate of progress we’re seeing in LLMs may carry over to the architectures used in self-driving cars.

Tesla says they’re switching to end-to-end AI in version 12. That’s likely a ways off, but personally I think it’s a crucial step. A system with any traditional logic for driving decisions will never work. One based entirely on training may work.

Well, an LLM (of a sort) might drive your car. LLMs are being built now for smaller applications. There is an LLM called Alpaca that has near-ChatGPT performance and runs on a laptop.

The LLM itself is a ‘foundation’ model: it provides a natural-language interface and basic reasoning capability. On top of that would sit a whole lot of fine-tuning, plus other electronics/computers, to enable it to control a car.
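To make that division of labor concrete, here’s a hedged sketch. `llm_complete()` is a made-up stand-in for whatever model you like, not any real API; the point is that the LLM only translates language into a constrained request, and deterministic code decides what actually happens:

```python
# Hypothetical layering: LLM as the interface, deterministic code in control.
# llm_complete() is a placeholder for a model call; it is not a real API.
import json

ALLOWED_ACTIONS = {"pull_over", "proceed", "stop", "yield"}

def llm_complete(prompt: str) -> str:
    """Placeholder for an LLM call; imagine it returns strict JSON."""
    return '{"action": "pull_over", "reason": "firefighter instructed car to move"}'

def interpret_instruction(spoken_text: str) -> dict:
    """Ask the LLM to map free-form speech to a constrained action vocabulary."""
    prompt = (
        "Map this spoken instruction to JSON with keys 'action' and 'reason'. "
        f"Allowed actions: {sorted(ALLOWED_ACTIONS)}. Instruction: {spoken_text!r}"
    )
    request = json.loads(llm_complete(prompt))
    # The safety-critical layer never trusts the LLM blindly:
    if request.get("action") not in ALLOWED_ACTIONS:
        request = {"action": "stop", "reason": "unrecognized instruction"}
    return request

# A deterministic planner, not the LLM, would decide whether the requested
# action is actually safe to execute given the car's sensor state.
print(interpret_instruction("Hey! Move that car out of the way, now!"))
```

The language model never touches the actuators; it proposes, and the real-time control stack disposes.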

This kind of stuff is already in the works. Siemens is building PLC control for factories with LLMs and other AI. I’m sure other manufacturers are as well. And LLMs may not be the only part of the package. It’s possible that an LLM might be used for things like image recognition and a natural language interface, while other AIs or computers handle the physical control of the car.

Or maybe LLMs won’t be in the mix at all. It doesn’t matter, because we’ve seen what AI can do with the right training, and I suspect other AI specialized architectures will be showing up at an increasing rate.

LLMs could definitely be part of the interface layer. They may even solve the problem mentioned above: having the car move out of the way when firefighters are shouting instructions at it. But I don’t see much place for them in the basic driving component. Well, maybe for interpreting signs with weird parking restrictions…

LLMs aren’t real-time architectures, which would be the big problem with controlling a car. So I see them on the periphery as you say, handling user interface functions and maybe dealing with complex situations that don’t require repeatable, fast responses.

Performance is a problem. An AI used for image recognition in a car would have to process images at a pretty good frame rate, and that’s extremely computationally expensive. But Moore’s law is still mostly holding (though at a slower rate than before), and architectures are improving, so who knows?
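To put rough numbers on “extremely computationally expensive” (every figure below is an illustrative assumption, not any car’s real spec):

```python
# Back-of-envelope compute budget for in-car vision.
# Every number below is an illustrative assumption, not a real spec.
cameras = 8                    # assumed camera count
fps = 36                       # assumed per-camera frame rate
gflops_per_frame = 50          # assumed cost of one CNN inference pass

total_gflops = cameras * fps * gflops_per_frame
print(f"Sustained load: {total_gflops / 1000:.1f} TFLOPS")  # 14.4 TFLOPS
```

Call it a sustained load in the ballpark of a high-end GPU, running on a car’s power and thermal budget. That’s why architecture gains matter at least as much as raw Moore’s-law scaling.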

I wonder if the comparative lack of discipline in the USA will benefit us in Europe. Two comments above struck me:

“The computer detects a slow-moving car in front and changes lanes to find an even slower car.” In Europe, this rarely happens because, in normal highway driving, the rules do not allow overtaking on the nearside. (That’s not to say it doesn’t happen though)

The discussion about inadvertently pressing the accelerator instead of the brake. Most European cars are three-pedal, so the right foot is used for both, even for those of us who drive automatics. I believe that in the USA it is normal to use the left foot for braking, and it is possibly this that causes the confusion in an unexpected situation.

Any autonomous system that can cope with the more difficult driving conditions in the USA should have fewer problems on this side of the Atlantic. How well they will cope in the chaos of places like Calcutta or Cairo remains to be seen.

They’d be great in Mumbai, traffic never moves more than 20 mph anyway.

FWIW: it is, in fact, not normal to brake with the left foot in the USA.

Christ, we have those marked crosswalks around here, and my foot instinctively moved when that car didn’t stop.

The pedestrian’s timing was objectively terrible. That person perhaps intended to cross after the car went by, but they started across early enough that a sizeable fraction of human drivers would have slammed on the brakes, perhaps skidding into the crosswalk, and another sizeable fraction would have done as the car did, proceeding through the crosswalk at their current speed or nearly so.

I grew up in CA. I learned to drive under their laws, and did so for a number of years. Yes, pedestrians have unquestioned right of way always, period, amen under the laws of the state.

But not under the laws of physics. A pedestrian physically can step out in front of a vehicle which physically cannot stop before the collision. A pedestrian can step out at such a time as to force a panic stop and multi-car pileup that they never get touched by. Those are reckless moves. Those are dick moves.

A computerized car has the potential advantage of better knowledge of its own reaction time and stopping distance. It might even know the distance and closure rate of the car immediately behind it. Does it know the reaction time and stopping distance of that human-driven car behind it?
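The physics the car would be running is simple enough. A minimal sketch, with the reaction time and road friction as assumed inputs:

```python
# Total stopping distance = reaction distance + braking distance.
# Numbers are illustrative assumptions, not any vehicle's real figures.
G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance_m(speed_mps: float, reaction_s: float, mu: float) -> float:
    """Distance covered while reacting, plus v^2 / (2*mu*g) of braking."""
    return speed_mps * reaction_s + speed_mps**2 / (2 * mu * G)

v = 40 * 0.44704  # 40 mph in m/s (~17.9 m/s)
print(f"Human (1.5 s reaction):    {stopping_distance_m(v, 1.5, 0.7):.0f} m")  # ~50 m
print(f"Computer (0.2 s reaction): {stopping_distance_m(v, 0.2, 0.7):.0f} m")  # ~27 m
```

The braking term is identical for both; all the computer buys you is a shorter reaction term. And it still has no way to know the reaction time of the human in the car behind it.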

Here in FL now we have similar laws and similar crosswalk signs, right down to the little sign post in the middle of the road. And here in tourist central we have lots of peds crossing at lots of midblock crosswalks. Peds will often step out from the opposite side curb at a time when you physically cannot stop before entering their crosswalk. They are judging their speed of walking across the opposite side of the road such that you will pass the crosswalk before they get to the middle and they will pass safely behind you. IOW, they are approaching the problem as one of two separate crossings, with the goal of minimum delay for themselves.

Whether that’s what the law expects of them is a separate matter. This happens a lot. I can have no quarrel with what that FSD did, since it’s exactly what peds expect cars to do and is exactly what most human drivers do when confronted with that situation. Had there been a second car, human-driven or not, following behind this one at an appropriate distance *ex-*pedestrian, I’d fully expect that vehicle to stop for the ped. But not this one.