Self-driving cars are still decades away

They won’t be “LLMs” anymore. Unless we’re just going to start calling anything AI related an LLM.

When I was part of a team trying to figure out the potential macroeconomic effects of true self-driving cars, I would often get into conversations like this (full disclosure: I was by far the most pessimistic person on that team).

“Look at AlphaGo, and how quickly it solved complex games,” they would tell me. “Look at this car driving itself to the Brandenburg Gate.” And, of course, fucking McKinsey was always there backing them up. But I’ll tell you two things:

  1. No matter how complex people think this is, it’s more complex than that.
  2. Even if it were theoretically possible to approach true self-driving, the amount of capital necessary to do so would be, and will continue to be, astronomical. The appetite for that is clearly receding, and realistically, how many billions of dollars of capital have already been invested in robotaxis, and how many decades before that returns anything meaningful?

Terms can change. Who cares? I no longer know what you are trying to argue.

New models are a combination of many modalities, with embeddings and fine-tuning to enable specific functions. The exact mix of models that might be in a car in the future is unknown, but it seems clear that the capability for human-level judgment by AI in driving will be here at some point in the foreseeable future.

If you want to debate that, come back with something more substantial than terminology games.

I don’t disagree with any of that (except maybe cost, which is coming down rapidly), and I’ve been saying on this board and elsewhere for at least a decade that you won’t get true self-driving without something approximating a true general intelligence. And I thought AI was decades away from AGI, and might never achieve it.

Your point #1 is one I have been harping on forever, and it applies to any complex system, not just AI-related ones. A characteristic of complex systems is that they can look simple from a distance, but when you drill in they reveal layers of complexity, and the deeper you go the more complex they look. That is in contrast to a merely complicated thing, which can be broken down through a process of reduction: understand what each part does, and you understand the whole thing. That doesn’t work with complex systems.

That’s why specific intelligences like AlphaGo, or programmed expert systems, or other rules-based systems were never going to be the answer. You need human-like judgment that can be applied to novel situations the developers never considered. Until recently, that seemed like a pipe dream.

I have a similar train of thought:

Don’t compare AI driving accidents to “overall human accidents” … compare them to “human accidents between 6am and 10pm” …

No need to bring the worst of the worst (e.g. late-night DUI drivers) into the statistics … at those hours I refrain from driving as much as I can.

Let’s compare responsible drivers with AI … but not with statistics contaminated by criminals …

We should level upwards and not downwards. …
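A toy sketch in Python of what I mean. Every number below is invented, purely to show how the comparison can flip when the baseline excludes late-night driving:

```python
# Toy illustration, all numbers invented: how the human baseline moves
# when you exclude late-night hours, and how that changes the bar the
# AI fleet has to clear.

# Hypothetical human crash data, split by time of day.
human_daytime = {"crashes": 4_000, "miles": 8_000_000_000}  # 6am-10pm
human_night   = {"crashes": 2_000, "miles": 1_000_000_000}  # 10pm-6am

# Hypothetical AI fleet data (assume it operates mostly in daytime).
ai_fleet = {"crashes": 6, "miles": 10_000_000}

def per_100m(d):
    """Crashes per 100 million miles."""
    return d["crashes"] / d["miles"] * 100_000_000

# All-hours human baseline: daytime and night pooled together.
all_hours = {
    "crashes": human_daytime["crashes"] + human_night["crashes"],
    "miles": human_daytime["miles"] + human_night["miles"],
}

print(f"Human, all hours: {per_100m(all_hours):5.1f} per 100M miles")     # ~66.7
print(f"Human, 6am-10pm : {per_100m(human_daytime):5.1f} per 100M miles") #  50.0
print(f"AI fleet        : {per_100m(ai_fleet):5.1f} per 100M miles")      #  60.0
```

With these made-up figures, the AI fleet beats the all-hours average but loses to the daytime-only baseline … that is exactly the distinction I’m after.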

You are referring to a video to support claims of Tesla FSD actually working? In 2024?

We urgently have to speak about some amazing investment opportunities in water Xing tech.
It is disruptive! And totally not racist! 3/4 of my friends are Jews!

a random thought about FSD cars … they increase your max. passenger count by 20-25%

Maybe even 50%!

You guys have personal drivers or something?

Maybe they are thinking of taxis. But if you are Johnny Cab, you still occupy the driver’s seat for some reason.
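(Presumably the arithmetic behind the 20-25% figure is that the driver’s seat becomes a passenger seat: a 5-seat taxi goes from 4 paying passengers to 5, a 25% bump, and a 6-seater from 5 to 6, 20%.)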

Post from Tesla about using foundation models in their cars and robotics:

Under that post are more videos and comments detailing training requirements, etc.

Moderating:

Please address the post and not the poster. This paragraph is over the line.

But isn’t San Francisco high-tech and not filled with Luddites?

Forget it Jake; it’s Chinatown.

As you can see from the title of the video, this YouTuber agrees with the OP. He is a musician working with AI, but owns a Tesla and a Kia that he has set up with open-source self-driving tech.

He covers a lot of ground in this 25-minute video. He starts by outlining the levels of autonomous driving and reviewing the expectations people have of FSD (largely thanks to Elon’s highly exaggerated claims over the years), then counters with the reality, including incidents of FSD running over child mannequins in tests, some of which he conducted himself.

He points out that driver-assist technology has probably reduced accidents in cars equipped with it by as much as 20%, but looking at some of the stats, he concludes that Tesla FSD, far from being “safer than humans”, is in fact almost ten times worse. (See the chapter “Tesla’s are more dangerous” at 14:12.)

His interest in this subject comes from his work in AI, and the fact that he drives 25,000 miles a year going to and from musical gigs. After an accident caused by falling asleep at the wheel (he wasn’t badly hurt), he decided he wanted a vehicle with driver aids that would prevent that from happening again.

He decided that Teslas aren’t capable of doing what he thinks is necessary (in part because Elon has decided to rely exclusively on cameras) and started modifying a Kia van with OpenPilot tech. Even this, he says, isn’t safe for use on public roads.

He concludes with some thoughts about how autonomous driving tech should be developed and regulated.

I just happened on this video and thought people here might be interested. I’m not technically savvy enough to support or refute any of his claims or experiences, so I won’t try.

He attributed 17 deaths to FSD… but when I went to his spreadsheet, he’s counting deaths attributable to both FSD and Autopilot, and then dividing by the number of FSD miles only. That’s obviously absurd, since Autopilot is used vastly more than FSD: Autopilot has something like >3 billion miles total, while he’s using a figure of 150 million as the divisor. As far as I know, zero deaths are currently attributable to FSD.
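A quick back-of-the-envelope in Python, using only the approximate figures above (which I haven’t independently verified), to show how much the mismatched denominator inflates the rate:

```python
# Back-of-the-envelope check of the video's fatality-rate math, using
# the approximate figures quoted above (not independently verified).

deaths_fsd_plus_autopilot = 17   # his spreadsheet counts both systems
fsd_miles = 150e6                # the divisor the video used
autopilot_miles = 3e9            # lower bound; Autopilot is used far more

def deaths_per_100m(deaths, miles):
    """Deaths per 100 million miles."""
    return deaths / miles * 100e6

# The video's method: combined deaths divided by FSD miles only.
video_rate = deaths_per_100m(deaths_fsd_plus_autopilot, fsd_miles)

# An apples-to-apples method: combined deaths over combined miles.
matched_rate = deaths_per_100m(deaths_fsd_plus_autopilot,
                               fsd_miles + autopilot_miles)

print(f"Video's method : {video_rate:.2f} deaths per 100M miles")    # ~11.33
print(f"Matched miles  : {matched_rate:.2f} deaths per 100M miles")  # ~0.54
print(f"Inflation      : {video_rate / matched_rate:.0f}x")          # ~21x
```

On those numbers alone, pairing combined deaths with FSD-only miles overstates the rate by roughly a factor of twenty.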

Well, if a car modified with open-source AI by a musician isn’t safe, what hope do multi-billion-dollar automakers have?

Sure, you can make a snarky comment without, I’m guessing, actually watching the video, based only on my clearly less-than-perfect summary.

Or maybe, like @Dr.Strangelove, you could watch the video and come up with a valid critique of what he actually said and did.

Admittedly, I had to bite my tongue :slight_smile: . Not that I have anything against musicians.

I was turned off by the film grain effect. It struck me as manipulation, like when TV programs do that fade-to-grayscale thing when they want you to think someone is a child molester or whatever.

I only watched a couple of minutes at the point you linked to. Maybe I’ll watch the rest later, but that segment didn’t seem promising. Plus, the title seems… unlikely. Really never? Years, decades, more… but I don’t believe never.

Personally, I think Tesla has already been proven right about the camera thing. FSD is still a ways from general enablement, let alone Level 5 or whatever. However, based on my own use, and especially footage from FSD 12, when FSD makes mistakes, it’s not due to failing to see something. It’s because it failed to interpret the situation correctly, especially when it involves humans doing something weird. That’s not something that will be helped by LIDAR or radar or ultrasonics or other sensors. It already knows what the environment is like; it just has to do what a human would do given the same information.

Maybe it had an anti-Waymo or anti-Google bumper sticker?