Self-driving cars are still decades away

Oh, that may be the difference, then. Here in Dallas/Ft Worth, there are bike paths, but most bicyclists have to traverse the normal roads with cars to get anywhere useful. There are often bike lanes on those roads, but they’re not everywhere. Also, exceeding the speed limit on a lot of local roads isn’t impossible on a bicycle. You’d usually have to work real hard to do it, though.

I pretty much think any V2X that isn’t geofenced is a dead end. Remote controlling/informing of cars would require funding and building a massive network of computers that would need to do an incredibly complex set of calculations and then communicate the results instantly to every car, as well as manage handoffs and overlaps between areas of control, and keep an eye on every possible source of obstruction, tagged or not. Alternately, giving every car complete information from tags and stationary observation, then asking each to make its own decisions (while anticipating the decisions of the cars and other objects around it), is way more complicated than just using the car’s own observations (i.e., the current autonomous model). And finally, every car on the road sending your car information that you then have to collate from every (moving) source and act on (while, again, modeling the behavior of the cars around you) is computational chaos. Imagine the Bourne rotary at rush hour! (Rough sketch of that scaling problem below.)
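To put rough numbers on that last scenario, here’s a back-of-the-envelope sketch. The vehicle counts and the 10 Hz update rate are my own toy figures for illustration, not numbers from any V2X standard:

```python
# Toy model of the all-to-all V2V case: every vehicle broadcasts its state,
# and every other vehicle in the area must ingest and reconcile each update.
# The 10 updates/sec rate is an assumption, not from any spec.

def v2v_messages_per_second(n_vehicles: int, updates_per_sec: int = 10) -> int:
    """Fleet-wide messages to process per second: n senders times
    (n - 1) receivers times the update rate."""
    return n_vehicles * (n_vehicles - 1) * updates_per_sec

for n in (10, 50, 200):  # quiet street, busy arterial, rush-hour rotary
    print(f"{n:>3} vehicles -> {v2v_messages_per_second(n):>9,} msgs/sec")
```

Ten vehicles is already 900 messages a second fleet-wide; two hundred is nearly 400,000. And that’s before each receiver tries to fuse those reports with its own sensors and predict what every sender will do next.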

Add to all of these the issue of the non-participant: if Uncle Ed takes his '74 Cutlass on the road, everything gets more complicated. And the average age of a car on US roads is 12 years, up from 9 years only a decade ago… As slow as ADAS has been to gain acceptance in the market, this would be substantially slower.

I’m making a lot of definitive statements, and I’ll note this is just my set of opinions. But I’ve seen optimism on autonomous vehicles crash, so to speak, on lower technical complexities than these.

[on a side note, Trump killed a proposed Obama-era V2V standard for all new light vehicles, and AFAIK Biden hasn’t revived it…as someone noted upthread, V2X is getting way less interest than it did a few years back]

I did a little analysis of the NHTSA crash data, which is available here:

This is a larger (longer-timeframe) dataset than the one the media reported on, so there are 765 total incidents.

The top numbers per make:
Tesla: 549
Honda: 100
Subaru: 37
Acura: 30

The rest had <15 incidents, so I’ll mostly ignore those. Clearly, Tesla is way out in the lead. But then I looked at the data source.

Of Tesla’s 549 incidents, 501 were reported by their telemetry. The incidents not captured by telemetry presumably fell below the threshold for automatic reporting (no airbag deployment, etc.) or were so severe that the telemetry system failed. But interestingly, 478 incidents were reported only via telemetry. If nothing else, that tells you that the non-telemetry sources (law enforcement, consumer complaints, etc.) are not very reliable, in the sense of the data actually making its way to the NHTSA.

Consider Honda with their 100 incidents. How many were reported via telemetry? One. Just one; all the others came from traditional sources.

So Honda has 99 incidents that came solely from traditional sources, while Tesla has 71 that reached the NHTSA via any traditional source at all (549 total minus the 478 that were telemetry-only). The difference between 549 and 71 is entirely due to Tesla actually measuring the behavior of their system.

In fact, the total number of incidents (across all makes) reported via telemetry is 519 (compared to Tesla’s 501). Virtually no one aside from Tesla seems to have any idea what their systems are doing. Subaru did the second best with 10 out of 37 incidents being reported via telemetry.
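For anyone who wants to reproduce these tallies, something like the pandas snippet below should get you close. The filename and column names (“Make”, the “Source - …” flags) are from memory of the NHTSA Standing General Order export and may not match your download exactly; check the headers in your own copy:

```python
import pandas as pd

# Filename and column names are assumptions; adjust them to your download.
df = pd.read_csv("SGO-2021-01_Incident_Reports_ADAS.csv")

# Total incidents per make (the top-four list above).
print(df["Make"].value_counts().head(4))

# Incidents where telemetry was a reporting source at all.
telemetry = df["Source - Telematics"].eq("Y")
print(df.loc[telemetry, "Make"].value_counts())

# Incidents where telemetry was the *only* source: telemetry is flagged
# and no other "Source - ..." column is.
other = [c for c in df.columns
         if c.startswith("Source - ") and c != "Source - Telematics"]
telemetry_only = telemetry & ~df[other].eq("Y").any(axis=1)
print(df.loc[telemetry_only, "Make"].value_counts())
```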

It’s basically the same problem as LIDAR. You need a robust local visual processing system to handle the case where V2X fails (non-participants, incorrect data, etc.). But if you have a robust visual processing system, you don’t need V2X. Maybe there are some narrow scenarios where it helps a bit, but it doesn’t actually make the problem easier.

Report: Tesla reportedly laid off about 200 workers in its Autopilot division

That can’t be helping to get his always-one-year-away self-driving cars working.

Unless he’s finally done and doesn’t need that many folks in maintenance mode. Yeah, I’m not buying that either.

At this point, nothing he does surprises me - for example, he might lay off those workers, and then immediately hire a hundred more in Texas.

Supposedly they were data labelers. I think that means they respond to captcha requests all day, clicking on the traffic lights, fire hydrants, and bicycles, then the AI system learns from that.

lol

A man’s Tesla tries to take on a train while on FSD. The event itself happens in minute 16, but things get pretty interesting starting around minute 11.

I will say, most of these Tesla guys, they’re just so calm. “So, I’m going to send that over to Tesla… But that is not okay.” :sweat_smile:

FSD really has a hard time with left turns and oncoming traffic. I can imagine light rail is going to confuse it when it can’t even handle “cars and trucks” properly. The reason he’s calm is that he knows it’s about to make a left turn, so he’s certainly covering the brake and ready to take the wheel back instantly. That is the only way to operate the car with FSD.

I’ve taken mine into downtown Denver, too, and it’s mostly fine right up until it isn’t. Straight, stop for lights, follow traffic, even make a right turn all are fine. Then it will bug out for left turns, or just get in the wrong lane for no good reason.

Well, now Andrej Karpathy’s sabbatical has turned into his departure from Tesla. Hard to tell what’s going on there with FSD.

https://www.reuters.com/business/autos-transportation/teslas-ai-director-leaving-company-after-4-month-sabbatical-2022-07-13/

Karpathy probably made tens of millions in the time he was there. Musk would be a horrible boss; it wasn’t necessarily anything about the tech that drove him off.

Karpathy had been on sabbatical for something like 4 months prior. Not too surprising that he’d gotten sick of the stress. But also evidence that the team can continue without him.

He never really struck me as the director type. I’d guess he’d rather get his hands dirty on stuff. What techie types call “individual contributor” rather than “management”.

If you are one of the Utmost Inner Circle Keyest of the Key of the “individual contributors”, and about now it’s becoming evident that the underlying deep architecture of your solution is hitting insuperable limits, and you’ve already got X hundred thousand sets of hardware out there you’ve promised to maintain going forward solely with software updates, well …

It may be time to throw in the towel and hope like heck not to get sued by your former employer for painting them into a corner.

Shrug. For whatever other flaws he has, Musk seems to take responsibility for the overall architecture. The no-LIDAR thing, among others. Which isn’t to say he won’t fire people who aren’t pushing things forward. But sue? Unlikely, unless he was stealing stuff.

Personally, I think they’re the only ones that are remotely on the right path. Anyone depending on hi-def maps has failed before they’ve even started; it’ll never be self-driving, almost by definition. You might end up with a useful local robotaxi (see Waymo), but not much more.

Musk has also made it clear that they view the process as fighting themselves out of a series of local minima. Make improvements until you can’t anymore, then figure out how to rearchitect (or throw more horsepower at the problem) until you can put yourself on a more promising path. Rinse and repeat. They’ve done this several times already.
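As a toy analogy (mine, not anything Tesla has described in these terms), random-restart hill climbing captures that pattern: improve greedily until you plateau, then start over somewhere new and keep the best result:

```python
# Toy analogy only: greedy improvement gets stuck on whichever hill it
# started near; restarting from fresh points ("rearchitect and try again")
# usually ends up on a much better peak.
import math
import random

def bumpy(x: float) -> float:
    """An objective with many local maxima."""
    return math.sin(3 * x) - 0.1 * x * x

def hill_climb(x: float, step: float = 0.1, tries: int = 500) -> float:
    """Keep random nudges that improve the score; stop when tries run out."""
    for _ in range(tries):
        candidate = x + random.uniform(-step, step)
        if bumpy(candidate) > bumpy(x):
            x = candidate
    return x

one_climb = hill_climb(random.uniform(-10.0, 10.0))
best = max((hill_climb(random.uniform(-10.0, 10.0)) for _ in range(10)), key=bumpy)
print(bumpy(one_climb), bumpy(best))  # restarts rarely do worse, often far better
```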

There’s a good chance all the extant cars will need a HW4, HW5, new cameras, whatever to really achieve self-driving. Not to mention their offline learning setup (their Dojo supercomputer). But they also seem to be in it for the long haul.

The current FSD beta is remarkable at times, and dumb as dirt at others. The cases it gets right are promising, but it has a long way to go. Everyone else seems to be barely aware of how far they have to go. If they think better LIDAR is the magic ingredient… they’re wrong.

I’m not deeply invested in this case.

I’ve just seen flailing projects lose the high Muckymuck a time or three because, at least organizationally, if not necessarily technologically, the sunk investment in that individual, his team, and his ways was seen (rightly or wrongly) as an insurmountable obstacle to the next move, whether that next step was making real forward progress or just getting unstuck from a hole.

I wonder how fast both were going.

Right around the time GM was starting this up, I asked one of our consultants who was working with us on this if he could estimate how much money had been spent so far on developing autonomous vehicles. He said it would be tough to do, but that it would pale next to the kind of revenue it might generate. In my mind, there are still lots and lots of ifs around that, though.

Chief Executive Mary Barra said on Tuesday she is still bullish on Cruise, and reaffirmed a forecast that the unit could generate $50 billion a year in revenue from automated vehicle services and technology by 2030.

At least she’s learned the lesson of not putting it just five years out…

Bias warning here, but software engineer Dan O’Dowd has created The Dawn Project to highlight unsafe software in critical systems, and their first target is getting Tesla’s FSD banned. On top of a full-page ad in the NYT, they’ve released some pretty dramatic footage of a Model 3 mowing down child-sized dummies in a failed automatic-braking test.