This is a very valid point, but I think it doesn’t address the bigger question of how to inform the human operator of what the car (as a whole) knows. What’s the best way to give the human the sensor data? What is the car capable of seeing or possibly missing?
I don’t know the answer. The Tesla approach of integrating it all together into a simulated view of the surroundings is one way. Warning indicators are another way. There’s going to be a lot of subjectivity in what is better.
Compare a self-driving car to a chauffeur. The passenger can ask the chauffeur about driving decisions. The chauffeur might lie or give simplistic answers, but that dialog is the basis for building trust, even if imperfect. A self-driving car needs some way to answer questions in order to build trust, even if the sensor view it shows is not what the driving code is actually using.
Why not both? That’s what Tesla does. I don’t look at the cartoon. I look at the side camera view and the flashing red light indicating that it’s not safe to move over yet.
The problem, as I understand it, is the AI-based engine of FSD … which nobody completely understands and hence can’t correct (given that it is, again AIUI, a non-deterministic system).
So, you can’t just correct code line no. 12,118 … and be good … that really might become a liability for them, especially if you have to prove something (that your SW is “safe”) … code-based SW can easily be checked … a huge near-black-box called an AI engine - not so much.
I mean it more from a legal/administrative/authorities POV (and not so much from a Joe Doe driver’s).
… just think back to your past career … Boeing telling the FAA that its in-flight autopilot handles situations a certain way, that they cannot guarantee it, but it’s the way the AI deems best …
How would the FAA handle such a fluffy answer to a very specific question (especially after a fatal accident)?
You just need the Tesla equivalent of two 737 MAXes disintegrating in mid-air (say, 2 or 3 Teslas hitting something hard in the same week, bonus points if it happens to be documented on a video feed or so and makes mainstream news) … and they might be in a pickle, as they might be legally grounded (out of an abundance of caution from the competent authorities) … and they might not have a way to demonstrate that they have improved their SW.
I guess I am talking about a systemic-negative-feedback-loop, if you catch my drift
There’s no way to have total certainty, but they can certainly demonstrate reasonable confidence. Tesla saves video (and telemetry) for any incident that happens and can see what’s going on. They can feed that back into the system to reproduce the error, and ideally recreate the entire scenario in their simulator in order to root-cause the glitch. They can ask the fleet for more examples of the same type of scenario and verify those as well. And finally, when they fine-tune the model further to handle that particular scenario, they can feed in all of those same examples (and more) to verify that they have a proper fix.
Sure, it is still a black box and they can never have complete confidence. But they can do a much better job than most investigations, which can often barely determine if (say) a stuck accelerator was caused by the driver or a floor mat.
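To make that concrete, here’s a rough sketch of what such a scenario-regression check could look like. It’s purely illustrative: the Scenario type, the replay_in_simulator stub, and the pass/fail criterion are invented placeholders, not anything Tesla has actually described.

```python
from dataclasses import dataclass

# Hypothetical stand-in for a recorded incident: the logged clip plus
# the behavior the planner should have produced in that situation.
@dataclass
class Scenario:
    clip_id: str
    expected_behavior: str  # e.g. "yield", "stay_in_lane"


def replay_in_simulator(model_version: str, scenario: Scenario) -> str:
    """Placeholder: re-run a recorded scenario against a candidate model
    and return the behavior it actually produced."""
    raise NotImplementedError  # would call into a real simulation backend


def fix_verified(model_version: str,
                 original_incident: Scenario,
                 fleet_lookalikes: list[Scenario]) -> bool:
    """The loop described above: the original incident, plus similar clips
    pulled from the fleet, must all pass before the fix counts as verified."""
    regression_set = [original_incident] + fleet_lookalikes
    return all(
        replay_in_simulator(model_version, s) == s.expected_behavior
        for s in regression_set
    )
```

The point is just that every incident becomes a permanent regression test for future model versions, not that this is how their pipeline actually works.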
The FAA and the industry are struggling right now with this very issue. All the self-flying airplanes and UAM helo-thingies currently in various stages of design or prototyping will be flown by AIs.
The analysis process for certification, and yes, for post-mishap investigation, of a putative AI runs totally contrary to the classical “clockwork universe” approach, if you will, to engineering metal moving-part machinery.
SW in general already suffers from the practical impossibility of producing fully analyzed, bug-free code. Formal capital-C Computer Science tilts in vain at the problem of correctness proofs. Hell, there’s enough micro-software running inside modern processors that it’s silly to call the chip in your laptop or phone “hardware”. There’s hardware in it, but your app’s (or OS kernel’s) instruction stream encounters a much more varied and complex environment than just a bunch of simple, easy-to-analyze transistors.
OTOH, when the meatware screws up and dies in the process so we can’t directly ask them what they were thinking, somehow the mishap still gets investigated, lessons are still learned, and training or procedural changes are made.
Forensic engineering will develop a new specialty for AI and we’ll move on. Albeit probably not without some expensive learning opportunities in this brave new uncharted territory. A la the De Havilland Comet. Who knew it could do that?
I thought people might find this interesting. It’s one robocar expert’s opinion of robotaxis and where the various players are at relative to Waymo. The author is not entirely unbiased, though, as part of their expertise comes from working at Google/Waymo.
The chart isn’t outright wrong (though they were giving limited driverless rides to the public as part of the robotaxi event, so I dunno how you compare that with #6), but I would say it presents a misleading picture at the “serious scaling” threshold. Waymo’s tech requires hi-def maps that have to be generated per city and have to be constantly updated. It also requires a sensor suite on the car that costs ~$100k and can probably never be integrated into a consumer system. Tesla on the other hand already has the hardware on millions of cars, and as soon as they have a working system it will instantly be working on every car in the fleet.
So it’s entirely possible that Waymo will be stuck in their current position for a long time–or even forever–while Tesla will simply get better and better, until they reach “level 12”. At which point they skip 13, since it is meaningless for them, and they are “done”.
The linked article goes into some of that, noting that there are advantages for the “second movers,” such as being able to take advantage of newer processors and engineering breakthroughs. I don’t think the robotaxi event really changes anything for Tesla, though, as those weren’t really taxi rides, just a demo of driving from preselected point A to preselected point B, and their system in general still operates at less than 1,000 miles per incident.
Also, remember this chart is specifically about robotaxis, not self-driving generally. Tesla could master FSD tomorrow, but there are a ton more logistics that go into operating a robotaxi service. Those systems take years to put together and some of them rely on having a working driving system first. Musk’s prediction of having a Tesla robotaxi service available next year is almost certainly incredibly naive (as has been the case with many of his predictions).
Ahh, I missed the linked article (the color theme I’m using makes links almost black).
I’d disagree that the logistics are much of a problem for Tesla. They’re fairly minor compared to all the other stuff they already do at scale, like their car software deployments or the Supercharger network. They’ve probably been working on all this stuff quietly in the background anyway. It’s not something they’re going to start on once they reach the right reliability threshold.
The reliability of the basic system is really the long pole–driving up the miles per intervention. I think their schedule will depend entirely on that and everything else will be in the noise. It remains to be seen if the current approach runs out of steam or not–they’re making rapid progress now, but who knows, they might run into some bottleneck.
They will probably start in limited areas but again, because they don’t use HD maps, it should be vastly easier for them to scale to larger areas. Waymo doesn’t have the resources to map out every major metropolitan area in North America, but they’ll have to when they want to really scale. With Tesla, they’ll just… turn it on (assuming they have actually hit their reliability thresholds). So they’ve made things very difficult for themselves early on, but assuming it works out the later parts will be easy.
This. I never saw the link either. It’s functionally invisible in the most common theme.
IMO, slipping a link in under a context word in a sentence is simply not good form here on Discourse and pretty well ensures most readers will miss it. If a cite is worth finding and sharing, it’s worth ensuring your audience actually knows it’s there.
My personal habit is to use the preview box feature if Discourse can generate one and it’s not too intrusive to the flow of what I’m writing. Many webpages have the metadata to do that, others don’t.
If I can’t get a preview box or don’t want one then I use the full URL with its title which I manually bold and underline so it stands out in any Discourse theme. Like this:
Sometimes I’ll tweak the standard title that comes from the browser to clean it up a little, but that’s not usually necessary. At least on a PC, selecting the URL in the browser and hitting Ctrl-C gets you the whole URL, title and all. Ctrl-V it into your post, then tack `**[u]` on the front and `[/u]**` on the back and you’re done.
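So, assuming the paste comes in as a Markdown-style link, the raw markup in the composer ends up looking roughly like this (the URL and title here are made up, purely to illustrate the pattern):

```text
**[u][Some Article Title](https://example.com/some-article)[/u]**
```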
YMMV of course, but I think that small incremental effort pays big dividends at no real loss of flow when reading the post.
I’m going to push back on this a bit. A large percentage of the interventions I have are due to map errors. This can range from things like the wrong speed limit on the map, to completely incorrect street names and routes. These aren’t forest roads up in the mountains, but suburban and urban streets in a major metropolitan area.
Tesla may not require high definition mapping on the scale of Waymo, but unfenced robotaxis will encounter lots of problems when their map data is simply wrong.
Looks like Tesla FSD is again free for some sort of trial period. I didn’t realize this, got in our Model Y, pulled down the stalk once, and was surprised to have the car take control of the wheel. I had set it so that one pull enables basic Autopilot and two pulls enable Autosteer. Once I got to the destination, I went into the settings and saw FSD was indeed enabled.
For those that use FSD, is there a way to change it so one pull of the stalk enables Autopilot and two enables FSD?
Also, Waymo is owned by Alphabet, Google’s parent company, and Google has quite a lot of map data.
I did see the virtues of geofencing on my ride to the airport. The car clearly knew which lane it wanted to be in, and when would be best to change lanes, well in advance of actually executing the maneuver. It felt SO much more confident and so much smoother than my friend’s Tesla.
Everyone gets a free month with the newish hands-free version. Even the monthly subscribers don’t have to pay this month’s fee.
There is no longer a way to set two pulls for FSD. That changed shortly after I got my car in March, and a lot of people were very annoyed. I didn’t understand why, but then I never got used to the old way.
Last week the car had a mapping error. The route showed that in order to stay on the street I was currently on, I needed to turn right on a cross street. This was a standard + shaped intersection, and not confusing in any way.
I don’t think this was just an example of FSD picking the wrong lane and then making a wrong turn. I do think it was a mapping error, because the spoken directions were something like, “turn right to continue on Main Street,” when I was already on Main Street. The new FSD display, where the route is de-emphasized, hid the wrong turn from me, and sometimes it’s better to just make a wrong turn than to make an illegal lane change to avoid it.
I just did the experiment of using Google Maps to give me driving directions to the same place, and Google Maps was correct. It had me go straight across an intersection, instead of the car which wanted me to turn right to stay on the road I was already on.
It might be that Tesla’s mapping data from Google is several months out of date, so it has the error while the live data does not, or that the actual routing algorithm got it wrong and the map itself is fine.
Back to the story. After making the wrong turn, FSD then wanted to drive half a mile and make a U-turn, instead of just making a left turn into the parking lot that I would originally have reached by going straight and then turning right. If this had been an autonomous trip, I’d have been driving around near my destination without actually arriving for an extra few minutes.