I just got the long-promised upgrade to Tesla FSD beta 11. My basic reaction: self-driving cars are still decades away.
It still makes mistakes on what should be easy maneuvers. For example, it came to a complete stop in a continuous-lane right turn; that’s extremely common for human drivers around here, but it’s still a mistake. It also still tries to move into the wrong lane, e.g., moving into a merge lane or a right-turn lane when the route proceeds straight.
The biggest changes, which are an improvement, are in the visualization.
It’s now possible to make the part of the screen that shows what the car sees much wider, so the driver can tell if the car sees cross traffic.
The two most useful UI improvements are brief explanations of what the car is doing and the ability to give feedback. “Turning right to follow route” is an example of the kind of explanation given. When FSD is cancelled, it is possible to record a short voice memo explaining why it was cancelled: “Tried to change into right-turn lane when route is straight,” for example.
Some of the short explanations let me know things that are probably design decisions, not mistakes on the AI’s part. For example, on a four-lane highway (not freeway), it kept trying to move into the left lane because it was “moving out of rightmost lane.”
In the latest wacky news of no-injury self-driving crashes:
https://www.reuters.com/technology/gm-self-driving-unit-cruise-recalls-300-vehicles-after-crash-2023-04-07/
I especially like the CEO’s comment about his vehicle’s mad driving skillz:
We do not expect our vehicles to run into the back of a city bus under any conditions.
Sherlock, meet “no shit”.
I’ve started seeing this series of ads: the 2022 GMC Sierra Denali “We Will Rock You” commercial on YouTube.
Hands-free driving featured throughout. It just feels premature.
Going by the commercial, it looks like beefed-up cruise-control and lane-assist technology employed on mostly empty and very well-marked highways. The automatic passing feature was neat-o … but again: those lane markings better be fresh.
Eh. Fresh lane marks, fresh skid marks, fresh gouges on the Jersey barrier. It’s all the same to GM.
That’s what worries me. They’re actively marketing this as hands-free, which means slow response times when it all goes south.
Following up on my post six above from just 2 days ago, it appears Waymo really can’t catch a break. Or maybe they should have picked an easier city than San Francisco in which to test their prototypes.
Fog. San Francisco. Who knew?!?
While lidar helps us see objects and cameras help us understand our surroundings, radar complements both of these with its unique ability to instantaneously see and measure an object’s velocity (or lack thereof) even in tough weather conditions such as rain, fog, and snow.
And this was 3 years ago: cameras, lidar, and radar.
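For anyone wondering what “instantaneously” means in that quote: radar reads velocity directly from the Doppler shift of a single return, with no need to compare positions across frames, which is why it keeps working when fog blinds the cameras. A back-of-the-envelope sketch (77 GHz is the usual automotive radar band; the formula is standard physics, nothing Waymo-specific, and the numbers are just for illustration):

```python
# Radial velocity from a radar Doppler shift -- standard two-way Doppler,
# not anything specific to Waymo's stack.

C = 3.0e8          # speed of light, m/s
F_CARRIER = 77e9   # typical automotive radar carrier frequency, Hz

def radial_velocity(doppler_shift_hz: float) -> float:
    """Target's closing speed toward the radar, from the measured shift.

    Two-way radar Doppler: f_d = 2 * v * f0 / c, so v = f_d * c / (2 * f0).
    One pulse is enough -- no frame-to-frame differencing required.
    """
    return doppler_shift_hz * C / (2 * F_CARRIER)

print(radial_velocity(15_400))  # ~30 m/s (~67 mph) closing speed
```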
Waymo vehicles in Scottsdale still require a driver when there’s rain, fog, or wet streets. And they started mapping Scottsdale in like 2015.
Thanks for posting this. I think my favorite part (for some values of the term “favorite”) is the spokespeople for the car companies falling all over themselves to minimize everything: the disruption, the danger, the headaches. They seem to be constitutionally incapable of saying, “Yeah, things aren’t working quite the way we’d like them to, and there’s a long way to go…”
It’s too bad that a whole industry, seemingly, is in the “you can’t trust anything they say” bucket.
“Nothing to see here, just identifying edge cases!”
This demonstrates a fundamental flaw in pure AI cars: we can’t communicate with them. We can’t motion them to back up, or roll down our windows and offer them a solution to whatever predicament they’re in.
Meanwhile, a terrific thread on how Tesla’s FSD numbers are skewed by whom they grant access to.
Note that I’m not claiming that they’re rigging the numbers, but I’m agreeing that only granting FSD access to drivers with very high safety scores will definitely make the data look better than it will for the general population.
I’m not sure this is quite as big of a deal as described, because it was incredibly easy to game the safety score to get into the FSD beta program. Exactly how to do it was a frequent topic of discussion in Tesla groups when the requirement to get FSD beta was a safety score of 100.
The actual requirement was a safety score of 100 (or 98, 95, 80, etc.) after 50 miles of driving (or some other round-number distance). Any driver who could play by the rules for 50 miles could get into FSD beta.
The big flaw was that the safety score could be reset by opting out of FSD beta consideration, and then opting back in. So drive a bit. If you do something that causes the score to drop, just opt out, then opt back in. Now whatever you did that caused the score to drop is forgotten, and you have another chance to try again.
The safety score, at the time I was trying to get into the FSD beta, was based almost completely on g-forces. Don’t accelerate too hard, don’t turn too hard, don’t brake too hard, and your score will be good. Be as dangerous as you want, just do it gently. Exceeding the speed limit, turning in front of oncoming traffic, etc. would not hurt the safety score. Braking hard or swerving to avoid an accident would decrease it.
There were people in the groups who would reset their safety score, then go for a 50-mile drive in the evening on lightly traveled freeways to qualify for FSD. Once in, they could drive as crazy as they wanted.
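To make the gameability concrete, here’s a toy model of a purely g-force-based score with the opt-out reset loophole described above. The thresholds, penalty formula, and class names are all invented stand-ins for illustration, not Tesla’s actual math:

```python
# Toy model of a g-force-only safety score with the opt-out reset loophole.
# Thresholds, penalty weights, and the 50-mile cutoff are invented
# stand-ins for illustration -- not Tesla's real formula.

from dataclasses import dataclass

ACCEL_LIMIT_G = 0.30  # assumed "hard acceleration" threshold
BRAKE_LIMIT_G = 0.30  # assumed "hard braking" threshold
TURN_LIMIT_G = 0.40   # assumed "hard cornering" threshold

@dataclass
class SafetyScore:
    miles: float = 0.0
    rough_events: int = 0

    def record(self, miles: float, accel_g: float, brake_g: float,
               lateral_g: float) -> None:
        """Log a driving segment. Note what's absent: speed limits,
        red lights, and right-of-way never enter the calculation."""
        self.miles += miles
        if (accel_g > ACCEL_LIMIT_G or brake_g > BRAKE_LIMIT_G
                or lateral_g > TURN_LIMIT_G):
            self.rough_events += 1

    @property
    def score(self) -> float:
        """100 minus an arbitrary penalty per rough event per mile."""
        if self.miles == 0:
            return 100.0
        return max(0.0, 100.0 - 500.0 * self.rough_events / self.miles)

    def reset(self) -> None:
        """The loophole: opt out of beta consideration, opt back in,
        and the history is wiped."""
        self.miles = 0.0
        self.rough_events = 0

s = SafetyScore()
s.record(miles=50, accel_g=0.1, brake_g=0.5, lateral_g=0.1)  # one hard stop
print(s.score)  # 90.0 -- oops, below the cutoff
s.reset()                                                    # opt out, opt in
s.record(miles=50, accel_g=0.1, brake_g=0.1, lateral_g=0.1)  # 50 gentle miles
print(s.score)  # 100.0 -- welcome to the beta
```

Run a red light gently and a score like this never moves; brake hard to avoid a crash and it drops.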
Hmm. My score was almost entirely driven by collision warning system activations. I was at 99 just by virtue of my normal driving habits. I got one (false) collision warning and my score dropped to 93. I didn’t try to game it or anything, so I just ended up waiting a month for it to drop out of the tracking history.
I guess I’m missing something. I’m quite capable of driving violently but safely. And of driving serenely but quite hazardously.
But that’s not the source of my confusion.
If I’m opting in to letting HAL drive, how does my driving matter? The whole and entire point of auto-driving is to let HAL do it, not me. How does my driving style & skill matter once HAL is in charge? Color me utterly baffled by this.
Unless the real issue is that HAL drives timidly and blandly and they only want timid bland drivers participating in FSD so they won’t down-vote HAL’s driving as too bland / timid for their spicy / aggressive tastes.
I smell a rat. A big T-shaped rat.
The logic is that FSD requires human supervision, so somebody who is a conscientious and safe driver will also be a conscientious and safe minder. You can dispute whether that’s true, but I don’t think it’s baffling, even if perhaps wrong.
And yeah, as said, “safe” was pretty much defined as smooth: no forward collision alerts and maybe a couple of other things, but speed as much as you want and feel free to ignore traffic control devices, just do it nice and steady. It was better for your safety score to run a light that had just turned red than to brake hard to stop in time.
I was being a bit facetious. As you accurately say, Tesla is recruiting people to diligently monitor something they call “full self-driving”.
My point, obliquely made, was that if it were “full self-driving” it would have no need of a diligent monitor, or any sort of monitor at all. This is just a different way for me to tweak their nose yet again for mis-selling, or at least mis-labeling, their flagship capability.