How much safer are Waymo cars?

Exactly. Expanding on that (and this should be very obvious): a single anecdote doesn’t necessarily generalize to overall performance. Self-driving cars will get into tragic accidents because they are cars, and all cars get into tragic accidents. If self-driving cars get into similar accidents at the same or a lower rate than human driving, there is no reason to ban them.

Are you saying that the Waymo was playing with its phone or radio station? Or, that it couldn’t see the cat because of fog or snow? Are you saying it was distracted?

Or, maybe the cat just ran out under its tires and it didn’t have time to brake.

I remember this cat story when it first came out, and I was trying to understand why so many people found it upsetting. I may have figured it out:

We expect people to mess up sometimes. “To err is human”, etc. We expect machines to work perfectly 100% of the time. When our expectations aren’t met, it’s upsetting.

“Almost”

Your learning curve is indistinguishable from a flat line.

Even the most gullible fanboi should see by now that is never gonna happen.

Specifically what do you think is never going to happen?

Moderator Warning

This is a clear case of attacking the poster and not the post. Since you have received several warnings for exactly this in the past, you should know by now that personal attacks against other users are forbidden outside of the Pit.

This is an official warning for a personal attack outside of the Pit.

Since warnings don’t seem to be getting the message through to you that you can’t do this, we will also be reviewing your posting privileges here.

I cannot call someone still waiting for Tesla “full self driving” a gullible fanboi?

LOL.

Specifically what do you think is never going to happen?

Tesla has publicly pivoted to xAI and Optimus. The cars are going to wither and advancement will slow.

I suspect that Tesla management (not musk) are leaning towards their software and sensor suite becoming an OEM part installed on other companies’ cars.

Their actual car manufacturing may be wound down, but the money, a la Qualcomm, is in licensing their tech.

And yes, they (Tesla and FSD) have a long way to go before the cars are truly “fully” self-driving. But IMO there’s no reason to think they’re going to walk away from the progress they’ve made.


FTR, I am far from a Tesla fanboi. And I despise musk. But I do think self-driving is a fully doable technology and will be part of everyone’s future. Whether we like that or not. And right now Tesla the company is in a decent position to play a large role in that future.

Really good point; much better margins and they could structure it to have direct access to the customer and a monthly service fee (like satellite radio).

The same applies to the battery tech and charging network – although to a lesser extent.

Why deal with the hassle of building cars?

No need to suspect it, that is the stated goal of Musk and he’s been trying for five years. Unfortunately for him, all the other manufacturers have shown no interest.

That obviously changes if he can figure out a way to make it work for real, so it very well might happen someday.

But…

I’m not so confident with this statement. Maybe they get there, but things aren’t going in the right direction. Based on community tracking, the latest version of FSD (14.2) shows a drastic drop in miles to critical disengagement - how often the driver must take control of the car to avoid a problem. For city driving, this was 4,182 miles in v14.1, and dropped to 825 with v14.2. Overall miles showed a similar drop. For context, Waymo didn’t remove the safety driver until they hit 30,000 miles.
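To put those community-tracker figures on a per-mile footing, here is a quick sketch. The numbers are the ones quoted above; the labels and the per-1,000-mile framing are mine:

```python
# Miles per critical disengagement (city driving) per the community tracker
# quoted above; "Waymo threshold" is the ~30,000-mile figure mentioned for
# when Waymo removed its safety drivers.
miles_per_critical = {"FSD v14.1": 4182, "FSD v14.2": 825, "Waymo threshold": 30000}

for label, miles in miles_per_critical.items():
    # Invert to critical disengagements per 1,000 miles.
    print(f"{label}: {1000 / miles:.2f} critical disengagements per 1,000 miles")

regression = miles_per_critical["FSD v14.1"] / miles_per_critical["FSD v14.2"]
print(f"v14.1 -> v14.2: roughly a {regression:.1f}x regression")
```

By that arithmetic, 14.2 needs a critical takeover about five times as often as 14.1, and Waymo’s 30,000-mile threshold is roughly 36 times better than 14.2’s city figure.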

That is such a major regression that it calls into serious question their own pre-release testing of 14.2. Or their seriousness about only releasing winners, not just anything on some fixed cadence or for marketing or stock price manipulation reasons.

I don’t have a Tesla, so most of what I learn about them comes from SDMB and the occasional cite from here to wherever. Or from talking to Uber drivers with Teslas.

14.2 was the first one that changed the way decision making is done. So called “sentience”. I haven’t personally witnessed a decline in performance but those statistics don’t surprise me either. 14.3 is supposed to be the breakthrough “full sentience” and we are months late and counting from the original release date claim.

I have a Tesla; I bought FSD before it was available to use. (So old, I have the older computer, which does not yet have the latest software version.) A lot of my interventions are for longer-term thinking: I prefer an alternate route, not one calculated (like GPS units do) solely from posted speed limits; or I know that on the next block there’s always someone parked in front of that restaurant, so don’t get into the curb lane; or get into the lane now, don’t wait until a block before to try to merge into heavy traffic. It’s not “you tried to get me into an accident”, but these less serious reasons for intervention happen far more often in the city, especially per mile, than on the highway. If you’re a passenger, in a Waymo or a regular taxi, that’s not the passenger’s issue to change. And I’ve had a few “you take over - NOW!” issues, mostly due to messy roads or low, bright sun obscuring the cameras.

There was a bit demonstrated by an anti-Tesla type on YouTube where they showed a Tesla on FSD ignoring the pop-out stop sign on a school bus and running over a small mannequin yanked out in front of it from between parked cars. You could see the car jamming on the brakes, but, like any car, it had no time to stop. The disturbing thing to me is that once the mannequin was lying on the road below the bumper, the car started to resume driving. First, Tesla has since fixed the school-bus stop-sign issue. (The trouble with these derogatory demos is that they don’t specify the version, since each version improves some things.) Second, the FSD “forgot” the mannequin was there once it was out of view of the windshield camera. Newer car versions have added a bumper camera, presumably for this reason.

Computer driving is like autocorrect - very good when it’s routine, but has its “interesting” moments.

I can confirm that the school bus thing used to be an issue but it has been fixed. More recently it is able to recognize sirens from fire trucks, ambulances and cops and pull over.

The other issue with those scare videos is that it might have taken fifty takes to stage the result they wanted.

Very interesting. Thanks for the details.

ISTM something as major as changing the high level decision-making methodology ought to warrant a full new version number, e.g. 15.0. Said another way, if Marketing is trotting out new buzzwords to describe the new version, a new major version number is for sure the right thing to do.

Which numbering would also emphasize to users that they should expect significant differences in obvious features (behaviors in an AI), as well as unexpected changes in corner case behaviors, both good and bad.

At the same time, you’d want (hope?) that shiny new “sentient” v15.0 was better overall than the final v14.x it’s replacing.

I think that the earlier versions of 14 had the sentience thing in the background but it wasn’t being used yet or something like that. That may be the logic.

The community tracker I linked above distinguishes between “critical disengagements” and any disengagement, and those miles were for critical disengagements only. For comparison, on 14.2, 99.1% of drives had no critical disengagement and 71.6% had no disengagement at all, so you are right that the vast majority of disengagements are for non-critical things like your examples.
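A quick subtraction makes that gap explicit (a sketch using the percentages quoted above; the derived category name is mine):

```python
# FSD v14.2 figures quoted above.
pct_clean_of_critical = 99.1  # drives with no critical disengagement
pct_clean_of_any = 71.6       # drives with no disengagement at all

# Drives whose only disengagements were non-critical ones:
pct_non_critical_only = round(pct_clean_of_critical - pct_clean_of_any, 1)
print(f"{pct_non_critical_only}% of drives had only non-critical disengagements")
```

So roughly 27.5% of drives had some disengagement, but nothing the tracker counts as safety-critical.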

Here’s how they’re breaking down the data. I don’t know where navigation issues factor in – non-critical disengagement? Not reported at all?

Disengagement:

  • Turning Steering Wheel: Turning required due to crossing lane unexpectedly, accident avoidance, or other required maneuver to avoid unsafe driving.
  • Braking: Braking required due to late deceleration, accident avoidance, or moving forward incorrectly / unsafely.

Categories of Disengagements:

  • Critical: Safety Issue (Avoid accident, taking red light/stop sign, wrong side of the road, unsafe action).
  • Non-Critical: Non-Safety Issue (Wrong lane, driver courtesy, merge issue).

Intervention: Non-Disengagement (Accelerator tap, scroll wheel, turn signal)
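That taxonomy could be modeled roughly as follows (a sketch; the event labels and the `classify` function are my own paraphrase, not the tracker’s actual code or schema):

```python
# Illustrative event labels paraphrasing the tracker's categories above.
CRITICAL = {"avoid accident", "taking red light/stop sign",
            "wrong side of the road", "unsafe action"}
NON_CRITICAL = {"wrong lane", "driver courtesy", "merge issue"}
INTERVENTION = {"accelerator tap", "scroll wheel", "turn signal"}

def classify(event: str) -> str:
    """Bucket an event the way the tracker appears to."""
    if event in CRITICAL:
        return "critical disengagement"
    if event in NON_CRITICAL:
        return "non-critical disengagement"
    if event in INTERVENTION:
        return "intervention"
    return "unclassified"  # e.g. pure navigation issues, which may go unreported

print(classify("merge issue"))  # prints "non-critical disengagement"
```

Which also highlights the question above: a route-choice complaint fits none of the tracker’s buckets, so it may simply never show up in the data.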

I mean, an innocent creature died, so that’s going to be baseline upsetting.

Now, why did people get so upset? I think you’re basically right, but there is more nuance. The type of mistake matters: one that causes irreparable harm or injury is much less tolerable. It matters that this is replacing humans. And it matters that the reasons a human might make this mistake are behaviors considered morally wrong, behaviors they can be blamed for.

In short, if human drivers are to be replaced, we expect the AI to be safer in all circumstances. And I don’t mean “statistically safer” either. And if it fails in that regard, we want someone to blame.

The social media tag was about “justice” for Kitcat, after all. Its users assume the car did something wrong that an alert, attentive human driver would not have done.