Well… maybe by “reduce” we’re talking about the same thing, but you 1) don’t tailgate, 2) keep your eyes up and look past the car in front of you, so you can get more information about what might cause them to slam on their brakes, 3) keep a mental map of where all the cars are to your left and right, as far ahead and behind as you can track, so that when you check whether the space is clear you’re just confirming what you already know, and 4) do it as quickly as possible so you can get your eyes back to the front. Can I do all of that better than a computer? Obviously not. But this is the level of awareness I’ve tried to instill in my kids. Changing lanes should be a 0-risk maneuver.
But yes, I agree with your general sentiment that this comes down to personal preference. At this point in my life, I don’t want to learn how to supervise my car. I see how it’s a learnable skill, and more power to anyone who wants to learn it.
I don’t understand then. Do you think this is just a vi versus emacs debate?
I think it’s more, “These companies are beta testing on public roads, and that makes me very nervous. I’m concerned that they’re going to release this crap before it’s ready, because they’ve been lying about how awesome it is and how close they are for a decade. They have no credibility.”
You seem to discount the lying completely. “Meh, Elon lies, everyone knows it.” No, Elon is lying to manipulate public opinion so that he can put unsafe cars on the road to line his own pockets.
Yes, I try to do all those things as well. But my mental map is not perfect, and sometimes traffic is dense enough that I can’t maintain the following distance I’d prefer, and so on.
And, needless to say, we’re all meat brains that make mistakes. Pilots with thousands of hours of experience will sometimes do… exactly the opposite of the action they should have taken. It’s just a defect we all live with.
No maneuver is 0-risk, and it’s a little concerning that you think it could be. There is always the possibility of error, and we should always be looking at relative risk.
The rest of your post veers into unhinged rant territory, so I’ll ignore it. This isn’t a place for defending capitalism against the collectivist hordes.
I have driven a Tesla with FSD turned on, multiple times. I hated that, when it did stupid shit, it didn’t tell me why. When it slams on the brakes (which is what happened to me), it should show me what the cameras saw when prompted. Due to that, I don’t have nearly enough confidence in it to zone out in any way. If I can’t zone out, I don’t see the point. I do use things like cruise control on open highways, but more for other drivers than for myself. My motto for driving is always “be predictable”, and cruise control helps with that. Slamming on the brakes in the middle of the road with little traffic does not.
Again, I’d pay dearly for actual FSD. I’d happily spend 6 figures on a working Volvo 360C. If driverless vehicles were the only option outside of a track, I’d be elated. I’m not against the technology, and quite frankly someone (not you) who thinks Wired Magazine is anti-technology probably isn’t someone worth having these conversations with.
I’m a subscriber, and they are way more tech friendly than they probably should be. They originally plugged the shit out of Theranos, for example, although they have subsequently had articles covering the fraud as well. Just based on your other posts, I’m assuming they said something bad about Elon at some point, or do you have actual proof of their anti-technology bias?
Just for giggles, a recent piece had the tagline:
Tech critics are more sophisticated than ever. They’re still wrong.
I mentioned both. Futurism is 100% garbage, while Wired still has a few decent articles here and there but has a pretty low hit rate.
You can just look at the articles and see for yourself. For one thing, it’s about 30% generic left-wing politics. Wired’s always been a little political but with a techno-libertarian slant. Their current top story is:
It looks like the reporter’s job was to find the absolute worst thing associated with AI today and report on that.
Their only EV article is:
You can conclude whatever you want to. But I’ve been an occasional reader since the mid-90s and there’s been a fairly dramatic difference.
“It doesn’t work for me, therefore it’s not useful.”
is more like this:
“It doesn’t work for me, and I am typical of everyone, therefore it’s not useful to anyone.”
And yes, you’re right that most folks will grudgingly admit they’re not a universal example for some things they like, despite steadfastly believing they’re universal examples for things they don’t. Such is human cognition in all its magnificently terrible glory.
I just noticed your avatar. Which just might relate to neural nets, etc. If so, that would certainly color your POV. And perhaps give you a deeper insight into the problem than most of us have.
In many ways this does have the feeling of a tech that will always be out of reach. Not so much the solid brick wall of non-progress that is, e.g., fusion power, but one of those things where the farther we get into it, even while showing real, demonstrable progress, the farther we can see we still need to go.
And to be fair, that does go both ways. I’m actually very interested in steronz’s report that he can’t rewire his brain into “supervisory mode.” That’s something that I hadn’t completely anticipated because it doesn’t apply to me very well.
But again, the argument doesn’t work in both directions. If something doesn’t work for someone, so be it. Most things don’t work for everyone. Things can be useful even if they only apply to a fraction of the population.
The relevant issue with them being tested on public roads is whether they have a higher accident rate than human drivers. And so far the data indicates self-driving cars are safer, or at least no more dangerous.
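To make “higher accident rate” concrete: the comparison only means anything when normalized by exposure, i.e., crashes per million miles, not raw crash counts. Here’s a minimal sketch in Python; the function name and every number in it are made up for illustration, not real NHTSA or Tesla figures:

```python
# Illustrative only: all numbers below are invented, not real crash data.

def crashes_per_million_miles(crashes: int, miles: float) -> float:
    """Normalize a raw crash count by exposure (miles driven)."""
    return crashes / (miles / 1_000_000)

# Hypothetical exposure data for the two populations:
human_rate = crashes_per_million_miles(crashes=190, miles=100_000_000)  # 1.90
av_rate = crashes_per_million_miles(crashes=12, miles=10_000_000)       # 1.20

print(f"Humans: {human_rate:.2f} crashes per million miles")
print(f"AVs:    {av_rate:.2f} crashes per million miles")
```

The usual caveat is that the two sets of miles aren’t directly comparable (self-driving miles skew toward easier conditions), which is part of why people argue over the headline numbers.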
OK, but the discussion here isn’t “You’re wrong to be distrustful, they’re super safe,” it’s “if you don’t like FSD, don’t buy it.”
Like, I get it, people hate on Apple because the fanboys are annoying, and that sort of recreational hate is equally annoying, because if you don’t like iPhones, don’t buy one. But the recent discussion here has tried to portray the current concerns about Tesla and Waymo as no different.
Yes, some people have voiced why FSD doesn’t appeal to them, but the complaints by and large have been unrelated to personal preference.
They don’t seem to be related to anything. Fine, distrust Tesla’s FSD safety data. But where’s the non-anecdotal evidence that it’s actually unsafe?
There was the report that the NHTSA was looking into the Robotaxi launch. That’s not yet evidence that there’s a problem, but it does mean that in a few months we’ll see a report of some kind. They might report that the Robotaxi is unsafe and should be taken off the streets immediately (if true, this should happen quickly). They might find some minor issues and recommend fixes, which would probably be classified as a recall but would actually just be minor software tweaks (they’ve done this before). Or they might find nothing of note whatsoever.
If the NHTSA doesn’t find any truly serious issues, will you “update your priors” about the level of safety for FSD software?
I’m always swayed by new evidence. But also, we keep adjusting the goalposts. That’s not unreasonable; I look back at my OP here and we’re using completely different terminology now, and taking a completely different approach to self-driving. What a time to be alive, these last few years.
So the Robotaxi launch is not L4, we know now, which is something we didn’t know 3 weeks ago. We’ve had to adjust and move the goalposts. And that’s fine too. I think the Robotaxi is a publicity stunt, and if the NHTSA says that, as far as publicity stunts go, it was safe, then OK. It doesn’t tell us anything about whether they can take the steering wheel out of customer cars anytime soon.
ETA: And that’s probably going to be how the next few years go, right? Geofences will get bigger, maybe it disengages in fewer weather conditions, etc. And every step they take, I’ll be distrustful.
Says who? The existence of an in-car safety monitor doesn’t mean it isn’t L4. Waymo had safety drivers for years after their initial launch. And they still need remote control every so often.
I don’t expect it to take nearly that long for Tesla. After all, we just saw a long drive without one. Maybe a few more months, but still this year.
If Tesla quickly increases the number of cars, opens it to the public, expands the geofence, etc., will you admit that it wasn’t a stunt but just a small initial release? I’ll certainly acknowledge that it was one if they don’t make meaningful improvements in a reasonable timeframe. I’m not going to make exact predictions, but it won’t take a rocket scientist to figure out whether they’ve expanded beyond 10 cars, 20 riders, and half the Waymo geofence.
BTW, Tesla released a real-time version of their self-delivery video:
All very relaxing-looking. And also something my relatively ancient version of FSD on HW3 could do. There’s no reason they needed to train specially on this route. I’m sure they just waited for an order that fit their requirements (fairly easy, mixed roads, decent length, etc.).
Show me where in this chart it says there must not be a safety monitor:
It says the features will not require you to take over driving. Being in the passenger seat, the safety monitor cannot take over driving. The car can stop and refuse to go further–in which case possibly the safety monitor gets out of the car, changes seats, and starts driving. But that is inherent to L4 vs. L5, since as it also says, it “will not operate unless all required conditions are met.” Conditions like clear weather and being inside the geofence.
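For what it’s worth, the distinction being described boils down to a simple gate plus a fallback behavior. A toy sketch in Python; the condition names are my own invention, not anything enumerated in SAE J3016:

```python
# Toy model of an L4-style system. Condition names are invented for
# illustration; the standard doesn't enumerate them like this.

REQUIRED_CONDITIONS = ("inside_geofence", "weather_clear")

def can_operate(status: dict) -> bool:
    """L4: the system drives only while every required condition holds."""
    return all(status.get(name, False) for name in REQUIRED_CONDITIONS)

def on_tick(status: dict) -> str:
    # The defining L4 behavior: when a required condition fails, the car
    # itself reaches a safe stop. It never demands a mid-drive human
    # takeover, which is what "will not require you to take over" means.
    if can_operate(status):
        return "keep driving"
    return "pull over and stop"  # a human may then choose to drive manually

print(on_tick({"inside_geofence": True, "weather_clear": False}))
# -> pull over and stop
```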
Now, as it happens, I actually did argue years ago that Waymo wasn’t quite L4 for just this reason. But it’s not something in the standard. And, well, if Waymo is considered L4 then so is the current Robotaxi.
Yes, it’s not in the standard. Nobody at SAE had the foresight to include “hit an emergency stop button” in their definition of “take over”, but to me it clearly meets the intent of “take over.” It’s an immediate emergency intervention. And they had to do it on day 1.
And Waymo claims their remote operators can’t react that fast. Their engineers can only respond to prompts from the car. That’s the difference. I also don’t care what level Waymo is classed at.