In the 2nd quarter, we recorded one crash for every 4.41 million miles driven in which drivers were using Autopilot technology (Autosteer and active safety features). For drivers who were not using Autopilot technology (no Autosteer and active safety features), we recorded one crash for every 1.2 million miles driven. By comparison, NHTSA’s most recent data shows that in the United States there is an automobile crash every 484,000 miles.
So using their Autosteer option is about 4 times safer than just the active safety features, which is itself between 2 and 3 times safer than the US national average. And these numbers have been pretty consistent if you look at past years’ numbers.
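For anyone who wants to check the arithmetic, the ratios come straight out of the miles-per-crash figures in the quote above (a quick sketch, nothing more):

```python
# Miles per crash, taken from the quoted Q2 report and the NHTSA figure
autopilot = 4_410_000      # Autosteer + active safety features
active_safety = 1_200_000  # active safety features only
us_average = 484_000       # NHTSA, all US driving

print(autopilot / active_safety)   # ~3.7x -> "about 4 times safer"
print(active_safety / us_average)  # ~2.5x -> "between 2 and 3 times safer"
```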
A lot of work went into RFID early in the autonomous vehicle era, but like vehicle-to-vehicle, it seems to have died down. The biggest issues IIRC were accurately locating/triangulating on individual RFID chips at speed, the short range available to RFID, and (perhaps the biggest one) the all-or-nothing nature of the technology…essentially everyone has to commit to it, including equipping cars with big antenna sensors and doing the actual neural net training. It’s an attractive idea, but I don’t know of any company committed to it.
CGP Grey over on YouTube did an FSD Beta test on a winding rural road in the Smokies:
A full real-time version of this, and of his test in urban situations, is linked from his page. What you mention about being in the car with a learning teenager is quite on point – the observation is that the system drives hewing very, very, very exactly to the letter of the law and well below the edges of the performance envelope(*). It will, for instance, be hesitant if there is a blind spot to one side of a stop, or if it “thinks it saw” something halfway through an intersection turn (which can be a hazard), and it will every so often put the tires on the divider line.
. . .
(* The original post title was “deadliest road” and the phrase is still used in the content, but the test itself showed the obvious: that the winding and scenic but well-maintained, well-signalled road is not at all “deadly” if you drive in a sensible manner.)
I’ve now had much more experience with the FSD beta, and several updates to the beta, and my summary still stands. It’s like a teenage driver who gets easily confused and occasionally makes mistakes, but also does perfectly fine much of the time.
I haven’t watched the CGP Grey video, but I’ve seen others, and their experiences are usually about the same as mine. Some of them have a “hurr, hurr, hurr, beta software isn’t perfect” tone, which I find annoying but which, given the promises Musk makes, is probably deserved.
The last couple of updates have improved turning behavior. It seems more confident on left turns now, in that it will go when it decides to go, instead of go-stop-go-stop and missing opportunities (just like an inexperienced driver).
Mostly, it is still Beta, with a capital β. Driving with FSD requires just as much or more attention as driving with it off; it’s just a somewhat different kind of attention. I not only have to be aware of what other cars do, but also of what my car does, because it may suddenly do something wrong.
I’ve never had to intervene because I thought it was going to get in a collision. I frequently intervene because it’s doing things that may annoy other drivers, such as being indecisive at controlled right turns. I also frequently intervene (at the same spots) to slow down for dips or humps in the road.
I really do wonder what Waymo’s realistic path to success is, and how much capital they’ve sunk into this. Here’s yet another person who takes a ride in Phoenix, where they still require a driver when weather is inclement (though the weather ends up being good for the reporter, so no human driver).
The sudden stop doesn’t bother me as much as the repeated parking in the fire lane. But even more strange is the size of their service area.
They started road testing in 2015, and opened rides to the public in 2017. And this is where they are now.
The world is aslosh with capital, and Alphabet more than most. I guess they’ll just keep spending it.
Electric vehicles have predictably taken center stage at this week’s semi-virtual CES, formerly known as the Consumer Electronics Show, in Las Vegas, but there has been an eye-catching secondary role for autonomous vehicles. In her keynote speech Wednesday, General Motors Chief Executive Officer Mary Barra said her teams are aiming to deliver a consumer-oriented driverless car “as soon as the middle of this decade.”
Chinese car maker Geely hopes to offer one even before that. Ahead of a planned minority initial public offering this year, Intel subsidiary Mobileye said Tuesday it was working with Geely’s new EV brand Zeekr to launch a car in 2024 with “Level Four” autonomous capabilities, meaning it won’t need input from a human driver within certain parameters, such as good weather or specific geographic boundaries.
Volvo plans to sell self-driving cars in California this year of our pandemic, 2022. Volvo’s rep says, “We will not require hands on the steering wheel and we will not require eyes on the road.”
There are a lot of unknowns here. These will be geofenced Level 3 cars available only in California. Volvo doesn’t say to whom they will sell these cars, so we don’t know whether regular retail customers will be able to purchase them. We don’t know how many they plan to sell. It seems they still need a permit of some kind from the California Department of Motor Vehicles to “test” their autonomous vehicles (shouldn’t the testing be done by now?). The feature will be offered on a subscription basis only, meaning Volvo can shut down the subscription service and brick the self-driving features any time they choose. Still, the headline gives me hope.
It’s not exactly a controlled experiment, though, since I’m guessing Autosteer tends to be used on highways, which human drivers also navigate much more safely than other roads. And does Autosteer just turn off when the going gets rough? If so, that would be another source of bias.
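To make that concrete, here’s a toy example with completely made-up miles-per-crash numbers (none of these come from Tesla or NHTSA) showing how a highway-heavy Autopilot mix can look much safer overall even if it’s no safer on any particular road type:

```python
# Hypothetical figures, purely to illustrate the road-mix bias.
HIGHWAY_MILES_PER_CRASH = 3_000_000  # highways are safer for everyone
CITY_MILES_PER_CRASH = 600_000       # city streets are riskier for everyone

def overall_miles_per_crash(highway_share):
    """Blend the two road types, given the fraction of miles driven on highways."""
    crashes_per_mile = (highway_share / HIGHWAY_MILES_PER_CRASH
                        + (1 - highway_share) / CITY_MILES_PER_CRASH)
    return 1 / crashes_per_mile

# Same per-road-type safety in both cases; only the mix of roads differs.
print(overall_miles_per_crash(0.9))  # ~2.1M miles/crash for a 90% highway mix
print(overall_miles_per_crash(0.3))  # ~0.8M miles/crash for a 30% highway mix
```

With identical safety on each road type, the 90% highway mix comes out looking roughly 2.7 times safer than the 30% mix, purely because of where the miles were driven.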
Last week at the Consumer Electronics Show, General Motors unveiled a two-passenger self-driving vehicle called InnerSpace that will supposedly be offered by “mid-decade”. (Though that seems a really big car to have only two seats.)
It is consistently getting better, but is still a long way from ready for unsupervised use.
Each Tesla FSD beta update over the last few months has had noticeable improvement in things like handling intersections.
“Autopilot” on freeways has been excellent for several years, and using it on a freeway saves me mental energy. I’m still paying attention, but the car is consistently correct and predictable in its decisions.
“Full self driving” on city streets is fine going straight, stopping at red lights, and going on green. Once turns and non-straight paths through intersections are involved, the car is neither predictable nor correct all of the time. It takes more mental energy to manage FSD (with turns) than it does to just drive the car manually.
At this rate, maybe it will be ready for unsupervised use in ideal conditions in another few years.
As it is now, making a right turn at a red light, the car can simultaneously watch the cross traffic, the state of the light, and the pedestrians at the corner. I can’t do that. So why does the car wait for a gap in traffic to close before trying to make the turn?
Definitely overhyped and overpromised, but not quite a Theranos-level fraud, as in not working at all.
Tesla is “recalling” cars for not coming to a complete stop at stop signs. I put it in scare quotes, because all they’ll do is send a software update, just like they do once a month or so anyway.
The interesting part to me is that this puts a number, 53,822, on how many cars have the FSD beta.
Tesla’s rolling stops, or “assertive” stops as they call them, are actually better than what I see most drivers do at the stop sign across from my house, so I think much of the outrage over Tesla cars breaking the law is overblown. On the other hand, the cars should be obeying the law. The computer knows when all four tires have stopped completely, so it should be possible to stop for the absolute briefest of moments and then proceed if it’s safe.
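To spell out what I mean, something like this would be enough (a toy sketch; the signal names here are hypothetical placeholders, not anything from Tesla’s actual software):

```python
# Toy stop-sign logic: require a true zero-speed instant, then go as soon as it's clear.
def may_proceed(wheel_speeds, path_is_clear):
    """Proceed only after all four wheels have actually reached zero
    and the intersection is judged clear."""
    fully_stopped = all(speed == 0.0 for speed in wheel_speeds)
    return fully_stopped and path_is_clear

# The stop itself can be the briefest of moments: one control cycle at zero is enough.
print(may_proceed([0.0, 0.0, 0.0, 0.0], path_is_clear=True))  # True  -> go
print(may_proceed([0.4, 0.4, 0.3, 0.4], path_is_clear=True))  # False -> still rolling
```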
Self-driving cars* should stop at stop signs - for now. If they get good enough at recognizing pedestrians and other traffic and stopping for them as necessary, perhaps some day we could give them special dispensation to roll through stop signs. When they misjudge, however, they should be strictly liable for any injuries to other road users and damage to property. Of course, self-driving cars don’t exist yet, so maybe when they do, they’ll already be good enough to ignore stop signs.
Honestly, I feel like an absolutely full stop, as in the car rocks back onto its suspension, with immediate acceleration again, would feel unnatural and dare I say robotic.
I sort of agree. Right now, if I’m sitting in the driver’s seat of my car with self-driving enabled and there is an accident, it is my fault, and that is how it should be. It’s my responsibility to take control if FSD does something wrong, and my responsibility to be paying enough attention. Sort of like how, in flying, the pilot-in-command can be blamed even if it was the student who made the mistake.
If in the future there are real self-driving cars (if ever), where the occupants are just passengers, then they shouldn’t be blamed for an accident. Sure, there can be edge cases, like the owner/occupant being negligent on maintenance, or the occupant interfering with the car, but generally if the accident is due to a software problem, then the fault should go to the entities that wrote, certified, tested, approved, etc. the software.
In my experience, Tesla isn’t doing rolling stops where people only slow enough to make the turn; it’s more a matter of getting down to almost completely not moving, and then going again. It’s about the same level of stop that I do when I’m driving, and I don’t think of myself as running stop signs, though perhaps I technically do. Or maybe the stop signs I FSD through require that much slowing for the computer to see everything, and at other signs they’re rolling through much more.
Really, unless the new update forces full-on, prolonged, driving-test-style stops, the delay shouldn’t matter to people, but of course people are assholes when they drive, and I’m sure they’d like their self-driving cars to be assholes too (it’s just other people’s self-driving cars that should be kind).
Well, some of the questions about liability might get answered here, where a Tesla on Autopilot ran a red light and killed two people. I’m pretty sure Tesla owners have to sign a waiver on liability, but those don’t always stand up.
Where sign = click-through, yes, you do have to agree to pay attention and take over if there are any problems. I don’t remember how deep it goes into liability.
These charges seem correct to me. In 2019, autopilot would have kept the car in the lane, and slowed or stopped for other cars in the lane. It did not stop for red lights. It did not do much to watch for cross traffic (ironically, it would brake hard for cross traffic after it had cleared the lane). It was barely smarter than a mechanical cruise control. The driver isn’t just ultimately responsible, but is directly responsible for this collision.
Tesla’s official communications have always been very clear about the limitations and responsibilities of using autopilot. The car puts up lots of warnings when autopilot is enabled as a feature. There are more warnings in the owner’s manual. There are also warnings each time autopilot is used. Now, where I do see liability issues is when Musk tweets or otherwise makes marketing claims that far exceed the capabilities described in the official documentation.