Self-driving cars are still decades away

How can a car be “remotely controlled” and “self-driving”?

Starsky Robotics CEO Says Remote Control Trucks are the Future

Our trucks are autonomous on a highway and remote-controlled by drivers for the first and last mile. Remote drivers sit comfortably at the office and help complete off-highway operations and safely navigate complex, context-heavy traffic environments such as truck yards.

“In terms of the functionality – basic functionality aspirationally end of this year but reliable enough that you do not need to pay attention in our opinion by the end of next year”.

Elon Musk, October 2019.

“While it’s going to be tight, it still does appear that we will be at least in limited – in early access release of a feature-complete Full Self-Driving feature this year… so feature-complete means it’s most likely able to do that without intervention, without human intervention, but it would still be supervised… requiring supervision and intervention at times. That’s feature complete… and it doesn’t mean like every scenario, everywhere on earth, including every corner case, it just means most of the time.”

Elon Musk, October 2019.

There was a Nova episode on self-driving cars the other night. Watched it last night. Fairly interesting, although at times it verged on seeming like one of those classic Blood On The Highway scare films from HS driver’s ed.

The common issue: computers are better than humans … most of the time. So accidents will be less common. But there are going to be accidents a human easily would have avoided that the computer will miss. And that’s what a lot of people will focus on.

The continuous “up time” that an auto auto needs to run to avoid causing an accident is astonishing. And yet they rely on very complex computers that can’t even avoid (their kind of) crashing for very long. What happens when the car’s OS crashes or gets hacked? The gulf between a 6-hour test drive and driving regularly for 10 years in all kinds of conditions is vast.

With “assisted” driving the big, unsolvable issue is keeping the human involved. People are going to use their phones, sleep, eat, etc. So the car ends up doing all the driving when it’s not set up for that.

Many of the people interviewed explicitly said we are decades away. But companies are rolling out stuff at a faster rate, so Bad Things are going to happen.

It’s good people are working on this, but I think it’s overall unattainable unless you have:

A: Sufficiently functional AI, smart enough that it’s almost self-aware, and sufficient sensor input to let that AI perceive the world at least as well as we do.

B: A road system that fully supports self-driving vehicles, and corresponding systems in the vehicles to utilize that road system. Some sort of consistent guides/boundaries that the self-driving vehicles can use, like railroad tracks; you could probably make self-driving trains more easily than self-driving cars.

Otherwise you aren’t going to brute-force program self-driving vehicles to navigate all the complex road situations that humans do. Yes, humans are ignorant and distracted, but they also have superior visual processing and reasoning that no computer has.

A few months ago, I saw a news report on HBO about GM’s self-driving technology. It has cameras on the driver, and if the driver nods off or looks away (say, at the passenger) it first makes warning noises; then, if the driver does not return their attention to the road, it will bring the car to a stop.

I’ve been on more self-driving trains than I can count. You may have too. Many airport trains are completely self-driving and have been for years.

Musk has always been good conversation fodder.

There aren’t many industrialists who can convince people to put money down and then wait patiently for the privilege of praising a poorly built car. And just in case that wasn’t enough, he’s killed off a bunch of the ones who believed the word “Autopilot” meant the car could drive itself. Because changing the name of the option would require… what?

It’s difficult for people to understand the complexity of a self-driving car. That’s made worse by a desire to believe in promises of technology that is “just around the corner”.

It’s not just around the corner. It’s extremely complex. Information is fed into a computer in the form of visual pixels and telemetric data. These have to be combined in a way that allows lines of code to correctly identify objects, the movement of those objects, the road the vehicle is operating on, weather conditions, etc. There is an infinite combination of data points creating an infinite number of solutions to solve. All this while the car is moving: the faster the car is traveling, the faster the solutions have to be calculated. If any of the input is delayed or misinterpreted, then the solution is based on flawed data. And all of that assumes a solution exists in the program.
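To put a rough number on the “faster solutions at higher speed” part, here is a toy calculation. The 100 ms update interval is purely an assumed figure for illustration, not a spec from any real system:

```python
# Back-of-the-envelope look at how the decision budget shrinks with speed.
# The 100 ms perception/planning cycle is an assumed figure, not a real spec.
CYCLE_TIME_S = 0.100  # assumed time per sense-and-decide update

for speed_mph in (25, 45, 70):
    speed_m_per_s = speed_mph * 0.44704              # mph -> meters per second
    blind_distance_m = speed_m_per_s * CYCLE_TIME_S  # distance covered per update
    print(f"{speed_mph} mph: ~{blind_distance_m:.1f} m traveled per "
          f"{CYCLE_TIME_S * 1000:.0f} ms update")
```

Even at that modest tenth-of-a-second update rate, a car at 70 mph covers roughly 3 meters between decisions, before you add any delay from late or misread sensor data.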

We can barely get a robot to walk under controlled conditions, yet we expect a car to interact with an infinite number of situations at 70 mph with enough accuracy to prevent fatal accidents.

I’m not too worried about that. These computers aren’t running on Windows 95, so the entire computer crashing is going to be an extremely rare event. The programs it’s running encountering bugs and restarting will probably be more common.
The most recent report by Tesla is that cars traveling on autopilot are much safer than those that are not:

It is possible to pick holes in that: new cars are less likely to get in accidents than old cars, and all Teslas are new; autopilot is primarily used on limited-access highways, and those are safer than other streets; etc. However, one accident every 4.3 million miles versus one every 500,000 miles is a huge difference. Limited-access highways have about half the accidents of other streets, not 8 times fewer. Autopilot is clearly a huge improvement in safety.
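Just to sanity-check the arithmetic on those figures (the 2x highway adjustment below is my reading of the “about half the accidents” point, not a measured number):

```python
# Rough arithmetic on the figures quoted above; the highway adjustment factor
# is an assumption based on "about half the accidents", not measured data.
autopilot_miles_per_accident = 4_300_000
manual_miles_per_accident = 500_000

raw_ratio = autopilot_miles_per_accident / manual_miles_per_accident
print(f"Raw ratio: autopilot has ~{raw_ratio:.1f}x fewer accidents per mile")

# Suppose limited-access highways by themselves halve the accident rate.
highway_adjustment = 2.0
adjusted_ratio = raw_ratio / highway_adjustment
print(f"After the highway adjustment: ~{adjusted_ratio:.1f}x")
```

Even after knocking the raw 8.6x down by a factor of 2 for the highway effect, you are still left with roughly a 4x gap, which is the range the rest of this thread argues about.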
I am concerned about the fear-mongering around self-driving, because that will cost lives. If, for example, per million miles self-driving cars have 1/10 the accidents of human-driven cars, then the best thing to do to prevent accidents is to get rid of human drivers, not ban self-driving until that number is down to 0.00 accidents per million miles.
I finally figured out how to properly describe autopilot to people who have never used it: I’m one of those annoying passengers who mimes using the brake when the driver isn’t slowing as soon as I expect. I’m paying attention to the road even when I’m not driving. It is not a far stretch from that to what I’m doing when the car is on autopilot. I use autopilot every day, and I’ve only had to take control to prevent autopilot actions a few times. Frankly, I feel safer on autopilot than with several drivers I know.
Autopilot seems very good at keeping control of the car, holding speed, following the lane, keeping the appropriate distance from other cars, and navigation. What it is bad at is making decisions about what lane to be in for best traffic flow, and predicting the actions of other drivers (is that car going to move over, is it going to stay in its lane, etc.). Autopilot is that teen driver who has good car control, perception, and reaction skills, but isn’t smooth, is too timid to change lanes into a small gap, and can’t gauge the flow to merge into heavy traffic.

For all of my praise of autopilot, I am NOT willing to gamble $3,000 that full self-driving will be available anytime soon. I just don’t see door-to-door self-driving, even monitored, as being usable in the next few years. If in a few years it is that good, then I’ll happily spend the $5,000-6,000 it will cost me then.

It’s possible to pick even more holes in that, since Tesla has the luxury, like all self-driving systems doing real-world testing, of quitting if the going gets tough. Cruising for 1,000 miles and then forcing the driver to take over in a construction zone, followed immediately by an accident, should not count as 1,000 accident-free miles. It’s like the junior dev we had at work who we told only to take tasks off the board if he knew how to do them. After 3 months of taking all the easy tasks and leaving all the hard tasks for other devs to struggle with, he had the gall to ask for a promotion based on his superior output. Suffice it to say we changed our rules after that.

“60% of the time, it works every time.”

Yes, autopilot will disable itself in conditions it is not able to handle. Is that autopilot’s fault, though? The other night I was driving in a snowstorm. At some point autopilot gave up and told me to take control because it could not see the road lines. I could not see the road lines either. If I got in an accident a bit later, is that autopilot’s fault? Perhaps it was autopilot’s way of saying I should not have been driving in those conditions.
The real question, and I don’t have enough information to answer it conclusively, is whether factoring in all of those things makes autopilot 4 times safer than a human driver, 2 times safer, or no safer at all. Autopilot is probably still safer than a human driver even when restricting the dataset to human driving in conditions that autopilot would have been able to handle.

All of the complaining about rare and edge cases is fine. Those need to be handled eventually, but don’t let it distract from the benefits that are available immediately. Several times per month I see rear-end accidents on my freeway commute. Autopilot will not have those. Preventing them doesn’t even require autopilot, just traffic-aware cruise control, which lots of cars have.

“Fault” is a human concept. I think adaptive cruise control and lane-keep assist are great safety technologies. I’m sure the data supports that. A car with a human driver plus those systems is probably safer than a car without.

Using that data to then say that a fully autonomous car will be safer than a human driver is a leap of logic. For one, and most importantly, fully autonomous cars don’t exist, so we can’t get data from them. For two, it may very well be that we’ve hit the limit of what current autonomous technology can do more safely than a human. And it might also be true that those systems rack up a lot more miles, but then maybe accidents per mile is not the right metric.

I kinda love autopilot. I use it pretty much whenever I am on the freeway. Stop-and-go traffic is the best application because it eliminates mental and physical fatigue. But pretty much all the time it’s good. It’s not fully autonomous by any stretch, and I can see many situations where I’d take over, but for the most part it’s been a very, very useful tool over thousands of miles driven so far.

Re: using statistics to compare the safety of a human driver vs an automated vehicle.

If looking at the number of serious accidents per million miles driven, human driver vs. automated vehicle, I’m only interested in the human-driver statistic where the human driver was

sober, and
obeying the road laws

(So this statistic would include the serious injury or death of a sober & law abiding driver who is hit by a speeding drunk, but not the speeding drunk if they are also seriously injured).

If I’m looking at the effectiveness of an automated vehicle, I’m comparing it to the accident rates of vehicles equipped with collision avoidance systems, which are commonplace now on new vehicles, and which are already substantially reducing crashes. They’re only on about 15% of road vehicles right now, but get that number closer to 100% and autonomous cars aren’t going to have nearly as much, um, impact on accident rates.
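To make the adoption point concrete, here is a toy model of how the fleet-wide crash rate falls as collision avoidance spreads. The baseline rate and the 40% per-vehicle reduction are illustrative assumptions, not real-world figures:

```python
# Toy model: fleet-wide crash rate as collision-avoidance adoption rises.
# BASELINE and PER_VEHICLE_REDUCTION are assumed numbers for illustration only.
BASELINE_CRASHES_PER_MILLION_MILES = 2.0   # assumed rate for unequipped cars
PER_VEHICLE_REDUCTION = 0.40               # assumed crash reduction per equipped car

def fleet_rate(adoption_fraction: float) -> float:
    """Average crash rate when a given fraction of the fleet has the system."""
    equipped = adoption_fraction * BASELINE_CRASHES_PER_MILLION_MILES * (1 - PER_VEHICLE_REDUCTION)
    unequipped = (1 - adoption_fraction) * BASELINE_CRASHES_PER_MILLION_MILES
    return equipped + unequipped

for adoption in (0.15, 0.50, 1.00):
    print(f"{adoption:.0%} of fleet equipped: {fleet_rate(adoption):.2f} crashes per million miles")
```

The point being that the baseline an autonomous car has to beat keeps dropping as these systems become standard, so the comparison should be against that moving target, not against the fleet as it looked ten years ago.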

This statistical comparison you want to draw is irrelevant to me because many drivers are drunk and breaking the law. I care that self-driving cars are safer than the average driver. But your view raises an interesting hypothetical: what if self-driving cars are worse than the best drivers but better than the worst? We could save a lot of lives if we got drunks and other dangerous drivers into self-driving cars even if those cars are not as good as the best drivers.

This is also a good point. I don’t know how good today’s systems are. I suspect that as they get better, people may risk-compensate and eliminate much of the benefit of those safety systems. For example, if your car will keep itself in its lane and stop automatically in an emergency, you can look away from the road for a bit longer when composing that text message that just can’t wait. Today’s Teslas are a perfect example. No one slept behind the wheel on purpose before Tesla offered its semi-autonomous systems. Now Tesla drivers do it with some regularity.

I think we’re saying the same or similar thing.

In terms of whether I personally would feel that an automated vehicle is safer for me than taking the wheel myself…

I am not interested in knowing whether an automated vehicle is safer than the “average” driver. I only want to know if the automated vehicle is safer than a sober & law-abiding “good” driver.

Daimler (parent company of Mercedes-Benz) has had a similar realisation:

“The CEO of the German automaker said during an investor conference that self-driving cars are more challenging than once thought.”

“The automaker’s CEO, Ola Kallenius, reportedly said during an investor conference that there’s been a “reality check” surrounding robotaxi fleets.”

“Daimler’s engineering team has found it more challenging than expected to develop self-driving cars.”