Re "self driving cars" people talk as if this is a few years away. This seems nuts to me.

GPS doesn’t do that; the problem is inaccurate map data. And that gets better all the time, especially if the map data comes from outside the car. The navigation system that screws up and tells someone to drive into a river is most likely working from a DVD or hard drive that is years out of date.

Smartphones are only about ten years old. YouTube is too. EVs have gone from a punchline to a common sight in that time as well. Most of the tech we use today didn’t exist in this form only a short while ago.

Technology is moving faster than you realise, and a five-year prediction is an aeon in tech terms. I’m surprised too, but I can believe it.

Cool concept! But instead of averaging data from many drivers, you could get celebrities to record their driving of a particular road or district.

  • The new Maibatsu Thunder: the only car with the driving experience of Paris Hilton.
    (Applies only to Beverly Hills. Other areas pre-driven by lesser celebrities. Midwest pre-driven by nobodies.)

  • Come live in Valley View, where all the roads were pre-driven by Tom Cruise.
    (Applies only to cars using the Bing™ Driving Engine. Please respect the stop sign on 3rd Avenue; he didn’t notice it when he pre-drove.)

Tell that to my Uber drivers. :stuck_out_tongue:

IMO this is the crux, and it *may* in fact turn out to be an insurmountable obstacle.

In my industry it’s well proven that people are lousy monitors of 99.99%-reliable systems. Undesired scenarios occur even with two professionals who are paid to be there monitoring full-time. The good news in aviation is that most of the time these errors don’t become critical until many seconds or even many minutes have elapsed, so even sloppy human monitors catch them before they become irrecoverable.

In the few phases of flight where errors become irrecoverable in just a few seconds or less, we do the work 100% manually by formal procedure, or we do it automatically with both human monitors perched on the figurative edges of their seats, paying very close, deliberately managed attention to absolutely everything that’s relevant and absolutely nothing else.
As higher-level strategic automation was first introduced into aviation, there was an expectation that safety would improve immediately. To everyone’s surprise, it didn’t; at first it got worse. A whole new class of “failed to detect confused/failing automation” errors occurred, sometimes with fatal outcomes. Over the space of a few years we learned as an industry how to operate differently and how to see the signs of growing confusion, whether in the automation itself or in the human crew’s understanding of the automation’s “state of mind”. The end result is that we are now realizing the safety benefits of that automation, albeit at the cost of having to adopt the new and different techniques described above.
As applied to cars:

  1. Monitoring a 99.99% reliable automated car is a different kind of driving vs. the driving we’re all used to. If nobody is teaching people how to do new-style driving, most drivers will suck at it worse than they suck at manual driving now. And doing new-style driving takes a level of self-discipline that the general populace shows no evidence of having.

  2. When something goes amiss in an automated car, the human driver’s window of opportunity to save the situation will be very brief, a couple to a few seconds. Not like most of aviation at all.

  3. Drivers won’t be able to be “just another passenger” until automated cars are (WAG) 99.99999% reliable to do the right thing.

  4. We won’t go from today’s partly automated cars directly to 99.99999% reliability without first passing through 99.9% and 99.99% reliability (see the rough arithmetic sketched below). During that time, drivers’ collective desire to be just passengers watching movies, instead of edge-of-seat active monitors, will be untenable.
The sum of the above, IMO, is that expecting the general driver populace to catch most miscues by their early-model automated cars is unrealistic. There will be crashes, and lots of them. Almost certainly fewer than happen today with purely manual cars, but the novelty and sensationalism value of these crashes will be huge. Most of the crashes, had they been made by a human driver, would look like very dumb amateur or drunk-driver errors.
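To put rough numbers on those reliability levels, here is a minimal back-of-the-envelope sketch. It assumes “X% reliable” means the probability of handling any single mile of driving without needing a human takeover, and assumes roughly 12,000 miles driven per driver per year; both assumptions are mine, added purely for illustration.

```python
# Rough arithmetic: how often would a driver need to catch a mistake
# at the reliability levels discussed above? Assumes "reliable" means
# the chance of handling any single mile with no human takeover needed,
# and ~12,000 miles driven per driver per year (both assumptions mine).
MILES_PER_YEAR = 12_000

levels = [("99.9%", 0.999), ("99.99%", 0.9999), ("99.99999%", 0.9999999)]

for label, reliability in levels:
    takeovers_per_year = (1 - reliability) * MILES_PER_YEAR
    if takeovers_per_year >= 1:
        print(f"{label} reliable: ~{takeovers_per_year:.0f} takeover(s) per driver per year")
    else:
        print(f"{label} reliable: ~1 takeover every {1 / takeovers_per_year:.0f} years")
```

On those assumptions, the jump from 99.99% to 99.99999% is the difference between roughly one takeover per driver per year and one every several centuries, which is why the middle of the transition is the dangerous part.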

The public may well recoil in horror, demanding an end to the transition to safer fully automated cars. I hope this won’t happen. But I do predict everything above this one paragraph will come to pass. How the public, the media, and the government will react is the big unknown to me.

The biggest problem (IMO) with the realisation of self-driving cars is not a technical one; it’s that people will expect them to be perfect, and if they fall short of perfection in any way, they’ll be perceived as ‘too dangerous’.

It’s silly though. They don’t need to be perfect; they just need to be as good and safe as human-driven cars, or better.

The crashes are already happening, and there’s a little novelty and sensationalism, but nothing huge. The public, the media, and the government have all taken the many fender-benders involving the Google car (almost all of them with the human drivers at fault) and at least one fatality with a Tesla in stride.

I think the comparison to aviation is flawed. For one thing, the state of the art for automated systems is now light-years from what it was when such systems were introduced in aviation, and for another, today’s engineers can draw on the lessons learned from those systems.

Yeah, this is exactly the issue I was trying to highlight in the previous post: a car that is “semi-automated”, like the current Tesla Autopilot system, may actually prove to be more dangerous, because by relieving the apparent workload of routine driving it also removes the impetus for the driver to pay attention in a contingency, unless it has features designed specifically to provoke driver response. As drivers become more dependent on such expert systems, they effectively become less skilled at being drivers; hence the need both for genuinely autonomous piloting systems that can cope with every contingency, and for a set of industry-wide standards for testing how an autonomous vehicle responds to various hazard conditions and scenarios.

There is one way in which such vehicles will be automatically better: unlike people, they won’t be subject to fatigue, emotional stress, or anger, and they can be designed with failsafes that prevent them from entering or operating in hazardous conditions. They could also potentially be networked to all the other vehicles around them, coordinating traffic flows to minimize traffic jams and the potential for crashes in inclement conditions. The advantages of highly networked autonomous vehicles, once certain technical thresholds are achieved, are so significant that it is practically inevitable that they will be adopted. But those technical (and especially regulatory) thresholds are not going to be cleared in the next five years.

Stranger

There are indeed many situations that a computer-driven car can’t handle. What people tend to gloss over, though, is the fact that most of those are also situations that human drivers can’t handle… and yet we regularly attempt to handle them anyway, and get into many accidents as a result. If you’re in such a white-out blizzard that the car’s sensors can’t see where the road is, then a computer-driven car will pull over and wait for the weather to clear. This is not a bug: It’s exactly what a human driver should do, too, in the same situation, except that many of them don’t. The fact that the computer has better judgement than the human is not a point against the computer.
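A minimal sketch of that kind of failsafe behavior, in Python. The names and thresholds here are hypothetical, invented for illustration; no real vendor’s system is being described.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    CONTINUE = auto()
    PULL_OVER_AND_WAIT = auto()


@dataclass
class SensorReport:
    lane_confidence: float  # 0.0-1.0: how sure the car is about where the lane is
    visibility_m: float     # estimated forward visibility in metres


# Thresholds are made up for illustration.
MIN_LANE_CONFIDENCE = 0.8
MIN_VISIBILITY_M = 50.0


def choose_action(report: SensorReport) -> Action:
    """If the car can no longer see the road well enough, stop trying to drive."""
    if report.lane_confidence < MIN_LANE_CONFIDENCE or report.visibility_m < MIN_VISIBILITY_M:
        return Action.PULL_OVER_AND_WAIT
    return Action.CONTINUE


# A white-out: lane barely detectable, 20 m visibility -> pull over and wait.
print(choose_action(SensorReport(lane_confidence=0.3, visibility_m=20.0)))
```

The point of the sketch is just that “pull over and wait” is a deliberate, easily encoded rule, not a failure mode.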

Exactly. Most of the situations which result in an accident by a human driver are completely avoidable, and an autopilot could avoid them simply by not being distracted, intoxicated, overly aggressive, or oblivious to hazardous conditions. That alone would essentially eliminate accidents due to driver inattention and hazardous road conditions, leaving only unanticipated collisions (due to animals or pedestrians entering the roadway), loss of control by another driver or vehicle, the rare catastrophic mechanical failure, and, of course, failures by the autopilot system to anticipate reasonable hazards. The last cannot be dismissed, especially given some of the failures that have been observed in software-controlled accelerators, but those errors can be discovered and diagnosed by rigorous testing and reporting of failures, which again calls for industry and/or regulatory standards for such testing, just as exist in the aviation industry. The costs of such testing and of regularly updating such standards would be more than offset by the massive reduction in costs associated with automobile accidents and injuries, as well as presumed reductions in insurance premiums.

Stranger

If part of the “automated car” system means that I have to sit at the controls monitoring the system to see if the automation is failing in some way, then why would I want to buy one? Wouldn’t it be easier just to drive the car myself?

IOW, the benefit to the consumer of automation means I can sit back and do work, sleep, drink whiskey, or whatever while the machine shuffles me around. If I am on duty, I would just rather drive.

Well, exactly. Current systems, like Tesla’s Autopilot or Infiniti’s Driver Assistance Technology, “assist” the driver by removing the need for vigilance. That encourages inattentiveness, which then has to be mitigated by deliberate alerts to keep the driver engaged, which may actually increase driver workload, though not in a constructive fashion. The result is that the driver may spend more effort responding to alerts than actually attending to real hazards, much as “social media” actually distracts people from interacting in the real world.

Stranger

Of course, the ultimate goal is to have a fully automated system so that we can do what you’re suggesting. But even a system where we’re required to monitor the driving is easier than driving the car myself. Think of cruise control systems: in long-distance highway cruising, having cruise control makes a huge difference in relieving driver fatigue, even though we’re still required to monitor road traffic. Now let’s add a radar sensor to the cruise control system so that the car automatically slows down when it senses cars ahead (this adaptive cruise control is already available in lots of cars), and you’ve made the task of driving even easier, even though we’re still monitoring the system.
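For a sense of how little extra logic the radar-assisted version adds on top of plain cruise control, here is a toy sketch. The speeds, gap, and scaling are made up for illustration; real adaptive cruise control is considerably more sophisticated than this.

```python
from typing import Optional


def adaptive_cruise_speed(set_speed_kmh: float,
                          lead_distance_m: Optional[float],
                          safe_gap_m: float = 50.0) -> float:
    """Toy adaptive cruise control: hold the set speed unless the radar sees a
    vehicle ahead closer than the desired gap, then slow in proportion to how
    badly the gap is violated. All numbers are illustrative."""
    if lead_distance_m is None or lead_distance_m >= safe_gap_m:
        return set_speed_kmh  # nothing close ahead: behave like plain cruise control
    shortfall = (safe_gap_m - lead_distance_m) / safe_gap_m
    return max(0.0, set_speed_kmh * (1.0 - shortfall))


print(adaptive_cruise_speed(110.0, None))   # open road -> 110.0
print(adaptive_cruise_speed(110.0, 25.0))   # car 25 m ahead -> 55.0
```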

The point is that computers will inevitably lead to safer driving than having humans fully in control. People often complain about how unsafe and unready the technology for automated cars is, but they never bring up the fact that humans make mistakes all the time. In the US alone, around 90 people die in car accidents DAILY (around 30,000 fatal crashes annually), with around 2 million people injured in car accidents each year. Many, if not most, of these accidents are caused by driver error. Having computers take over more control of driving can only lead to fewer people dying (and I say this as a car enthusiast who enjoys manually shifting my own gears).
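For what it’s worth, those rough figures hang together arithmetically. A quick sanity check using only the numbers quoted above:

```python
# Quick sanity check on the rough figures quoted above.
deaths_per_day = 90
deaths_per_year = deaths_per_day * 365
print(deaths_per_year)               # 32850 -- roughly consistent with ~30,000
                                     # fatal crashes, since some crashes kill
                                     # more than one person
print(deaths_per_year / 30_000)      # ~1.1 deaths per fatal crash
```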

If the automation becomes mandatory, how will country people on rural dirt roads get where they need to go, and how will they afford all this great tech?
Will they be able to go to town, or afford to go to town, if they can’t afford the Google car?
We will no longer have people who can drive manually. Will they be able/allowed to rent cars or get driver’s licenses in other countries without passing a test? Time & cost, anyone?
The tech is close, IMO.
Actual useful implementation on a large scale is much farther out, IMO.
Improvement in the quality of life of 51% of the country’s citizens is much farther out than that, IMO.
And it will widen the division in our two-class society, which is already happening very quickly, IMO.

Not everything that can be done with tech should be done, at least not without much forethought. IMO.

Gus, when exactly are you expecting this to become mandatory?

It will only become mandatory when driving your own car is seen as the equivalent of driving drunk today. The fleet of cars on the road today isn’t going to be scrapped and replaced by self-driving cars; rather, some people will choose to buy a new-style car when their old vehicle gets replaced, or there might be advancements in fleet taxis that make owning your own car less financially smart.

By the time half of all cars on the road are self-driving cars, they’ll be able to handle country dirt roads. The roads themselves are just a mapping problem. Rural road conditions like cows and deer and washouts and potholes and washboard road and dust and mud and downed trees will actually be a lot easier for a self-driving car to handle. Humans are terrible at avoiding hazards like that.

And even when we go to cars that are 100% driverless, there will still be some method of guiding a car in some manual fashion, even if it means getting out your phone and steering it with a touchscreen app.

And in the future world of the future, a 100% self-driving car is eventually going to be cheaper than a manual car.

It’ll most likely become widespread not because it’s mandatory, but because the insurance for automated cars will be far less than for manual ones, assuming they are actually safer. And by that point, the price will be low enough that anyone who could afford a car now could afford one. Potentially much lower, as there could be many changes to the shape and style of cars, and to the controls, once they no longer need to be designed around giving a driver minimal blind spots.

Please state the nature of the transportation emergency
I will never own or ride in an SDC. Heck, I hate automatic transmissions and other nannyware “driver assists”. None of the cars I own will have any of this newfangled nannyware crap; if that means I have to stay with older cars, so be it. Automation in cars should be distrusted.

Driving should be a skill, one the driver puts 100% effort into: no cell phone calls, no texting, no Pokémon Go-ing. Pay attention to the road!

I agree entirely, which is why I’m 100% in favor of computer-driven cars, so the driver can do all of that while I’m spending my time the way I like.

As an example, here is a self-driving concept car from Mercedes.

The legal issues need to be sorted out. Obviously, the car has to have an algorithm for the situation where it must hit either another vehicle or a pedestrian: which one does it hit?
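Purely as an illustration of why that algorithm is legally fraught, here is the kind of decision rule people imagine. It is entirely hypothetical; no manufacturer has published anything like this, and the weights are exactly the part that would be argued over in court.

```python
# Entirely hypothetical "which collision is least bad" rule. Every number is a
# made-up harm weight, and choosing those weights is precisely the legal problem.
HARM_WEIGHTS = {
    "pedestrian": 10.0,
    "cyclist": 8.0,
    "occupied_vehicle": 5.0,
    "parked_vehicle": 1.0,
}


def least_bad_option(unavoidable_targets: list) -> str:
    """Pick the target with the lowest assumed harm weight."""
    return min(unavoidable_targets, key=lambda target: HARM_WEIGHTS[target])


print(least_bad_option(["pedestrian", "occupied_vehicle"]))  # -> occupied_vehicle
```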

Lawsuits up the wazoo are waiting.