Why the Popular Resistance to Driverless Cars?

Hence the need for robust testing requirements to certify a design for autonomous driving use, and well-documented configuration management that is transparent to regulatory and consumer oversight groups, rather than just allowing manufacturers to deploy and modify control software on a whim. This is not some new concept; while commercial productivity software and smartphone apps have notorious bugs and instabilities (because such testing is neither required nor dictated by any mission-critical need of users), software components in critical applications such as aerospace or nuclear fission plant control are highly robust and extensively tested to assure statistically determined reliability, often by actual “hardware in the loop” (HITL) testing using a full-up integrated system to simulate complex interactions. Anomalies are addressed by regression testing: fixing an error and running back through the test regime from a pre-determined start point.
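To make the regression-testing discipline concrete, here is a minimal sketch in Python of what a pinned regression suite might look like; the toy controller and the defect scenarios are purely illustrative stand-ins, not anyone's actual flight or vehicle code:

```python
# Minimal sketch of a regression suite: every defect ever fixed gets a
# pinned scenario that is re-run on every build, so a fix cannot
# silently regress. plan_brake() is a toy stand-in for the real system.

def plan_brake(speed_mps: float, obstacle_range_m: float) -> bool:
    """Toy controller: command full braking if the car cannot stop
    inside the detected range (7 m/s^2 is an assumed deceleration)."""
    return speed_mps ** 2 / (2 * 7.0) >= obstacle_range_m

# (speed m/s, obstacle range m, expected decision) for each fixed defect.
REGRESSION_CASES = [
    (30.0, 40.0, True),    # e.g. a late-braking defect at highway speed
    (10.0, 200.0, False),  # e.g. a phantom-braking defect on distant returns
]

def test_fixed_defects_stay_fixed():
    for speed, rng, expect in REGRESSION_CASES:
        assert plan_brake(speed, rng) is expect
```

In a real HITL rig the inputs would be full sensor playback into actual hardware rather than two floats, but the bookkeeping principle is the same.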

By doing this, we’ve actually created highly automated software systems of incredible reliability. Every rocket launch vehicle that has flown to space in the last thirty years has had very complicated, highly redundant integrated software, instrumentation, and telemetry systems, and the failure rates seen in final integration testing (much less flight) correspond to mean times between failures on the order of thousands of operating years. The only software failures I am aware of in launch and space vehicles have been driven by cutting corners and not performing the industry-recommended regression testing protocols.

As for ‘hacking’, again, current vehicles with mostly insecure CAN networks are already quite vulnerable to malicious interference. One of the test requirements for autonomous systems would be to demonstrate security and authentication for any changes or external commands. The cryptographic technology for this is well developed and essentially unbreachable if implemented correctly. Compared to the other problems of autonomous driving systems, such as directing a vehicle to park in a particular spot, or going off-pavement, or other “fuzzy problems” that are difficult to create definitive rules for, securing such systems against attack is a matter of applying basic software security and abstraction principles. If a system does detect unauthorized access, an “overwatch” component shunts vehicle control into a fail-safe mode until software integrity can be verified.
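As a rough illustration of how well-trodden this ground is, here is a sketch of signed-update verification in Python using the third-party cryptography package and Ed25519 signatures; the key handling and names are assumptions made up for the example, not any manufacturer’s actual scheme:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Manufacturer side: the signing key never leaves the release server.
signing_key = Ed25519PrivateKey.generate()
update_blob = b"...new firmware image..."
signature = signing_key.sign(update_blob)

# Vehicle side: only the public half is provisioned into the car.
vehicle_pubkey = signing_key.public_key()

def accept_update(blob: bytes, sig: bytes) -> bool:
    """Install only updates signed by the manufacturer; anything else
    triggers the 'overwatch' fail-safe described above."""
    try:
        vehicle_pubkey.verify(sig, blob)  # raises on any mismatch
        return True
    except InvalidSignature:
        return False

assert accept_update(update_blob, signature)
assert not accept_update(update_blob + b" tampered", signature)
```

The vehicle never holds the signing key, only the public half, so even full physical access to the car yields nothing that lets an attacker forge an update.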

These hypotheticals about a computer needing to “make a decision” about who to save and who to kill assume some kind of moral decision-making on the part of the controller. In fact, the entire point of an autonomous driving system is to avoid getting into the kind of situations that lead to such events by maintaining far better situational awareness and reaction time than the best human driver, operating the vehicle within safe limits for the environmental conditions, and not being distracted or overwhelmed by information as a human driver can. Frankly, if a pedestrian steps out into moving traffic so unexpectedly that an automated drive system could not safely stop or avoid impact, it is manifestly unlikely that a human driver could respond more promptly or with better judgment. That’s not some kind of “logic versus morality” argument; that is basic physics and biomechanics.

Stranger

There might be edge cases where the car’s motion planner determines it can swerve into a collision with a non-human obstacle like a telephone pole instead, missing the pedestrian. But a telephone pole is going to harm the occupant of the car more. Even if the planning software doesn’t know that a telephone pole masses more than a pedestrian and is rooted to the ground, the lateral forces from swerving sideways are more likely to injure the occupant than a straight-on collision with the pedestrian would.

It is true that these kinds of cases would probably rarely occur in practice, but it would be possible to create them artificially during the hardware-in-the-loop testing you advise above, and you do need to arrive at an acceptable solution for them.

This is absolutely not true. In fact, most of the safety features that are now required on automobiles, such as passive impact restraints, occupant airbags, collision detection systems, impact ‘crumple’ zones, tempered safety glass, whiplash protection systems, et cetera, were all either options or novel features introduced by (some) manufacturers long before they were required by government regulations. Similarly, many manufacturers provide backup cameras, lane-change warning systems, tire pressure monitors, et cetera, even though they are not presently required. Certain manufacturers have a reputation for being unconcerned about occupant and pedestrian safety, but that again speaks to the need for robust testing of automated driving systems to assure that manufacturers are meeting the minimum guidelines for safety and reliability in their products.

Stranger

Or it can swerve to miss both the pedestrian and the telephone pole, and then continue on without incident.

Seriously, these kinds of “edge cases” pretty much assume the vehicle is driving along at some fantastically excessive speed on ice-slicked pavement with virtually no margin for error. The reality is that an automated system would be designed to operate with large control margins, including slowing down in inclement conditions that many human drivers would be oblivious to. With the comprehensive situational awareness provided by numerous sensors exceeding human perception, and reaction times a couple of orders of magnitude better than the best human driver’s, unavoidable accidents of this kind are vanishingly rare, and would become increasingly so as experience with automated systems is gained.

Stranger

Again, in the real world when you test systems, you create and test such cases. Over a large enough pool of people using these things, they will occur. The Therac-25 “glitches” only happened a couple of times in testing and no one was harmed…

For people who complain that their laptop goes wonky every month or so, there’s a big difference between an embedded system that is programmed to only accept a very small range of inputs and only generate a very small range of outputs, and a general purpose computing machine that is supposed to be able to do whatever you tell it to do.

That “whatever you tell it to do” is the root of all the hacking and glitches and bugs that we encounter every day with our general purpose computing machines. Malware is installed, not because it’s impossible to prevent malware, but because your computer installs things it’s told to install, and somebody somewhere told it to install that malware. Yeah, you didn’t expect to install malware because you clicked a link in an email. But the problem is that your computer executes arbitrary commands, and somebody programmed it to install arbitrary programs on command, and somebody figured out how to make a command to install the program look like a link to a webpage.

Embedded systems won’t have that problem. I have a programmable thermostat. I have never ever had malware on that thermostat, and it has never ever been hacked. And why is that? Because the computer that controls that thermostat was never programmed to accept arbitrary input. It doesn’t have an internet connection, so the only way to reprogram the thing is to use the user interface on the physical device, or to pop open the case and start attaching wires.

If a malicious person has physical access to the control systems of your self-driving car, then if they knew what they were doing they could hack those control systems. They could also slash your tires, smash your windows, cut your brake lines, pour sugar in the gas tank, set your radio to gospel music, break your radio antenna, smash your headlights, replace your wiper fluid with battery acid, and so on. Yet we don’t call those things “hacking your car”.

Your self-driving car might need software updates, but if it does it’s not going to get them by emailing you a link that you’re supposed to click to take you to the car software update page.

As for worrying about how self-driving cars will eventually make manual operation of cars illegal, it’s going to be more like driving a team of horses. Can you drive a horse-drawn carriage down the city streets? Sometimes. But you can’t get on I-5 with a horse-drawn carriage. Nowadays getting a driver’s license is not difficult. The tests are calibrated so that any person of more or less normal intelligence and physical ability can pass the test. And that’s because not being able to drive a car is a major life obstacle. If you can’t drive, you’re a second-class citizen in most places, because you’re dependent on others.

But when/if self-driving cars become cheaper and safer than manual cars, that won’t be the case anymore. And then we can finally make a driver’s license what it should be: a certification that you really are an expert driver. If you’re one of those steely-eyed drivers with lightning reflexes and an encyclopedic knowledge of the law, then by all means drive your car manually for fun. The average guy who sleepwalks through his commute listening to the morning zoo and drinking coffee in bumper-to-bumper traffic? That guy can and must stop driving.

As for all the “Either crash into a bus full of nuns or knock a welder into a fireworks stand next to a propane truck, what do you do, hotshot?” hypotheticals, the real answer should be “neither”. Your car shouldn’t go around blind curves at a speed that it can’t brake for obstacles. Your car shouldn’t be following another car at a speed that it can’t brake if the other car slams on its brakes. In almost every imminent collision the correct answer is not “swerve out of the way”, it is “apply the brakes”. The reason humans have to decide to swerve or crash is that they didn’t start applying the brakes soon enough.
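For a sense of the numbers, here is the standard stopping-distance arithmetic in Python; the 0.3 s reaction time and 0.7 friction coefficient are assumed round figures for an alert driver on dry pavement:

```python
def stopping_distance(speed_mph: float, reaction_s: float = 0.3,
                      mu: float = 0.7, g: float = 32.2) -> float:
    """Total feet to stop: distance covered during the reaction delay
    plus the braking distance v^2 / (2 * mu * g)."""
    v = speed_mph * 5280 / 3600  # mph -> ft/s
    return v * reaction_s + v ** 2 / (2 * mu * g)

print(round(stopping_distance(70)))  # ~265 ft at 70 mph, most of a football field
```

The point is that the safe speed falls directly out of arithmetic like this, and a computer can re-run it continuously while a human just guesses.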

Also, if somebody really did promise you 100% reliability, then you should find that person and demand your money back, because that was a bullshit promise.

The only realistic promise is “better than the mean human driver”, because if a self-driving car can’t meet that goal then it shouldn’t be allowed on the road except as an experiment.

Maybe this is true for your particular model, but don’t generalize that to all “smart” embedded devices. “Internet of Things”-style devices are the worst possible example that you could have chosen; security on them is notoriously bad, and hacked IoT devices have already been implicated in a number of DDoS attacks.

Unfortunately, for the driverless car case, it has to have a network connection.

It has to be able to update maps, update its neural-net weights with learning from other vehicles, and possibly send certain well-defined messages back and forth to other cars in the area.

Yes, you can digitally sign the updates, and that helps a lot, but there is a risk that the part of the code that receives messages from the network and checks whether they have the right digital signature has a flaw. Realistically, the part of the car’s systems that handles this communications and updating task is going to be some flavor of Linux, and someone may find a zero-day that lets hackers in.

Or the code that verifies the digital signature might itself have a flaw; Apple products had exactly this issue a year ago with the “goto fail” bug in their TLS verification code.

So it’s not an isolated system. The logical fix is an isolation processor. This is a chip that sits between the machine-learning automated driving software (which must be immensely complex or it will not work) and the Linux frontend that talks to the network (network protocols are very complex and ever-changing, so a common and well-supported OS like Linux makes sense). The isolation processor does essentially nothing but receive blocks of data for the well-defined tasks mentioned above and verify that each block fits the rules (size, formatting, etc.) and that the digital signature matches. The isolation processor uses very simple, formally verified firmware for this. It might even be an FPGA, which is inherently less hackable than a microcontroller.
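A software model of what that isolation stage does might look like the sketch below; the frame layout is invented for the example, and an HMAC stands in for whatever signature scheme would actually be used (the real thing would be FPGA logic or formally verified firmware, as described):

```python
import hashlib
import hmac
import struct
from typing import Optional

# Hypothetical fixed frame layout the isolation stage accepts:
#   2-byte message type | 2-byte payload length | payload | 32-byte HMAC-SHA256 tag
ALLOWED_TYPES = {0x0001, 0x0002, 0x0003}  # map tile, weight update, V2V message
MAX_PAYLOAD = 4096
HEADER, TAG = 4, 32

def isolation_filter(frame: bytes, key: bytes) -> Optional[bytes]:
    """Pass a payload through only if the frame satisfies every
    structural rule and authenticates; drop it silently otherwise."""
    if len(frame) < HEADER + TAG:
        return None
    msg_type, length = struct.unpack_from(">HH", frame)
    if msg_type not in ALLOWED_TYPES or length > MAX_PAYLOAD:
        return None
    if len(frame) != HEADER + length + TAG:
        return None
    body, tag = frame[:HEADER + length], frame[HEADER + length:]
    expected = hmac.new(key, body, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):  # constant-time comparison
        return None
    return frame[HEADER:HEADER + length]
```

Nothing in there parses the payload itself; anything malformed or unsigned simply never reaches the driving software.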

Some satellites use a security scheme somewhat like this.

And it goes without saying that the in-car displays that can access the web via browsers and so forth run some flavor of Linux and are on the untrusted side of the isolation bridge. So hackers might eventually be able to put malware in that pops up a fake crash warning, for example.

You might do what they do on the International Space Station and put the display in front of the driver on the isolated side, unable to access the internet or install apps or anything, and put the other displays in the car (the one in the middle of the dash, the passenger seats, etc.) on the untrusted side.

By the way, I’ve recently changed jobs and am going to work at a company that does automotive electronics. I would like to eventually work on the automation software itself, though jobs on those teams are hard to get; maybe after I finish my Master’s. So I’ve thought about how to completely protect the systems inside from that Ford radio-to-CAN-bus hack, and what I described above, using an isolation microcontroller or FPGA, is the least expensive and most practical way I have thought of. I don’t think anyone is actually using this idea yet, except possibly in satellites.

One place to put the isolation chip is right at the network interface: you recreate the TCP/IP stack in the FPGA as combinational logic. This is very labor intensive, but from a theoretical perspective, it is completely and totally bulletproof. No sequence of bits can lead to a fault or exploit (in theory). So the network transceiver chips, the Ethernet PHY or whatever, connect directly to this FPGA, and the FPGA strips out anything that is not a correctly signed message with a valid length before it can reach any systems beyond it.

I read about a company creating this a few years ago, and they were acquired by a satellite manufacturer. In principle, if you tie this FPGA to the serial lines coming from the receiver in the actual hardware, you can guarantee that no one without the valid codes can hack your satellite.

I’d change that to the only realistic promise is “much, much, much better than nearly all drivers.”

The very best humans have around a 3/10ths second reaction time. That’s a very long time. The very best humans have only two eyes, facing forward, that are only sensitive to a narrow band of the EM spectrum that is often obscured by smoke, fog, or darkness.
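To put that 3/10ths of a second in perspective, here is a quick calculation of how far a car travels before a human even touches the brake:

```python
REACTION_S = 0.3  # roughly the best-case human reaction time

for mph in (30, 70):
    ftps = mph * 5280 / 3600  # mph -> ft/s
    print(f"{mph} mph: {ftps * REACTION_S:.0f} ft gone before braking starts")
# 30 mph: 13 ft gone before braking starts
# 70 mph: 31 ft gone before braking starts
```

A computer’s reaction delay is measured in milliseconds, so nearly all of that distance simply disappears.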

I consider myself to be a much better than average driver, and most of the time, I actually like driving, so in some sense, I am apprehensive about the idea of the switch to self driving cars. I am left with a sense of loss with the idea of never driving again, but at the same time, I do look forward to not needing to drive all the time.

I am on board with the idea of, once driverless vehicles become as good as the average driver and their cost is in the reasonable range for a car, raising the requirements for driving manually. And as the cars get better, the requirements go up as well, until only the very best human drivers are in the mix with autonomous vehicles. That way, if you are really as good a driver as you think you are, you don’t need to give up control, but if you are one of the dozen or so drivers I watch nearly kill themselves and others on a daily basis (due to distraction, influence, fatigue, or just being very bad at driving), then you losing control makes everyone, including yourself, much safer.

I expect extremely rapid adoption for a different reason. From day one that a legally purchasable, fully automated car is available for a not-too-insane price (I could imagine it might cost $200-$300 a month for the subscription to the autonomy service plus liability insurance, and it might carry a $20-30k premium on the base vehicle), you can start making money right away renting it out as a robotic taxi.

The extra lease payment on the more expensive base vehicle, plus the monthly cost of the software subscription and liability insurance (the insurance is what protects the manufacturer financially when their software causes a crash), is going to be a lot less than paying a couple of cab drivers minimum wage to sit in the car 24 hours a day. (Automated cars don’t take breaks, and they don’t need to run the engine to keep the AC on.)
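The back-of-envelope arithmetic behind that claim, with every input being a guess from the paragraphs above rather than a real price:

```python
# All figures are assumptions from the discussion above, not real prices.
wage = 7.25                     # federal minimum wage, $/hr
driver_cost = wage * 24 * 30    # staffing one cab around the clock: $5,220/mo
subscription = 300              # assumed autonomy service + liability insurance, $/mo
premium = 30_000                # assumed hardware premium on the base vehicle
premium_monthly = premium / 60  # spread over a 5-year term: $500/mo

print(f"human-driven: ${driver_cost:,.0f}/mo, robotaxi: ${subscription + premium_monthly:,.0f}/mo")
# human-driven: $5,220/mo, robotaxi: $800/mo
```

Even with generous error bars on every line, the gap is wide enough that the conclusion doesn’t change.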

As more and more of these things hit the streets, the price to rent a robotaxi would drop from competition. Eventually, you would expect rental rates to approach the marginal cost + modest profit for the owner.

Borrowing someone else’s robotaxi for an hour a day, at cost plus 20% profit, is going to be cheaper than owning and operating your own car. Sure, if you drive an old car you fix yourself, and you drive it often, your own car will be cheaper, but I think renting would quickly become the cheaper option for most people.

This is why you’d see a lot of these things on the road. With a large pool of vehicles, the manufacturers would be able to greatly improve the technology, and presumably they would get much cheaper and safer over time. What you might see is that most households that currently have 2-3 cars only have 1-2 instead, just pulling out their phone and summoning one as needed.

And while it would put a lot of drivers out of work, I think this would also lead to a lot of new businesses being created. You could design a delivery version that can shove packages out with a robot arm on a tray, and also pick things up, and then a compatible “robotport” that is like a secure mailbox locker you can stick at the curb. Automated delivery vehicles would be able to go to the locker, remotely request it to open, and then drop things off or pick them up. New forms of crime, as well - you could probably hack those and steal stuff until they patch it.

What do you think happens if the computers crash or freeze on a modern airplane? You think the pilot just has to “muscle” the controls a little harder?

Oh, I absolutely agree with that as well, and I don’t mind saying that owning a car is a stressful headache. From the day you buy a car, it represents a money sink that will continue to suck down money until you invest in another money sink.

I would be happiest having my personal car that I use for some trips when I feel like driving, but if the cost is right, I would much prefer renting a self driving car for my daily work commute.

Of course, on day one of a reliable licensed self driving vehicle, all the truck drivers are going to be losing their jobs very rapidly for the same reasons. As well as delivery drivers, chauffeurs, and any other professional driving service.

On some airplanes, that is possible, but you are correct in your implication. Also, I read that at high altitudes, a yaw damper failure can lead to an unrecoverable loss of control. There are multiple isolated computers and servo systems for this, and you are supposed to reduce altitude if one fails.

The systems in aviation and aerospace are much more robust, but their development time and cost are much, much higher than for consumer electronics. They also undergo periodic extensive inspections and maintenance. I don’t trust a company like Ford to be as rigorous in developing software, and I don’t trust drivers to be as judicious about keeping up with the expensive maintenance required.

Thing is, banning manual operation is generations out. Not going to happen any time soon, even if it can be shown that driverless cars are much safer, simply because people won’t stand for it.

What instead will happen is that insurance rates will creep up, and licensing difficulty will creep up. Still, insurance rates might not rise that much for manual operation if we can force the bottom 20% of drivers off the roads. As was pointed out, the mean driver is terrible, but the median driver is much better. Get the little old ladies, the drunks, the texters, the tweakers, the absent-minded professors and the teenagers off the road and manual operators are much much better, since the bottom 20% of drivers cause 80% of the accidents.

But if you get the teenagers off the road, what happens? Nowadays getting that driver’s license is a teenager’s entry into adulthood. Without it you’re a second-class citizen. But if you ban crappy teenage drivers, and only the dedicated and skilled learn manual operation, then you have a generation of kids who grow up and never learn to drive. And it’s that generation of kids that are going to start passing laws to make manual operation the equivalent of using a horse-drawn carriage or smoking cigarettes in a day care center.

So you don’t have to worry about this any time soon, but 40 years from now when you’re in the nursing home, watch out.

And even good drivers have bad days. A few weeks ago, I was sick as hell, and driving to the doctor, then to the pharmacy, then back home was some of the most stressful driving I’ve done in quite some time. It was very difficult to maintain concentration, and I was probably as dangerous as if I were over the limit on alcohol.

I’ve never driven drunk, but that’s because I have always planned things so that on the occasions when I drink, I have no need to go anywhere after. (I did once drive after two beers in nearly three hours along with food, and while I am sure I was under the legal limit, I could definitely tell how it affected my abilities.)

Being able to switch to auto, or just call a rent-a-car, would make those situations when even a good driver is not safe much easier to avoid.

This, and other technological excuses.

I’m a kid of the ’80s, and I still love technology, computers, the Internet, message boards (:D)…

I simply do not trust technology to be trustworthy in every given situation. Years ago, I drove my whole immediate family back from Lehigh Valley, Pennsylvania to South Jersey. The biggest thunderstorm I ever saw engulfed us: trucks were swerving on the PA Turnpike, a few had turned over into the ditch, there were sudden deep puddles, and the windshield wipers did not help. I don’t think I went over 15 MPH for at least 80 of those miles. (And this was a high-end Lincoln I was driving.)

The driverless car that MAY be trustworthy for me is one where all its info is gathered and stored in the car, not on some network someplace. If it relied on the weather channel or got the zip code wrong over the net, what would stop the car from resuming normal highway speeds and driving techniques during that rainstorm?

There was the infamous fog that stretched for miles across California some years ago. Car after car slammed into the ones ahead. Headlights didn’t matter. How much fog can a driverless car handle?

If my tires are low on pressure, but I’ve got diarrhea, will this car automatically pull into a gas station and not drive until the tires get air? :smiley:

I don’t see myself ever getting in a driverless car unless this has all been worked out. What I guarantee will happen is hundreds of stories of these cars losing control for no reason, getting hacked, etc., for a long time.

The thing about the sensors on a driverless car is that you can program the car to never exceed the limitations of the sensors. So if the lidar is blocked by fog such that it can only detect a car 50 feet ahead, the car will drive at a speed that lets it stop within 50 feet. Humans are terrible at this sort of calculation; computers aren’t. And if you’re upset that your driverless car is creeping along at 20 mph in pea-soup fog, then stay the fuck home.
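That is just the stopping-distance formula run in reverse: given how far the sensors can currently see, solve for the fastest speed at which the car can still stop inside that range. A sketch, with the friction coefficient for a wet road and the machine reaction time as assumed figures:

```python
import math

def max_safe_speed(sight_ft: float, reaction_s: float = 0.05,
                   mu: float = 0.35, g: float = 32.2) -> float:
    """Largest speed (mph) satisfying v*t + v^2/(2*mu*g) = sight_ft,
    i.e. the car can always stop within what its sensors can see."""
    a = mu * g
    v = -a * reaction_s + math.sqrt((a * reaction_s) ** 2 + 2 * a * sight_ft)
    return v * 3600 / 5280  # ft/s -> mph

print(round(max_safe_speed(50)))  # ~23 mph for 50 ft of visibility on a wet road
```

Which lines up with the 20-mph crawl above; the difference is that the computer actually does this math every fraction of a second instead of guessing.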