Driverless cars

I suspect that political/emotional “won’t someone think of the kids!” stuff will keep this from happening. People won’t trust it with their kids, and if there was even a single accident–no matter the circumstances–the whole program would come crashing down.

I bet not. The predictable route helps, but the real problem is that the route involves residential streets, where you’re going to get your unpredictable oddball events.

Rumor has it that there’s a natural market mechanism for dealing with this skew.

So many psychological problems.

The US, unfortunately, is dominated by people whose beliefs are based on anecdotes. E.g., “I heard someone bought crab meat with food stamps so all food stamp money is wasted.”

So the occasional tales of self-driving cars doing something stupid are going to dominate the actual overall statistics.

I can’t really draw it but picture this: Two Venn Diagram circles A and B. A is the set of accidents that humans get into. B is the set of accidents that current self-driving cars will get into. B is smaller than A! And it will only get better.

The problem: while A and B overlap quite a bit, it’s that chunk of B that’s not within A that is going to bother pretty much everyone, even most of the more rational folk.

And that area is also going to bother insurers, lawyers, etc.

Of course, but apparently either the school districts are unable or unwilling to pay the wage needed to attract enough drivers, or the market is exceptionally slow to respond to the incentives.

This assumption is often made, and with any highly networked system, particularly one using wireless communication, it is almost certainly wrong. For expediency and cost-effectiveness, automobiles often use a common controller area network (CAN bus) for both critical and non-critical functions. The various subsystems are separated into different virtual networks, but without any kind of internal verification scheme it is easy enough to access a non-critical subsystem and use it to send spoofed signals that appear to be genuine critical control commands. This isn’t a hypothetical scenario; it has actually been demonstrated. Actual security of network-accessible control systems requires not just a wrap-around security layer but some degree of built-in verification and separation of functions, such that even if a malicious actor succeeds in an intrusion, they are not able to control critical functions or alter important data such as sensor information.
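To make the “internal verification” point concrete, here is a minimal Python sketch of per-frame authentication with a freshness counter. It is purely illustrative: real CAN frames carry at most 8 data bytes, so production schemes (e.g. the AUTOSAR SecOC approach) have to squeeze a truncated tag and counter into that budget, and the key handling here is invented for the example.

```python
import hmac
import hashlib
import struct

# Hypothetical shared key; a real design would use per-ECU keys held in
# a hardware security module, not a constant in source code.
ECU_KEY = b"demo-key-not-for-production"

def pack_frame(arbitration_id: int, payload: bytes, counter: int) -> bytes:
    """Build a CAN-style frame with a truncated HMAC tag appended.

    The monotonically increasing counter defends against replaying a
    previously captured (and validly authenticated) frame.
    """
    body = struct.pack(">IQ", arbitration_id, counter) + payload
    tag = hmac.new(ECU_KEY, body, hashlib.sha256).digest()[:8]
    return body + tag

def verify_frame(frame: bytes, last_counter: int) -> bool:
    """Accept a frame only if its tag checks out and its counter is fresh."""
    body, tag = frame[:-8], frame[-8:]
    expected = hmac.new(ECU_KEY, body, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(tag, expected):
        return False  # spoofed or corrupted frame
    _, counter = struct.unpack(">IQ", body[:12])
    return counter > last_counter  # reject replays of old frames

# An intruder on a non-critical subsystem can inject raw frames, but
# without the key it cannot forge a valid tag:
genuine = pack_frame(0x100, b"\x01\x02", counter=41)
forged = genuine[:-8] + b"\x00" * 8
assert verify_frame(genuine, last_counter=40)
assert not verify_frame(forged, last_counter=40)
```

The point of the sketch is the separation: a compromised non-critical node can still put bits on the bus, but the receiving critical controller rejects anything it cannot authenticate.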

I actually worked on a development program a number of years ago for which this kind of internal robustness against attack was critical. I didn’t work on the software side, but I saw the design and verification requirements, which represented a significant portion of what was already a fairly complex, real-time, autonomous system; without that requirement being imposed, it is unlikely anyone would undertake that degree of effort to secure against a hypothetical attack. The effort ended up being cancelled (for reasons unrelated to the security work), but in the chess game between system and intruder, the latter has the substantial advantage of only needing to find a single useful exploit, whereas the defender has to protect against all potential intrusions.

Stranger

Apologies if I’ve missed a post on this aspect:

It seems to me the biggest obstacle to self-driving cars is the indisputable fact that about 90% of drivers seem to think that they absolutely must be ahead of whatever is just in front of them.

This urge is apparently so strong that hundreds (thousands?) die each year trying to achieve that status.

Cite: any freeway near you.

Perhaps a self-driving car will have the opposite effect on these people. If someone’s in front of me (for example), then I have to pay attention to their speed, their deceleration, and quite frankly, it’s a more tiring experience versus being in front of them and having nothing but the open road ahead.

Give them an autonomous car, and there is no longer a tiring effect. The open road ahead is no longer a requirement for being relaxed in the car. All of the cars will (eventually) be cooperating to ensure that their passengers get to where they’re going in the quickest, safest manner.

You can already see the beginnings of this happening with the increased popularity of adaptive cruise control.

And if self-driving cars did communicate with one another to coordinate their driving, and were hacked, the results could be pretty damn devastating.

I guess I don’t see the compelling need for self-driving cars to communicate with one another. IMHO, they’ll either work without that, or they won’t work at all.

How about, they’ll work without it, but work better with it? That seems most likely to me.

There are a number of reasons that autonomous cars would use wireless communications. Aside from the obvious, e.g. a channel by which the vehicle can be hailed or directed, they could share road hazards with each other in a mesh-type network or with a more central road advisory hub, they could communicate intent at interchanges and on the highway, et cetera. There is also a need for communications for serviceability; most autonomous vehicles will likely be owned by transportation service providers rather than private individuals, at least in urban and dense suburban areas, and servicing a massive fleet of vehicles would be simplified by being able to interrogate and diagnose the vehicles most in need of service, or with the highest priority to return to service, without having to connect to them manually. There are other uses as well, such as allowing local guidance and control for parking in structures or making way for emergency vehicles; in fact, there is a vast number of reasons that an autonomous car should have external communications, and the only real downside is the potential compromises in security.
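As a toy illustration of the mesh-type hazard sharing mentioned above, here is a Python sketch of the relay-and-suppression logic a vehicle might run. The message shape, TTL, and dedup policy are all invented for this example; the real problem it gestures at is keeping a gossip network from flooding itself with the same report.

```python
import time

class HazardRelay:
    """Decide whether a received hazard report should be rebroadcast."""

    def __init__(self):
        self.seen = {}  # hazard_id -> expiry time

    def receive(self, hazard_id: str, location, ttl_s: float, now=None):
        """Return True if this hazard is new and should be relayed onward."""
        now = time.monotonic() if now is None else now
        # Drop expired entries so the cache doesn't grow without bound.
        self.seen = {h: t for h, t in self.seen.items() if t > now}
        if hazard_id in self.seen:
            return False           # already relayed; suppress the flood
        self.seen[hazard_id] = now + ttl_s
        return True

relay = HazardRelay()
# First report of ice on I-80: relay it to neighbors.
assert relay.receive("ice-I80-mm123", (41.6, -93.7), ttl_s=300, now=0.0)
# Same report echoed back moments later: suppressed.
assert not relay.receive("ice-I80-mm123", (41.6, -93.7), ttl_s=300, now=10.0)
# After the TTL expires, a fresh report of the same hazard propagates again.
assert relay.receive("ice-I80-mm123", (41.6, -93.7), ttl_s=300, now=400.0)
```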

But even aside from deliberate external communication interfaces, many cars will start using wireless CANs simply to reduce weight, complexity, and failure points in wiring. Why string a massive wiring harness to the hundreds of sensors that an autonomous vehicle might use when you can simply have them “talk” across a very localized wireless network? But that introduces new vulnerabilities which require built-in security and verification methods lest they be used for exploits.

Stranger

Without going back and digging up specific quotes, a few thoughts…

GPS-
My wife’s car has built-in GPS. Its maps are 3 years out of date, not because we don’t know how to update them but because Ford wants $150 per update (about once per year). If self-drivers are going to need constant updates, somebody is going to have to change their pricing models.

It also adds another factor to the liability side of things. If an autonomous vehicle causes damage to life or property due to faulty GPS maps, whose fault is it? Owner, car manufacturer, map supplier?

Safety-
Mostly because of costs, I think that any integration of self-drivers will take 20-30 years before autonomous vehicles become the norm. They will be expensive initially, and most people can’t afford to buy anything but used vehicles. The interaction between human and robotic cars will go through many stages as the ratio changes. Any arguments regarding safety need to take that into account.

Capability of automation-
My wife’s car has a number of the automation gadgets currently being added to vehicles, including cruise control that will hold a set distance from the vehicle in front. From an AI standpoint it sucks, and it should be one of the easiest things about self-driving to automate. Two major drawbacks (from personal experience):

  1. Assume Interstate-type roads. If you want to go, say, 75 mph and you approach a vehicle going 72 mph, you don’t really notice your car slow down to hold the distance (that part the software does well). Only upon realizing you have slowed down, or by changing lanes and feeling the car speed up, do you know that you have been traveling slower than intended. Okay, maybe I should look at the speedometer more often, but how many people scan their speed every few seconds when driving with cruise control on the interstate?
  2. You approach a car going 5-7 miles an hour slower than you, so you merge into the fast lane, along with other traffic behind you who also see the slower vehicle. Before you get past the slower vehicle, the road starts to curve a bit. Your sensor is now looking partially toward the other lane; it senses the slower car and slows you down to restore your set following distance, except you are in the fast lane, with other traffic. This is even warned about in the owner’s manual. Only with prior human recognition of the situation, and intervention, can you avoid this peril.
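The curve scenario in item 2 can be sketched in a few lines of Python. The beam width, gap, and return format below are assumptions for illustration, not any manufacturer’s actual logic; the point is that a fixed, straight-ahead beam cannot tell lanes apart once the road bends.

```python
import math

# Toy model of a radar-based adaptive cruise controller with a fixed,
# straight-ahead beam. All numbers (beam width, gap, speeds) are assumed.
BEAM_HALF_ANGLE = math.radians(6)   # sensor can't tell lanes apart
SET_SPEED = 75.0                    # driver's chosen speed, mph
FOLLOW_GAP = 50.0                   # desired following distance, m

def target_speed(returns):
    """Pick a speed from radar returns: (bearing_rad, range_m, speed_mph).

    The flaw: any return inside the beam is treated as a lead vehicle in
    our lane, because bearing is measured relative to the car body, not
    the lane. A curve puts the adjacent lane inside the beam.
    """
    for bearing, rng, their_speed in returns:
        if abs(bearing) < BEAM_HALF_ANGLE and rng < FOLLOW_GAP:
            return min(SET_SPEED, their_speed)  # slow to match "lead" car
    return SET_SPEED

# Straight road, slower car directly ahead: matching its speed is correct.
assert target_speed([(0.0, 40.0, 68.0)]) == 68.0

# Curve while passing: the slower car is in the next lane, but it sits at
# a small bearing inside the beam, so the car brakes in the fast lane.
assert target_speed([(math.radians(3), 40.0, 68.0)]) == 68.0
```

Fixing this properly requires fusing the radar with a map or lane model, which is exactly the kind of sensor integration a full self-driving stack has and a simple cruise-control add-on does not.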

Maybe self driving cars have better sensors and software. At least until the manufacturers start cutting costs.

Hacking-
Apparently Elon Musk gave a talk at the National Governors Association. He talked about Tesla’s efforts to prevent fleet-wide hacking of their vehicles. He was quoted as saying:
“In principle, if someone was able to say hack all the autonomous Teslas, they could say - I mean just as a prank - they could say ‘send them all to Rhode Island’ [laugh] - across the United States… and that would be the end of Tesla and there would be a lot of angry people in Rhode Island.”

Cute, but it also shows that the hacking problem is not easily solved, as they are still working on it. In my opinion it will have to be solved if robotic vehicles are ever to become ubiquitous. People have been trying to solve computer security for a lot of years now, and it is only getting worse.

I also wonder what happens when self-drivers become more common and people with too much time on their hands (kids?) decide it is a great game to play “let’s prank the auto-car,” with anything from fake obstacles (a cardboard stop sign that keeps moving back), to running in front of them (knowing they will stop), to that old paintball gun… I wonder how many sensors they can cover before the vehicle pulls over and stops, phoning home for maintenance.

MIT Technology Review has an article about why self-driving cars have so many sensors. It’s not a free site so I can’t link to the article, but it uses a picture of advertising on the back of a vehicle. It’s painted on and looks like two bicyclists riding behind another vehicle. It looks to me like it could easily confuse human drivers, and depending on the quality of the artwork it makes things increasingly difficult for the software to sort out its version of reality.

And somewhere in the thread it was mentioned that it should be more important to get electric cars on the road than self-driving cars. It’s a whole different (spirited) discussion from this thread, but currently, and for any foreseeable practical future, those electric cars are going to be primarily fossil-fueled. The best public relations for self-driving cars would be to show (prove?) that they do or will greatly reduce energy consumption.

Electric vehicles will still ultimately be mostly fossil-fuel, at least for the immediate future. But when you look at the end-to-end efficiency, even assuming all of the electricity comes from coal, the electric vehicle still has a significant edge over the gasoline vehicle in CO[sub]2[/sub] emissions. What it comes down to is, in either case, almost all of the inefficiency comes in the step where you burn something to drive a heat engine, and power plants are more efficient heat engines than car engines are. Plus, of course, coal won’t always be the major source of electric power, and in many places, it already isn’t.
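The power-plant-versus-car-engine point can be shown with back-of-the-envelope arithmetic. Every efficiency and emission factor below is an assumed, ballpark figure for illustration, not measured data; the qualitative conclusion just needs the plant to be meaningfully more efficient than the engine.

```python
# Rough end-to-end CO2 comparison, kg CO2 per kWh of work at the wheels.
# All figures below are assumed ballpark values, not measurements.

# Gasoline path: well -> refinery -> tank -> engine
GASOLINE_CO2_PER_KWH_FUEL = 0.25   # kg CO2 per kWh of fuel energy
REFINING_EFFICIENCY = 0.85         # well-to-tank losses
ENGINE_EFFICIENCY = 0.20           # real-world average, incl. idling

# All-coal electric path: plant -> grid -> battery -> motor
COAL_CO2_PER_KWH_FUEL = 0.33       # kg CO2 per kWh of coal energy
PLANT_EFFICIENCY = 0.40            # modern steam plant
GRID_AND_CHARGE_EFFICIENCY = 0.85  # transmission + charging losses
MOTOR_EFFICIENCY = 0.90            # electric drivetrain

def kg_co2_per_kwh_at_wheels(co2_per_kwh_fuel, *efficiencies):
    """CO2 emitted per kWh of useful work delivered at the wheels."""
    chain = 1.0
    for e in efficiencies:
        chain *= e
    return co2_per_kwh_fuel / chain

gas = kg_co2_per_kwh_at_wheels(GASOLINE_CO2_PER_KWH_FUEL,
                               REFINING_EFFICIENCY, ENGINE_EFFICIENCY)
ev = kg_co2_per_kwh_at_wheels(COAL_CO2_PER_KWH_FUEL, PLANT_EFFICIENCY,
                              GRID_AND_CHARGE_EFFICIENCY, MOTOR_EFFICIENCY)

# The power plant's efficiency edge over the car engine is what wins,
# even though coal emits more CO2 per unit of fuel energy than gasoline.
assert ev < gas
```

With less coal-heavy grids the electric side’s margin only grows, which is the other half of the argument above.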

Electric vehicles at least offer the fungibility to use any of a variety of electrical power production methods. However, electrochemical battery technology has numerous fundamental limitations, particularly for long-duration operation, and some form of liquid hydrocarbon fuel will likely remain necessary for practical transportation beyond commuter travel, e.g. over-the-road (OTR) applications, for the foreseeable future. Hydrocarbon fuels can be synthesized from natural gas or other sources (including sequestered carbon), but ultimately the power has to come from some energy source, be it native hydrocarbons, sustainable wind and solar, nuclear fission, et cetera. But the technology of autonomously controlled vehicles is largely independent of the power source, although the use of autonomous fleet vehicles in urban and dense suburban areas may make more efficient use of electrically powered vehicles practical and desirable.

Stranger

Most school districts in Texas, and presumably in Iowa and other places, are strapped for cash. They are already using creative funding like charging larger and larger participation fees for activities like sports and band.

Balthisar is likely right. The need to be in front is driven by the need to be in control, because you are the one driving. Other people’s actions directly influence your ability to go where you want, when you want. However, that frustration is greatly diminished when you are a passenger, be it in a private car or public transportation. Road rage incidents will decline dramatically when people ride without having to watch the road, because people won’t take the actions of other vehicles as a personal affront. “YOU IDIOT, DIDN’T YOU SEE ME HERE? WHY DID YOU FEEL THE NEED TO CUT IN FRONT OF ME AND THEN SLOW DOWN TO TURN!”

And even if people do take the actions of other vehicles as a personal affront, it won’t matter if they’re not in control of their own [del]projectile[/del]vehicle.

Not sure I understand what you mean here.

Or the rider could just see what Waze is saying, and tell the car to take another route.

I figure autonomous vehicles will never fail to use their turn signals, and won’t exhibit indecision like a human driver might. Other self-driving cars would be able to ‘read’ them pretty well.

Tru dat, but that’s an auxiliary function, it’s really not about the driving itself.

*the only real downside is the potential compromises in security.*

I mean, holy shit, banks and credit card companies and major retail chains get hacked with incredible frequency, and of course, there was a recent election. But there’s a big difference between having bogus charges on your MasterCard, and having your car suddenly angle off at 15 degrees to the left and floor it until it hits something.

I see that, in a hack-proof world, there’d be advantages to having inter-car communications, but they strike me as good things to have, rather than necessities. And until we’re in that hack-proof world, a potential downside of self-driving cars is the sort of highway scenario people envision just after the Rapture, with crashes and carnage everywhere, only everyone’s still in their cars.

You just said that autonomous vehicles would be good about using their turn signals, and then said that you don’t think inter-vehicle communications are worth it. But turn signals are inter-vehicle communication. You can get by without them, especially if you have multiple-band sensors and millisecond reflexes, but they still make driving a lot better. And most of the inter-vehicle communications would be the equivalent of turn signals, just clearer and harder to accidentally misinterpret. It would be very easy to spoof a turn-indicator transponder signal, but autonomous cars would also be designed to not consider those signals to be gospel truth, just like human drivers now don’t rely on human-operated turn signals always being true.
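The “don’t treat signals as gospel” idea can be sketched as a crude evidence-weighting rule: a received broadcast only nudges the car’s belief, while its own sensors carry most of the weight. The function name, weights, and thresholds here are invented for illustration.

```python
# Sketch of treating a V2V turn-signal broadcast as evidence, not truth.
# All weights and thresholds below are invented for illustration.

def lane_change_belief(signal_claimed: bool, lateral_speed_mps: float) -> float:
    """Crude belief (0..1) that the car ahead is actually changing lanes.

    A spoofed transponder can't by itself convince us of a maneuver,
    because the broadcast alone never pushes belief past the point where
    we'd act; observed lateral motion from our own sensors dominates.
    """
    belief = 0.05                      # prior: lane changes are rare
    if signal_claimed:
        belief += 0.25                 # broadcast is only weak evidence
    if abs(lateral_speed_mps) > 0.5:   # our sensors see real drift
        belief += 0.60
    return min(belief, 1.0)

# Spoofed signal, no actual motion: belief stays below an action threshold.
assert lane_change_belief(True, 0.0) < 0.5
# Signal plus observed drift: act on it with high confidence.
assert lane_change_belief(True, 1.2) > 0.8
```

This mirrors how human drivers already treat blinkers: useful corroboration, never sufficient on their own.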

Massive crashes and carnage are everywhere today – courtesy of human drivers.

Those human drivers are killing about 40,000 people every year, just in the U.S. Worldwide, they are killing about 1.3 million per year – equal to the deaths from the bloodiest battles in human history. Each year, more people on earth die from driver errors than from homicides and wars combined.

A female with good genetics could do the same by just lifting her shirt, and humans are often distracted by radios, cell phones, or dropped coffee cups.

I do want to note that nothing will ever be “hack-proof,” but credit cards are fairly trivial targets due to the business decision to err on the side of convenience. They also tend to provide a monetary incentive, so they are a bit of a special case.

If AI can improve safety and save resources by significant amounts, which seems obtainable now, society will probably consider the risks acceptable.

Humans just aren’t very good at evaluating risks, especially when it comes to “randomly distributed” risks, i.e. risks that the individual is not fully in control of, and that are widespread but diffuse.

Car crashes are far more likely than shark attacks - shark attacks are in the single digits annually worldwide. But many people are far more afraid to swim in the ocean than to drive a car on a busy highway, with the dumbass next to you staring at his phone while he texts away.

We humans blindly accept some risks while freaking out about others that are much less likely. We get saturated by exposure, so the familiar risk becomes commonplace, just a fact of life, and anyway “it won’t happen to *me*.”

The risk of cars being hacked is there, but what that risk is and how it compares to the risk of cell phones while driving or slipping in the shower or crossing the street is a guessing game we don’t really have any way to decently assess.

It is certainly a concern that should be given attention from the get-go. But corporate drivers of product development don’t always take all the proper considerations into account.