Will self-driving cars be dangerous due to computer hackers?

Exactly. And I imagine a support contract/warranty may be a legally compulsory requirement for operating a self-driving car.

This, I’m not so sure about - people will want to interface it with their tablets, laptops, smartphones - so they can send a route to it, or download energy efficiency stats, or retrieve journey history data, etc.
Those are the kind of interfaces that can be open to attack - I’m sure they would be designed with security built in, but nothing is ever perfect - and it’s sometimes possible for an intruder to use what seems like quite a limited interface to inject a command or some such that allows them to gain more control of the system.

Yes, like it or not these cars will be available for access through normal internet devices of the time.

People will expect to be able to use their handheld devices for remote starting, checking fuel, and monitoring the temperature so they can cool the inside before they get in - plus many more things that probably aren’t obvious now.

One of my techs owns a Nissan Leaf and constantly complains that such interfaces are read-only and don’t allow him to perform certain tasks he could do if he had write access. At this point, only the dealer does.

And it would be a fairly simple task to key the app to the vehicle, much like theft-resistant stereo systems are keyed now.

Embedded systems are often built with very tightly restricted write permissions - tight to the point of making Windows look as secure as a paper bag.

About one million people a year, world-wide, die in vehicle accidents. By that standard, self-driving cars will be relatively safe, if we are allowed to transition to them.

But there would still be a lot of lives lost, with tremendous publicity given to extremely rare accident types that were less likely before – perhaps including software security breaches.

Given the psychology of illusory superiority, most people may mistakenly think that they are better drivers than the computer. Getting people to give up the wheel will not be easy, and I think the OP illustrates this.

Sure, but he’s presumably only trying legitimate methods of access; he’s not deliberately trying to overrun a buffer, or whatever it is that people keep doing that means common desktop operating systems need patching for things that were previously thought secure.

Besides, they won’t be read-only in the future, because people will want to send data to the car - send routes, send custom maps, set preferences for driving style, ride comfort, and so on. These channels of communication can sometimes be broken in a way that opens up a big hole in the security.

A bit off topic, but when self-driving cars are a reality, will we be able to send them on their way without a human passenger? Say I’m moving and I have more cars than I have people to drive them. Instead of getting a trailer, will they be able to just drive themselves to the new location?

And back on topic, if that is possible will a hacker be able to access your car and steal it by having it drive itself to them?

Even now, hackers have managed to take control of cars.

Admittedly, these require physical access to the car, but the MP3 attack vector would be very much like the OP’s concern.

Read up on what an autonomous car actually is in the first place.

I am more worried about actual real people driving cars.

Most accidents are due to humans driving the car: drink driving, smoking, texting, speeding, etc.

An extremely small number of accidents are actually due to mechanical failure.

However, your self-drive car’s internet connection has nothing to do with your browsing the internet.

We already have automatic parking in some cars – it’s basically your car driving autonomously… the thing you fear.

I think you’ve just foreseen the future of car theft :eek: Why bother smashing windows when you can just order the car to drive itself to the chop shop?

This gives something like Stuxnet a whole new avenue. Imagine if an Iranian nuclear scientist died in a car accident.

Another possibility would be anti-corporate warfare. Imagine some eco-warrior causing all ExxonMobil tankers to burn in a fiery crash.

That said, I’m totally for self-driving cars because they would still be way safer than human drivers.

This is what worries me. I can imagine some scenario where lawsuits make it near impossible for companies to create driving software even if it’s shown to make the roads safer. On the other hand, the costs could be covered by the insurance companies; i.e. they shift the payouts from the many accidents caused by humans to the few accidents caused by software/hackers/etc. It’s an interesting topic (to me).

But in a car’s operating system, it is relatively easy to have a separate “code only” memory, distinct from the routes, maps, &c. There are certain problems that are problems for home computers only because they keep code and data in the same memory space.

It’s not perfect, but writable car computers can still be made a lot more secure than desktop PCs.

Where you keep the code and data is only part of the picture - exploits don’t necessarily attack things by location or architecture - they’re often based on doing something unexpected that causes the system to malfunction - and to a malfunctioning system, the distinction between code and data may no longer exist.

A good design for an automated car would use isolation. The software for driving the vehicle would run on a physically different computer from the one that does the maps. That’s how current cars generally do it: the nav system is a different circuit board from the ECU.

The safest way is to prohibit remote software updates, and to not connect any of the car’s radios to the computers that drive the car. However, in practice, you can be almost as safe by writing the code that handles incoming data very carefully, and not allowing any updates to the car’s software that are not signed by the manufacturer of the car.

Teslas already do all this. There were news articles on how they remotely pushed an update to all the model S vehicles, and the computers in a Tesla control critical driving functions that would allow you to crash the vehicle if you were a hacker.

Security doesn’t have to be perfect. Someone could always cut your brake lines or put a bomb on your current car if they were out to kill you. Computer hacking is just another means, and it just needs to be hard enough to do that it rarely happens.

Except in current cars, the nav system is linked to the rest of the car via the driver.
In a self-driving car, the nav system is the driver - it has to be connected, or the thing won’t work.

But the separation of code and data is something that can be done physically. Just keep the programs in a ROM. No amount of malicious programming or software bugs can force current the wrong way through a diode.

With special purpose computers, you can do more than rely on the OS for protection of separation.

This blocks “in the field” updates - which has the effect of making both hacking and legitimate upgrades more difficult. So if a problem is discovered in firmware the manufacturer has installed, it’s a lot more difficult and expensive to fix it.

The inconvenience this would introduce is so high that it’s hard to imagine this approach being used unless/until hacking is shown to be a significant problem. (Which admittedly could require only a few well-publicized incidents.)

No, but you could conceivably still fall victim to attacks that divert the point of execution somewhere else.

I’m not saying it can’t be designed to be really secure; just that today’s notions of ‘really secure’ sometimes turn out to be tomorrow’s surprises - and the very nature of surprise is: something happens that was completely unanticipated, or previously thought impossible (not impossible in the sense of current flowing the wrong way through a diode, but rather, some other exploit or workaround).

Would this be any more expensive than changing out firmware now? Any update to the embedded software in an ECU requires a trip to the dealership to apply, but no one seems to be clamoring for the convenience that being able to push updates directly to your garage via wifi would allow.

It is completely possible to make robust, well-tested software that will work for more than a couple of years without an update, and it is routinely done for cases when updating would be onerous. It’s just that for most home PC software the increased development costs are not worth it when you can just push out updates in response to bugs found in the field.