Every modern car is doing it. If the screen is going to display the rear-view camera when the car is shifted into reverse, it needs to know the gear selector position, which the driving systems also need to know. They're all connected. The only variable is how effectively the designers firewalled the different areas.
This.
And from what little I've read of this, the degree of firewalling up until the last couple of model years has been very, very low. I don't know whether the situation has improved in these last couple of years; I simply haven't read any reports dated more recently than that, so I have no newer info to share.
IIRC @Machine_Elf is in this line of business. Perhaps he has some useful insider anecdote to share?
Have you looked lately…? Official Pi retailers still sell at MSRP, but their stock gets immediately snatched up and resold on the secondary market for several times the price.
Because security is hard. Here is just an example I pulled up from Google. I remember when this happened.
If that’s too old, here is one from this year.
SUCCESS - David Berard (@p0ly) and Vincent Dehors (@vdehors) from Synacktiv (@Synacktiv) used a heap overflow and an OOB write to exploit Tesla - Infotainment Unconfined Root. They qualify for a Tier 2 award, earning $250,000 and 25 Master of Pwn points.
Maybe the Tesla one could be exploited into enabling full self-driving with a destination the user can't control. The Jeep one definitely could cause the car to come to a complete stop right where an ambush is waiting on a mountain pass. (Fortunately there is a bicycle on a hitch-mounted rack, so the hero is able to get away in a dramatic downhill chase that cuts switchbacks and jumps unsuspecting motorists.)
Basically everyone is vulnerable:
Using only the VIN (vehicle identification number), which is typically visible on the windshield, the researchers were able to start/stop the engine, remotely lock/unlock the vehicle, flash headlights, honk vehicles, and retrieve the precise location of Acura, Honda, Kia, Infiniti, and Nissan cars.
I know stuff about engines and drivelines, but my knowledge of things like the entertainment/electronics/instrumentation comes from what I read on the internet and what I gather from conversations with a good friend who works in that area.
That said, from what I understand of the situation, your take is accurate. The electric parts of cars have gradually become more electronic over the years (e.g. you don't flip a switch to turn something on anymore; you flip a switch to ask the computer to ask that thing, via the CAN bus, to turn on), and cars have also gradually become more and more connected to the outside world via the cellular network. Even the non-electric parts, like brakes, throttle and steering, have become fully (or heavily) electronic to facilitate driver-assist and/or self-driving features.

The people designing these systems didn't come from the world of IT, where network security is a central feature of the job description. They're automotive electronics experts, and their job has always been to just make the stuff work on the road, so network security never really became an issue until very recently. I think the Jeep-kill test that @echoreply linked to was kind of an eye-opener for people. It's drawn attention to the issue, and while car network security is almost certainly better now than it was back then, no security plan is perfect, and new threats keep being created. This is why we keep getting updates to browsers and PC/phone operating systems, and cars likely won't be any different in the future.
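To make the "flip a switch to ask the computer" bit concrete, here's a minimal sketch of what a CAN message looks like from software, using the python-can library on a Linux/SocketCAN interface. The arbitration ID and payload are invented for illustration; real IDs vary by manufacturer and are mostly undocumented:

```python
import can  # pip install python-can

# Open a raw connection to a CAN interface (e.g. a USB-CAN adapter on Linux).
bus = can.interface.Bus(channel="can0", interface="socketcan")

# Hypothetical "turn the headlights on" request. Note what's missing:
# nothing in the frame says who sent it. Any node on the bus could have.
msg = can.Message(
    arbitration_id=0x3F0,   # invented "lighting request" ID
    data=[0x01],            # invented "headlights on" payload
    is_extended_id=False,
)
bus.send(msg)
bus.shutdown()
```

The striking thing is that last comment: a classic CAN frame carries an ID, up to 8 data bytes, and a checksum, but no sender identity at all. Every node implicitly trusts every frame.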
Bump
I thought the point of Chromebooks is that the operating system is read-only, making them hard to hack. They are commonplace and many of them are cheap.
What I thought was especially interesting about that article is that for at least some brands, what they were talking about was that the car is connected to a customer account on the manufacturer's website. And the hackers could get the account login info out of the car, then simply attack the corporate network with the end-customer credentials, working from their own computers.
So they could not do anything “interesting” to my car, but they could use my car to get my creds to then hack the manufacturer’s customer-facing website and from there hack into the rest of the corporation.
That sounds like plain old crappy security on plain old land-based or cloud-based IT. With the sexy cherry on top of snagging insecurely stored creds from inside the “wallet” in my car. And every other car of that brand. Of course since auto industry managers and execs tend to drive own-brand cars, hacking into vehicles found in the covered spaces near the door at company HQs would be an especially fruitful attack plan.
Still not good. But better than being able to reconfigure or take control of some aspects of the car itself.
To be sure, that is still possible.
Here’s a recent example of a local hack:
A long read but fascinating for security-minded folks.
In summary, though: some guy found that “vandals” had screwed with the trim around his front-left headlight. Shortly thereafter, the car was stolen.
The headlights on this car use the CAN bus (the network technology that basically all cars use for communications). They're on the same bus as the smart key. In essence, unlocking the car is as simple as sending an "unlock doors" message from a hack device connected to the headlight wiring. There's a small complication, which is that the real smart key unit is still telling the controller that the car is locked. The immobilizer circuit might decide to shut down and do nothing in that case. But the hack device overwhelms the bus, preventing the real device from communicating. So the only messages that the gateway receives are the fake ones.
There's lots more in the article, and I smoothed over some of the details, but that's the gist of it.
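For the security-minded, here's roughly what that hack device is doing, sketched in python-can terms. The ID and payload are made up (the real attack used model-specific values), and the actual device is dedicated hardware rather than a laptop running Python:

```python
import can

bus = can.interface.Bus(channel="can0", interface="socketcan")

# Forged frame impersonating the smart key controller. Invented values:
unlock = can.Message(
    arbitration_id=0x0A5,   # hypothetical "smart key status" ID
    data=[0x02],            # hypothetical "valid key present, unlock doors"
    is_extended_id=False,
)

# Send it constantly. CAN arbitration favors whoever is transmitting a
# high-priority frame, so a tight loop crowds the real smart key unit
# off the bus and the gateway only ever hears the forged message.
while True:
    bus.send(unlock)
```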
Great read. Thank you.
The short version is that every bit of the control and data traffic on those buses should be digitally signed by the senders and validated by the receivers. Hardware measures to make it difficult to block the bus should be implemented as well. And there should be greater partitioning between "rings" of privilege, so that e.g. headlights and key fobs are not equally trusted / trustworthy components as they are today.
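For flavor, here's roughly what per-frame signing looks like, in the spirit of AUTOSAR's SecOC scheme (a truncated MAC plus a freshness counter). Everything here, the key, the sizes, the truncation, is invented for illustration:

```python
import hmac, hashlib, struct

KEY = b"per-ecu-secret!!"   # in reality: provisioned into each ECU's secure storage

def sign_frame(arb_id: int, counter: int, payload: bytes) -> bytes:
    """Sender: append a truncated MAC over (ID, freshness counter, payload)."""
    msg = struct.pack(">IQ", arb_id, counter) + payload
    tag = hmac.new(KEY, msg, hashlib.sha256).digest()[:4]
    return payload + tag    # classic CAN frames only carry 8 data bytes: it's tight

def check_frame(arb_id: int, counter: int, payload: bytes, tag: bytes) -> bool:
    """Receiver: recompute the MAC and compare in constant time."""
    msg = struct.pack(">IQ", arb_id, counter) + payload
    good = hmac.new(KEY, msg, hashlib.sha256).digest()[:4]
    return hmac.compare_digest(tag, good)
```

The practical pain is squeezing the tag and counter bits into the frame (hence CAN FD, with its 64-byte payloads) and distributing the keys, which is exactly the repair headache that comes up below.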
Sounds like it's time to consign the inherently insecure CAN bus to the dustbin of history and switch to something from mainstream IT that's already been validated as secure by design.
My understanding is that such computers have a “steady state” configuration so that if anything goes sideways they only have to be turned off and turned on again. Good as new.
In their defense, this is something lots of industries have struggled with. It used to be that all security was physical. There’s no need to have extra security inside the bounds of the physical system, since if the baddies have penetrated that, all bets are off. But the reality has gotten more complicated. Defense in depth is the new thing. And that means zero-trust, principle of least privilege, and other modern design principles.
I don’t claim to have ever designed a car, but, in general, components can exhibit unspecified erroneous behavior because they are malfunctioning for some reason, not only because they have been hacked. Surely control and sensor inputs need some degree of authentication and validation in any case.
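A toy example of the kind of plausibility check I mean, independent of any cryptography: even an authenticated wheel-speed sensor can fail, so bound its readings by physics. The limits here are invented:

```python
def plausible_speed(reported_kph: float, last_kph: float, dt_s: float) -> bool:
    """Reject readings a real car can't produce, whatever their source."""
    MAX_DELTA_KPH_PER_S = 36.0          # ~10 m/s^2, hard-braking territory
    if not (0.0 <= reported_kph <= 300.0):
        return False                    # outside the physically sensible range
    return abs(reported_kph - last_kph) <= MAX_DELTA_KPH_PER_S * dt_s
```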
Agreed.
Cars, of course, are expensive portable devices routinely left unattended out in public. And always have been. I don't think history has recorded the very first car theft, but it wasn't long after the first car was parked.
Even the current CAN bus designs and bolted-on security mitigations seem to be mostly security by obscurity. That ship sailed when the WWW was invented.
Similar design battles are being fought inside airliners now. I don't know the details, but I do know that there's a much greater degree of airgapping between the stuff that really matters and the stuff that serves up movies over the wi-fi to the folks in back.
There's not been a credible hack, even of bench-test hardware, that I've read of. There was one that got a lot of publicity, but it turned out to be a hack of a PC "game" that was used as a crew training tool for FMS operations. And, as a PC game, it had no particular defenses against keystroke injection, a feature Windows provides by design to support assistive technology for the handicapped.
CAN bus messages include a CRC (cyclic redundancy check): extra information that validates whether a message has been corrupted or not. It does the job just fine when the errors come from "normal" sources, say a voltage fluctuation. But a CRC is useless against a malicious adversary, who can simply compute a valid checksum for a forged message.
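To underline that: the CRC is a public, keyless function. Here's a bit-at-a-time version of CAN's 15-bit CRC (generator polynomial 0x4599); an attacker forging a frame just runs the same function and gets a perfectly valid checksum:

```python
def can_crc15(bits):
    """CAN's CRC-15 over a sequence of frame bits (0s and 1s)."""
    crc = 0
    for bit in bits:
        do_xor = ((crc >> 14) & 1) ^ bit   # register MSB vs. next data bit
        crc = (crc << 1) & 0x7FFF
        if do_xor:
            crc ^= 0x4599
    return crc

forged = [1, 0, 1, 1, 0, 0, 1, 0]   # whatever bits the attacker cares to send
print(hex(can_crc15(forged)))       # a valid CRC for a malicious frame
```

Detecting corruption and detecting forgery are just different problems; the second one needs a secret, which is what a MAC adds.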
LSLGuy is right; they need to use fully encrypted communications and cryptographically validated modules. Unfortunately, this has the downside of making end-user repairs more difficult. If a baddie is prevented from swapping in a hacked module, then a normal car owner will be prevented from repairing a broken headlight by swapping in a new one. One of the prices we pay for security.
I agree, but the counterpoint is that very cheap microcontrollers can speak CAN bus and also run an electronic switch to control power to a light, or however it actually works. Cryptographic authentication and signing are well beyond these chips.
As we all learned over the last few years, automotive chips are old and desperately need updating that nobody wants to pay for. Hopefully any kind of new system will be better.
I definitely support right to repair, but I do understand Apple’s point that letting anybody swap in any fingerprint chip will allow replacing valid units with a unit that always says “yes, that was the correct finger.”
Hardcore computer people can barely handle PKI, and often get it wrong. I can just imagine the disaster of needing to use my private key to sign a new taillight before installing it! Or better yet, the guy I sold my car to needing my login because the wiper motor burned out.
I do not have a solution, but I find the whole thing pretty funny.
Surely it shouldn’t be the sensor that handles the authentication. It should just be a data source.
Plus, right to repair could include just making Apple sell replacement parts at market price. They could cryptographically sign them, or allow only certain trustworthy companies to sign them.
Heck, that’s something they should be doing with USB devices. You could have secure keyboards and mice and thumb drives with signed keys. Sure, it probably couldn’t be turned on by default right now, as you’d want to support older USB devices. But it could be a way to make sure that these types of devices are secure (as long as you don’t run any software on them, of course.)
There are ways the annoyance could be reduced–imagine a cheap signing device that you input your private key to, and which you then plug into any peripheral you need to be authenticated. Press a button and it does the required key exchange. Press another button and it clears the key from an existing component so you can resell it. As long as everything uses the same protocol, it should be straightforward. But I can’t imagine this happening without legislation.
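The cryptographic plumbing for that kind of pairing is standard challenge-response; the provisioning and resale story around it is the hard part. A rough sketch with Python's cryptography package, with everything invented for illustration:

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The peripheral (USB keyboard, replacement car module, etc.) holds a
# private key; the host learns the matching public key at pairing time.
device_key = Ed25519PrivateKey.generate()   # lives inside the peripheral
device_pub = device_key.public_key()        # registered with the host

# Later, the host challenges the device with a fresh random nonce...
challenge = os.urandom(32)
# ...the device signs it...
response = device_key.sign(challenge)
# ...and the host verifies against the registered key. A counterfeit
# part without the private key can't produce a valid response.
device_pub.verify(response, challenge)      # raises InvalidSignature if fake
```

Your "clear" button would just delete the registered public key from the host, ready for the next owner to pair.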
As to lamps, it's important to distinguish between the lamp and the lamp controller. At least for simple lamps, changing the dumb bulb remains easy and user-doable at no great cost. Changing the lamp controller is where you get into problems, where it needs to be a signed module authenticated to the car at install. And authenticated in a way that a thief can't accomplish, even in a couple of hours with sophisticated tools, while your car sits in the parking lot and you're inside at work.
Clearly not, IMO. Every sensor must be either fully cryptographically secured, and therefore trustworthy and trusted, or else all that sensor's outputs are untrustworthy garbage.
If any sensor is not fully trusted, it can be used to inject garbage and induce out-of-control behavior. Tell the car it's going slow when it's actually going fast and you get runaway acceleration. Tell the car it has oil pressure when it doesn't and you get a slagged engine. And so on. Telling the car the left turn signal is on when it's not may not have hugely damaging direct consequences, but it's still bad news.
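And even with signing along the lines sketched earlier, trust requires freshness: a recorded, perfectly valid "oil pressure OK" frame can simply be replayed later unless receivers track a counter. A toy illustration (key and sizes invented):

```python
import hmac, hashlib

KEY = b"shared-ecu-key!!"   # invented; in reality provisioned per ECU
last_counter = 0            # highest freshness value accepted so far

def accept_frame(counter: int, payload: bytes, tag: bytes) -> bool:
    global last_counter
    msg = counter.to_bytes(8, "big") + payload
    expected = hmac.new(KEY, msg, hashlib.sha256).digest()[:4]
    if not hmac.compare_digest(tag, expected):
        return False        # bad MAC: forged or corrupted frame
    if counter <= last_counter:
        return False        # valid MAC but stale: a replayed recording
    last_counter = counter
    return True
```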
Even in the given example of a fingerprint reader, you want authentication and encryption. This came up just recently:
The money quote:
The ability of BrutePrint to successfully hijack fingerprints stored on Android devices but not iPhones is the result of one simple design difference: iOS encrypts the data, and Android does not.
Without encryption, it was straightforward to brute-force the unlocking. They were able to feed in an entire database of fingerprint data digitally. It did depend on some other bugs that bypassed the rate limiting, but regardless, Apple isn’t subject to this because there’s no way to inject fingerprint data in the first place.