OK, this is about Tesla cars - but only because that’s the only instance of this I know of.
So the maker is now sending safety-critical instructions (fuel, steering, and braking) to its customers’ cars over the friggin’ air.
I’m guessing that the owner needs to authorize the actual update, but: how difficult would it be to spoof/piggyback lethal instructions?
e.g. “Next time you see the 43rd pedestrian within 40 seconds, turn wheels 20 degrees right and accelerate to 90 mph”?
IOW: next time you’re in a crowd, jump the curb and mow down everyone.
Is this technology really the optimal way to deliver such instructions?
To take it to the idiot level: You’re hospitalized with a nasty infection. There is an IV in your arm that can automatically deliver any number of drugs.
And it is operated by remote from the doctor’s iPhone 23.
Does this sound rational?
The updates are signed with Tesla’s private key, so it’s unlikely that any bad guys are going to sneak explosionware into your car. Of course, signing keys can be compromised if they’re mishandled, but that’s just as true for cars that get their software updates at a dealer’s garage.
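For what it’s worth, here’s a minimal sketch of how signed-update verification generally works, in Python. This is not Tesla’s actual code: the Ed25519 scheme, the function names, and the flashing step are all assumptions for illustration.

```python
# Minimal sketch of signed-update verification (assumed Ed25519 scheme,
# invented names; NOT Tesla's actual code).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In reality the private key never leaves the maker; we generate one here
# only so the sketch is self-contained and runnable.
maker_private_key = Ed25519PrivateKey.generate()
maker_public_key = maker_private_key.public_key()  # baked into the car's firmware

def verify_and_install(update_blob: bytes, signature: bytes) -> bool:
    """Install the update only if the maker's signature checks out."""
    try:
        maker_public_key.verify(signature, update_blob)
    except InvalidSignature:
        return False  # spoofed or tampered update: refuse to install
    # ...flash the firmware here (hypothetical step)...
    return True

# The maker signs the image with its private key before sending it out:
update = b"new firmware image"
good_sig = maker_private_key.sign(update)
assert verify_and_install(update, good_sig)                 # genuine update installs
assert not verify_and_install(b"explosionware", good_sig)   # forgery is rejected
```

The point being: without the private key, a bad guy can’t produce a signature the car will accept, no matter what he beams at it.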
The benefits of delivering improvements to a product this way are unmatched by any other method. As for your 43rd-pedestrian scenario, those fears are unfounded; the software logic simply does not work that way.
As long as the vehicle has the ability to identify a “pedestrian” (within X% probability), the software COULD work that way. That is the marvel of computers: they do what the software tells them to do, whether that’s processing payroll, operating check sorters, or running ATMs.
If a car can self-park, it can:
Select gear
Control speed
Control steering
Control braking.
If I can tell it to ‘park’, I can tell it to ‘turn wheel and accelerate’ - the sketch below makes that concrete.
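Here’s a hypothetical drive-by-wire interface in Python. None of these names come from any real car’s software; the point is just that ‘park’ is nothing special to the computer, merely a scripted sequence over the same actuator primitives any other maneuver would use.

```python
# Hypothetical drive-by-wire primitives; all names invented for illustration.
class DriveByWire:
    def select_gear(self, gear: str) -> None: ...      # e.g. P/R/N/D
    def set_speed(self, mph: float) -> None: ...
    def set_steering(self, degrees: float) -> None: ...
    def apply_brakes(self, force: float) -> None: ...

def self_park(car: DriveByWire) -> None:
    # "Park" is just a scripted sequence over the same four primitives;
    # the computer draws no distinction between this and any other maneuver.
    car.select_gear("R")
    car.set_speed(2.0)
    car.set_steering(-30.0)
    car.apply_brakes(1.0)
    car.select_gear("P")

self_park(DriveByWire())
```

Any other command, benign or not, would be built from exactly the same calls; the software does whatever it’s told.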
It has been many years since I had blind confidence that ‘the computer cannot make a mistake’.