Tesla Self Driving Car fatality

Okay, that makes sense. Wireless hacking is theoretically possible, but so far hasn’t happened, and seems fairly unlikely given how the system is set up. As an average person, I wouldn’t be too worried about hacking. I’d guess that politicians or others who fear assassination attempts might take the small theoretical chance of hacking into account and weigh it against the otherwise increased safety of cars with automatic driver assistance.

Wait, there are self-driving robot toads now? I remain skeptical.

Actually, the vast majority of security holes come from ignorant users giving away access codes, unlocking permissions, and doing other things that even a general-purpose operating system shouldn’t allow. Locking down a dedicated embedded real-time operating system is not that difficult, and Internet or wireless communications using modern public-key encryption can, barring an intentional flaw left as a backdoor or the advent of practical quantum code-breaking, be made so secure that even a dedicated effort to introduce malicious code into the system would be an exercise in futility.

In the extreme, a digital signature verification system could be permanently programmed into the embedded hardware (one-time-programmable read-only memory) which would check and verify any attempt to upload new software to the system against a certified authority. Making the system robust against man-in-the-middle intrusion (e.g. introduction of a vulnerability into the software or hardware by someone involved in production) is a process engineering challenge, but aside from this you could make an embedded system absurdly secure. There are many, many challenges with assuring the robustness and reliability of software, but actual security isn’t one of them, provided you control the source verification and the operating hardware.
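For what it’s worth, the check described above is simple enough to sketch. Here’s a minimal illustration in Python using the third-party cryptography package; the key handling, image contents, and function names are all invented for illustration, not anyone’s actual scheme:

[CODE]
# Sketch of a signed-upload check: the verifying public key would be
# burned into one-time-programmable memory at manufacture, and the
# controller refuses any image that fails verification.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def accept_firmware(image: bytes, signature: bytes, pubkey_raw: bytes) -> bool:
    """Return True only if 'image' carries a valid signature from the authority."""
    public_key = Ed25519PublicKey.from_public_bytes(pubkey_raw)
    try:
        public_key.verify(signature, image)
        return True
    except InvalidSignature:
        return False

# Demonstration with a throwaway key pair standing in for the certified authority.
authority_key = Ed25519PrivateKey.generate()
pubkey_raw = authority_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

firmware = b"\x7fELF...hypothetical controller build..."  # placeholder image
good_signature = authority_key.sign(firmware)

assert accept_firmware(firmware, good_signature, pubkey_raw)                 # accepted
assert not accept_firmware(firmware + b"\x00", good_signature, pubkey_raw)   # tampered image rejected
[/CODE]

The whole attack surface then reduces to protecting the authority’s private signing key, which never has to leave the vendor.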

Stranger

Actually yes, on a modern car, you’re far better off driving into a low barrier than a high (windshield height) obstacle. The crumple zone does nothing if you drive into a high barrier, and the passenger cage isn’t designed to withstand that kind of impact.

But more importantly, if this trailer had side guards, the Tesla’s radar and camera would probably have detected it.

Well, they’re not for everyone. I hear they give a pretty wild ride.

Reuters reported that a portable DVD player was found in the car. So even if it’s not possible to watch a movie on the Tesla’s screen, the driver might have been watching the Harry Potter movie on the portable DVD player.

Security which depends on untrained end users for enforcement isn’t security at all. It’s an acceptable tradeoff for most purposes–the advantages of general-purpose computers outweigh the security risks most of the time. Obviously this risk isn’t acceptable for safety-critical systems like on a car.

Not so extreme. Just about all field-patchable embedded systems do (more or less) this already. Gaming consoles, phones, tablets, and indeed Teslas use signed firmware. It’s easy enough to get right that I’m not aware of it ever having been the avenue of attack. The signing keys are easy to secure, since very few parties need access to them, and the released image doesn’t contain the information needed for an attack to work. The CPU will not execute the image unless it passes the signature checks. No information on an external pin will ever be useful in bypassing this (though with nation-state resources, you could conceivably physically modify the CPU to ignore the checks).

Yes, wireless hacking has occurred…

I’m not talking about signed firmware alone, but an actual external verification server which validates both the upload and the hardware, making it essentially impossible even to reverse engineer a firmware upgrade. The effort required to ‘hack’ such a system would be beyond any conceivable attacker.
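A toy sketch of how that external validation could work, assuming a per-unit secret provisioned into the controller at manufacture and mirrored in the server’s database; every identifier and parameter below is invented:

[CODE]
# The update server issues a fresh random challenge; the controller must
# answer with a keyed digest over the challenge, its hardware ID, and the
# hash of the exact image it intends to flash. Without the per-unit
# secret, an attacker can neither forge the response nor replay an old one.
import hashlib
import hmac
import os

PER_UNIT_SECRET = os.urandom(32)        # provisioned at manufacture (illustrative)
HARDWARE_ID = b"VIN5YJSA1E14F0000001"   # hypothetical unit identifier

def controller_response(challenge: bytes, image: bytes) -> bytes:
    msg = challenge + HARDWARE_ID + hashlib.sha256(image).digest()
    return hmac.new(PER_UNIT_SECRET, msg, hashlib.sha256).digest()

def server_validates(challenge: bytes, image: bytes, response: bytes) -> bool:
    # The server recomputes the expected answer from its provisioning records.
    expected = controller_response(challenge, image)
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)                      # fresh nonce per upgrade attempt
image = b"...signed firmware payload..."
response = controller_response(challenge, image)
assert server_validates(challenge, image, response)
[/CODE]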

Stranger

I read somewhere that glare from the sun’s angle/reflection or some such made a visual sighting nearly impossible.

Nothing about why the radar or other systems did not detect it.

Personally, I’m glad it isn’t common yet, because the human factor will never be anywhere near good enough if the system still requires human attention.

My mileage is gonna stay on the back roads for the foreseeable future.

Fair enough, though overkill for this situation. Arguably, making it impossible to reverse engineer the firmware could result in a decrease in security, by making it more difficult for legitimate security researchers to investigate side-channel attacks. Verified hardware is interesting, but again not particularly useful in this situation.

If I understand the Jeep hack correctly, they thought they could get to authorized firmware indirectly, through a different unprotected system that connected internally to it.

Verification of hardware prevents users from modifying the hardware (a so-called ‘hard hack’) and then gaining access to the firmware source. The transparency argument cuts both ways; yes, closed and secured hardware and firmware make it difficult for independent researchers to evaluate vulnerabilities, but the same is true for malicious programmers. Any sufficiently complex system will inevitably have vulnerabilities of some kind, especially if the malicious party can gain physical access to the device in question, and one way of preventing that is to lock the system down sufficiently that it is a virtual black box with a very constrained set of interfaces.

Companies may value Linux and OpenBSD for their community-supported robustness testing, but when they implement an actual Internet-facing server, they shut down all but the necessary external ports and then require authentication for any users or interfaces trying to access the system, to limit the potential for probing attacks. Given that the hardware in this case is essentially available to anyone who purchases a vehicle (no degree of physical security will prevent a determined hacker from getting to the controller), hardcoded hardware-level checks that verify firmware upgrades and detect physical modification are prudent and not difficult to implement.

From the article that you linked to:

[QUOTE]Researchers discovered an opportunity to change firmware of the V850 controller for their maliciously crafted version through the connection to multimedia system’s controller. This firmware ‘upgrade’ can be done without any checks or authorizations. Even if there was authorization, researchers have found a couple of vulnerabilities that make possible taking control over this V850 controller.[/QUOTE]
Automobile controllers are vulnerable because manufacturers didn’t anticipate that someone might attempt to maliciously exploit them, which, again, argues for a regulatory framework that requires specific security measures. This is not a new issue; aircraft and satellite manufacturers have been quietly dealing with it for years by developing secure control and communications systems. There are several encryption systems being developed for implementation in CAN bus architectures to make them robust against third-party intrusion or control. There is nothing particularly challenging about this; it just requires engineering discipline and thorough test verification.
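For a concrete flavor of what such CAN bus protection can look like, schemes in the spirit of AUTOSAR’s SecOC append a freshness counter and a truncated message authentication code to each frame, so a forged or replayed message is simply dropped. A toy sketch, with the key, counter width, and MAC truncation all chosen arbitrarily for illustration:

[CODE]
# Toy sketch of authenticated CAN frames: a monotonic freshness counter
# plus a truncated HMAC ride along with the payload. A receiver rejects
# any frame whose counter repeats (replay) or whose tag doesn't verify
# (forgery). Key and sizes are illustrative, not any real standard's.
import hashlib
import hmac
import struct
from typing import Optional

SHARED_KEY = bytes(range(32))   # provisioned to the sender and receiver ECUs

def protect(can_id: int, payload: bytes, counter: int) -> bytes:
    msg = struct.pack(">IQ", can_id, counter) + payload
    tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()[:4]  # 32-bit MAC
    return struct.pack(">Q", counter) + payload + tag

def verify(can_id: int, frame: bytes, last_counter: int) -> Optional[bytes]:
    counter = struct.unpack(">Q", frame[:8])[0]
    payload, tag = frame[8:-4], frame[-4:]
    msg = struct.pack(">IQ", can_id, counter) + payload
    expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()[:4]
    if counter <= last_counter:                  # replayed frame
        return None
    if not hmac.compare_digest(tag, expected):   # forged frame
        return None
    return payload

frame = protect(0x244, b"\x00\x64", counter=1)           # e.g. a speed message
assert verify(0x244, frame, last_counter=0) == b"\x00\x64"
assert verify(0x244, frame, last_counter=1) is None      # replay rejected
[/CODE]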

Stranger

[QUOTE=Mr Albert Hammond on 15th September 1830]
Someone has already been killed By Stevenson’s locomotive. These killer robots are not the future…
[/QUOTE]

It’s not a new problem, is it?

This article speculates on a Tesla blind spot. The radar only sees things below the windshield, and the bottom of the truck was too high for the radar to pick it up. If true, they gotta fix this fast!

Cars have been driving (almost) under trucks and trailers for decades (see: Jan and Dean).

Maybe “Because robot cars can’t see them” will be the point which gets something done about it.

Then we can work on “Because you’re invisible in rain/fog” to do something about silver paint on cars - the “your ass is on the line” argument will work for small cars; larger ones may require more overt convincing.

He gives a good solution in that article.

I wonder how well a Tesla sees animals, like a deer or a cow in the road? Deer cause many bad wrecks in my state.

I think, after these wrecks, Tesla should temporarily disable Autopilot until they have a system in place that ensures no object can impact the windshield area undetected. This could just be a software update, or it could require an upper radar unit, or perhaps the new camera setup will help.

The “fix” for this could have cars stopping (default “solution”) when “seeing” an overhead sign.

A semi trailer up close has much the same geometry as an Interstate sign at 400 yards.
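Rough numbers on that, with every value assumed purely for illustration (forward sensor ~0.6 m off the ground, trailer underside ~1.2 m, overhead sign clearance ~5.2 m):

[CODE]
import math

def elevation_deg(obstacle_height_m: float, sensor_height_m: float, range_m: float) -> float:
    """Elevation angle from the sensor up to an object's lower edge."""
    return math.degrees(math.atan2(obstacle_height_m - sensor_height_m, range_m))

print(elevation_deg(1.2, 0.6, 30))    # trailer side 30 m out:    ~1.15 degrees
print(elevation_deg(5.2, 0.6, 230))   # sign 230 m down the road: ~1.15 degrees
[/CODE]

With those (made-up) numbers, the trailer’s underside up close sits at the same elevation angle as an overhead sign a couple hundred meters out, which is exactly the ambiguity being described.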

I’m guessing the truck driver did not see the car and turned immediately in front of it.

Approaching the “thing-which-turned-out-to-be-a-truck”, the car saw a motionless object.
When it turned, it saw “something-darting-across-lane” (the cab of the truck), followed by something overhead consistent with “overhead sign”.
None of these things signaled Emergency Braking.

I’m guessing the final outcome:
Trucks can kill cars if they turn across the car’s lane at the last second.

The (non-)driver could not have done better than the robot.

I do hope it makes Tesla stop referring to its “driver assist technology” as “Autopilot”.

When your robotics can handle all the same kinds of situations a real autopilot can (see: Boeing and Airbus products), call it an “Autopilot”.
For now, it is “Driver Assist Technology”.

That’s a quote. I can’t add the tags in Tapatalk.

“No Object”?
If that is your criterion, expect a really, really long wait - some idiot with a quadcopter will still get through.
Let alone a POTUS with Hellfire missiles…

How about something falling off the truck ahead?

95% can be easy. 100% is almost always impossible.