Hence the need for robust testing requirements to certify a design for autonomous driving use, and for well-documented configuration management that is transparent to regulatory and consumer oversight groups, rather than allowing manufacturers to deploy and modify control software on a whim. This is not some new concept. Commercial productivity software and smartphone apps are notorious for bugs and instabilities (because such testing is neither required nor dictated by any mission-critical need of users), but software components in critical applications such as aerospace or nuclear fission plant control are highly robust and extensively tested to assure statistically determined reliability, often by actual “hardware in the loop” (HITL) testing using a full-up integrated system to exercise complex interactions. Anomalies are addressed by regression testing: fixing an error and then re-running the test regime from a pre-determined start point.
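To make that regression gate concrete, here is a minimal Python sketch. The scenario names and the run_scenario hook are hypothetical placeholders for illustration, not anyone's actual test suite; the point is simply that after any fix the entire pre-determined battery is re-run from the start, and a single failure rejects the build:

```python
# Sketch of a regression-test gate: after any software change the *entire*
# pre-determined scenario battery is re-run from its start point, and a
# single failure rejects the build. Scenario names are illustrative.

from typing import Callable

SCENARIO_BATTERY = [
    "highway_merge_heavy_traffic",
    "pedestrian_steps_out_low_visibility",
    "sensor_dropout_mid_turn",
    "emergency_stop_wet_pavement",
]

def regression_gate(run_scenario: Callable[[str], bool]) -> bool:
    """run_scenario drives the integrated (HITL) rig through one named
    scenario and returns True only if every pass/fail criterion is met."""
    results = {name: run_scenario(name) for name in SCENARIO_BATTERY}
    for name, passed in results.items():
        print(f"{'PASS' if passed else 'FAIL'}: {name}")
    return all(results.values())

if __name__ == "__main__":
    # Stand-in harness: in real use this would command the HITL test rig.
    demo = lambda name: True
    print("release candidate accepted" if regression_gate(demo)
          else "release candidate rejected")
```

The discipline matters more than the code: no partial credit, no skipping scenarios "unrelated" to the fix, because the whole point of regression testing is catching the interaction you did not anticipate.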
By doing this, we’ve actually created highly automated software systems of incredible reliability. Every rocket launch vehicle that has flown to space in the last thirty years has carried very complicated, highly redundant integrated software, instrumentation, and telemetry systems, and the failure rates seen in final integration testing (much less in flight) correspond to mean times between failures on the order of thousands of operating years. The only software failures I am aware of in launch and space vehicles have been driven by cutting corners and skipping the industry-recommended regression testing protocols.
As for ‘hacking’: again, current vehicles with their mostly insecure CAN networks are already quite vulnerable to malicious interference. One of the test requirements for autonomous systems would be to demonstrate security and authentication for any changes or external commands. The cryptographic technology for this is well developed and, if implemented correctly, essentially unbreachable in practice. Compared to the other problems of autonomous driving systems, such as directing a vehicle to park in a particular spot, going off-pavement, or other “fuzzy problems” that are difficult to reduce to definitive rules, securing such systems against attack is a matter of applying basic software security and abstraction principles. If a system does detect unauthorized access, an “overwatch” component shunts vehicle control into a fail-safe mode until software integrity can be verified.
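To illustrate the authenticate-then-execute principle, here is a minimal Python sketch. It uses the standard library's HMAC purely to stay self-contained; a real system would more likely use public-key signatures, replay protection (nonces or counters), and hardware key storage, and every name here is illustrative rather than any vendor's actual design:

```python
# Sketch of authenticate-then-execute for external commands, with a
# fail-safe shunt on any verification failure. Stdlib HMAC keeps this
# self-contained; production systems would use public-key signatures,
# anti-replay measures, and keys held in tamper-resistant hardware.

import hmac
import hashlib

SHARED_KEY = b"demo-key-never-hardcode-in-production"

def sign(command: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Compute an authentication tag over the command bytes."""
    return hmac.new(key, command, hashlib.sha256).digest()

def enter_failsafe() -> str:
    """Illustrative 'overwatch' action: controlled stop, then hold
    until software integrity can be re-verified."""
    return "fail-safe: controlled stop, awaiting integrity check"

def execute_if_authentic(command: bytes, tag: bytes) -> str:
    """Only a command bearing a valid tag ever reaches vehicle control."""
    if hmac.compare_digest(sign(command), tag):
        return f"executing: {command.decode()}"
    return enter_failsafe()

if __name__ == "__main__":
    cmd = b"set_destination:depot_7"
    print(execute_if_authentic(cmd, sign(cmd)))     # authentic command
    print(execute_if_authentic(cmd, b"\x00" * 32))  # tampered/forged tag
```

The abstraction is the point: unauthenticated input never touches the actuators at all, so an attacker who can inject traffic still cannot command the vehicle.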
These hypotheticals about a computer needing to “make a decision” about whom to save and whom to kill assume some kind of moral decision-making on the part of the controller. In fact, the entire point of an autonomous driving system is to avoid getting into the kind of situations that lead to such events by maintaining far better situational awareness and reaction time than the best human driver, operating the vehicle within safe limits for the environmental conditions, and not being distracted or overwhelmed by information as a human driver can be. Frankly, if a pedestrian steps out into moving traffic so unexpectedly that an automated driving system could not safely stop or avoid impact, it is manifestly unlikely that a human driver could respond more promptly or with better judgment. That’s not some kind of “logic versus morality” argument; that is basic physics and biomechanics.
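To put rough numbers on that physics claim: total stopping distance is reaction distance plus braking distance, v·t_react + v²/(2a). The sketch below uses typical textbook values (about 1.5 s perception-reaction time for an alert human, on the order of 0.1 s for an automated system, 7 m/s² deceleration on dry pavement); these are illustrative figures, not measured data:

```python
# Illustrative stopping-distance comparison: same vehicle, same brakes,
# only the perception-reaction time differs. Values are typical textbook
# figures, not measured data.

def stopping_distance(speed_ms: float, reaction_s: float,
                      decel_ms2: float = 7.0) -> float:
    """Distance covered during the reaction time, plus braking distance
    v^2 / (2a) assuming constant deceleration on dry pavement."""
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel_ms2)

speed = 50 / 3.6  # 50 km/h expressed in m/s
for label, t_react in [("alert human (~1.5 s)", 1.5),
                       ("automated system (~0.1 s)", 0.1)]:
    print(f"{label}: {stopping_distance(speed, t_react):.1f} m")
# Roughly 34.6 m versus 15.2 m: the reaction-time gap alone is worth
# about 19 m at city speeds -- often the entire difference between
# stopping short and an impact.
```

No ethics module is involved in that 19 meters; the machine simply starts braking while the human is still perceiving.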
Stranger