Tesla Self-Driving Car fatality

Why are you assuming the driver was even awake?

Therein squats the toad. If someone cuts you off, shouldn’t you (or your car) be aware of it?

We don’t know the exact details of the accident. There may not have been enough time to stop, especially when human reaction time is taken into account. T-bone accidents happen all the time.

He’s complaining that they hadn’t already done so. Which is equally silly.

The NHTSA opens preliminary investigations any time something remotely unusual happens. It doesn’t mean much by itself; it could end almost immediately with a “nothing to see here” conclusion. Or maybe not.

It will never end in a recall, since Tesla can do over-the-air updates. At worst they just disable their autopilot if it’s decided that it can’t be made safe enough for NHTSA standards. But far more likely is that they just add a few more warnings and driver alertness checks.

If the driver is paying attention, and has to manually override once every 130 million miles, what are the chances he will remember where the brake pedal is?
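For scale, a quick back-of-envelope calculation (the annual-mileage figure is an assumption, roughly the oft-cited US average):

```python
# Back-of-envelope: how rarely would an attentive driver actually
# have to take over? Figures are assumptions, not Tesla data.

miles_between_interventions = 130e6  # one override per 130 million miles (from the post above)
miles_per_driver_per_year = 13_000   # rough US average annual mileage (assumed)

years = miles_between_interventions / miles_per_driver_per_year
print(f"One intervention roughly every {years:,.0f} driver-years")
# -> One intervention roughly every 10,000 driver-years
```

At that rate the average driver would never see a single takeover in a lifetime, so there is little reason to expect the skill (or the reflex) to be there when it is finally needed.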

I’m working on it steadily, but I’m first trying to perfect my virus that causes children’s brains to swell up and explode out their ears when they have temper tantrums. We mad scientists and subversive engineers have a lot on our plates these days, and since the fall of the Soviet Union getting consistent funding is tough.

There are two fundamental problems with autonomous/driverless technology. The first is regulatory, to wit, there are no standards by which to evaluate and qualify automatic driving technology, and quite frankly state legislatures and transportation departments are in no way capable of legislating or regulating ‘good’ technology. In states that permit autonomous vehicles, there is little in the way of testing or other verification of the adequacy of such a system. In California, you could essentially cobble together a few Arduinos to control servos for steering and throttle, hack up some JavaScript autopilot, and then apply for a CADOT Autonomous Vehicle Testing Permit, which has no specific requirements. The United States National Highway Traffic Safety Administration has released guidance to states on testing and on-road use of automated vehicles, but that guidance has no legal standing and lags so far behind the actual technology that it is of limited practical use.

The second is an implicit result in the Tesla rationale:

Tesla said that "Autopilot is getting better all the time, but it is not perfect and still requires the driver to remain alert. Nonetheless, when used in conjunction with driver oversight, the data is unequivocal that Autopilot reduces driver workload and results in a statistically significant improvement in safety when compared to purely manual driving."

The assumption is that a driver using their Autopilot system will continue to pay attention even though he or she has minimal workload or engagement in the driving process. In fact, when an operator has too little workload or stimulation, they divert attention to other activities, a behavior well documented in human factors research. It is observed in aircraft pilots using an autopilot, in sailors operating sea vessels, and in recreational SCUBA divers who have become reliant on dive computers to tell them when they are reaching deco limits rather than planning out a dive and staying attentive to depth and duration. An autopilot system for the average driver can’t just be 95% reliable or even 99% reliable; it has to be designed to essentially replace the driver in any conditions in which it may be engaged. The Tesla Autopilot system may or may not achieve that standard; the investigation of this particular accident will no doubt determine whether the impact was avoidable, and whether the use and failure of the Autopilot system was a contributory factor that would not have existed with a human driver.
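To put rough numbers on why 95% or even 99% is not good enough, here is a quick sketch with purely hypothetical figures; the events-per-mile rate is invented for illustration and the annual mileage is just a ballpark US average:

```python
# Hypothetical sketch: why impressive-sounding per-event reliability
# still produces frequent failures. All figures here are invented.

events_per_1000_miles = 10   # situations demanding an active response (assumed)
miles_per_year = 13_000      # rough annual mileage (assumed)

events_per_year = events_per_1000_miles * miles_per_year / 1000  # = 130
for reliability in (0.95, 0.99, 0.9999):
    failures = events_per_year * (1 - reliability)
    print(f"{reliability:.2%} reliable -> ~{failures:.2f} unhandled events per year")
# 95.00% reliable -> ~6.50 unhandled events per year
# 99.00% reliable -> ~1.30 unhandled events per year
# 99.99% reliable -> ~0.01 unhandled events per year
```

Under these assumptions, even a 99%-reliable system leaves the driver facing an unhandled event more than once a year, which is exactly the regime where a lulled, disengaged operator is most dangerous.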

Autonomous vehicles are in the foreseeable future, albeit not nearly as soon as enthusiasts believe; I suspect it will be at least a decade before driverless cars are allowed to drive without restrictions on speed, conditions, or locations, and twenty years before we see wide adoption, which will likely be driven by an increasing use of fleet vehicles in urban and suburban environments. I have every confidence that they will in time become far more reliable and safe than human-operated cars, even notwithstanding accidents due to gross negligence and driving under the influence, and will come to predominate road transportation, reducing injuries and fatalities by an order of magnitude or better. But there is still a way to go before the machine intelligence that operates them is sufficiently canny and capable of coping with the wide variety of extreme conditions that drivers face on a regular basis.

The o.p. seems to want to be upset about something, and if not this then it will be that Tesla doesn’t offer a polka-dot color scheme. The facts are immaterial in such a case, hence the hyperbole about “killer robots”. If Musk wanted to make killer robots, he’d take a Falcon 9 first stage, paint it red, and put buzz saws in place of the grid fins.

Stranger

I agree that having the government certify each version of an autonomous driver is impractical. Fortunately, the legal system is already prepared for this in the form of civil law. A company’s fear of lawsuits and bad press will ensure that it spends the time and money necessary to certify for itself that the car can operate safely.

When a car is 100% responsible for driving, then the manufacturer is liable for defects. You’ll no longer be the one who pays for insurance on the car; rather, the “driver”, be it Google, Ford, Tesla, or another, will be the one needing to insure its cars, and the manufacturer’s insurance company will no doubt insist on rigorous testing. Could you imagine if a popular car like the Honda Accord had a defect which started causing pileups throughout the country due to a poorly tested auto-driver? Remember, each driver is a clone, so if one would make a mistake, they all will.

I’m not. I’m pointing out Tesla’s odd phrasing, implying the driver was looking forward just like the autopilot was, and didn’t notice the giant truck.

Please do not insert words into my mouth, or in this case, projected into my keyboard. What I wrote was that state governments do not have the resources or expertise to implement good regulations, nor the authority to apply them to manufacturers at large except by permitting or prohibiting specific vehicles from being registered or operated in the state. The NHTSA does have such authority but is far behind the curve in addressing the necessary standards and verifications.

I’m sure this free market/libertarian/“Let the lawyers fight it out” mentality appeals to the Randite crowd, but it has no basis in reason or practice, insofar as it presumes that the threat of civil action will prevent automakers from cutting corners, concealing critical defects, or using their extensive financial resources and access to teams of lawyers from national firms to quash claims against them. The opposite is very frequently seen in cases of corporate liability, where the company’s self-interest and need to appease shareholders trump any intentions to be a good corporate citizen. As for the civil system of jurisprudence, the legal response to delicts and torts is primarily to recompense the injured party and (secondarily) to impose punitive or exemplary damages on a defendant that has acted with egregious neglect or breach of duty, not to look out for the general interest of the public except insofar as it may apply to the specific harms of the case; in fact, it is generally the case that damages cannot cover possible but unrealized harms or negligence.

Such reliance upon legal response also does a disservice to manufacturers, who may be left without guidance as to the accepted standard of liability. With a regulatory framework in place and industry-accepted test practices and standards, a company may point to its adherence as satisfying the necessary duty of care with regard to an accident resulting from some extreme condition, such as a user modifying an autopilot and causing it to malfunction. Such a framework, if crafted with subject matter experts and industry technical leaders, can also provide guidance on the development and improvement of safety-related systems, just as it has with passenger protection, vehicle control, and restraint systems, which have reduced the incidence of grave injury and death in automobile accidents by nearly an order of magnitude since the beginning of the post-WWII automotive boom.

The notion that corporations will be good stewards of the public interest, or will even look out for their own long-term financial or liability interests, is given the lie by any number of examples, from the Ethyl Corporation to Ford Motor Company to British Petroleum. The tobacco industry as a whole deliberately defrauded the public about the known harms of smoking tobacco products, engineered ways to get around measurement machines (‘low tar’ cigarettes and filters which do absolutely nothing to reduce the polycyclic aromatic hydrocarbons implicated in a wide variety of health disorders), and paid scientists to suppress research and concoct misleading and bogus studies to combat evidence of the grave harms of tobacco smoking. All of the individual lawsuits amounted to a pile of wet leaves as far as correcting that negligent and fraudulent behavior; it was only the Tobacco Master Settlement Agreement against the Big 4 “original participating manufacturers” that caused the tobacco industry to curtail its practices of marketing to children and suppressing research about health harms, and that revealed the depths of its depravity in the Legacy Tobacco Documents Library, all of which the tobacco industry and individual manufacturers did and do continue to fight in court.

Stranger

It would be in bad taste for Tesla to blame the dead, even though they know the brakes were never applied. So they speculate that it was a low-visibility situation, as opposed to saying he had been texting or playing Angry Birds, which is frankly the more likely scenario.

Tesla’s press release, in case anyone’s interested.

Well, I was close. The radar intentionally ignores objects too far above the roadway, since they may be overhead signs. Obviously the truck is lower than a sign would be, but the radar may not have the resolution to distinguish between the two cases. It would also need to take distance into account, since the radar is a point source and objects above it will appear at different angles depending on their height and distance.
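To illustrate the geometry (all heights and distances here are made-up figures, not anyone’s actual specs):

```python
import math

# Rough geometry of the "overhead sign vs. trailer side" problem.
# Heights are relative to the road; all values are illustrative.

RADAR_HEIGHT_M = 0.6     # assumed mounting height of the radar
SIGN_BOTTOM_M = 5.0      # assumed clearance of an overhead sign
TRAILER_BOTTOM_M = 1.2   # assumed bottom edge of a trailer side

def elevation_deg(target_height_m: float, range_m: float) -> float:
    """Elevation angle from the radar to the bottom of a target."""
    return math.degrees(math.atan2(target_height_m - RADAR_HEIGHT_M, range_m))

for range_m in (20, 60, 120):
    sign = elevation_deg(SIGN_BOTTOM_M, range_m)
    trailer = elevation_deg(TRAILER_BOTTOM_M, range_m)
    print(f"{range_m:4d} m: sign {sign:5.1f} deg, trailer {trailer:5.1f} deg")

# A 1.2 m trailer edge at 20 m subtends about the same elevation angle
# (~1.7 deg) as a 5 m sign at roughly 150 m, so a bare angle threshold
# cannot separate the two without accurate range information.
```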

I don’t see the problem. It’s an assist, not an auto-drive. And in the future, self-driving will be far safer than human driving.

I love to drive, so autopilot anything with a car is not for me personally, but I understand the appeal for some people. Unfortunately, in a situation where someone has to die, I wouldn’t want to be in the position of letting some software determine who to sacrifice, because we may not agree, and there are infinite variations of that dilemma that could occur.

For the past week, my laptop has refused to go into sleep mode. If I close the lid it shuts down immediately, even though it is configured to sleep in the power settings, and googling it turns up other users with the same problem, but no fix. Will my Tesla someday suddenly and inexplicably do what it is not configured to do in traffic, regardless of the setting, with nobody knowing of a fix? Why would a car computer be immune to such things?

Because the car isn’t running an assortment of third-party programs of unknown quality or legitimacy that weren’t written by the maker of the OS.

Was absolutely bound to happen. They’re going to be the test cases for the liability and legality of driverless cars. Are the robots good enough yet or not? Tesla specifically says the system requires driver oversight. How does that end up limiting the liability and/or culpability of the various parties, including the other vehicle?

And more importantly: do the companies end up with a free ride for defects? What is their liability for fatal software errors? How badly will they get raked by ignorant juries in wrongful death suits? What will insurance companies decide about the liability issues?

This is going to take a few decades to work out.

First of all, building a single-purpose computer is much easier than building one which needs to support a near-limitless set of third-party applications and hardware. Consider how the computers currently in your car almost never fail, or the systems of an airplane, which have been running autopilots for a long time.

Second, in the event of failure, the car will have built-in fail-safes. By this I mean that a failure will trigger passive safety systems which leave your car in a safe state. So, for example, if the system has a failure it will bring itself to a stop rather than, say, accelerating into oncoming traffic. Sure, you’re upset that it shut down, but you’ll survive the experience and can either take over or wait for a tow.
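A minimal sketch of that kind of fail-safe logic (the states and transitions here are hypothetical, not any manufacturer’s actual design):

```python
from enum import Enum, auto

class Mode(Enum):
    AUTOPILOT = auto()   # normal automated operation
    MANUAL = auto()      # driver has taken over
    DEGRADED = auto()    # fault detected: slow down, warn, request handoff
    SAFE_STOP = auto()   # no handoff happened: bring the car to a stop

def next_mode(mode: Mode, fault: bool, driver_ready: bool) -> Mode:
    """Toy transition function: every failure path ends in a passive-safe
    state (MANUAL or SAFE_STOP), never in continued automated driving."""
    if driver_ready:
        return Mode.MANUAL          # a human takeover always wins
    if mode is Mode.AUTOPILOT:
        return Mode.DEGRADED if fault else Mode.AUTOPILOT
    if mode is Mode.DEGRADED:
        return Mode.SAFE_STOP       # no takeover: stop rather than press on
    return mode                     # MANUAL and SAFE_STOP are terminal here

# Example: a fault with an inattentive driver degrades, then stops.
mode = Mode.AUTOPILOT
for fault, ready in [(True, False), (False, False)]:
    mode = next_mode(mode, fault, ready)
    print(mode)   # Mode.DEGRADED, then Mode.SAFE_STOP
```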

Because:

“Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied.”

Yet:

http://autoweek.com/article/car-news/white-most-popular-car-color

Doesn’t this sound like a big deal to you: that Tesla vehicles can’t deal with 20% of the vehicles around them, because they are painted white?

“Can’t deal with” or “are slightly less likely to notice”? Because if these cars were slamming into every white vehicle they came across, I expect we’d have heard about it by now. And I’m pretty sure humans are less likely to see certain colors of car too. Which is what you’d expect, since colors wouldn’t exist if they didn’t have different visual properties.

I think everyone is taking the wrong message from this accident. The real problem is that semi-trailers in the US are not required to have side guards (underride guards).

The truck driver told the AP that the driver of the Tesla was driving while distracted.

http://abcnews.go.com/Technology/wireStory/driving-car-driver-died-crash-florida-40260566

This is an admittedly biased account, though, as the police reports don’t mention distraction on the Tesla driver’s part.

Brown posted a dash cam video to YouTube last month. It’s linked in this article, and Brown’s comments are quoted. He was quite impressed with his Tesla. Ironic that it killed him a month later.
http://electrek.co/2016/06/30/tesla-driver-dead-autopilot-crash-credited-system-for-saving-near-miss-caught-on-video/