For what it’s worth, blaming the truck may be appropriate in the accident location, but in Michigan, the Tesla had a duty to yield. So, yeah, Tesla needs to work on that.
Yes, autopilot only needs to work properly when everyone else is driving safely.
If you just smash into the side of the semi you’ll be A-OK.
From the linked article: "The company emphasized the unusual nature of the crash and said it was the first fatality in more than 130 million miles of use."
And in other news, 99.9% of today’s commercial flights landed safely. Do not pay attention to the near misses.
"A host of subsequent videos posted by Tesla drivers on YouTube showed near-misses on the road with Autopilot, prompting Musk to say he might curb the function to minimize the possibility of people doing “crazy things.”
Maybe it should not be called “Autopilot” (suggesting it can be used as such), and instead marketed as “Helpful Driver Assist Thing”.
"Tesla said “the high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S.”
Not sure how “extremely rare” such a collision is. Gotta love the weird obfuscating press release language though.
Wait, this happened almost 2 months ago?! Why are we just hearing about this now?
So one person gets killed, and you advocate banning the cars outright instead of investigating them?
Great plan.
Meanwhile, a person is killed every 25 seconds in a traffic accident.
Self-driving cars are unlikely to ever become 100% crash proof, but the bar for “safer than a human driver” is pretty low.
A team of well-trained capuchin monkeys could probably exceed that standard. People are such incredibly shitty and inattentive drivers that it is surprising there aren’t more accidents. Autonomous driving systems will certainly improve on human drivers once they reach a level of technical maturity and reliability, just by virtue of being able to handle a higher workload, process a wide array of multisensory data in all directions around the vehicle, and avoid the distraction, inattentiveness, and reduced cognitive and sensory capacity that come with fatigue, illness, or intoxication.
But again, a system labeled “Autopilot” should, in fact, be capable of driving the car in normal circumstances and alerting the driver or safing out when it encounters a potentially hazardous situation. A system which takes over most of the driving functions but still requires the driver to be attentive is an inherent contradiction. If a system exists merely to assist the driver (e.g. warning of hazards during lane changes and erratic movement of other vehicles, avoiding road hazards, providing enhanced traction control and recovery, et cetera) it needs to be integrated into the driver’s workflow rather than just taking over or expecting the driver to be aware of when circumstances exceed the system’s capability for response. This is where objective standards and test verification methods come in, to verify that the system provides the necessary minimum level of safety, reliability, and human-machine interaction workload acceptability, just as we do for other safety-related systems on automobiles, and as is done for commercial and military aircraft.
Stranger
No, I advocate regulation of the process. There’s no need for an investigation in this case: we know the cause of this accident; no one was driving the car. It’s absurd that these cars have been put on the road. They haven’t been tested under any sort of real world conditions, and we can’t trust the manufacturers to do that. The one thing that will stop the development of self-driving cars in the near future is applying the standards used for roll-outs of common and rather harmless technology to something as dangerous as a car.
It would be interesting to see how the self-driving Tesla would fare in a country that has a significantly different road fatality rate than the USA.
Here are the fatality rates for the worst countries, in deaths per billion km:
Brazil 55.9
South Korea 18.2
Czech Republic 13.9
Malaysia 12.6
Then comes a cluster of seven countries with rates in the 6-8 range, including the USA.
It would also be interesting to see how Tesla would fare in Guinea-Conakry, where in any given year, 10% of all vehicles are involved in at least one fatal accident. (Staggering, isn’t it?) Or in Libya, where a person is seven times more likely to be killed in a traffic accident than in the USA.
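For a rough sense of scale, here is a naive back-of-the-envelope conversion of Tesla’s own “one fatality in 130 million miles” claim into the deaths-per-billion-km units above. Treat it as a single data point sketched in a few lines of Python, and keep in mind that Autopilot miles are mostly easy highway miles:

```
# Naive conversion of "1 fatality in 130 million Autopilot miles" into
# deaths per billion km, the units used in the list above.
KM_PER_MILE = 1.609344

autopilot_km = 130e6 * KM_PER_MILE      # ~209 million km
rate = 1 / (autopilot_km / 1e9)         # deaths per billion km

print(round(rate, 1))                   # ~4.8, vs. roughly 6-8 for the USA overall
```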
Yes, I saw that Tesla was very quick to point this out, but of course autopilot can essentially only be used on highways with good visibility and clear lane markers. In other words, the safest of conditions. Every city has a list of the “10 most dangerous intersections”, and in 100% of cases where a car goes through those intersections a human is in control, and in 0% of cases Tesla’s autopilot is. It’s just not a fair comparison.
That isn’t to say that Tesla’s wrong about autopilot being safer, or that they won’t get there eventually (which seems inevitable), but I think they need to compare like for like.
In a place like Hanoi or Colombo, a Tesla would simply never pull away from the curb. There is always perpetual oncoming traffic everywhere, mostly motorbikes, and the only way to get anywhere is to just creep out into the traffic, with the expectation that other vehicles see you coming, anticipate your moves, and avoid you.
You want the process (presumably of developing and deploying self-driving cars) regulated but you don’t want an investigation when there is a failure? And I’d like to see a cite that they haven’t been tested.

It’s also worth noting that these aren’t like the Google completely autonomous vehicles. The “autopilot” in a Tesla is an assistive technology to a real driver: a way to keep you from having to manually stay in your lane and at appropriate distances from the cars about you. It is not, and is not meant to be, the kind of thing that you can take your eyes off the road while using.
I wanted to repost this, because I think many people are misunderstanding the technology used in this instance. The Tesla is NOT a self-driving car, like Google is experimenting with. The technology used in this Tesla is a midpoint compromise, and the driver is supposed to pay full attention and even keep a hand on the wheel at all times. The owner’s manual apparently has these warnings in it:
One of the warnings reads “Do not depend on Traffic-Aware Cruise Control to adequately and appropriately slow down Model S. Always watch the road in front of you and stay prepared to brake at all times. Traffic-Aware Cruise Control does not eliminate the need to apply the brakes as needed, even at slow speeds.”
Another warning says, “Traffic-Aware Cruise Control cannot detect all objects and may not detect a stationary vehicle or other object in the lane of travel. There may be situations in which Traffic-Aware Cruise Control does not detect a vehicle, bicycle, or pedestrian. Depending on Traffic-Aware Cruise Control to avoid a collision can result in serious injury or death.”
Car&Driver tested several “semi-autonomous” cars, and even though the Tesla came out on top, the magazine reported that the Tesla left its lane 29 times in a 50-mile drive. That’s half as many times as then next best vehicle, but still a lot. This technology is a toy, IMO. Useful, perhaps, but not something to be relied upon with your life.
Self-driving cars also seem to have a seriously hard time figuring out where lanes are when it rains hard. Most people do too, but we use other cues, and we still make mistakes.
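To show why, here is a toy version of the classic camera-based lane-finding approach (edge detection plus a Hough transform) run on a synthetic frame. It is purely illustrative, not any particular car’s actual perception stack, but it makes the point: everything downstream depends on the painted markings having enough contrast, which is exactly what heavy rain, glare, and worn paint take away.

```
# Toy lane-finding sketch: synthetic road image -> Canny edges -> Hough lines.
# If the markings lose contrast (rain, glare, worn paint), the edge step
# finds nothing and no lane candidates come out the other end.
import numpy as np
import cv2

# Synthetic "road" frame: dark asphalt with two bright lane markings.
frame = np.full((200, 300), 40, dtype=np.uint8)
cv2.line(frame, (60, 200), (120, 0), 255, 4)    # left marking
cv2.line(frame, (240, 200), (180, 0), 255, 4)   # right marking

edges = cv2.Canny(frame, 50, 150)               # contrast-dependent edge map
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=10)

print(0 if lines is None else len(lines), "candidate lane segments found")
```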

Meanwhile, a person is killed every 25 seconds in a traffic accident.
Self-driving cars are unlikely to ever become 100% crash proof, but the bar for “safer than a human driver” is pretty low.
Until the Singularity decides that careful management of automobile accidents is the most efficient means of Human population control that it can use without revealing its existence.
*“Bob, there’s some strange patterns in these traffic fatalities. This month in Boston, 87 people over the age of 70 died in accidents, some of which look extremely unusual. During the same month in Cleveland, 14 people died at a single intersection outside a maternity hospital.”
“Oh come on, Jim. You’re just looking for patterns in random events. Those cars are all different makes, so it can’t be an issue with the software!”*
my laptop suddenly refuses … Why would a car computer be immune to such things?

Because you’re not running any programs of unknown quality written by anyone other than the maker of the OS.
Tesla can do over-the-air updates
Over the air updates means over-the-air access…which means HACKERS.
So far, there are no viruses for Tesla, just like there aren’t many viruses for Macs…because there aren’t many users.
But when eventually there are a couple million robo-cars on the road, there will be thousands of problems. New, previously un-programmed issues* will arise, and require updates almost every week.
That’s fertile ground for viruses, hackers and criminals.
What will you do when your car suddenly stops on the highway and displays a message like this?
"Pay $1000 in bitcoins to this Russian email address, and we’ll send you the password to let you re-start your engine"
*issues such as this incident with the white truck, or, say, variations in state laws, as mentioned in ibalthisar’s post above

You want the process (presumably of developing and deploying self-driving cars) regulated but you don’t want an investigation when there is a failure? And I’d like to see a cite that they haven’t been tested.
Investigation of what? A car with no driver killed someone. What significant details are you missing?
And I’d like to see a cite that I said they haven’t been tested.
Well, you did say, “They haven’t been tested under any sort of real world conditions”. What did you mean by that?

Over the air updates means over-the-air access…which means HACKERS.
So far, there are no viruses for Tesla, just like there aren’t many viruses for Macs…because there aren’t many users.
But when eventually there are a couple million robo-cars on the road, there will be thousands of problems. New, previously un-programmed issues* will arise, and require updates almost every week.
That’s fertile ground for viruses, hackers and criminals.
What will you do when your car suddenly stops on the highway and displays a message like this?
"Pay $1000 in bitcoins to this Russian email address, and we’ll send you the password to let you re-start your engine"
*issues such as this incident with the white truck, or, say, variations in state laws, as mentioned in ibalthisar’s post above
From what I understand, the car isn’t on the internet, it’s not like anyone could access it. It would be a lot harder to hack than the average computer. If I’m driving around in a Tesla, I don’t think it’s very likely I could drive by someone and get hacked by them. If I’ve parked the Tesla somewhere, someone could hook something up and load a virus onto the car’s computer and mess with it, but they could also just cut the brakes or otherwise mess with the engine, so unless I’ve angered the mob or something I’m not hugely worried about any of that.

Investigation of what? A car with no driver killed someone. What significant details are you missing?
And I’d like to see a cite that I said they haven’t been tested.
I’d assume an investigation of why the car didn’t stop when the truck pulled across. Was the sensor not working? Was it working but not pointed at the right area? Do there need to be more sensors, or do the existing sensors need wider fields of view? Why didn’t the radar warn the driver and the computer? Is this something that could happen again, or is it a unique freak accident that would only happen in this specific set of circumstances? The investigation could show whether there are some minor tweaks that need to be made, or whether the whole system is bad and all the drivers before have just been lucky and more attentive.
As for what to investigate, how about whether the accident was the fault of the autopilot. It’s possible that the truck turned so quickly in front of the car that a human driver would have been similarly unable to stop in time. And if the car’s sensors didn’t detect the truck, why is that? How do the sensors need to be modified to be able to sense a truck under similar circumstances? What was the driver doing that he was unable to stop in time? (Admittedly, we’re probably never going to know the answer to that.)

Over the air updates means over-the-air access…which means HACKERS.
OTA updates are “easy” compared to most other security problems, and almost everyone gets them right. You very rarely see problems with hackers somehow spoofing updates such that devices think they’re getting the real thing. It’s difficult enough even with physical access to the machine.
The vast majority of security holes come from running general-purpose software on computers. It is very difficult to close off all possible avenues for attack. But the Tesla doesn’t load arbitrary general-purpose software the way a desktop PC or phone does, so it’s not prone to these attacks.
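To give a sense of what “getting OTA updates right” usually means, here is a generic sketch of a signed-update check, the standard approach: the car only installs firmware whose signature verifies against a public key it already trusts, so a spoofed image is simply rejected. This is not Tesla’s actual code, and the key handling is purely illustrative.

```
# Generic sketch of signed OTA updates (illustrative, not Tesla's mechanism).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

vendor_key = Ed25519PrivateKey.generate()     # vendor side: private signing key
firmware = b"...new firmware image..."
signature = vendor_key.sign(firmware)

public_key = vendor_key.public_key()          # vehicle side: baked-in public key

def is_update_authentic(image: bytes, sig: bytes) -> bool:
    """Install the update only if the vendor's signature checks out."""
    try:
        public_key.verify(sig, image)
        return True
    except InvalidSignature:
        return False

print(is_update_authentic(firmware, signature))           # True
print(is_update_authentic(b"tampered image", signature))  # False
```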

From what I understand, the car isn’t on the internet, it’s not like anyone could access it.
It actually has an always-on 4G internet connection, so it’s not impossible. But as I said, almost everyone gets this right. It’s the fact that they don’t run arbitrary programs that keeps it safe.
Tesla runs hacking challenges and the general view is that they have very good security practices. No one has come close to any kind of wireless hacking. The only security holes found so far have required substantial physical access, like wiring into the Ethernet system. As you say, with physical access they can also just cut the brake lines.