I don’t get what the legal problem is. If the design of the car causes avoidable deaths, someone (or everyone) in the manufacturing process will get held liable, and their insurance will pay a settlement or a judgment. That’s the way it works now, and I don’t see why self-driving machinery would work any differently.
If I hit someone with my car, I don’t get credit for all the people I missed.
The manufacturer of the driverless process, though, arguably gets “credit” by car companies buying its product.
The manufacturer already shares in the liability and theoretically any of the extant computer systems could cause an accident in very rare cases, just like an accelerator can “get stuck.” Of course they’ll be prepared, with insurance or at least some reserve fund, to deal with the lawsuit. And of course they will pass that cost on in the price of the car.
Meanwhile, the fact that you are driving a vehicle that is, statistically speaking, ten times less likely to get into an accident, and which can and will record any time you take over control (and thus take on personal liability), will result in your premium being a small fraction of what it otherwise would be, more than offsetting that passed-down cost.
The alternative model is that insurers are well motivated to have as many of these cars on the road as possible, and may be willing, as an industry, to adopt a guideline that the vehicle holds the liability, so long as their actuarial analysis shows they can still price the insurance product at a higher profit even at a lower cost to the customer. (Ten times fewer accidents overall is what matters to them, so long as they sell the insurance product for only 1/5 the price …)
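The arithmetic behind that parenthetical can be sketched out. All of the figures below are made up purely to illustrate the poster's point, not real actuarial data:

```python
# Illustrative only: every number here is invented to show why an insurer
# might happily charge 1/5 the premium on a car with 1/10 the accidents.

human_accident_rate = 0.05                    # expected claims per car-year
sdc_accident_rate = human_accident_rate / 10  # "ten times fewer accidents"

avg_claim_cost = 20_000  # average payout per claim, dollars

human_expected_payout = human_accident_rate * avg_claim_cost  # $1,000/yr
sdc_expected_payout = sdc_accident_rate * avg_claim_cost      # $100/yr

# Suppose the human-driver policy was priced at a 20% markup over payouts:
human_premium = human_expected_payout * 1.20  # $1,200/yr

# Selling the self-driving policy at 1/5 that price...
sdc_premium = human_premium / 5               # $240/yr

# ...still leaves a much fatter margin per policy:
human_margin = (human_premium - human_expected_payout) / human_premium
sdc_margin = (sdc_premium - sdc_expected_payout) / sdc_premium
print(f"human policy margin: {human_margin:.0%}")  # prints "human policy margin: 17%"
print(f"SDC policy margin:   {sdc_margin:.0%}")    # prints "SDC policy margin:   58%"
```

The customer pays 80% less, and the insurer still pockets a larger share of each premium, which is why the industry might push for this.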
Opponents may be underestimating the power of the developing V2V tools which will be available in our self-driving cars of the (near) future. Sending and receiving signals from other vehicles within a certain range will give your car’s computer reams of useful information that it can use to maintain ultrasafe transport. If your, or someone else’s, computer crashes, the electronic word will also quickly spread to those around you that a rogue is loose so that other cars can avoid it or even point fingers at it and say mean things.
The car makers should offer some type of insurance policy, at the time of vehicle purchase, tied to some entity in the state. People who purchase cars of the same make, or in the same state, will benefit each other by existing within the same pool. The safer vehicles will then have the lower rates, and the poor makes will improve or go away. Also, introduce legal protection and cap damages. There isn’t any doubt in my mind that there will be a ton of data-logging, so this should be included with any type of regulation.
As much concern as there is about autonomous vehicles, it’s really hard to make a case for human drivers (especially new ones, a problem computers won’t share), and yet insurance companies still make it work. I’m sure something can be figured out.
The tricky part about any type of risk, though, is that we have to divorce ourselves from the subjective points and break it down to the ugly game of objective numbers and percentages. If the percentage of crashes falls far below the national average, or we unclog many of our major highways through more efficient traffic patterns, it actually makes sense to let computers do the routine driving. However, if you or a family member happen to fall within that percentage of computer-caused crashes, a shit-storm will undoubtedly present itself; there’s no real ethical way to rationalize that, which is exactly what would make it a court fiasco without protections.
That said, it’s also going to be a very very different driving experience. Even using something like adaptive cruise control is very weird and even unsettling, at first. You constantly want to nanny the thing. It will take a long period of adjustment, for people to adopt any fully automated tech.
[QUOTE=JRDelirious]
So, we won’t use MS, or even Apple. Rather, something more resembling what’s on a late-model Boeing or Airbus, which have not shown much propensity for doing unannounced Immelmann turns when you just want to enter the holding pattern.
[/QUOTE]
If you want to research this on your own, look up RTOS or Real-Time Operating Systems. These are designed for mission-critical uses like controlling power plants and vehicles where pausing for two minutes while the thing reboots is unacceptable.
As an example, much of the code that runs current Airbus planes like the A350, and probably the A380, is written in C and runs on a dedicated RTOS such as PikeOS. These OSes generally offer a POSIX-style programming interface rather than being a full *nix distro; QNX is another well-known RTOS of this kind.
Like cars, airplanes have multiple computers, so a failure of the galley power management won’t cause a loss of cabin pressure, for example. They may not be able to make coffee, but your eardrums will be intact.
These individual computer or controller systems are sourced from multiple vendors, so each may be designed on a different OS and in a different programming language, depending on its purpose and what the maker prefers. They just need to be able to communicate with each other. In a car, that would be the CAN bus. (Officially, it’s the Controller Area Network, occasionally misinterpreted as “Car Area Network.”)
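To make the “they just need to communicate” part concrete, here is a minimal sketch of a classic CAN frame, packed the way Linux’s SocketCAN represents it (`struct can_frame`: a 32-bit ID, a length byte, padding, and up to 8 data bytes). The wheel-speed controller and the ID 0x0F6 are invented for illustration:

```python
import struct

# Classic CAN frame as laid out by Linux SocketCAN's `struct can_frame`:
# 32-bit ID (only 11 or 29 bits are meaningful), 1-byte data length code,
# 3 padding bytes, then an 8-byte data field. 16 bytes on the wire struct.
CAN_FRAME = struct.Struct("<IB3x8s")

def pack_frame(can_id: int, data: bytes) -> bytes:
    """Pack an ID and up to 8 payload bytes into the 16-byte struct."""
    if len(data) > 8:
        raise ValueError("classic CAN carries at most 8 data bytes")
    return CAN_FRAME.pack(can_id, len(data), data.ljust(8, b"\x00"))

def unpack_frame(raw: bytes) -> tuple[int, bytes]:
    """Inverse of pack_frame: recover the ID and the payload."""
    can_id, dlc, data = CAN_FRAME.unpack(raw)
    return can_id, data[:dlc]

# Hypothetical example: a wheel-speed controller broadcasting on ID 0x0F6.
raw = pack_frame(0x0F6, bytes([0x12, 0x34]))
assert unpack_frame(raw) == (0x0F6, bytes([0x12, 0x34]))
```

The point of the shared bus format is exactly what the post describes: each vendor’s box can run whatever OS and language it likes internally, as long as it speaks this common frame layout on the wire.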
I’m not sure where you got this understanding. People are working on self driving cars right now. I’ve never heard fear of lawsuits as the reason the technology isn’t ready yet. And, just so you know, fear of lawsuits does promote safety in many cases. It’s not a bug, it’s a feature.
I bet they work out the legal problems with self-driving cars well before the spring of 2023, when the Firebug will reach the age when he can get his learner’s permit.
I think the chances of their working out the technical problems by then are pretty damned good too.
The obvious solution is to simply pass a law protecting designers of the car code (and all other potential defendants) from negligence claims so long as the aggregate crash totals for the car remain below some fraction of the wider aggregate. As more SDCs are adopted, the law will be amended to change that number, so that eventually code that is more negligent than the alternative code out there can indeed lead to liability.
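That proposed safe-harbor rule is simple enough to write down. Everything here, including the 0.5 threshold and the function name, is invented to illustrate the shape of the rule, not any actual statute:

```python
# Hypothetical sketch of the proposed safe harbor: a maker's code is shielded
# from negligence claims only while its crash rate stays below some legislated
# fraction of the fleet-wide rate. The threshold (0.5) is made up; a legislature
# would tighten it over the years as more SDCs join the fleet.

def has_safe_harbor(maker_crashes: int, maker_miles: float,
                    fleet_crashes: int, fleet_miles: float,
                    max_fraction: float = 0.5) -> bool:
    maker_rate = maker_crashes / maker_miles    # crashes per mile for this code
    fleet_rate = fleet_crashes / fleet_miles    # crashes per mile, all vehicles
    return maker_rate <= max_fraction * fleet_rate

# A maker with 2 crashes per 10M miles, against a fleet averaging 10 per
# 10M miles, stays well inside a 50%-of-fleet threshold:
assert has_safe_harbor(2, 10e6, 10, 10e6)
# ...but loses protection if its rate creeps up to 6 per 10M miles:
assert not has_safe_harbor(6, 10e6, 10, 10e6)
```

Amending the law then just means lowering `max_fraction`, which is the “change that number” step the post describes.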
Negligence laws aren’t some mystical power that cannot be changed. They are just common law, and are changed by statutes all the time.
To go along with this, you’ll eventually have a situation where people who drive themselves pay more in insurance than those with cars with autopilot.
Then you’ll have about 5-10 years’ worth of news stories and debates about how the insurance industry discriminates against the poor, because poor people can only afford older cars without autopilot, and therefore their insurance premiums are higher. You’ll also have debates about whether school buses should be equipped with autopilot, etc.
Then what’s really going to happen is a shitload of people will be out of a job: truckers, drivers, cabbies, more. Why pay a cabbie when the car can drive itself back to the airport for another fare?
Then the number of vehicle sales will drop a bit as families realize that, now, they can actually have fewer cars. For example, I go to work at 6:30am. The car takes me to work, drops me off, goes back to get my wife and daughter and takes them to school. Bringing my wife back, she has the car to use all day while I’m in the office. She might go and pick Sophia up… or she just might send the car, empty, to do so. Later, around 6pm or so, I’m ready to be picked up. Voila! 5 days a week and we only need one car. (Currently, we need two cars so one of them can just sit in the office parking lot all day.)
The unemployment issues alone will dwarf the legal question of product liability. What does it matter to Yellow Cab if their insurance premiums go up by 20% if their payroll goes down by 50%? They’ll take that deal any day of the week and twice on Sunday.
Funny, but that humor relies upon the driver/owner being totally ignorant of what a car is or how to drive it. Not a good analogy to a highly experienced computer user confronted with a new model.
An analogy might be an experienced driver with 200K miles under his belt getting into his new car and finding no steering wheel, ignition switch, or pedals, then discovering on page 340 of the manual, in fine print, that the car doesn’t use any of these, and that he needs to buy a newer model of his iPhone to control it. Meanwhile, he is late for an appointment, but his phone doesn’t work inside the car and he has to step out to make a call.
Yep. My first thought, and what I was going to say, but Shakes has stated it very well.
If liability suits from the TotallyRocks are rare (because they’re ten times safer than human drivers), but are huge when they happen, it will automatically be reflected in insurance premiums.
I predict that soon after self-driving cars are commercially available, it will be cheaper to rent one than to rent a car you have to operate yourself; so much so that most rental companies will have only fleets of cars that drive themselves within 3 years of widespread availability.
It’s not an analogy to an experienced computer user with a new model. It’s an analogy to not-so-experienced users who refuse to read the manual, who think computers should be magically intuitive, and get mad if you suggest they might need to learn to use them.
I think if we get to the point where cars are completely autonomous, and are navigating the roads with no people in them at all, there’s going to be no reason for people to own cars. Cars would be like omnipresent public buses, used like taxis. Need to go to the grocery store? Get on your smartphone and order a car; it drives itself to where you are, you get in, go where you need, and pay the fare. There would be little reason to buy a car that sits unused 95% of the time you own it. What a waste of money.
On the plus side, there would be zero issues with parking, since it simply wouldn’t be needed.
ETA: plus, all the people who had been driving taxicabs or buses would eventually be employed in more productive work than moving things from point A to B.
In the past, congress has passed legislation to protect specific industries from lawsuits. I see no reason why they couldn’t do so again in this case. See the “Protection of Lawful Commerce in Arms Act” and the more limited “General Aviation Revitalization Act” for examples.
I think that joke sheet might actually have been more realistic years ago when it was new.
Time was, before “personal computers”, that computers (what we now call “mainframes”) really were big and complex, like any other piece of big complex machinery, and it took actual training and technical expertise to use them (either as programmers or computer operators). To the hoi polloi, they were big mysterious machines, full of blinkenlights, that existed at big banks, universities, and government places. If you knew how to use computers, other people were in awe of you. (Oh, for the good old days.)
When personal computers first came out, they were fairly simple (the Apple II).
I think Microsoft made a big leap in making computers really for the masses, but as their systems got more complicated – and especially when Winders came out – they really put their big emphasis on making it a mass-market commodity for the illiterate (computer-illiterate) masses, and they really tried – not as successfully as one would wish – to keep it all dumbed down. This has been one of the sources for so much discontent, instead of satisfaction, with all things Microsoft.
In a world in which one product (the autonomous vehicle) is clearly safer than the alternative (as this hypothetical sets up), a manufacturer has some exposure merely by selling the inferior product, and more exposure than comes from an accident involving a superior one.
Odds are that once such a superior safety profile is established it will, like seatbelts and airbags, become both standard and required by law.
Yes, true autonomous vehicles make car sharing much more attractive. Just schedule the car that meets your needs of the moment to be there to pick you up when needed or call for one on an app. Cheaper if you are willing to carpool …
It’s actually more realistic now. In the 80s, most people understood that computers are a powerful tool that requires some learning. The “militantly ignorant” were a small minority. Today, the myth of “intuitive computers”, because of the attempts to dumb things down, is pushed so hard that people think if there’s not a big friendly button in the middle of the screen it’s too technical.
If I had a dollar for every “digital native” student who came in telling me “My computer doesn’t have Google. Can you put it on for me?” or who had no idea where they saved a document, I could retire early.
And it’s not just computers. I see this attitude with cameras, camcorders, musical instruments, even cooking tools. The attitude seems to be “if buying a tool doesn’t make me an instant expert, there’s something wrong with the tool.”