I’m the target market for these cars – I moved to NYC from a suburb because I hate driving and never wanted to do it again. If driverless cars were available, I’d want to ride in one. But automation in aviation (including the human-computer interface it requires), while overall a lifesaver, is also responsible for some people’s deaths. I’d just like whatever company puts these things on the market to tell us about the risks as well as the benefits. Think that’ll happen?
Big companies have an incentive to avoid mentioning the risks (because it reduces sales) and to deny responsibility every time something bad happens (so they don’t have to pay).
With that said, would you feel any better if they said something like “the average driver in our products has a lower chance of dying overall, but there is a possibility of being killed by a malfunction when you would not have died had you been in control”?
That’s a realistic view of the situation. The robot car will be able to fail in ways that kill you that would not have harmed you had a human been driving, but will probably make fewer mistakes on average. Moreover, if you are far enough to the right on the bell curve of driver skill, you’d probably be safer driving yourself.
You’re right, of course. But I’d still feel safer if, instead of being told that you can’t make an omelette without breaking eggs, people using driverless cars were told about the odd problem and what (if anything) they can do when one arises.
When you’re in someone else’s car and they’re giving you a ride, would you prefer if the passenger seat had an emergency set of controls? How is this any different when the driver is a robot?
Airbus pilots have the option of “direct law,” no?
That assumes that they have gotten to the point of driving themselves with no one inside, like the Batmobile. That is going to take a bit longer to come to pass. We’ll need lots of experience with cars and backup drivers before we let them park by themselves. New York cars will fight over parking places, of course.
I lived a few blocks from the Long Island Expressway. No one flew there during rush hour. However, 55 is also plenty fast when you are driving on the sidewalk of Storrow Drive. (I’m a former resident of Cambridge.)
Uber without the annoying drivers.
How many babies will be conceived in the back seats of robo-cars?
I am absolutely amazed how quickly self-driving cars are evolving. In 2004, DARPA had a driving challenge. The farthest distance traveled on a 150-mile course was 7.5 miles. No other car made it that far.
The very next year, DARPA had a second driving challenge, and all but one vehicle made it past that 7.5-mile mark. Five vehicles finished. Even more interesting, the 2005 route traversed much narrower roads, and had more turns and switchbacks on the course.
In 2007, DARPA changed the challenge to an urban environment, where the vehicles had to obey all traffic laws and avoid other moving vehicles, partly without GPS. Almost all of the vehicles passed. In other words, in a mere three years, self-driving cars went from not one managing more than 7.5 miles on a 150-mile closed course to almost all of them following basic traffic laws in an urban setting.
I am truly amazed at the speed of development; I never would have predicted this. Both Google and Uber are developing self-driving cars for passenger pickup, and Apple is now also rumored to be in the race. Normally I would say 20 to 40 years, but it would not surprise me if by 2025 we had self-driving cars performing better than human drivers.
The biggest hurdle is legal: who is responsible, and who pays for the insurance? Even though it’s very, very likely that self-driving cars will be far safer than human drivers, accidents will happen, and when they do, someone has to take responsibility.
I don’t anticipate anyone will release a vehicle that can only survive freeway traffic. Too much potential liability, and besides, the driverless cars on the road today are already well past that particular challenge. Any car that would allow you to nap, meaning you cannot be expected to take control in any reasonable time should the need arise, will create absolute liability for the carmaker. They’re going to make sure they’ve got the whole package ready before they release it to the public for level 4 driving. Yes, the DOT has four (really five) levels of car automation:
No-Automation (Level 0): The driver is in complete and sole control of the primary vehicle controls – brake, steering, throttle, and motive power – at all times.
Function-specific Automation (Level 1): Automation at this level involves one or more specific control functions. Examples include electronic stability control or pre-charged brakes, where the vehicle automatically assists with braking to enable the driver to regain control of the vehicle or stop faster than possible by acting alone.
Combined Function Automation (Level 2): This level involves automation of at least two primary control functions designed to work in unison to relieve the driver of control of those functions. An example of combined functions enabling a Level 2 system is adaptive cruise control in combination with lane centering.
Limited Self-Driving Automation (Level 3): Vehicles at this level of automation enable the driver to cede full control of all safety-critical functions under certain traffic or environmental conditions and in those conditions to rely heavily on the vehicle to monitor for changes in those conditions requiring transition back to driver control. The driver is expected to be available for occasional control, but with sufficiently comfortable transition time. The Google car is an example of limited self-driving automation.
Full Self-Driving Automation (Level 4): The vehicle is designed to perform all safety-critical driving functions and monitor roadway conditions for an entire trip. Such a design anticipates that the driver will provide destination or navigation input, but is not expected to be available for control at any time during the trip. This includes both occupied and unoccupied vehicles.
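For what it’s worth, the key distinction in those five levels is simply whether a human must stay available to take over. A toy Python sketch of that ordering (the enum names and helper are mine, not DOT’s):

```python
from enum import IntEnum

class DotAutomationLevel(IntEnum):
    """The five DOT/NHTSA automation levels quoted above (names are my own)."""
    NO_AUTOMATION = 0         # driver controls everything, at all times
    FUNCTION_SPECIFIC = 1     # one or more assist functions (e.g. stability control)
    COMBINED_FUNCTION = 2     # two-plus functions in unison (e.g. ACC + lane centering)
    LIMITED_SELF_DRIVING = 3  # car drives itself in some conditions; driver on standby
    FULL_SELF_DRIVING = 4     # car handles the whole trip; no driver expected

def driver_must_stay_available(level: DotAutomationLevel) -> bool:
    """At levels 0-3 a human must be ready to take over; only level 4 frees them."""
    return level < DotAutomationLevel.FULL_SELF_DRIVING

print(driver_must_stay_available(DotAutomationLevel.LIMITED_SELF_DRIVING))  # True
print(driver_must_stay_available(DotAutomationLevel.FULL_SELF_DRIVING))     # False
```

That single boolean is where all the liability talk above lives: the nap-in-the-back-seat scenario only exists at level 4.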
That’ll work until Al Sharpton claims your car doesn’t pick up black passengers…
With multiple big companies all jumping into the auto-automation race, we have Yet Another pile of engineering problems to solve: Interoperability. These cars will eventually (and probably sooner rather than later) all be talking to each other, and to sensors in the roadway, and to Car Central. They will all need to be on the same wavelength.
There’s gonna hafta be Standards :eek:
What will spur adoption is twofold:
- Insurance rates for self-driving cars will be trivial compared to owner-driven ones. People will move to SDCs just to save money.
- Liability: at some point, when SDCs are 100 times better than human-operated cars, humans will be blamed for most, if not all, accidents involving a human driver. I can even imagine human drivers being sued by a victim just for overriding the autopilot. Think of it this way: how much worse a driver is someone who is drunk compared to someone who is sober? If a drunk driver gets into an accident with a sober driver, who is at fault? Some day the difference between computer and human will match the difference between sober and drunk.
I’ve said it before: in 30-40 years your grandchildren will be amazed that you dared to get into a car with human drivers on the road!
Absolutely, but there is no shortage of people out there ready, willing, and able to draw up such standards, well in advance of requiring them.
But…I guess different countries / regions might decide on different standards. Then you can eek :eek:
It would need to be licensed to the big boys like Toyota, GM, and Ford if it has any chance of being profitable and accessible to non-millionaires.
Writing a general-purpose operating system that works on hardware that hasn’t even been invented yet is a much different challenge than writing a single-purpose control program. Ever noticed how car companies brag about the number of computers in the vehicle? That’s because each one has a single function; writing an operating system that supports one function is infinitely easier. Driverless cars will be similar: each function supported by its own operating system, using APIs to communicate with a central control unit that has its own operating system, working something like the OSI model, where each unit only talks to one thing.
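A toy Python sketch of that layered idea, with each function as its own isolated unit exposing one narrow call to a central controller (all class and signal names here are made up for illustration):

```python
class Ecu:
    """One single-purpose control unit. It never talks to other ECUs,
    only answers the central controller through one narrow method."""
    def __init__(self, name):
        self.name = name
        self.reading = None  # whatever this unit's single function measures

    def report(self):
        return (self.name, self.reading)

class CentralController:
    """The only component that sees every unit, like the top of an
    OSI-style stack: each layer below speaks only to its neighbor."""
    def __init__(self):
        self.units = {}

    def attach(self, ecu):
        self.units[ecu.name] = ecu

    def poll(self):
        # The controller never reaches inside a unit; it only uses the API.
        return dict(u.report() for u in self.units.values())

bus = CentralController()
brakes, steering = Ecu("brakes"), Ecu("steering")
bus.attach(brakes)
bus.attach(steering)
brakes.reading = "ok"
steering.reading = "2.5 deg left"
print(bus.poll())  # {'brakes': 'ok', 'steering': '2.5 deg left'}
```

The point of the design is failure isolation: a bug in one single-function unit can’t corrupt another unit’s state, because the only shared surface is the narrow reporting API.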
We already have cars that park by themselves. Right now it’s only an option on the very top end of expensive cars, but they’re out there, on the road, and legal.
Well, to be honest, yes. However, usually it’s only tangentially related to safety. Other drivers drive too slow - and do it in the wrong lane, follow too close, brake too late, stop too close to cars at stop lights, let themselves get involved in dangerous packs of cars, and can’t follow a good apex through a corner to save their lives. These days, I only feel truly calm riding with my wife and my sister-in-law. I’d be rudely grabbing the controls with everyone else if they were available.
Now, when it comes to self-driving vehicles: If I got into a wreck due to a bug that could have been temporarily worked around with me manipulating about $1000 worth of steering wheel and pedals in an $80K or so car (rough number, it’s bound to be a fraction of the cost of the self-driving system for a good long time) vs. just sitting helplessly in horror while the carnage happens - I’d want to punch (ok, verbally lash for a week or so) the engineer that thought it was a good idea to remove them. This might change at some point in the future where people don’t learn how to drive in order to own a car, which would remove the workaround from the practical realm, but that’s not realistically available at any time in the foreseeable future.
The self-parking cars that I know of require the driver to select the space from a screen, and then the car moves itself into the space. I believe that Voyager is meaning a car that can park itself autonomously after dropping you at your destination, which is an entirely different machine.
And finally, thank you, drewder; I wasn’t aware that there were formal DOT levels of assistance. I think very nice level 2 cars are our immediate future, and without some way to conquer the need to pre-map the roads and maintain those maps reliably, level 3 will remain a niche car. Level 4, I think, will require strong AI, so that’s fantasy until we have a major breakthrough.
Don’t get me wrong, I would love to be able to play the banjo all the way to work, or romance my wife on the way to dinner and make love to her on the way back (hey, anything to get my mojo working on her). I just think that we have some very hard, high-level computer science, behavioral psychology, neuroscience, and cognitive science to do before it’s even a possibility, much less a practical package to put in a 4-seat car.
I am amazed at the amount of development that was achieved, but two things:

- I was initially amazed at how bad the initial DARPA challenge cars were. I had fooled with computer vision before the challenge was run, and I had an amazingly hard time trying to solve figure/ground relationships in real time with moving images. I had hoped the teams had better solutions, but it did not seem so.

- Even though the cars did much better in later challenges, they didn’t deal with driving conditions that came anywhere near real life, and there are still problems with any car that has to solve the figure/ground problem in real time.
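To make the figure/ground point concrete, here is the crudest possible approach, frame differencing between two grayscale frames (pure Python on made-up data; real systems are far more sophisticated). It also shows why the problem is hard: anything that changes between frames, including camera shake, shadows, and rain, gets classified as “figure”:

```python
def figure_ground(prev_frame, curr_frame, threshold=30):
    """Naive figure/ground split: a pixel is 'figure' (1) if its brightness
    changed a lot between frames, 'ground' (0) otherwise. Real scenes break
    this immediately: camera motion and lighting changes all read as 'figure'."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev_frame, curr_frame)]

# Two tiny 2x3 "frames": one pixel brightens sharply (an object moving in),
# another flickers slightly (sensor noise, correctly ignored).
prev = [[10, 10, 10],
        [10, 10, 10]]
curr = [[10, 200, 10],
        [10, 10, 12]]
print(figure_ground(prev, curr))  # [[0, 1, 0], [0, 0, 0]]
```

Doing even this trivial per-pixel pass at video rate over megapixel frames is a lot of arithmetic, which is why the post above floats the idea of baking the visual processing into specialized hardware.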
Now, a general-purpose (sans pre-scanned map) DOT Level 3 car could probably get away without really strong AI if we could solve the computer vision problem through something like an ASIP that did all of the visual processing in hardware, but we will have to come up with a pretty specific solution to that problem before we can encode it in a specialized instruction set. I can see us doing that before we have a general theory of cognition that we can build a practical strong AI around, but it will still require a lot of work.
The army of black-hat hackers would be scary.
The penalty for hacking a car would have to be a multiple death sentence — strangulation, revival, strangulation, revival, strangulation, revival, ending with the hacker’s head on a pike after he’s fed alive up to his neck through a slow wood chipper with dull blades.
Not without someone in the car. The cars will park themselves first thing, but that’s different from sending one off to find a parking space after dropping you off.