I see a lot of talk in this thread about “places where driverless cars won’t work”. I thought the idea was to use the driverless feature only in areas (most likely freeways, at least at first) where the robotic car could operate safely and effectively, and then have the car slow to a stop with a human driver taking over for all other portions of the trip.
Why worry about mapping every last driveway and timber-road? That unpredictable stuff is what a human driver is for. Which is why I think the prototypes of auto-drive cars without steering wheels are just silly.
I’m glad you brought that up. Wouldn’t our train system be a good place to test out “driver-less driving”? Those things run on rails with limited traffic… and they still get in wrecks! Maybe we should try to have trains run without human operators first.
Well, if you need a driver, it removes a great deal of the benefit of driverless cars. Also, I don’t think anyone is seriously intending to make a car that would drive via GPS alone, you need to have the computer actively avoiding other cars, obstacles, things such as that.
Google isn’t the only company working on self-driving cars; they’re just the most public about what they’ve accomplished so far. The big automakers favor introducing smart driving components into the market piecemeal, and have already begun to do so. Self-parking and automated braking systems are the first step and are already in production. Up soon will be active steering that prevents a driver from steering into an obstacle, such as a car in a blind spot, or off the road at high speed should the driver not be paying attention. From there it’s not too big a leap to have a car take control during a skid and auto-correct so the driver never loses control (this is what would happen in the case speculated by the OP). Features that won’t be popular but might be mandated include cars that recognize speed limit signs to keep drivers from exceeding the limit, or that automatically stop for stop signs and red lights.
The advantage to this approach is that cars take more and more of an active role in preventing drivers from making mistakes, but still have the driver making all of the decisions on where to go. By the time fully self-driving cars come on the market, they will just be the next logical step from smart components we’ll all be used to, and with the vast majority of the other cars on the road having smart technology it won’t be such a madhouse as some of the posters above predict.
Frazzled, I don’t know if you group me in with the people projecting a madhouse, but I agree that if self-driving cars are going to happen smoothly, it will be a slow, progressive change*. The first cars that I can take a nap in while commuting will probably happen around 2030, just in time for my soon-to-be-elderly self.**
However, even if the transition is progressive and safe in how much control we give the car, I do predict that the AI will exhibit some new, novel problem at some point after it has become both the short-term pilot and the navigator of the vehicle. I honestly don’t think that autonomous cars would cause a massive accident, but I can imagine a traffic jam caused by an early version of the AI collectively failing to handle an unforeseen situation, or by emergent behavior arising from their aggregate bugs (I believe there will always be bugs as long as the software is written by humans).
But, when we progress from autonomous cars and their unintended emergent behavior to cars that are getting instructions or orders from other cars as to controlling the vehicle - I think all bets are off, and you can put my opinion in the “total madhouse” column. Either someone will introduce something that causes a problem, or someone will find a way to abuse it. If the network software is developed by multiple sources, the likelihood of problems increases.
*Fox became a hardcore porn channel so gradually I didn’t even notice.
**But then again, I’m hoping for a robot body to be available around this time, too. You know, with missile launchers and stuff. I’m an optimist. (And of course, I could be wrong and it could happen faster. 2022, maybe?)
5% of drivers needing to be in control of their car for 5% of their trip doesn’t diminish the value of the car very much at all.
I haven’t seen any proposals for cars controlling other cars. I have seen proposals for cars exchanging information, which will happen before driverless cars are in heavy use. Cars controlling others would be a very bad idea.
The biggest real problem I see is dealing with the idiots who set their car password to “password” and get hacked from China. Given the number of idiots who have had their internal webcams hacked, it will happen.
Which, in theory, ought to reduce a lot of speeding. I get in my car with my unfinished report, a book, breakfast, some knitting, whatever, and spend my commute/travel time doing something else or taking a nap. That 95% of the trip will be much less stressful, filled with alternative activities.
The inter-vehicle communication would not be a primary system, the cars would rely on sensors to verify that the other cars are doing what they say they are doing. And replacing large intersections with roundabouts would make all traffic flow better, even without automation.
No, the cars will all be equipped with biometrics, instead of a key, the seat will simply recognize your ass.
Good point. Maybe we can start the workday when you get in the car to go to work, and thus get to go home later. I’ll be way retired before it happens, though.
Definitely not to help drive, since the car had better be able to drive when no other cars are around. My understanding is that the cars will exchange intentions, so that two cars don’t try to change lanes into the same spot, and a car that has to brake rapidly will tell the cars behind it what it is doing. Like I said, we’ll see this long before driverless cars. I seem to recall a standards effort being announced around this.
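To make the “exchanging intentions” idea concrete, here’s a toy Python sketch. Every field name and threshold here is invented for illustration; I’m not claiming any real V2V protocol looks like this:

```python
from dataclasses import dataclass
import time

# Toy sketch of an "intention" broadcast between cars.
# All field names are invented; no real V2V standard is implied.
@dataclass
class IntentionMessage:
    car_id: str       # sender's identifier
    action: str       # e.g. "lane_change_left", "hard_brake"
    timestamp: float  # when the action begins
    lane: int         # target lane, for lane changes

def conflicts(a: IntentionMessage, b: IntentionMessage) -> bool:
    """Two lane-change intentions conflict if they target the same
    lane at nearly the same time."""
    return (a.action.startswith("lane_change")
            and b.action.startswith("lane_change")
            and a.lane == b.lane
            and abs(a.timestamp - b.timestamp) < 2.0)

mine = IntentionMessage("car_A", "lane_change_left", time.time(), lane=2)
theirs = IntentionMessage("car_B", "lane_change_right", time.time(), lane=2)
print(conflicts(mine, theirs))  # True: both want lane 2, so one must yield
```

The point is only that intentions let a conflict be detected *before* either car moves, rather than after the sensors see it happening.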
That would make lending your car interesting.
“Bill, moon this panel please. Then you can borrow the car.”
Having any part of the trip require a driver makes it very difficult to sit back and relax. If the car can’t drive everywhere a human can, I don’t really think you can reliably predict where it can’t drive.
Perhaps my car is hacked, maybe it is broken, or perhaps I’m a jerk who’s hacked up his own car control system. Through any of these routes it’s going to be possible that my car is sending incorrect information to other cars, telling them that I’m changing lanes, hitting the brakes or whatnot when the control system is actually doing no such thing. The other cars around me are either going to be expected to react to these inputs, or they’re going to have to depend exclusively on their sensors to tell them what’s happening. If it’s the latter, why have the signaling network at all?
“The only secure system is one that is unplugged, turned off, and in a locked room.” I obviously can’t prove this ancient statement, but it sure seems to be true more often than not.
95% of the trips are going to be the same every day. Even trips with segments that need a driver will be the same every day. Sure, cars that boldly go where no driverless car has gone before are going to need more attention. But that will be a tiny fraction of trips. And how to tell? Where the neon turns to wood, of course.
Because the network can look ahead to places where the sensors can’t look. With multiple sources of input, each car can use voting to figure out which input to believe - standard fault tolerant technique. And if the sensors and the signals disagree, the car would shut down signaling - that’s necessary in case of error as well as hacking.
There are well known techniques for dealing with this. For instance, the Byzantine Generals Problem seems very appropriate. I haven’t read any literature on fault tolerance for cars, but I’d bet that researchers in it are well aware of these methods.
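For the curious, majority voting is about as simple as fault tolerance gets. A toy sketch (purely illustrative; a real system would vote over numeric estimates with tolerances, not string labels):

```python
from collections import Counter

# Minimal sketch of majority voting over redundant inputs
# (own sensors vs. two network reports, say). Standard
# fault-tolerance technique: trust the value that a strict
# majority of independent sources agree on.
def majority_vote(readings):
    """Return the most common reading if it has a strict majority,
    else None (disagreement too severe to trust any source)."""
    value, count = Counter(readings).most_common(1)[0]
    return value if count > len(readings) / 2 else None

# Sensors say the car ahead is braking; one network report agrees,
# one (faulty or malicious) disagrees: the majority wins.
print(majority_vote(["braking", "braking", "cruising"]))  # braking
# With no majority, the car falls back to sensors alone:
print(majority_vote(["braking", "cruising"]))  # None
```

The `None` case is exactly the “sensors and signals disagree, shut down signaling” fallback described above.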
And the only car that is safe is the one parked in the driveway. And not even that one. When I was in grad school my car, which was parked off the street, got hit when I was 800 miles away.
The question is not whether the system will be foolproof, but whether it will be more foolproof than what we have now. That’s not all that hard to do.
In terms of 5% of the trip, I agree with scabpicker that it actually would diminish the value of the driverless car a lot. The specific percentage of time that a human needs to be in control is irrelevant, beyond whether it is 100%, 0%, or something in between.
0% is needed for applications such as driverless taxis, using cars for delivery of goods (i.e. no humans inside at all) and, of course, the killer app: private transportation while drunk.
Of course, we’re not going to arrive at that kind of car in one jump. But that doesn’t diminish the correctness of what scabpicker said.
On the 5% of drivers thing: sure, if that were the case it wouldn’t diminish the value of the driverless car for most people at all, so it wouldn’t really affect uptake.
I can’t think of a specific reason why this would happen though and/or how we could reliably identify what scenarios are in this 5%.
The one scenario given so far – that not all roads are mapped – is not very convincing because I think mapping the remaining 5% of roads would be a comparatively small, essentially one-off, cost that would be likely to happen in the early days of driverless car acceptance.
The Byzantine Generals problem really addresses consensus on a network; the problem I’m presenting is one of trust on a network. No other car besides mine knows when it’s going to hit the brakes, turn, or speed up, so there’s no consensus to be had. I don’t have any experience with ad-hoc trust, but I do have experience with some of our current trust models on the internet. SSL and SSH were theoretically very secure, and you could trust them until someone put out a buggy or predictable implementation. They’re not even close to the level of uncertainty you would have with an ad-hoc network of cars.
Also, in the practical solutions to the Byzantine Generals problem that I understand, the result is to discard the work of the nodes that fail the check (e.g. Bitcoin). I don’t think that’s a practical solution here. Even if fewer than 1/3 of the cars don’t coordinate, that could stop the action of the other 2/3 if that <1/3 is positioned to block the possible solutions.
I can’t find any info on ad-hoc network trust that isn’t a research paper with a cost to read, so I don’t know if I can hold down my end of an argument about this theoretical network. But I think that is what would be needed for a system of networked cars to be useful.
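To illustrate the authentication-versus-trust distinction with a toy HMAC sketch (the shared key is invented, and a real fleet would use per-car certificates, not one shared key): a verified signature only proves the sender holds a valid key. A hacked car signs its lies just as validly, which is the trust problem in a nutshell.

```python
import hashlib
import hmac

# Toy message authentication with a shared key (HMAC-SHA256).
# The key is invented for illustration only.
SHARED_KEY = b"hypothetical-fleet-key"

def sign(message: bytes) -> bytes:
    """Produce an authentication tag for a message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Check that the tag matches the message (constant-time compare)."""
    return hmac.compare_digest(sign(message), tag)

msg = b"car_A: hard_brake"
tag = sign(msg)
print(verify(msg, tag))                 # True: message is authentic...
print(verify(b"car_A: cruising", tag))  # False: ...and tampering is caught
```

But notice what `verify` cannot tell you: whether car_A is actually hard-braking. Authentic and true are different properties.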
Well, we would presume the system wouldn’t hit parked cars, at least.
I agree that the system does only have to be better than our current one for it to be eventually adopted through attrition of cars. I also agree that actual consumer vehicles will probably slowly evolve from assisting to eventually navigating. I don’t think that a car that can’t navigate more than 95% of my commute without my interaction is going to be called self-driving.* I also don’t think that a networked car that’s taking inputs on how to drive is necessary or desirable in a self-driving car.
*What’s it going to do when I refuse to put down the banjo and drive when we’re at the edge of its ability?
Now, one way around the mapping problem might be to make the self-driving car a self-mapping car. Once you’ve taken it over a route a certain number of times, it validates/maps the route, and eventually it knows it. So after a time, for certain routes you could get in, say “Home, James!”*, it would essentially respond “ok, I know that route” and head off.
If all the self-driving/mapping cars stored and validated this information at a central server, with reasonable decay times for the information (if no one’s been down that route in too long, consider it invalid), you’d have a set of data which would possibly allow the self-driving cars to do most people’s commutes, and eventually 95% of a city. It’s not 95% of where I could drive, but it would cover a lot of what one does drive.
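A toy sketch of the validate-and-decay idea (the traversal count and decay window are pulled out of thin air, same as my 95%):

```python
import time

# Invented thresholds for illustration only.
TRAVERSALS_NEEDED = 5        # human-driven trips before a route is "known"
DECAY_SECONDS = 90 * 86400   # ~90 days with no traffic invalidates it

class RouteMap:
    """Routes become trusted through repeated traversal and go
    stale if nobody drives them for a while."""

    def __init__(self):
        self.routes = {}  # route_id -> (traversal_count, last_driven)

    def record_traversal(self, route_id, now=None):
        now = now if now is not None else time.time()
        count, _ = self.routes.get(route_id, (0, now))
        self.routes[route_id] = (count + 1, now)

    def is_validated(self, route_id, now=None):
        now = now if now is not None else time.time()
        count, last = self.routes.get(route_id, (0, 0))
        return count >= TRAVERSALS_NEEDED and (now - last) < DECAY_SECONDS

m = RouteMap()
for _ in range(5):
    m.record_traversal("home_to_work")
print(m.is_validated("home_to_work"))   # True: "Home, James!" now works
print(m.is_validated("grandmas_house")) # False: never driven
```

A central server would just merge many cars’ `RouteMap`s, which is where the privacy problem below comes in.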
Now, the problems I would see with this scheme are privacy and price. I can imagine lots of places that these cars might be driven that the owners would want to eventually be self-driven to, that they wouldn’t want a public map made of. Lord knows that privacy management’s already a problem, and this would exacerbate it.
*Sorry, my dad would have loved to have been able to tell his car that. Couldn’t resist.
I think it is unlikely that a self-driving car would reach its own limits. If it can get you almost there, it most likely can get you there. Even unmapped gravel roads will not be beyond its ability, down to the point where the route cannot be readily discerned from the landscape (and car sensors will be better than you much of the time, especially at night).
Again, the network is an additional aid to navigation and avoidance, not a primary instructor. Kind of like how you want turn signals, and you want them to mean something, but when behavior is contrary to them, you cope.
As far as “hacking” goes, by the time self-driving cars are prevalent, the operation code will be well-vetted, to the point that it is hard-coded/hardwired; changing a car’s driving program will simply not be possible unless you can break into it and put in a bad ROM. Hacking might involve breaching TomTom or whatever and messing with the server’s routing information, but it will not result in cars plunging off downed bridges or driving into buildings, because self-driving cars will have to be smart enough to identify these dangers and avoid them. Traffic might get a bit bollixed up, but hell, it does anyway.
I’ve seen it applied to cases where some of the components are untrusted due to failures. However, all voting systems fail when too many components fail. I have a hard time seeing how hackers can transmit consistently incorrect messages as opposed to chaotic ones. Each car had better have a means of transferring control back to the driver if its self-test fails. I suspect early driverless cars will piss off their drivers by doing this a lot, but better to be safe.
I admit I pulled 95% out of my ass. Forest rangers who drive might never be able to use it. Any reasonable car would be able to handle 100% of my commute, and probably 100% of the commutes of people who work in my building. Hell, when I drive from the Bay Area to Disneyland any decent car would be able to handle 100% of that trip, unless you need to take over to pull into rest stops and stop for gas and lunch.
It is just like electric cars. The poor sods who have 60 mile commutes are not customers for them. Those who have ten mile commutes are. And there is plenty of a market in that demographic.
Drunks are a problem, but the areas where the cars won’t work are probably areas where a drunk driver can’t do too much damage. We’d still have collision avoidance systems working, even when there is a human driver.
Very true. Hell, I text my wife before I leave for home every night, and my phone has learned this so I can do it by tapping the messaging screen twice.
We have no privacy already. Deal with it. If we go over a toll bridge or drive on a pay express lane with our FasTrak, they know where we are. The Golden Gate Bridge no longer takes cash; every car crossing it has its license plate photographed. Cops cruising with license plate readers may pick up our location at various times. I suspect cars exchanging information will also exchange IDs.
And Progressive gives you a discount if you let them spy on you.
But our smartphones will know where we are at all times, so our cars are minor.
The transition to self-driving cars will be made through the development of co-piloting. Every year more of these features enter the high-end market and filter down to lower-cost vehicles. ABS, traction control, assisted steering, assisted braking, blind spot detection, navigation, heads-up displays, multiple video cameras and automatic parallel parking are progressing from options to being standard, just as happened with heaters, radios, A/C and automatic transmissions.
These are aids to driving that can be considered co-piloting. As the system becomes more integrated and the software hardens the human will become the co-pilot to the computer driver.
The self-driving car is not a vehicle that allows you to nap in the back seat as it takes you to the airport. It is a system that allows the car to function optimally in the urban and suburban traffic environment.
This was initially defined by the Intelligent Vehicle Highway System program in the mid-1990s.