It seems very dangerous to expect people to sit attentively behind the steering wheel of a driver-less car. People quickly get bored and stop concentrating, and they will find other things to do: read, sleep, watch videos on their phones, fill out paperwork, almost anything except stare at the monotonous road.
You can’t be a little bit pregnant and you can’t put driver-less cars on the road expecting very attentive people to watch the road. The commitment to using a driver-less car needs to be 100%. Either we trust them or we don’t.
These cars won’t have perfect driving records. Accidents will happen, but it seems likely driver-less cars’ accident rates will be much, much lower than human drivers.
Passengers are quite capable of paying attention to what’s happening now. There’s no way we will move straight to driverless cars; self-driving cars will come first. They are only an extension of the cruise control and lane monitoring that already happen now.
Airliners should have highly trained pilots because they are responsible for a lot of people who are paying money to be there. Also because flying is more complicated than driving and necessarily requires more training.
Private pilots are not highly trained and don’t get paid, yet they still have access to highly sophisticated automation if they want and can afford it. We still allow them, require them in fact, to be able to fly manually.
Your average car driver IS an airline pilot, train driver, plumber, doctor, etc. We are all just people. There’s nothing special about an airline pilot other than wanting to fly aeroplanes and having a couple of lucky breaks. In my experience your average airline pilot is of average intelligence and can fly an aeroplane as well as an average person can drive a car.
I’m not necessarily talking about life or death emergencies, just situations the car can’t cope with. Say it gets on to a road that, for whatever reason, doesn’t meet the required parameters and the car stops and says “all yours mate”. You then keep driving. Or let’s say the regulations require two sets of sensors for self-driving mode. One set fails and it can still physically drive itself but legally isn’t allowed to. It pulls over and says “all yours mate”. You then keep driving. Other than that, while all is normal you do what you like. I would not be heads down watching TV but that’s just me.
And again, a driverless car is one that just sits there. What you mean when you say “driverless car” is a self-driving car. Either it can drive itself, or it can’t, and allowing a passenger to take over from the driver is a recipe for disaster.
I know it’s not at the same level of complexity, but my new car will know when I cross the Channel to France later this month and will adjust the headlamps for driving on the right and the speedometer to show kph.
The fully automated taxi in the city will have no need for any input from a “driver”. If it encounters a problem it will stop and call for help. Another taxi will take you to your destination while a (manually driven?) recovery vehicle will take the faulty car off to be fixed.
Out of town, maybe there can be a kit somewhere that can be used to take over manual control. Even cars with keyless entry have an actual key in the fob to use if the remote fails. Naturally, the car would be limited to, say, 30 mph under manual control and the driver would require a licence. Failing that, it would just call for help as we already do if our car stops working. If the car is on a lease, it would simply be fixed or replaced. If you own it then it might be more trouble.
You are probably right about this. Situations like this account for approximately 0% of my driving over the last five years. I’ll accept this limitation in self-driving cars for all the benefits they offer. Can you give me a ride to the festival?
[/QUOTE]
I think chappachula’s scenario applies to all sorts of parking lots and other such places, not just farmers’ fields. Parking lots are not public roads, so are not generally mapped, and often have non-standard signage or layouts. Humans can cope with this with their common sense, and their ability to understand any kind of reasonable sign. For example, you collect a car from an unfamiliar parking lot. How do you get out? There may be signs saying “Exit =>”. Or they may say something like “Union St. first left.” Or you may see no signs but notice some cars that seem to be leaving, so you follow in their general direction. All things that come easily to human intelligence. Can AI do that sort of general-knowledge, improvisational reasoning yet, or in the foreseeable future?
If we have to construct some kind of standardised parking lots for relatively dumb self-driving cars, then we are getting into the area of modifying our infrastructure to suit them, and that gets very expensive. Maybe parking lot operators would have an incentive to have their own lots mapped and submit the data to the autonomous car industry?
How do I give instructions to an automated taxi if I don’t know the address? A common enough scenario for me is to say, “I don’t know the address but it is at the north end of the airport, take me there and I can direct you.”
Do we call for help? If my satnav doesn’t work I just keep driving. If I get a flat tyre I get out, fix it, then keep driving. If the cruise control stops working, I keep driving. If the park assist, rear view camera, lane sensors, etc stop working, I just keep driving. The only thing that requires me to call for help is a busted engine. An unmanned self-driving car will need to call for help if any part of its self-drive system breaks down.
The unsafe drivers that we want to design out of cars, do we trust these same people to keep their unmanned cars properly maintained?
I thought I was responding to claims that automated cars need manual override for coping with emergency situations. This is obviously a bad idea because a person who hasn’t been paying attention to the road will not make a good decision.
For situations the car can’t cope with, I think it’s reasonable to allow very explicit instructions from the driver - e.g. display a map or video image on the screen, and allow the person to specify exactly where the car should go.
But if the car is going to just say “it’s all yours!” and switch to manual control, then such a “self-driving” car is only useful for people who have driver’s licenses and are capable of driving. It’s not the disruptive/liberating technology it can be. Reminds me of the Locomotive Acts that required a person with a red flag to walk ahead of every mechanized vehicle.
A reasonable solution. You could even go the Airbus route and provide a control stick with which you can tell the car where you want it to go directly (left, right, forward, reverse, faster, slower) but it’s all input to a computer that decides whether it’s a good idea or not. Try to drive it over a puppy and it won’t let you. Take it across a paddock and it will do it but only slowly.
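The Airbus analogy above can be sketched in a few lines: the stick is only a request, and the computer decides what actually happens. This is purely illustrative; the function name, surfaces, and speed limits here are invented, not any real car’s logic.

```python
# Hypothetical sketch of "Airbus-style" input filtering: the driver's
# command is a request, and the car's computer caps or refuses it.
# All names and limits are made up for illustration.

def filter_command(requested_speed_mph, obstacle_ahead, surface):
    """Return the speed the car will actually allow."""
    if obstacle_ahead:
        return 0.0                              # never drive over the puppy
    if surface == "paddock":
        return min(requested_speed_mph, 5.0)    # off-road: crawl only
    if surface == "road":
        return min(requested_speed_mph, 30.0)   # manual-mode speed cap
    return 0.0                                  # unknown surface: refuse

print(filter_command(60, obstacle_ahead=False, surface="road"))     # 30.0
print(filter_command(20, obstacle_ahead=True, surface="road"))      # 0.0
print(filter_command(25, obstacle_ahead=False, surface="paddock"))  # 5.0
```

The point of the design is that no input, however ham-fisted, can make the car do something its software considers unsafe.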
Well yes, that’s where I see the technology plateauing, with true unmanned cars being too far off for us to be able to predict if or how it would work. Google obviously feel differently.
Fair point on using point and click on the satnav map. What if I don’t know where the hotel is, I know the name of it, but the database has the incorrect address? A taxi driver will usually know the correct address.
Which is impractical for someone who needs a personal vehicle.
For all the situations where you know where you want to go, but don’t know where it is or what it’s called, we can solve this sort of problem with standard google maps style interfaces. You plan your trip via some interface (like, you know, your phone). You need your phone to call the car anyway. Since your phone knows where you are, you don’t need help with that. Now where do you want to go? You can just say, “The Biltmore Hotel”, and a map pops up with a red line showing the proposed trip to the nearest sensible match to the Biltmore Hotel. If there’s more than one, it can pop up a list of the nearest dozen. Or “Do you mean the downtown Biltmore or the one near the airport?”. If you don’t like the proposed route, you grab the red line and drag it to the side streets you want to take. The car will decide if that’s a possible route. And the app will display a charge for that particular trip that will update in real time as you fiddle with the route and establish exactly where and when you want to go.
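The disambiguation step described above (“Do you mean the downtown Biltmore or the one near the airport?”) is a simple lookup-and-branch. Here’s a toy sketch; the places database and function name are invented for illustration, and a real system would match fuzzily and rank by distance.

```python
# Toy sketch of destination disambiguation: match the rider's query
# against a places database, and either route to the single match or
# hand back a short list for the rider to choose from.

PLACES = [
    {"name": "Biltmore Hotel", "area": "downtown"},
    {"name": "Biltmore Hotel", "area": "near the airport"},
    {"name": "Grand Hotel",    "area": "riverside"},
]

def resolve_destination(query):
    matches = [p for p in PLACES if query.lower() in p["name"].lower()]
    if len(matches) == 1:
        return matches[0]   # unambiguous: route straight there
    return matches          # ambiguous: app asks "which one?"

print(resolve_destination("Grand Hotel"))
print(resolve_destination("Biltmore"))  # two candidates: rider picks one
```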
The interface shouldn’t require you to know the exact physical address of the place you’re going before you’re allowed to enter the process. But even today someone can tell me to go to the Biltmore hotel, and I can google that shit, and when I see there’s two I ask for clarification.
And like you said, for micro control on a dirt road or in a parking lot, you can expect the customer to get out their phone, which shows them a zoomed-in map of the relevant area and accepts fly-by-wire input. The customer doesn’t control the car’s movement; the car decides how to interpret the input. But you should be able to draw a path on a map and tell the car what to do: park over here, wait 3 minutes here, stop and let me out here but wait for me because I’ll be right back. How the car responds depends on what the system believes is safe and legal and whether the customer has the authority/money to give that order.
Big parking lots, like airports and shopping malls, can be mapped covering the most frequent places that self-driving cars will go.
If self-driving cars are part of large fleets of shared automobiles, they won’t really need to park anywhere that hasn’t been meticulously mapped. When I leave the car, it will depart for the next user. If there are no waiting users, it can go to a parking spot that the fleet operator has mapped and arranged for the car to park.
Self-driving cars can also be programmed how to park where they live (Teslas do this now). If I buy a self driving car, it will know that I want it to park on the greasy spot in my driveway. Or on my front lawn if I’m so inclined.
Google’s mapping is probably already good enough to identify that “this is the entrance to the parking lot” and “that block of land beyond the entrance is a parking lot.” I believe that artificial intelligence is good enough to allow a system, once on that plot of land, to recognize that “this yellow or white rectangle on the ground is a parking spot.” The rest it has to navigate carefully around pedestrians and other cars and hope not to make an insurmountable mistake in the process. You’re right that the signs aren’t perfect and self-driving cars will screw up things like which direction the flow of traffic goes around certain parking lots. You know what? I used to work at a mall. People screw this up too. Sometimes, they squash pedestrians and scratch cars in the process. As long as the self-driving car navigates carefully enough, it will find its way to a parking spot without scratching cars or squashing people at least as well as the average driver.
Users can be enlisted to help self-driving cars learn where the good parking spots are. If I am riding in a self-driving car and it is less than 99.9% certain that it has parked in a valid spot, it can ask me. I can answer yes or no and it can learn whether that’s a good parking spot. When enough users flag a spot as “good” (and few have marked the same spot as “bad”), the maps will be updated for all self-driving cars to recognize the spot as good.
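The crowd-vetting idea above amounts to a simple vote tally with a threshold. A minimal sketch, with invented thresholds (5 good votes, at most 1 bad) standing in for whatever a real fleet operator would tune:

```python
# Minimal sketch of crowdsourced parking-spot vetting: riders vote on
# whether a spot the car chose was valid, and the shared map trusts the
# spot once enough "good" votes arrive with few objections.

from collections import defaultdict

votes = defaultdict(lambda: {"good": 0, "bad": 0})

def record_vote(spot_id, is_good):
    votes[spot_id]["good" if is_good else "bad"] += 1

def spot_is_trusted(spot_id, min_good=5, max_bad=1):
    v = votes[spot_id]
    return v["good"] >= min_good and v["bad"] <= max_bad

for _ in range(6):
    record_vote("lot-A/space-12", True)
record_vote("lot-A/space-12", False)

print(spot_is_trusted("lot-A/space-12"))  # True: 6 good, 1 bad
```

Once a spot crosses the threshold, the fleet’s shared map would be updated so every car recognizes it, which is the “only one Google” advantage mentioned later in the thread.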
As self-driving cars become ubiquitous, parking lot owners will have an incentive to standardize their signs and markings to adapt to self-driving cars if they want those cars to continue to come. “I can never find parking there” will have a new meaning.
[QUOTE=Richard Pearse]
Fair point on using point and click on the satnav map. What if I don’t know where the hotel is, I know the name of it, but the database has the incorrect address? A taxi driver will usually know the correct address.
[/QUOTE]
Your self-driving car will be connected to the internet. If you don’t know where it is, you (or the car automatically based on the name you give it) will find out. If the database is wrong, users will quickly report it and correct it. An internet connected car knows where a lot more things are than any cab driver in the history of mankind. London cabbies have “the knowledge” of London, but they wouldn’t do as well if you took them out to the suburbs. Or to Paris. A self-driving car can have all that information available and more.
[QUOTE=Chronos]
And again, a driverless car is one that just sits there. What you mean when you say “driverless car” is a self-driving car.
[/QUOTE]
Did you really think anyone was confused by his term “driverless car” in this context? This isn’t really helping the discussion.
There might be a hundred different human cab drivers who service that hotel. For every one of them, there was a first time they went to it, when they were likely to be confused about the address. But there’s only one Google, and thus only one time that’ll be Google’s first time driving to that address. Once any one person corrects the address once, it’ll be corrected for all Google cars.
Yes, I think he was confusing himself. Richard Pearse’s use of the term “driverless car” suggests that he’s still got the notion stuck in his head that the human is the real driver. But the whole point of a self-driving car is that that’s not the case.
Even better - each space will have a small transponder which can report if it is empty and where it is. The car, upon entering the lot, finds the nearest open space and orients itself to it.
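The transponder scheme reduces to a nearest-open-space query once the car is inside the lot. A sketch, assuming each space reports its position and occupancy (the coordinates and IDs here are arbitrary):

```python
# Hypothetical sketch of the transponder scheme: each space broadcasts
# its lot coordinates and whether it is empty; the arriving car picks
# the nearest open one.

import math

spaces = [
    {"id": "A1", "pos": (2, 5),  "empty": False},
    {"id": "A2", "pos": (4, 5),  "empty": True},
    {"id": "B7", "pos": (30, 8), "empty": True},
]

def nearest_open_space(car_pos):
    open_spaces = [s for s in spaces if s["empty"]]
    if not open_spaces:
        return None  # lot full: circle, or try another lot
    return min(open_spaces, key=lambda s: math.dist(car_pos, s["pos"]))

print(nearest_open_space((0, 0))["id"])  # A2, the closer empty space
```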
The long term parking structure at SFO today tells you how many spaces are available on each level. It is a one time charge to fix parking lots, but quite doable.
And it’s not like parking lots are an obscure, niche problem. Everyone knows that a driver (of whatever sort) needs to deal with parking lots, and so the engineers working for Google et al will come up with some sort of solution for them before rolling out production models. What sort of solution that is remains to be seen, but it’s not something they’re going to accidentally overlook.
On the regulatory question, it seems to me that it’s pretty simple. We already have a system in place for deciding whether a driver is good enough to be allowed in control of a car on the road. If a human goes to the DMV and passes a driving test, they’re issued a license. If a computer can pass the DMV’s driving test, then it should also be issued a license. If you’re worried that a computer might pass the test but still not be safe enough on the road, then that means you need to make the test tougher. For everyone.
Can the computer buckle its seatbelt before the test? Turn its head and look both ways at a stop sign? Respond appropriately to “parallel park here, do a three point turn, and then return to the DMV”? The examiner isn’t allowed to assist the driver and neither is the passenger. Will it be able to check in at the DMV to meet its driving examiner? My examiner didn’t come outside to meet me. Will the computer be over 16 years of age and eligible to get a license? If it can’t meet these requirements, it will fail a standard driving test. If we have to rewrite the rules to accommodate self-driving cars, we will not be satisfied if they show only the minimum proficiency demonstrated on a standard driving test. Self-driving cars must do better.