I agree with you that robo-cars will still need human input. But right now, Google is proudly saying that their cars do not allow any human input.
And I just realized that this raises another very interesting point about the future marketing of robo-cars: local laws often vary, so it will be difficult to sell cars across jurisdictions, and especially across countries.
A Toyota sold in Japan will have to be programmed differently than the same car sold in Italy, etc.
And maybe even between different states within America.
That might slow down the public’s adoption of robo-cars. Even after sales start strong, say, in California, a bug in the program might cause a string of accidents in other areas. That’s going to be bad for marketing.
Obviously it allows human input - how else would you tell the car where to go? I think what they mean is the car doesn’t have a steering wheel or pedals for manual control.
Google is already testing the car all over the country. And it’s trivial to load a database of all local traffic laws and follow the correct ones based on location. The Nissan GTR already allows higher speeds on a racetrack than on public roads, based on GPS location data.
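For what it’s worth, here’s a minimal sketch of how that kind of location-based rule lookup could work. The jurisdiction codes and rule values below are made up for illustration, not a statement of actual law:

```python
# Minimal sketch of location-based rule selection (jurisdiction codes and
# rule values are illustrative only, not actual law).

JURISDICTION_RULES = {
    "US-CA": {"right_turn_on_red": True,  "drive_on": "right"},
    "JP":    {"right_turn_on_red": False, "drive_on": "left"},
    "IT":    {"right_turn_on_red": False, "drive_on": "right"},
}

# Conservative defaults for anywhere the database doesn't cover.
FALLBACK_RULES = {"right_turn_on_red": False, "drive_on": "right"}

def rules_for(jurisdiction_code: str) -> dict:
    """Return the rule set for the jurisdiction a GPS fix resolves to."""
    return JURISDICTION_RULES.get(jurisdiction_code, FALLBACK_RULES)

# e.g. after reverse-geocoding the car's GPS position to "JP":
print(rules_for("JP")["drive_on"])  # left
```

The hard part isn’t the lookup, of course; it’s compiling and maintaining the database. But that’s a data-entry problem, not a technology problem.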
Some people seem to accept self-driving cars only if they’re 100% safe. But the reality is, if they’re safer than the average human driver, they will save lives. And that’s not a very high bar.
You may think your driving skills are much better than average, and therefore you’re better off driving manually than trusting a computer. Unfortunately, most of us think we are better-than-average drivers.
I’d be interested to know how fast a self-driving car will go around blind corners. Technically it should go slowly enough that it can stop within the distance of road that’s visible (or even half that distance, to allow for a head-on encounter). People tend not to do this because it is impractical and the risk is small. On some roads this could make for a tediously long drive.
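Just to put numbers on it, here’s a rough back-of-envelope calculation, assuming simple constant-deceleration braking (the friction coefficient is an assumption, not any manufacturer’s spec):

```python
import math

# Back-of-envelope only: assumes constant deceleration a = mu * g with an
# assumed friction coefficient; real braking behavior is more complicated.

MU = 0.7   # assumed tire-road friction (roughly dry asphalt)
G = 9.81   # m/s^2

def max_safe_speed_kph(sight_distance_m: float, head_on_margin: bool = False) -> float:
    """Fastest speed at which braking distance v^2 / (2 * mu * g) still
    fits within the visible road.  With head_on_margin=True, only half
    the sight distance is used, leaving room for an oncoming vehicle
    doing the same thing."""
    d = sight_distance_m / 2 if head_on_margin else sight_distance_m
    v = math.sqrt(2 * MU * G * d)  # from v^2 = 2 a d
    return v * 3.6                 # m/s -> km/h

print(round(max_safe_speed_kph(40)))        # ~84 km/h with 40 m visible
print(round(max_safe_speed_kph(40, True)))  # ~60 km/h allowing for head-on
```

So on a corner with 40 m of visible road, the half-distance rule caps the car at about 60 km/h, and it drops fast as visibility shrinks, which is exactly why a strictly rule-following car could make for a tedious drive.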
I will say that as a cyclist I look forward to all of these self driving cars driving at ~16 kph up a hill, patiently waiting for a safe place to pass, while the passenger rages impotently inside the cabin and the traffic builds up behind them.
If you are going to be that obtuse, I will rephrase. There are currently vehicles being tested that are intended for use on public roads and that can also fly. Just because something is being tested doesn’t make its success a foregone conclusion. Indeed, one of the possible outcomes of testing is that the tested device ultimately doesn’t work as hoped.
Technically true, but these vehicles need to be bought by people, so perceptions matter. If most of us think we are better than average (quite reasonable, by the way: a few really bad drivers can bring the mean down below the mode ;)), then most of us might not want a driverless car. We might all think it’s a great idea for other people to be in driverless cars, but not us; we want to drive ourselves occasionally, thanks.
This isn’t just an issue of technology; it’s going to require personal acceptance on a large scale, and it’s also going to require legislative changes. Those obstacles themselves may prove too big to allow driverless cars. (I don’t doubt we will have self-driving cars; we already do, to a lesser extent.)
Every jurisdiction I’ve lived in allows crossing the no-passing line if necessary to safely avoid a road hazard.
Y’know, in automotive threads here I often end up in the “device to take me from A to B” camp. But even so, I find that some posters in this one overestimate how soon the day will arrive when not just individual driving but even *individual ownership* becomes obsolete. If that ever happens.
That aside, though, sophisticated enough processing would surely make robodrive far safer and more efficient than human driving, though it raises the question of whose liability the insurance covers: the carmaker? the programmer? (We know that the insurance companies will seek to price out human driving by sending its rates through the roof.) Similarly, if I get in my private robocar at 0.12% BAC to have it take me home by itself, am I still DUI? How’s that different from having a robocab or droidlyft do it?
Since land vehicles have the advantage over aircraft that they are already on the ground, what a robocar would need is a failsafe mode: flipping the big red switch, detecting that the left rear wheel just departed the axle, or having a system crash would trigger an absolute override to find its way to the shoulder, stop as quickly as safely possible, and signal for aid. I wonder if people who fear this could happen by glitch in the middle of a blizzard, or of the desert, or of East Baltimore don’t fear their regular hand-driven cars conking out in the same places.
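Something like this, in toy form (the states and fault names are invented for illustration, not anyone’s actual design):

```python
from enum import Enum, auto

# Toy version of that "absolute override": any critical fault latches the
# car into a single-purpose mode whose only job is to reach the shoulder,
# stop, and signal for aid.

class Mode(Enum):
    NORMAL = auto()
    MINIMAL_RISK = auto()  # limp to the shoulder and stop
    STOPPED = auto()

CRITICAL_FAULTS = {"big_red_switch", "wheel_departure", "system_crash"}

def on_event(mode: Mode, event: str) -> Mode:
    """Critical faults override everything else, and the override latches:
    once in MINIMAL_RISK, nothing short of stopping changes the mode."""
    if event in CRITICAL_FAULTS and mode is Mode.NORMAL:
        return Mode.MINIMAL_RISK
    if event == "vehicle_stopped" and mode is Mode.MINIMAL_RISK:
        return Mode.STOPPED  # now signal for aid
    return mode

mode = on_event(Mode.NORMAL, "wheel_departure")
print(mode)  # Mode.MINIMAL_RISK
```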
The general reason for a difference between computer and human would be “different sensory inputs and their evaluation”. And it seems unlikely that the car and the human would always see the same things and process them the same way.
To take an example from my experience: A woman riding a bicycle on a cross street was trying to avoid a dog and crashed, injuring her knee. She was a good 30’ from the road I was on, but obviously in need of help, so I promptly stopped to assist. It would be strange for a self-driving car to do this on its own (indeed, software that undertook to do this would introduce enormous potential for problems).
Nobody is saying it’s the same thing, but it’s useful to compare and contrast the differences. Hopefully this is also educational since most people are likely not aware that most commercial flights are on autopilot much of the way, often including the landing.
The situation is actually the opposite.
A full-journey self-driving car “autopilot” is a much more difficult challenge than an aircraft autopilot. It took many decades before commercial airliner autoflight, including low-visibility automatic landings, was routine. Commercial aircraft Cat III autolands require triple-redundant autopilots, triple-redundant radar altimeters, and triple-redundant ILS or GPS receivers.
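The basic trick behind that triple redundancy is majority voting. A toy illustration of the idea (not actual avionics code):

```python
# Toy illustration of the majority-voting idea behind triple redundancy:
# with three independent readings, a single failed sensor is outvoted.

def vote(a: float, b: float, c: float) -> float:
    """Return the median of three sensor readings; one wild value loses."""
    return sorted([a, b, c])[1]

# Two healthy radar altimeters and one failed one:
print(vote(102.0, 101.5, 9999.0))  # 102.0 -- the bad reading is ignored
```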
Despite this, with aircraft you don’t have a situation where a sudden autopilot failure will result in death within half a second. The pilot is always alert to take over, and when this happens there is generally time to evaluate the situation. The spacing and rates of closure between aircraft and potential impact objects, plus the inherent stability of a controlled flight path, ensure that. Despite the instrumentation and computer requirements, the aircraft does not have to deal with computerized object recognition of unpredictable surroundings.
By contrast, a car is often within inches of oncoming vehicles. The slightest autopilot error for a fraction of a second will result in a collision. The car cannot simply rely on a high-accuracy navigational map like a plane; it must also include various image-recognition technologies for unpredictable, ever-changing surroundings, and these must be extremely accurate and reliable. While current cars with self-driving features like the Tesla require the driver to stay focused for an immediate takeover, the ultimate goal is to avoid the driver having to stay “spring loaded” for this. That is a much more difficult challenge than a commercial aircraft autopilot.
The progress to date in self-driving cars is amazing, and Google achieving an average of 1,240 self-driven miles between user interventions is impressive. However, that’s not nearly enough for generalized unmonitored self-driving vehicles. As they get closer to the ultimate goal, it will probably get exponentially more difficult.
In a situation the car hasn’t been programmed for, something will happen. It may not be very predictable in advance, and it may result in a bad outcome. But human drivers also often get into situations for which their training is inadequate: hitting glare ice, being stung by a bee, being distracted by a cell phone, letting the rear end slip out, ad nauseam. You seem to assume that if the car doesn’t know what to do, the outcome will be bad. I don’t know that the outcome will be any worse than when a person doesn’t know what to do, and I expect that it will often be much better. I also happen to believe that by the time self-driving cars are marketed, they will have proven that they experience these bad events less frequently than human-driven cars do. People make a big deal out of the fact that autonomous cars can’t handle snow; my experience is that people only think they can handle it well.
Yes, but that personal acceptance can take as long as it needs to. When every taxi, Uber, and airport shuttle is a self-driving car, and those things become inordinately cheaper than people-driven alternatives, consumer acceptance will increase. When people realize that operating and insuring a self-driving car is much cheaper than a people-driven one, the switch will continue apace.
Google took the position a while back that self-driving cars were basically legal everywhere in the U.S. right now as long as there was a licensed driver behind the wheel. I haven’t seen anything to contradict them. We might not need any legal changes at all for self-driving cars to become commonplace although some changes in law could definitely help them along. Fortunately, changes in the books where we print the laws are relatively cheap.
[QUOTE=chappachula]
Example of when there are no rules:
You drive to a rural area to attend a music festival, or the county fair. The parking area may be a farmer’s cornfield. Will your robo-car know that it’s okay to turn off the asphalt onto an open field of dirt?
[/QUOTE]
You are probably right about this. Situations like this account for approximately 0% of my driving over the last five years. I’ll accept this limitation in self-driving cars for all the benefits they offer. Can you give me a ride to the festival?
In truth, if I have a fully autonomous car, it can drop me off as close as it can manage and then move itself out of the way to a parking spot it can navigate to. If fully self-driving cars are both ubiquitous and entirely incapable of being manually driven, festival organizers will have to figure out how to make their muddy parking lots work with the self-driving cars. This problem will be worked out well enough.
[QUOTE=chappachula]
And I just realized that this raises another very interesting point about the future marketing of robo-cars: local laws often vary, so it will be difficult to sell cars across jurisdictions, and especially across countries.
[/QUOTE]
They also vary every time you cross state lines as a driver. They generally don’t vary enough to matter, which is why you don’t need to pass a licensing test in every state you visit. Where differences in law do matter, I would bet that the car can be programmed to follow the correct law more easily than drivers can be trained to do so. I only needed to get 70% correct on my license examination to get my driver’s license. Self-driving cars will probably be programmed with a better than 70% accurate understanding of the laws everywhere they go.
No self-driving car can plan its driving strategy based on the idea that every oncoming car will veer into its lane and it must be able to stop before that happens. In fact, it wouldn’t matter if the self-driving car could stop instantly; the other car veering into the lane could still hit it at full speed. The self-driving car can only be responsible for its own momentum. The reasonable rule is that the self-driving car should probably not drive faster than the distance in which it can sense a danger in its lane and stop. Of course, the self-driving car has much better reflexes, and if its cameras and sensors are located on the car’s extremities, it probably has a much better view of the upcoming road than a person. Finally, if self-driving cars are networked, they can report traffic conditions, including clear roads, to each other. It’s probably far safer for the self-driving car to take that blind curve faster than a similarly situated person.
[QUOTE=scr4]
Some people seem to accept self-driving cars only if they’re 100% safe. But the reality is, if they’re safer than the average human driver, they will save lives. And that’s not a very high bar.
You may think your driving skills are much better than average, and therefore you’re better off driving manually than trusting a computer. Unfortunately, most of us think we are better-than-average drivers.
[/QUOTE]
In 2014, 56% of the people killed in automobile accidents in the U.S. were killed in single-vehicle collisions. So over half the people killed on U.S. roads die in accidents where they have no one to blame but themselves or the drivers they entrusted with their lives. In close to half the remaining deaths, that was still probably true. People are terrible judges of their own driving ability, and the sooner we can replace them with a decent alternative, the better off we are likely to be.
[QUOTE=joema]
A full-journey self-driving car “autopilot” is a much more difficult challenge than an aircraft autopilot.
[/QUOTE]
Yup.
[QUOTE=joema]
While current cars with self-driving features like the Tesla require the driver to stay focused for an immediate takeover, the ultimate goal is to avoid the driver having to stay “spring loaded” for this. That is a much more difficult challenge than a commercial aircraft autopilot.
[/QUOTE]
Yup here too. Self-driving cars can’t rely on a driver as a backup unless they can (1) give the driver ample warning that it needs to take over and (2) “fail safe” if the driver doesn’t take over in time.
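In pseudocode terms, something like this sketch (the threshold and names are invented for illustration, not any vendor’s actual system):

```python
# Sketch of the two conditions above: warn the driver with enough lead
# time, and execute a fail-safe stop if nobody responds in time.

HANDOVER_WARNING_S = 10.0  # assumed minimum warning the driver must get

def request_takeover(seconds_until_needed: float, driver_responded: bool) -> str:
    """Decide what the car should do when it wants to hand control back."""
    if seconds_until_needed < HANDOVER_WARNING_S:
        # Not enough time to rely on a human: don't even try, fail safe now.
        return "minimal_risk_stop"
    if driver_responded:
        return "manual_control"
    # Ample warning was given but the driver never took over: fail safe.
    return "minimal_risk_stop"

print(request_takeover(30.0, driver_responded=False))  # minimal_risk_stop
```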
[QUOTE=joema]
The progress to date in self-driving cars is amazing, and Google achieving an average of 1,240 self-driven miles between user interventions is impressive. However, that’s not nearly enough for generalized unmonitored self-driving vehicles. As they get closer to the ultimate goal, it will probably get exponentially more difficult.
[/QUOTE]
I agree that it will get exponentially more difficult to make self-driving cars good enough. I suspect that this is the problem Google is facing now. Their self-driving cars were pretty good five years ago. It’s taking exponentially more time and money to make the smaller improvements that Google believes will make them good enough.
But this problem is like running away from a bear in the woods. I don’t need to be faster than the bear, I just need to be faster than my companion. Self-driving cars don’t need to be perfect, they just need to be better than people. People aren’t that great so I think this is very doable. Being better than people may not require the same level of systems perfection and redundancy that air travelers and people who live under airliner flight paths demand in airplanes.
Yes, but there are also plenty of aircraft that cannot be flown manually. A self-driving car that can also be operated by a human driver has certain advantages. And a self-driving car that doesn’t have redundant human operable controls has different advantages. If the advantages of one model outweigh the other, you can adjust your purchase choices accordingly.
What you are missing is incremental development and improvement based on the experience of millions of cars.
Have you ever had a really odd problem with your computer, and then Googled it to find that thirty other people had the same problem and the solution is posted? Thirty years ago you might have said that computers could never get this complex, because they’d be too complicated for the average person and the user manual could never cover everything that could go wrong. But here we are. Not to mention that lots of setup I used to have to do as an expert is done automatically now.
Now, I agree with you that Google is making a mistake trying to push cars people can’t drive. This is a “best is the enemy of the good” situation, and I doubt the regulators will allow it, at least not for quite a while.
When I park on the field at our county fair, I follow the directions of a parking attendant. It will be a while before a car does that (forget about the parking-on-grass part, which is easy to fix), so we’ll need manual control. But you won’t in 99% of cases. (Or more.)
Most traffic laws, I think, have exceptions that allow you to break the law to avoid crashes. No reason that couldn’t be built in also. So even if the first release drives better than only 50% of drivers, in 10 years it will drive better than 95% of drivers.
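A toy illustration of how such an exception might be built in (purely hypothetical logic, not any vendor’s actual system):

```python
# Purely hypothetical illustration: collision avoidance outranks the
# ordinary rule, mirroring the legal exception that lets a driver cross
# a no-passing line to avoid a road hazard.

def choose_action(hazard_in_lane: bool, oncoming_lane_clear: bool) -> str:
    if hazard_in_lane and oncoming_lane_clear:
        return "cross_no_passing_line"  # the built-in exception
    if hazard_in_lane:
        return "brake_hard_in_lane"     # can't cross safely, so stop
    return "stay_in_lane"               # default: obey the normal rule

print(choose_action(hazard_in_lane=True, oncoming_lane_clear=True))
```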
Airliners have highly trained pilots who are required to pay attention to the situation throughout the flight, and who are paid to do so. An average car driver who has been watching TV on his phone cannot be expected to react to emergencies the same way an airline pilot can.