I’ll note that your example happens within the meticulously mapped area Google has established for the cars (the grey area in the computer display), which makes it easy for it to identify things like cars, cones, pedestrians, etc. Note how there are no parking lots in that grey area? It does not cover very many miles, and I can imagine maintaining it alone is a job. Also, there is no rain or snow, which they already admit their car can’t handle as well as a human.
The Google developers think that their car is as good as a human in the fog, which is good, and already potentially useful even if it is currently prohibitively expensive for all but commercial vehicles or the rich.
I hope you understand the difficulty of iterating code for rain driving in Mountain View, CA. When it becomes a priority, it will be a solvable problem.
The “grey areas” are from the laser scanner, thus the limited range. While there is some learning, it is not unexpected that during testing they will use the entire scanned set for development.
The commercial vehicle market is the main market; well-known freight routes and common taxi routes will probably be the first applications. But this is like any new technology, and not unique to self-driving or driverless cars.
You’re confusing the number of situations with their frequency. If the 90% of situations it handles occur 99.99% of the time (likely), you may be okay. And I’m sure a production car will handle a lot more of them.
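To put illustrative numbers on that point (all of these figures are hypothetical, just to show why coverage of situation *types* and coverage of driving *time* are different things):

```python
# Hypothetical numbers: suppose the car handles only 90% of situation
# *types*, but those types account for 99.99% of actual driving time.
handled_fraction_of_time = 0.9999
unhandled_fraction_of_time = 1 - handled_fraction_of_time

# Assumed annual mileage for a typical driver (illustrative only).
miles_per_year = 12_000
miles_in_unhandled_situations = miles_per_year * unhandled_fraction_of_time

# Only about a mile per year is spent in situations the car can't handle.
print(round(miles_in_unhandled_situations, 2))  # prints 1.2
```

The point of the sketch: a long tail of unhandled situation types can still correspond to a tiny slice of real-world exposure, which is why counting situations and counting miles give very different impressions.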
I suspect it will fail safe, in that hitting the kill switch will make the car move to the shoulder if at all possible. But even if the car worked for 99.9999% of the cases I bet it will come with steering as a marketing feature. After everyone gets comfortable, then it can go away.
Two things. First, I assume you’ve never been near defense development projects. They usually seem to be a mess. For instance, there is a requirement to diagnose a failure down to the failing part over 95% of the time. How is this tested? The contractor gets to select which failures are to be diagnosed.
Second, you are missing how software works. If pilots had some issue that confused 10% of them when crossing the date line, you’d be stuck with it. This bug can be (easily) fixed, and after that no plane will have the problem. Instrument flight gets safer. Manual flight probably doesn’t, except insofar as it depends more on instruments.
Hell, even Windows has gotten better. You don’t start from scratch.
As I’ve noted, there are plenty of accidents in the places where the Google cars drive in the dry summer months.
No one is talking about sales ready cars for at least 15 years. If by magic they became affordable today I bet you’d see a massive (almost said crash) effort to improve the software. No good reason for that today.
I do think I understand the difficulty of creating code for Google’s self-driving car to implement driving in inclement weather. Its system for locating objects depends on that map to differentiate figure from ground. If the area hasn’t been mapped already, or if the area has changed since mapping (a modified roadway, snow piled in mounds), the car can’t deal with it. That’s the position of the developers of the car. To claim that the car is actually better than a human outside of its small sandbox of data runs counter to what we already know about the car.
Yes, but the car has only gone around 700,000 miles in a very limited area. That’s not a lot of miles for software this complex. I know that I went at least the first 500,000 miles without a wreck that was deemed my fault, and I know that I was at best a typical teenager/young adult for about half that distance, probably much worse than average. I was a maniac that engaged in organized street racing for god’s sake - I was nowhere near safe. I still had an accident rate that compares pretty well to the Google car for miles driven so far.
Again, I’m arguing against a car that has no manual control here. If there’s no way to easily get the car out of the state it has gotten into, that 10% of situations will probably accumulate pretty quickly. (Plus, our numbers are coming from somewhere other than data.)
Hey, as long as we agree steering wheels will be around for a variety of reasons for the foreseeable future, I don’t think we’re arguing.
I’ve been involved with several collaborative software processes. They’re always their own mess.
The problem with the F-22 had already been solved in other systems (or at least it hadn’t arisen in them). Re-using that solution does not appear to have been a possibility in this case, and re-using software isn’t a panacea against bugs.
The car absolutely needs to know whether the object moving from left to right across the road is a child or a runaway shopping trolley.
While not desirable, hitting a shopping trolley is not catastrophic; it would be preferable to hit another car, or even to turn off the road into a tree, than to hit the child.
An even more difficult thing - ideally, the car would be able to tell the difference between a large dog and a child.
Is it common? No.
Would the car be more likely to avoid the situation in the first place (and more likely to be driving at the appropriate speed should the situation arise)? Yes.
I ain’t driving into a tree to avoid people who shouldn’t be playing on the road. I’m not endangering myself and my passengers for that. I almost wrecked once avoiding a cat, never again. “Brake and determine the optimal path for avoidance. If no path, then it is up to the target to move.” At no point do I think that I’d be quicker at doing that than a computer.
I’ve done both of these things before: I braked to avoid a cat and started to skid, then realised my mistake and ran over the cat. Had it been a child, I could have avoided it by turning up onto the pavement, albeit at the cost of my suspension, rims and possibly some rather large dents.
I would be avoiding a pedestrian pretty much “at all costs”. Hitting a tree at less than 50 km/h is very survivable in the vast majority of instances (and your actual speed should be far less than this).
You are obviously not hitting them hard enough. If you add a bit of a swerve to the impact, they should just bounce off and to the side. A child going under your car is highly unlikely to survive (bad deal for you), but if you can bounce them off your car, they have a fair chance of only being badly hurt/maimed/paralyzed rather than killed.
This is actually the part that will eventually reduce traffic congestion in a serious way, IMHO.
Once we have self-driving cars, you’ll only need to own as much car as you need nearly every day; the rest of the time, you’ll summon up a self-driving cab/ZipCar/whatever which will drive itself to your door, then take you to your destination, before going off on its next errand.
And since most car trips have just one person in the car, this will lead to a lot of single-occupancy cars. The key here is that they’ll only need to be one person wide, instead of being wide enough for 2-3 people to sit side by side.
Each existing lane on a highway can then become two lanes, doubling the traffic capacity of existing roads.
Part of this picture is that destinations will become more walkable. People will be able to get a car to take them to the one place where they can get six things done, rather than having to make six stops each a mile apart to get those things done. Our car culture has left us with a very impractical urban/suburban layout, which quite simply will have to change.
I don’t think this can be true. Every source I’ve read about the Google Car indicates that they’re processing in real time what the road is doing now, not comparing current image to a past picture. If you have a source or article where any of the car manufacturers are using that model I’d love to see it. Mostly to know which cars to avoid when this becomes a reality!
It certainly is true that for the foreseeable future all self-driving cars will be able to failover to human control.
But comparing the software testing needed for a fleet of self-driving cars to one-off NASA missions is misleading. Why does NASA need to test and retest the code for the Space Modulator? Because they only get one chance, either it works perfectly the first time or you can kiss millions of dollars and years of work goodbye.
Now, you could argue that the self-driving car needs to work perfectly the first time, if we define “perfectly” as “not killing someone”. But cars will get years and years of beta testing in real-world conditions, all in situations where a human driver is ready to take over if necessary. The operation of the driverless vehicle fleet is how bugs get worked out. And yes, getting rid of rare bugs becomes harder and harder and more and more expensive. But the very rarity of the bug makes it lower and lower priority. Yes, it’s bad to have a bug that causes a vehicle to drive into an orphanage. But how often is that going to happen? And we have plenty of examples of bad human drivers literally driving their cars through crowds of people, unable to figure out that if only they pressed on the brake instead of the accelerator the car would come to a stop.
So we’ll have tiers or milestones. At some point self-driving cars will be good enough that they can be let out in the real world, but a driver has to stand by to take control at all times. That’s where we are now. Even at this point self-drivers are already better than drunk guys, dementia patients, or teenagers with learner’s permits.
Then we reach the point where self-driving cars are better than typical drivers, which includes texters and McMuffin eaters and radio fiddlers. At that point it’s safer to watch a movie in the back seat than it is to take control of the vehicle, and errors from drivers taking over control when they shouldn’t are worse than errors from drivers not taking over control when they should. And at this point, lots of parents start putting off teaching teenagers to drive. Hey, if they can get to basketball practice on autopilot, why teach them to drive right now? Wait a few years until they’re more mature.
Then we reach the point where self-drivers are better than all but the best human drivers. And even top rally drivers can benefit from automated systems taking over in certain circumstances, and those top human drivers will certainly benefit from having most vehicles not controlled by your average idiot. At this point it becomes irresponsible to pilot the vehicle yourself, and we start to restrict drivers licenses to an elite few. Except now that driving is no longer a daily activity for the masses, interest in cars starts to decline, and manual driving is seen as a hobby for the weird, like horseback riding or parasailing. Expect a huge increase in private driving areas for hobbyists who can’t pass the rigorous tests to obtain an unlimited public manual operation vehicle license.
Right, and walkable businesses won’t be expected to set aside acres and acres for parking. Parking will be done outside the central cores, you’ll drive up–or rather, be driven up–to the entrance area, and the car will either take off on another mission for another rider, or valet park itself somewhere else.
That’s my understanding also. Rain does affect the various sensors. But the current cars are not limited to places they’ve been already or places some other Google entity has photographed. Heck, the sides of the roads change too much for that even day to day. Tow trucks park themselves in various places, accidents get moved to the shoulder, and once in a blue moon there is a Highway Patrol car.
I assume scabpicker uses Bing, since he seems to think Google folks are really dumb.
One of the really difficult things in having a computer drive a car is computer vision. It appears that Google is using the detailed map to allow its laser system to “see” objects by comparing its sensors’ readings to the map, creating a ground against which to differentiate figures. This allows the whole shebang to begin processing the world in real time reliably. When it gets to an unmapped area, it crawls at 2 mph.
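To make the figure/ground idea concrete, here’s a minimal sketch of the general technique (not Google’s actual implementation, which isn’t public): treat the prior map and the live laser scan as occupancy grids, and flag cells that are occupied now but free in the map as “figure” - candidate cars, cones, pedestrians. The grid sizes and threshold are my own assumptions for illustration.

```python
import numpy as np

def dynamic_cells(prior_map: np.ndarray, live_scan: np.ndarray,
                  threshold: float = 0.5) -> np.ndarray:
    """Naive figure/ground separation on 2-D occupancy grids.

    prior_map and live_scan hold occupancy values (0.0 = free,
    1.0 = occupied). Cells that are occupied in the live scan but
    free in the prior map are 'figure'; anything already in the
    map is 'ground' and gets ignored.
    """
    return (live_scan - prior_map) > threshold

# Toy 3x3 example: the prior map has a wall along the left column;
# the live scan additionally sees an object in the center cell.
prior = np.array([[1, 0, 0],
                  [1, 0, 0],
                  [1, 0, 0]], dtype=float)
scan = prior.copy()
scan[1, 1] = 1.0  # a new object appears mid-grid

mask = dynamic_cells(prior, scan)
print(mask)  # only the center cell (row 1, col 1) is True
```

This also shows why an unmapped or changed area is so hard: without a trustworthy prior map, everything looks like “figure,” and the system has to fall back on much slower, more conservative processing.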
Where does this crap come from? I’ve made 0 personal digs against my opponents in this thread. Again, you couldn’t be more wrong. I’m a Unix admin - if I didn’t think the commercial search engines were up to it, I’d write my own before I used Bing.
No, I think they’re human. Sometimes they’re really smart, sometimes they’re dumber than hell. Google has the same limitations the rest of us do when it comes to having computers deal with the real world.
Again, I don’t think this is true based on anything I’ve read about the Google car - developers are already using them to drive around San Francisco (obviously with backup drivers, but they’re trying to use them in varied ways). I know they use radar, lasers, and cameras to drive, but I’ve never read about the system working the way you describe. If you have a source I’d love to see it, but I think you’re extrapolating what you think the car is doing based on your own computer knowledge and how you’ve heard about self-driving solutions in the past. There’s an outstanding article in the New Yorker about Google’s car where they mention they’re light years beyond those first steps of driving through the desert.
According to this Slate.com article from October, scabpicker is correct: Google’s self-driving car is still heavily dependent on an ultra-detailed map, rather than the computer actually being able to do much in the way of directly sensing and reacting to its environment. (I gather the car can do things like look at a traffic light and sense whether it’s red or green - but not so much with spotting a traffic light that it hasn’t already been told is there.)
(For the record, I have no idea how accurate that article is.)