How do they know that particular object is actually a stoplight?
How do they, then, react to Red, Yellow, Green?
The Google cars use very high-precision maps. They know where the stop lights are, and even how high they're mounted, so they know fairly precisely where to aim their cameras to look for the lights.
The current generation of cars apparently doesn’t handle arbitrarily placed stop lights, so if someone temporarily places a light in a location where the car isn’t expecting it, the car will ignore it.
Needless to say, that’s not a good thing.
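For the curious, here's roughly what "knowing where to aim the camera" could look like. This is a toy sketch with made-up coordinates and camera intrinsics, not anything from Google's actual stack: project the light's surveyed map position into the image and only search a small window around it.

```python
import numpy as np

# Hypothetical sketch: use the car's pose plus a stoplight's surveyed 3D
# position from the HD map to predict where the light should appear in
# the camera image, so the classifier only searches a small region
# instead of the whole frame.

# Camera looks north (+y); camera axes: x = right, y = down, z = forward.
CAM_R = np.array([[1.0, 0.0,  0.0],
                  [0.0, 0.0, -1.0],
                  [0.0, 1.0,  0.0]])

def project_to_pixel(p_world, cam_pos,
                     fx=1400.0, fy=1400.0, cx=640.0, cy=360.0):
    """Pinhole projection with made-up intrinsics (a 1280x720 image)."""
    x, y, z = CAM_R @ (np.asarray(p_world) - np.asarray(cam_pos))
    if z <= 0:
        return None          # light is behind the camera
    return fx * x / z + cx, fy * y / z + cy

light_world = [152.3, 48.7, 5.2]   # mapped light: x, y in meters, z = height
cam_pos     = [150.0, 20.0, 1.5]   # roof-mounted camera on the car

pix = project_to_pixel(light_world, cam_pos)
if pix is not None:
    u, v = pix
    # Search only a small window around the predicted spot.
    roi = (int(u) - 40, int(v) - 40, int(u) + 40, int(v) + 40)
    print(f"look for the light near pixel ({u:.0f}, {v:.0f}), ROI {roi}")
```

This also explains the failure mode in the post above: a temporary light isn't in the map, so no search window ever gets generated for it.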
This boggles my mind!
Every cheap camera today uses facial recognition. Surely stop lights are easier to identify than human faces… After all, a stop light is a specific geometrical shape with rigid mathematical dimensions. It seems like it should be easy to program a computer to do the math. Why can't it be done?
Cite?
I think they're a little bit beyond that problem now, but they still haven't got it completely tackled. It was initially an issue.
This article explains the capabilities a bit. It's from a few years ago, though, so I'm not sure how much they've advanced since.
This is from Wikipedia (bolding mine):
Another cite:
Google’s own documentation:
(warning: PDF)
ETA: This article has a lot of technical information in it, by the way.
In any case, the cars have to be able to deal with a four way crossing where there are no lights, or the lights are not working.
To do this, it has an “edging out” behaviour where it makes its intention to cross obvious to other drivers before finally crossing when it has been given way.
I assume it would resort to this behaviour if it reached a crossing but did not see any lights for whatever reason. It would go through but slowly and cautiously.
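If you were to sketch that "edging out" logic as a little state machine, it might look something like this. The states and inputs are my own invention for illustration; the real planner is surely far more involved:

```python
# Toy sketch of an "edging out" policy (my guess at the logic, not the
# actual software): the car never assumes priority at an unsignaled
# crossing; it advertises intent by creeping forward, and only commits
# once cross traffic visibly yields.
from enum import Enum, auto

class Phase(Enum):
    APPROACH = auto()
    STOPPED = auto()
    EDGING = auto()
    CROSSING = auto()

def next_phase(phase, speed_mps, cross_traffic, gap_clear):
    if phase is Phase.APPROACH and speed_mps < 0.1:
        return Phase.STOPPED
    if phase is Phase.STOPPED:
        return Phase.EDGING          # creep forward to signal intent
    if phase is Phase.EDGING:
        if cross_traffic and not gap_clear:
            return Phase.STOPPED     # someone isn't yielding; back off
        if gap_clear:
            return Phase.CROSSING    # we've been given way
    return phase

phase = Phase.APPROACH
phase = next_phase(phase, 0.0, cross_traffic=True, gap_clear=False)
print(phase)  # Phase.STOPPED: wait, then edge out and re-check
```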
This is assuming the AI hasn't been updated to simply recognize temporary traffic lights.
They say the cars can detect unmapped stop signs now. I’m wondering what would happen if I placed a red sign in my front yard that was able to fool the cars but didn’t look exactly like a stop sign to humans (including humans wearing badges). That will be good for an afternoon of laughs. There will be lots of mischief once these things become mainstream.
Yeah, but a stop sign is an easily recognizable form. A red light is just a glowing red light.
Also, of course, if someone wanted to screw around with traffic and potentially cause accidents, there are plenty of ways to do it with human drivers.
Fortunately, the pointlessness and the risk of getting caught seem to be deterrent enough.
Another point: just because something is a problem that Google's engineers are attempting to solve does not mean it's something human drivers have already solved. For instance, those cites say that self-driving cars have a hard time distinguishing between harmless trash that can be safely hit and solid objects that must be veered around. This may be true, but human drivers have a hard time with that, too. The difference is that the car can make use of radar and sonar data in addition to vision (and lidar), so there's a chance self-driving cars will actually be able to solve the problem, and that's why they're working on it.
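To make the sensor-fusion point concrete, here's a back-of-the-envelope sketch. The thresholds are invented and a real system would use learned probabilistic models, but it shows how radar cross-section and lidar geometry could add the evidence a camera alone lacks:

```python
# Illustrative only: vision can't easily tell a cardboard box from a
# concrete block, but a plastic bag or cardboard has a tiny radar
# cross-section, while metal or masonry reflects strongly. Lidar adds
# the object's physical height. All numbers below are made up.

def classify_obstacle(radar_rcs_dbsm, lidar_height_m, camera_says_object):
    """Return 'hit_ok' for probable trash, 'avoid' for probable solid."""
    if not camera_says_object:
        return "ignore"
    if radar_rcs_dbsm < -10 and lidar_height_m < 0.3:
        return "hit_ok"   # weak radar return + low profile: likely debris
    return "avoid"        # dense or tall: swerve around it

print(classify_obstacle(-20.0, 0.1, True))  # drifting bag -> hit_ok
print(classify_obstacle(5.0, 0.4, True))    # fallen muffler -> avoid
```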
On the topic at hand, I’d be interested to know how often human drivers fail to notice a new temporary traffic signal on a previously-familiar route.
This is IMHO, but I second your point completely. When I follow the ongoing discussion about autonomous cars, I find it remarkable how much the strictness with which we assess the capabilities of autonomous cars differs from the strictness with which we assess human drivers. In my experience, most human drivers vastly overestimate their own driving skills; they may have decades of accident-free driving, but that can just as easily be due to luck as to superior skill. As soon as an autonomous car causes an accident that kills a human (and that will happen sooner or later), there will be a huge outcry, yet apparently we're fine with the thousands of people killed by human drivers every year.
Another issue, aside from unmapped stoplights, is the lack of lane markings on many US streets.
If a car can’t handle roads without lane markings, that eliminates 90% of the roads near me. Rural roads don’t have center lines or edge markers, and they can be narrow.
Conversely, one road near me that does have a center marker would be a bad place for a self-driving car: any self-respecting self-driving car would try to stay in its lane, and doing so on this very winding road would mean slowing way down or squealing around the many corners. Since there is very little traffic, human drivers just cut across the lane markers and, if sight lines are good, drive in the middle of the road, straddling the line.
Also, you know how you know the timing for most of the lights on your daily routine? Like, you know that if you make it through one light halfway through its green cycle, you’ll just barely catch the next one red, or there are three in a row on this street that all change at the same time? Well, Google knows that about very nearly every traffic light in the civilized world, thanks to their army of over a billion route-testers.
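As a toy illustration of how crowdsourced traces could reveal a light's timing (fabricated numbers, and assuming a known, fixed cycle length): if many cars report when they start moving at an intersection, the green onsets cluster at a fixed offset modulo the cycle.

```python
# Made-up data: timestamps (seconds) at which different cars started
# moving at the same intersection, pooled from many trips.
green_onsets = [12.0, 102.3, 191.8, 282.1, 371.9]

cycle = 90.0  # assume a fixed 90-second signal cycle
offsets = [t % cycle for t in green_onsets]
estimate = sum(offsets) / len(offsets)
print(f"light turns green about {estimate:.1f}s into each {cycle:.0f}s cycle")
```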
But what about the non-civilized world?
For example, the construction site on my street last week.
They put up a temporary traffic light for a few days while they worked. It was a regular traffic light, but mounted on a shorter-than-average wooden pole, and the pole wasn't vertical. It leaned a bit, supported by a barrel of cement in an awkward location, marked by a few pylons you had to swerve around.
I don’t think self-driving cars are to the point where you can sleep behind the wheel; you still have to act as a fail-safe for the system. Otherwise people could get in their cars drunk, sit behind the wheel playing Tetris on their phones, or stick their kid in the back seat and send them somewhere, and we’re not there yet.
Something that could probably really mess up a self-driving car is a traffic light that’s out. If the car knows the light is there but can’t get any color signal from it, it might treat the intersection as an all-way stop (which is what the law requires of human drivers in most US states), but more likely it will just stop and wait for a signal.
Another thing that’s going to be really tough for self-driving cars is construction sites where traffic is down to one lane for a short stretch, with those guys holding the “stop/slow” signs that tell you when you can use the single lane.
If I sat here for a while, I could come up with lots of examples of why there still needs to be a licensed, sober driver behind the wheel of a self-driving car.
I suspect even before autonomous cars become ubiquitous, we’ll have more smaller scale traffic innovations, including traffic lights and roads that communicate directly with cars. Those will improve driving safety for human drivers now and enable better and easier computer driving later.
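Something like this already exists on paper: the SAE J2735 standard defines "SPaT" (signal phase and timing) messages for exactly this. As a purely hypothetical sketch of the idea (the wire format and port below are made up, not the real standard), a signal controller could simply broadcast its phase and time-to-change to any car in range:

```python
# Hypothetical sketch of a "light that talks to cars": the signal
# controller broadcasts its current phase and seconds until it changes,
# and nearby cars can plan around it instead of relying on cameras.
import json
import socket
import time

def broadcast_phase(sock, intersection_id, phase, seconds_left):
    msg = json.dumps({
        "id": intersection_id,        # made-up identifier scheme
        "phase": phase,               # "red" | "yellow" | "green"
        "time_to_change_s": seconds_left,
        "timestamp": time.time(),
    }).encode()
    sock.sendto(msg, ("255.255.255.255", 47474))  # made-up port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
broadcast_phase(sock, "main-and-5th", "green", 12.5)
```

A temporary construction light with a transmitter like this would sidestep the whole "it's not in the map and it's on a leaning wooden pole" problem.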
Even the worst human driver passes through thousands of intersections without an accident; at roughly eight intersections per urban mile, a thousand urban miles means about 8,000 intersections. To me, that is not a convincing reassurance. Driverless cars are useful only if they reduce accidents to near zero, not if they just make a dent in the number.
Driverless cars are useful if they make any reduction at all in the number of accidents. They’re even useful if they don’t change the accident rate, or even increase it slightly, because they also have the advantage of freeing up human time. What’s the advantage of having human-driven cars?