How do self-driving cars deal with stoplights?

I would imagine that if the technology is advanced enough to detect where a license plate is on a car, even one whizzing by at 60 mph, read the plate, OCR the image, and successfully find it in a national database, then determining where a signal light is and acting on its color wouldn’t be all that difficult.

And if you watch the Motorway Cops UK reality show, you will see that ANPR (Automatic Number Plate Recognition) is a mature technology in constant use to catch drivers lacking insurance, a tax disc, an inspection certificate, or a driving license. Finding a stoplight in an image would be easy.

80/20 rule is in effect here. With 20% of the total effort, they’ve easily conquered 80% of the scenarios.

Now, it’s to be expected that the remaining 20% of the problems, many of them posted here, will require 80% of the total effort.

When this happens, naysayers chime in to point out how things have slowed down and to explain why X technology isn’t as feasible as everyone thought… blah blah blah.

This driverless car effort is following that incredibly typical pattern, which bolsters my confidence that we will have fewer accidents and deaths, that we will chase problems unique to driverless cars while still having fewer problems overall, and that the autonomy will then be pulled back and tweaked slightly… while new complaints about driver disengagement arise.

If I may quote CGP Grey on the subject, “They don’t have to be perfect, they just have to be better than us.”

I’ll drive in town, but it would be sweet to have the car do the driving for me on the Interstate. That’s a controlled environment where the thing should have zero problems.

I personally know a person who is profoundly disabled because his mother failed to notice a new stop sign on a familiar route.

The city where I grew up changed a T-intersection into a regular street crossing by extending one of the streets. As a T intersection, only traffic coming from the bottom of the T had to stop. Weeks before opening the new street extension, they posted signs about the new traffic pattern. Then, still weeks before they opened the extension, they added stop signs to all approaches to the intersection so everyone had to stop. Finally, they opened the extension. There were multiple accidents each day for weeks because people who weren’t used to the stop signs completely ignored them. Eventually they put a traffic light there.

A neighboring town also added their first (and still only) traffic light. Originally, traffic on the main roadway always had priority, and the two side streets on the north and south sides of the road had stop signs. When they turned on the light, there were again multiple accidents each day as people used to having the right of way on the main road completely ignored it.

The light cycle there lasts at least 90 seconds, so during the relatively busy ten hours per day at the intersection, the light turns red perhaps 400 times. Each red light should trigger two cars to stop (one on each side of the intersection) or roughly 800 stops per day, assuming there is always a car approaching the intersection. They probably averaged four accidents per day there for a couple weeks. Four accidents per day means that 0.5% of the 800 drivers who should have stopped at the red light failed to negotiate that intersection correctly and caused an accident. Who knows how many more had close calls but didn’t actually swap paint.
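For what it’s worth, a quick sanity check on those numbers (all of them rough estimates from memory):

```python
# Rough sanity check using the estimates above.
cycle_seconds = 90                                  # minimum light cycle
busy_hours = 10                                     # busy hours per day
red_cycles = busy_hours * 3600 // cycle_seconds     # ~400 reds per day
stops_per_day = red_cycles * 2                      # one stop on each side
accidents_per_day = 4

print(red_cycles, stops_per_day, accidents_per_day / stops_per_day)
# 400 800 0.005  -> about 0.5% of required stops ended in an accident
```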

I agree. It’s really a cost-benefit calculation. How much do self-driving cars cost society? Very few people are proposing separate streets or wild new infrastructure changes for self-driving cars. Most of the cost is baked into the price of the car. Perhaps there is some higher marginal cost of operating the car compared to a person-driven car (e.g., mobile data usage for the maps and traffic updates) plus the cost of whatever increase in accidents might arise if self-driving cars are worse than people-driven cars. What are the benefits of self-driving cars? Free time when travelling, ease of car sharing reducing the need for idle cars so reduced rates of car ownership, drunks off the road, increased mobility for the elderly and disabled, new low-cost delivery services, reduced urban infrastructure dedicated to parking cars…

I agree that stoplight detection should not be that big a machine learning problem. Maybe doing it in real-time presents some engineering challenges, but “identify traffic light, determine current state” is a pretty reasonable ML pipeline.
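For illustration, here is a minimal sketch of the “determine current state” half of that pipeline, assuming some upstream detector has already localized and cropped the light housing out of the camera frame (the HSV thresholds are illustrative guesses, not production values):

```python
import cv2
import numpy as np

# Rough HSV ranges for lit signal lamps; red wraps around hue 0 in OpenCV.
HSV_RANGES = {
    "red":    [((0, 100, 100), (10, 255, 255)),
               ((170, 100, 100), (180, 255, 255))],
    "yellow": [((20, 100, 100), (35, 255, 255))],
    "green":  [((45, 100, 100), (90, 255, 255))],
}

def classify_light_state(crop_bgr: np.ndarray) -> str:
    """Return 'red', 'yellow', 'green', or 'unknown' for a cropped light housing."""
    hsv = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2HSV)
    counts = {}
    for color, ranges in HSV_RANGES.items():
        mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
        for lo, hi in ranges:
            mask |= cv2.inRange(hsv, np.array(lo), np.array(hi))
        counts[color] = int(cv2.countNonZero(mask))
    best = max(counts, key=counts.get)
    # Require a minimum number of lit pixels before committing to an answer.
    return best if counts[best] > 50 else "unknown"
```

The hard parts, as the rest of the thread points out, are finding the light in the first place and doing all of this reliably in bad conditions, not the color classification itself.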

I will admit that there was a light in my old town that switched from red-yellow-green to blinking red at maybe midnight or 1 am, and switched back around 5 or 6 am; that is, it functioned like a stop sign when there was very little traffic. I rarely drove through it in the middle of the night, but when I did, there were a couple of times when I sat there waiting for the light to turn green.

I wonder if people who were used to it the other way, you know, someone who went through it at night a lot, but rarely in the day, ever made the opposite mistake, and stopped briefly for a red light, then proceeded through, and got t-boned by someone going for the green light. I guess if they did, it was not enough times for the city to change the pattern and start having it be red-yellow-green all the time.

One thing about self-driving cars. They won’t run red lights. People do that so much in Indianapolis, I’m surprised it’s not called T-bone City. It’s so bad that at many intersections, there’s a delay before the light turns green, where it’s red both directions, but of course, people know where those are, and think it gives them even more leeway to run the red. I wish the city would get really aggressive about photographing and ticketing those people. Right now, I won’t pull out on green until I’ve looked to make sure that the oncoming cars are really slowing. About once a week someone whips out in front of me right through my green light.

All traffic lights do that. If you enter the intersection before the light turns red, you’re legal, which means that you need to allow a little time for those barely-legal cars to clear the intersection before turning green the other way.

Well, that works in the ideal case of a four-way intersection with stop lights placed in the normal positions (either at the corner of the intersection to the right of the driver or suspended over the lane), but there are many situations in which either the light placement is non-standard or the intersection is complex (multi-way, merge on/off intersection, T-K, et cetera). And, as noted above, autonomous driving cars primarily rely upon preloaded map data to determine their location and the type of intersection. In the case of road maintenance, damage, or traffic, the current methodology is to determine that something is amiss, slow to a crawl, and, if the car can’t negotiate the hazard, pull over and signal to the passenger or home base that human intervention is needed.
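In rough Python terms, that fallback policy amounts to something like this (the states and conditions are my own simplification, not any vendor’s actual logic):

```python
from enum import Enum, auto

class DriveState(Enum):
    NORMAL = auto()
    CREEP = auto()          # something is amiss: slow to a crawl
    PULL_OVER = auto()      # can't negotiate the hazard
    AWAIT_HUMAN = auto()    # signal passenger / home base

def next_state(state: DriveState, scene_matches_map: bool,
               hazard_negotiable: bool) -> DriveState:
    """Very simplified fallback policy for an autonomous vehicle."""
    if state is DriveState.NORMAL and not scene_matches_map:
        return DriveState.CREEP
    if state is DriveState.CREEP and not hazard_negotiable:
        return DriveState.PULL_OVER
    if state is DriveState.PULL_OVER:
        return DriveState.AWAIT_HUMAN   # request human intervention
    if scene_matches_map:
        return DriveState.NORMAL
    return state
```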

This is adequate for limited usage such as Google imaging, but unfortunately not for truly autonomous vehicles that can operate without any human intervention under all normal driving circumstances, and the problem is that even normal circumstances have a small percentage of abnormal conditions that require judgement. It’s not really an 80/20 problem; it’s more like a 99.5/0.5 problem: most driving requires little more than keeping the car in the lane and stopping when you see brake lights and stop signs, which takes a relatively modest effort to perform robustly. It’s the fractional percent of more complex situations that will require significant effort to create algorithms to manage, and we’re really not very close to that yet. I doubt we will have truly autonomous cars that can operate with no human input other than stating a destination within a decade, and possibly not within two. Part of the issue, too, is defining an interface that is simple enough that anyone can use it but sophisticated enough to allow complex commands like “Drive through the yard and back the tailgate up to the door”; it’s the classic smartphone problem of making the phone as “smart” as the user without being “too smart” such that the user is confounded by options.

However, once driving control and human interface algorithms are sufficiently sophisticated to perform those tasks, it is unambiguous that automated vehicles will be safer, more reliable, and more efficient. They don’t get tired, or drunk, or have to share attention with passengers. Autonomic vehicle stability and traction control systems are already a standard feature in many mid-range and better automobiles, as well as lane tracking and braking/hazard alerts, specifically because these systems are quantitatively faster in response than a human driver can possibly be. There still remains the issue of culpability in the case of even the handful of accidents which may occur; is it the responsibility of the manufacturer of the vehicle, the producer of the control software, or the regulatory authority to pay damages resulting from accidents? From a legal standpoint, that is probably the largest hurdle once the technical issues of a command and control system are perfected.

It would ultimately be desirable to replace all traffic lights and signage with intercommunication between vehicles and automated control systems such that automated vehicles no longer have to follow rules designed to account for sloppy, inattentive human drivers and can optimize driving patterns to get the greatest efficiency and traffic flow, but it will be a long time before human drivers are eliminated completely, just because of cultural and legal inertia. Until that time, the greatest technical challenge will remain getting computer-operated cars to interface with a world designed by and for hairless apes.

Not in California; many stop lights change in sync, and what is worse, it is perfectly legal to drive through a red light as long as the car entered the intersection before the light turned red, which means that if a cross driver is quick on the throttle he or she can cause an accident.

And then there is Boston. Once we have an autonomous car that can drive through Boston without getting hit or having to pull over, we’ll know that the technology is robust.

Stranger

I have two problems with that approach.

One, there is a very high development, compliance and maintenance cost in both the car and the roadway, which is offset only if the benefit is large enough. A small benefit is not cost-effective.

Two, they are only better than SOME of us, since driver skill is not a constant. Many drivers can, quite correctly, predict that they will have fewer accidents than the tolerance for error in a less-than-perfect driverless car. If an average driver has a crash probability of, say, ten, there will be many at five or two, and many at twenty or thirty. How does a driver with a crash probability of two gain from turning control over to a machine with a probability of three? If there is an override option for such drivers, drivers with a high crash probability would exercise it too, and even the rest of us would gain nothing. So there needs to be a target of near zero to make the enterprise attractive, to me at least; as it stands, my opinion is to object.
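To put rough numbers on that override argument (using the same arbitrary crash-index figures as above and a hypothetical machine index of three):

```python
# Toy model: crash "index" per driver (arbitrary units from the post above)
# versus a hypothetical driverless index of 3.
drivers = [2, 5, 10, 20, 30]
machine = 3

# Case 1: everyone rides the machine.
all_machine = machine * len(drivers)

# Case 2: override allowed and everyone self-assesses honestly, driving
# themselves only when they really are better than the machine.
honest_override = sum(min(d, machine) for d in drivers)

# Case 3: override allowed but everyone *believes* they are better than
# the machine, so nothing changes.
overconfident_override = sum(drivers)

print(all_machine, honest_override, overconfident_override)  # 15 14 67
```

In this toy model the whole benefit of an override option hinges on drivers judging their own skill accurately.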

Your whole post is very interesting. One thing to consider is that self-driving cars will be able to gather a lot of data and, with wireless links to servers, will be able to contribute to improving the maps available to other self-driving cars effectively in real time. The first car through the non-standard intersection may have trouble but it might be able to: (1) learn from the hairless apes that preceded it through the intersection, or (2) learn from its passengers.

The first way, a self-driving car might be able to infer that there is a new traffic light at an intersection because: the cars in its lane are stopped, only cars turning right are proceeding, there are also stopped cars in the opposing lane, cross traffic is proceeding without stopping, and there is something that looks like a traffic light over there. The first self-driving car to identify the light can report it to all the other cars. Therefore, very few self-driving cars will have to go through the intersection before all the self-driving cars can do so perfectly. Contrast that with people’s responses to the new traffic controls I described above.

If the self-driving car can’t learn by itself, its passengers can teach it. If you see your self-driving car blow through a stop sign, you can report it the same way that Waze users report speed traps. Users could teach self-driving cars about other difficult situations the same way. Again, that information will be quickly distributed to other self-driving cars. If the next 20 self-driving cars stop at the purported stop sign and ask their passengers to confirm whether there is a stop sign there, the road maps will quickly build a high degree of confidence that there is, in fact, a stop sign there. This knowledge building could happen much faster than people get used to new street controls.
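As a rough sketch of how that confidence building could work (the reporting scheme, names, and prior below are hypothetical, Waze-style illustration only):

```python
# Hypothetical crowd-sourced map annotation: accumulate passenger
# confirmations/denials of a purported stop sign.
from collections import defaultdict

reports = defaultdict(lambda: {"yes": 0, "no": 0})

def report(intersection_id: str, sign_confirmed: bool) -> None:
    key = "yes" if sign_confirmed else "no"
    reports[intersection_id][key] += 1

def stop_sign_confidence(intersection_id: str) -> float:
    """Crude estimate with a weak prior of one vote each way."""
    r = reports[intersection_id]
    return (r["yes"] + 1) / (r["yes"] + r["no"] + 2)

# After the next 20 cars each confirm the sign:
for _ in range(20):
    report("5th-and-main", True)
print(stop_sign_confidence("5th-and-main"))  # ~0.95
```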

I believe this user data is the reason that Google bought Waze.

And far more drivers can make the same prediction incorrectly. Everyone always says that they’re a good driver and that it’s just all those other yahoos who are the problem. Worse, you get many drivers who say that they’re so good that they can get away with idiocy like texting while driving.

As for learning new conditions, Google’s cars can do that even without there being a Google car nearby. As long as there are human drivers near the new condition with Android devices in their pockets, Google will learn about it. I suspect that that wealth of data is why Google decided to enter the smartphone market.

There’s a pretty substantial difference: the level of accuracy required, both in terms of false positives and false negatives.

If the number plate recognition software sees a sign by the side of the road and misidentifies it as a license plate, hey, no big deal. It just queries the database with some junk data, and nothing happens. Worst case, the cop gets a bogus alert that he has to deal with.

Similarly, if the ANPR software incorrectly reads a license plate, or fails to notice it entirely, it’s not fatal to anything either. You’d like the system to catch as many plates as possible in order for it to be worth anything, but if it misses 1%-10% of all license plates due to weather, lighting, angle, relative speed, etc., it’s not a big deal.

The accuracy required of a traffic control identification system in a self-driving car is much, much higher.
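To get a feel for the gap, here’s a toy calculation (the exposure figures and incident probability are assumptions for illustration, not measured data):

```python
# Toy numbers: how reliable does traffic light recognition need to be?
crossings_per_year = 40 * 365        # signalized intersections per car-year (assumed)
p_incident_given_miss = 0.1          # chance a missed/misread light causes harm (assumed)

for miss_rate in (1e-2, 1e-4, 1e-6, 1e-8):
    incidents = crossings_per_year * miss_rate * p_incident_given_miss
    print(f"miss rate {miss_rate:.0e}: ~{incidents:.2g} serious incidents per car-year")
# A 1% miss rate (tolerable for ANPR) comes out to ~15 incidents per car per year
# in this toy model; the miss rate has to be several orders of magnitude lower
# before missed lights stop dominating the risk.
```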

The facial recognition software on cameras (even expensive ones) often fails. Traffic signal recognition is a “safety of life” issue. It would have to be incredibly reliable.

There is wide variety in traffic lights. Some have three lights, some two. Some are vertical, some horizontal. Some have arrows, some do not.

There is wide variety in the positioning and visibility of traffic lights. Sometimes they are mounted almost 90 degrees above the first car in line, and you almost have to stick your head out the window to see them. Sometimes they are swinging, twisting and gyrating in high wind.

Other times a large semi truck in front of you almost totally blocks visibility to the traffic light.

Some traffic lights can be strongly backlit by the sun during sunrise/sunset. These are difficult for a human being shielding their eyes to see, much less a camera.

Sometimes the traffic light degrades to a simple blinking mode. Other times it totally fails and a policeman is there directing traffic by hand. Sometimes it fails and before the police arrive, drivers have to watch each other and mutually cooperate to clear the intersection.

It is a lot harder than it seems to perform this recognition with extreme reliability over all these conditions. For safety-of-life items like this you normally want triple redundancy, so that means three cameras and computers, all of which must somehow vote on the action or combine their inputs and outputs.
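A toy version of that voting stage might look like this (a simple two-of-three majority over per-camera classifications; real sensor fusion is far more involved):

```python
from collections import Counter

def vote(states: list[str]) -> str:
    """2-of-3 majority vote over independent per-camera classifications.
    If no majority exists, fall back to the most conservative answer."""
    winner, count = Counter(states).most_common(1)[0]
    if count >= 2:
        return winner
    return "red"   # disagreement -> assume the most restrictive state

print(vote(["green", "green", "unknown"]))  # green
print(vote(["red", "green", "yellow"]))     # red (no majority)
```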

There must also be contingency plans for a disagreement between redundant sensors/computers and procedural plans for how much advance notice the human is given. E.g., an aircraft pilot is always spring-loaded to take over if the autopilot fails. Must the car driver stay equally alert? If the driver is allowed to read a book or rest, that places an even higher bar on the autonomous system, since it might take 15-30 seconds for the human to resume control.

I don’t. I’m an experienced driver now, so I’m better than a lot of people, but I’ve never been a great driver. Whenever someone else is willing to drive, I’m all for it. When I was young, I was a terrible driver. I haven’t caused a lot of accidents because I know I’m not a great driver, so I’m very cautious.

FWIW, though, I don’t think I’m the problem. I think it’s people who take stupid chances, whether or not they are, by some objective test, good drivers. I don’t run red lights, cut people off, drive tired, or drive cars that have spiderweb cracks in the windshield.

Which is pretty much what I said. Until you can forecast what the crash rate will be with self-driving cars, you cannot say whether driver override is a good idea or not. I was responding to a poster who said it is only necessary to reduce the crash rate by even a small increment in order to demonstrate the benefit of driverless cars. That would not be the case if driverless cars crashed more often than safe, mature, responsible drivers do; those drivers would then lose the power to take responsibility for their own safety and see their own risk increase.

The two questions that my post raises are:

  1. By what factor would crashes be reduced in a driverless world, and

  2. Would there be driver over-ride?

I cannot endorse the concept of driverless, without knowing the answers to those two questions.

#1 When driverless cars become a significant fraction of the traffic on any given road, they greatly reduce the number of traffic jams. Instead of start-stop-start-stop, all the traffic can flow more smoothly, which means everyone gets where they’re going sooner and they burn less fuel doing it.

#2 Consider a scenario where you drive an hour to get to a 7 pm meeting, then two hours later you have to drive home again. By focusing your attention on the road the whole way, you arrive at the meeting feeling frazzled and tired and you finally get home exhausted at 10 pm. But if you had a driverless car and you could relax, you might even be able to make some productive use of your transit time, such as reading a book or catching up on your email.

Personally, I’d at least like to see a system where the car has manual steering controls for surface streets which disengage when you get on the freeway and the car becomes driverless until you reach your exit. That would greatly simplify the engineering challenges and still achieve what I want most from driverless cars: relief from highway hypnosis.

You are not addressing driverless cars, you are addressing the concept of mandatory driverless cars.
Plus, someone’s crash index is not independent of other drivers. Even the best drivers cannot avoid all accidents caused by bad drivers. A world where most drivers are in driverless cars will reduce the crash index of even excellent drivers.

I have to wonder about speed limits. An article I read mentioned self-driving traffic moving faster than regular traffic. But since adoption can only come in increments, it will be a long time before self-driving overtakes human driving. So a self-driving car will never be able to go over the speed limit, right? Or else how could the insurance companies sign off on them? So the s-d car is going 65 mph on the I-10, and everybody else is going 75-80, except when they come up on the s-d car clogging the freeway. You probably have some kind of magic answer for that. Whatever. How about this: the s-d car is cruising down the street at 35 mph, cuz the map says so, only it comes up on a school zone. There are a million of them. All of a sudden it’s 15 or 20 mph When Children Are Present. This is kind of an important one, and it may or may not refer to a certain time of day when school is starting or letting out. That’s a lot of information to keep you from having to slow to a crawl every time you go down Snowden Street, no matter what time of day.
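For what it’s worth, the school-zone case is mostly just more map data plus a clock. A crude sketch of what the lookup could look like (the hours, limits, and the annotation for Snowden Street below are invented for illustration):

```python
from datetime import time

# Hypothetical map annotation for one street segment.
SNOWDEN_ST = {
    "base_limit_mph": 35,
    "school_zone_limit_mph": 20,
    # "when children are present" approximated by posted school-zone hours
    "school_zone_windows": [(time(7, 30), time(9, 0)),
                            (time(14, 30), time(16, 0))],
}

def effective_limit(segment: dict, now: time, children_seen: bool) -> int:
    """Return the speed limit in force, given the clock and camera input."""
    in_window = any(start <= now <= end
                    for start, end in segment["school_zone_windows"])
    if in_window or children_seen:
        return segment["school_zone_limit_mph"]
    return segment["base_limit_mph"]

print(effective_limit(SNOWDEN_ST, time(8, 15), children_seen=False))  # 20
print(effective_limit(SNOWDEN_ST, time(12, 0), children_seen=False))  # 35
```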