How intuitive are self-driving cars?

Can self-driving cars read and interpret signs? Suppose a piece of trash were obscuring part of the lettering, or a piece of paper turned “DO NOT ENTER” into “DO NOT FVTER”. Would the car still understand the sign?

What about conditional signs, such as “Speed Limit 20 when lights are flashing”? Or are speed limits pre-loaded into the computer?

What about “Road Closed”? Are they intuitive enough to find an alternate route?

“Do not enter 4pm to 7pm”. Do they know where the time zone boundaries are located?

When faced with a solid green light, the car is smart enough to wait for oncoming traffic to clear before making a left turn. But suppose there were a printed sign that said, “Left turn on green arrow only”. Would it wait for a left arrow? And suppose the green arrow was pointing to the right (alongside the solid green light). It is, after all, a green arrow, just as the sign says.

Unless things have changed fairly recently, most stuff like that is pre-programmed into the cars. I think even the approximate locations of the traffic lights are pre-programmed so that the car knows where to look for the signals.

Finding alternate routes is fairly easy. After all, MapQuest, Google Maps, and GPS navigation systems have been providing alternate routes for quite some time now. Dynamic pathfinding algorithms are pretty well developed at this point.
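
As a rough illustration, the kind of rerouting that mapping software does can be sketched with Dijkstra's algorithm on a toy road graph. The node names and edge costs here are made up; a closed road is just a graph with those edges removed:

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest-path search over a weighted graph given as
    {node: [(neighbor, cost), ...]}. Returns (total_cost, path)."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Toy road network.
roads = {
    "A": [("B", 1), ("C", 4)],
    "B": [("D", 1)],
    "C": [("D", 1)],
    "D": [],
}
print(dijkstra(roads, "A", "D"))  # → (2, ['A', 'B', 'D'])

# Simulate a road closure at B by dropping its edges, then re-route.
closed = {k: [(n, w) for n, w in v if n != "B"] for k, v in roads.items()}
print(dijkstra(closed, "A", "D"))  # → (5, ['A', 'C', 'D'])
```

Real routers use fancier variants (A*, contraction hierarchies) over enormous graphs, but the closure-handling idea is the same: update the graph, re-run the search.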

If MS Word can auto-correct typos, then your example is trivially easy. But even if the car can’t make out the words on the sign, there’s enough other information there to know what the sign is and means. Coincidentally, I recently watched this TED talk in which the presenter demonstrates software that can recognize and track a variety of objects in its field of view in real time. It can tell the difference between a dog and a cat, even when those animals are seen in a wide variety of poses. Fast forward in the video to 4:30 to get to the good stuff. If it can recognize the sorts of things seen there, then it’s fairly easy to get it to recognize standard road signs, even when they are damaged or partially obscured. In short, any mostly flat object that is roughly octagonal, red with a white border and white text in the middle, and facing more or less toward the car will be regarded as a stop sign that needs to be stopped for.
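
To make the “partially obscured sign” point concrete, here’s a toy scoring heuristic. The feature names are hypothetical, and a real system would use a trained classifier rather than hand-written rules; the point is just that several independent cues (shape, color, partial text) can add up to a confident match even when one of them is damaged:

```python
def looks_like_stop_sign(features):
    """Toy confidence score built from hypothetical detector outputs.
    Each cue contributes independently, so a damaged or partially
    obscured sign can still clear the threshold."""
    score = 0.0
    if features.get("shape") == "octagon":
        score += 0.4
    if features.get("dominant_color") == "red":
        score += 0.3
    if features.get("border_color") == "white":
        score += 0.1
    # Even partial text helps: "ST_P" still matches most of "STOP".
    text = features.get("text", "")
    if text:
        matches = sum(1 for a, b in zip(text, "STOP") if a == b)
        score += 0.2 * matches / 4
    return score

# A sign with trash covering one letter still scores high:
damaged = {"shape": "octagon", "dominant_color": "red",
           "border_color": "white", "text": "ST_P"}
print(looks_like_stop_sign(damaged))  # → 0.95
```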

Scheduled road construction projects usually post “DETOUR” signs for drivers to follow around the closure; self-driving cars would certainly be able to do the same. In the event that such signs aren’t present, common mapping software can plot new routes easily enough. Vehicle interconnectivity can also help with this: just as existing mapping/routing software uses traffic data provided (in part) by other smartphone users, vehicle interconnectivity could allow other autonomous cars to report “hey, Xth Avenue is closed between Y and Z Streets,” and other cars could incorporate that info into their routing decisions.

Time zone boundaries are well established; as long as the car has GPS, it knows exactly where it is relative to any given boundary.

ISTM that such signs are redundant, i.e. you could remove them and there still would be enough information to know that you’re supposed to wait for a left-pointing green arrow. A solid green always means you can turn left when clear but must yield to oncoming traffic, but if you’re only supposed to turn left on a green arrow, then your light will never display a solid green; it will either display solid red (or possibly a red arrow), or a green arrow.

If computers are so good at identifying objects like that, then why are the traffic lights so poorly programmed? The light program should be able to look down all approaches and identify all traffic and determine how fast it’s going. Then it can dynamically change the timing of the light so that waits at the stop lights are minimal or even non-existent. If necessary, it could even eliminate the yellow light, if there’s no traffic that would need to see it. In practice, traffic lights don’t even notice I’m at the stop light when I’m on my bicycle.

That sort of machine vision/object recognition is a fairly recent development, so it’s not surprising that you don’t see it out in the field just yet.

Also, if the existing signal systems do an “adequate” job of managing traffic flow, it gets hard for the city council to justify the greater expense of new-fangled traffic signals. There are a lot of other things to spend money on besides traffic signals - roads, schools, police, fire, etc.

Lately, for the (human) dum-dums, some intersections have a blinking yellow left arrow for an unprotected left turn after the protected left turn has expired.

I’m not sure signs do any good anyway.* Some years ago I told how, at an intersection with the then-new “Left Turn Yield on <green ball>” signs (with no protected turn), when the light changed, a kid coming the other way tried to make a turn, stopped short and honked, shaking his fist at me, then proceeded to drive into the path of the car following me. I hung around to be a witness and watched him argue with the other driver and me, then the cop, about what the sign meant.

  * Other than specific informational ones like No Left Turn 3pm to 6pm.

It’s much easier for the car to determine one situation than it is to program an entire city to cascade changes. Remember that one car coming in from a side street requires lights on the main street to change for many blocks in both directions. And that affects the side streets for all those blocks as well as major cross streets, which affect all of their streets, and so on. It’s a bit like the three-body problem in physics: it can’t be fully solved, and a long series of approximations must be calculated to get the best solution. But cities don’t have computers with that much speed and capacity, let alone the cameras and/or sensors placed everywhere.

All big cities have some level of programming for their lights. But some of them put them in back in the 1990s. How often will they get replaced? Where will the money come from? Is the extra time gained for individual cars worth the investment? Should they do it at all or wait for self-driving cars?

I know this sounds like “It’s a good idea, they should just do it.” But in the real world, it’s horrendously complicated and expensive.

So what was the kid’s mistaken interpretation of the sign? That it meant that everyone else was supposed to yield to the car in his position?

I assumed the question was more about adaptive reasoning than about the ability to determine whether a defined school zone is active or left turns are allowed during certain hours. I’d be more concerned about situations where traffic patterns are altered from the defined norms, such as when freeway construction routes traffic into a single lane on the opposite side, or a traffic accident has a human directing flow with hand signals. I suppose the default would be to pull over and ask for help, but these are important corner cases.

Regarding traffic lights and lack of intelligence (or money to implement intelligence), I’ve always held the opinion that the government should do their part to improve fuel economy. It’s easy to place the burden on auto manufacturers and consumers, but every time I have to stop and restart at a light with no cross traffic, fuel is wasted.

The OP’s stuff is easy compared to what some folks really need to do with a car. Good luck getting up my driveway. Or plowing it after 18" of snow. Not gonna happen soon. Not in my life anyway.

So don’t have traffic lights connected like that. Just have each one doing optimization for its own intersection. It’d probably work better than what we have now.
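
A single-intersection controller like that can be sketched in a few lines. This is a deliberately simple greedy rule, not how real actuated signals are programmed, but it shows the idea: serve whoever is waiting, subject to a minimum green time:

```python
def next_phase(queues, current, elapsed, min_green=10):
    """Greedy single-intersection controller: hold the current green
    for at least `min_green` seconds, then switch to whichever
    approach has the longest queue. `queues` maps approach name to
    waiting-vehicle count (from cameras or loop detectors)."""
    if elapsed < min_green:
        return current
    busiest = max(queues, key=queues.get)
    return busiest if queues[busiest] > queues.get(current, 0) else current

# A lone cyclist on the side street gets served once the minimum
# green has elapsed and the main street is empty:
print(next_phase({"main": 0, "side": 1}, "main", elapsed=12))  # → 'side'
print(next_phase({"main": 5, "side": 1}, "main", elapsed=12))  # → 'main'
```

Notice this needs no citywide coordination at all; each controller only reads its own detectors, which is exactly the point of the post above.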

You have a point about the replacement rate and expense. But unless the self-driving cars are going to be talking to the traffic lights, I don’t see a reason to wait for them. Optimizing lights for human driving will benefit the self-driving ones too.