How might an autonomous vehicle handle this driving situation?

Forget running over Girl Scouts to save the occupant of the vehicle. Here’s a driving situation I encounter fairly frequently.

Assume there’s a two-lane (each direction) road in a metro area. The right-hand (curbside) lane is open to parking from 7:00 PM to 7:00 AM because there are restaurants and shops nearby. When I approach this area, I know to merge into the left lane well in advance when it’s evening, because the right lane will undoubtedly be blocked by parked cars. If I don’t, I will have to come to a complete stop, put on my left turn signal, and wait for a kind soul to slow down or flash headlights to allow me to merge left.

I can understand that autonomous vehicles will attempt to coordinate this type of move, which is basically a merge, but a mixture of autonomous and human-operated vehicles would complicate the maneuver. I normally rely on some pretty subtle signals (a car in the left lane slows down fractionally to allow me in, or flashes high-beams to signal I can merge). In some cases, I might just jam my car into a little space between the cars and make the traffic let me merge. If I don’t, I’ll sit there for a long time and miss the opening curtain for “Spamalot.”

What might an autonomous vehicle do in this situation? It will obviously only attempt to move to the left lane if it considers the move to be “safe.” I can guarantee that it would take a long time for the left merge to be “totally safe.” The exact same thing happens with an accident on the highway. The vehicle has to shift lanes from a dead stop as permitted by other vehicles operated by meat.

If we’re assuming Vehicle to Vehicle communication systems, the car would just negotiate an available opening with all the other cars in the area.
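A toy sketch of what that negotiation might look like (the message names and fields are invented for illustration; real V2V stacks such as DSRC/C-V2X define their own message sets):

```python
# Hypothetical V2V merge negotiation: the blocked car broadcasts a request,
# left-lane cars reply with the time gap they're willing to open, and the
# merging car accepts the smallest adequate offer. All names are invented.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MergeRequest:          # broadcast by the car that needs to merge
    vehicle_id: str
    lane_from: int
    lane_to: int

@dataclass
class GapOffer:              # reply from a car willing to open a gap
    vehicle_id: str
    gap_seconds: float       # time gap the offering car will open

def pick_offer(offers: List[GapOffer], min_gap_s: float) -> Optional[GapOffer]:
    """Accept the smallest gap that is still adequate, so the fleet gives
    up as little flow as possible; return None if nobody offered enough."""
    usable = [o for o in offers if o.gap_seconds >= min_gap_s]
    return min(usable, key=lambda o: o.gap_seconds) if usable else None
```

With no adequate offer the car simply keeps waiting (and presumably re-broadcasting), which is exactly the human-driver fallback described above.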

Absent that, I imagine it’d turn on its blinker, plot all the oncoming vehicles’ trajectories, and apply a parameter for the minimum acceptable reaction times and braking deceleration rates of the human drivers it would merge in front of (possibly relaxed based on how long it’s been waiting at a standstill). It would start rolling the instant it recognized a likely opening (to clearly signal its intent to merge), then apply maximum acceleration to complete the merge. The situation you describe (a car in the left lane slowing down fractionally) would work well. Things like flashing high beams and hand signals would probably be less useful to the car, given both the complexity of optical recognition and the inherent unreliability of such signals as predictors of human driving behavior.
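As a rough illustration of that gap-acceptance idea, here’s a sketch (the constants and the relaxation schedule are invented for the example, not taken from any real planner): the required gap is built from an assumed human reaction time and braking decel, and shrinks modestly the longer the car has been stuck, but never below a hard floor.

```python
# Invented parameters for illustration only.
REACTION_TIME_S = 1.5      # assumed worst-case human reaction time
BRAKE_DECEL_MPS2 = 3.0     # assumed comfortable braking decel for the other car

def required_gap_s(closing_speed_mps: float, wait_time_s: float) -> float:
    """Time gap the oncoming car needs to react and shed the closing speed
    with gentle braking."""
    base = REACTION_TIME_S + closing_speed_mps / BRAKE_DECEL_MPS2
    # Relax the threshold by up to 30% after two minutes of waiting...
    relax = min(wait_time_s / 120.0, 1.0) * 0.30
    # ...but never below the hard floor of the reaction time itself.
    return max(base * (1.0 - relax), REACTION_TIME_S)

def accept_gap(gap_s: float, closing_speed_mps: float, wait_time_s: float) -> bool:
    return gap_s >= required_gap_s(closing_speed_mps, wait_time_s)
```

With these numbers, a 4-second gap at a 10 m/s closing speed is rejected by a freshly stopped car but accepted after two minutes of waiting, which is the "getting gradually more assertive" behavior described above.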

Install a human-like rubber arm in the B-pillar of the automated vehicle. When facing the situation described in the OP, program the vehicle to simply extend the arm and merge like you own the road. Traffic will adjust accordingly. Oh, and if honking ensues, acknowledge the objections by extending the arm’s middle finger.

Performance so far has shown that, in pretty much any situation where autonomously driven cars have to interact with human-driven ones, the result is “poor” and it’s the human driver’s fault. The way to program the car for this situation is to treat the lane as unavailable from 7 PM to 7 AM, so it will signal and prepare to merge left at an appropriate time. Asshole drivers will just plow right on past it and ignore the turn signal, and it’ll be stuck there, waiting.

There is a significant algorithm problem here. There is a nonzero chance that a distracted human driver will fail to brake and plow into the back of our autonomous car.

If the autonomous car has enough experience (since its experience is actually the accumulated data from millions of other autonomous cars), it will even know the probability of that happening.

I know that if the lane it is trying to merge into is at highway speeds (i.e., a fatal or severe-injury collision is possible if the human fails to brake), the autonomous car is going to wait as long as it takes for an opening, whether that’s two minutes or all week.

At slower speeds, since the risk to its cargo and other drivers is low, it’s going to signal and wait for an acceptably risky chance to make the merge.
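That speed-tiered policy could be sketched like this (the severity cutoff and risk numbers are made up for illustration): above a severity cutoff, only a guaranteed-safe gap will do; below it, a small, bounded amount of residual risk is acceptable.

```python
# Invented severity cutoff: roughly the speed where a rear-end collision
# becomes likely to cause severe injury.
SEVERE_INJURY_SPEED_MPS = 20.0   # ~45 mph; assumed for the example

def max_acceptable_risk(lane_speed_mps: float) -> float:
    """Probability of forcing a hard-brake event that the planner will
    tolerate when deciding whether to take a merge opening."""
    if lane_speed_mps >= SEVERE_INJURY_SPEED_MPS:
        return 0.0               # highway speeds: wait for a certain opening
    # Lower speeds: tolerate more residual risk as the lane speed drops.
    return 0.01 * (1.0 - lane_speed_mps / SEVERE_INJURY_SPEED_MPS)
```

The "wait all week" behavior falls out of the zero-tolerance branch; the city-street merge falls out of the nonzero one.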

Here’s an example of an asshole bus driver doing this very thing.

Ultimately, autonomous vehicles (AVs) will have to drive about as aggressively as the human-driven vehicles (HDVs) around them. If not, like a timid old lady in aggressive traffic, they’ll be both trapped in situations like the OP’s, and be triggering massive anger and extra-aggressive retaliatory driving in the HDVs around them.

IOW, by trying to be extra safe and extra courteous and extra law abiding, they in fact are creating a less safe and less courteous total environment than they would with a more aggressive behavior profile.

So the end state IMO is not that AVs are perfect law-abiding machines. They’ll all in effect have that bumper sticker: “Caution: I drive like you do.” As traffic thins out or calms down on any given drive the car will revert to a more mild mannered persona.

As well, in the early days when 99.99% of cars are HDVs, the AVs will drive like you do. In 10 years, when it’s more like 70% AV / 30% HDV in some areas, the AVs there will drive much more cooperatively and calmly, and in so doing will all but force the HDVs around them to behave the same way, as best a mere human can. Do try to keep up, dearie. There, that’s a good meatbag.

I got into a discussion with a General Aviation pilot about what GA will be like when GA airplanes are similarly autonomous and able to fly without input from the passengers.

We came to the conclusion that it wouldn’t be any fun any more at all.

Unfortunately, you’re talking about a problem common in control theory. The simplest PI controller has this problem as well: the controller solves for a control signal output that will change the state of the system to minimize error, but it can’t predict the oscillations of the system caused by its own outputs.

In this case, the SDC uses neural-net policy controllers that aren’t this simplistic and aren’t subject to this particular problem…for things like regulating the SDC’s position and velocity. But the SDC will not be able to predict “road anger” in irrational humans caused by the SDC doing its best to protect its own passengers.

There are some risk parameters baked right into the algorithm, and it would be possible to tweak a few knobs to make the SDC drive much more aggressively, though. Just like tuning a PID so it won’t oscillate.
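For the curious, here’s a minimal PI-controller demo of that oscillation point, with invented gains and an invented first-order plant: crank the integral gain and the closed loop rings badly; back it off and the response settles without overshoot.

```python
# Discrete PI controller driving a first-order plant toward setpoint 1.0.
# Gains and plant dynamics are made up purely to demonstrate ringing.
def simulate(kp: float, ki: float, steps: int = 200, dt: float = 0.05) -> list:
    """Return the trajectory of plant state x under PI control."""
    x, integral, traj = 0.0, 0.0, []
    for _ in range(steps):
        error = 1.0 - x
        integral += error * dt
        u = kp * error + ki * integral      # PI control law
        x += (u - x) * dt                   # first-order plant: dx/dt = u - x
        traj.append(x)
    return traj

def overshoot(traj: list) -> float:
    """How far the response peaks above the setpoint (negative = no overshoot)."""
    return max(traj) - 1.0
```

With `ki = 20` the response overshoots the setpoint by well over 20% and rings; with `ki = 0.5` it creeps up monotonically. Tuning those knobs is the same kind of trade-off as tuning an SDC’s aggressiveness.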

Interesting input and thoughts. Thanks, everyone! Last time I drove this specific part of the route to downtown, I found myself stuck for several minutes and two light changes at the corner just ahead. Another driver finally took pity on me and paused their car, leaving just a small space (less than 10’), to signal that I could merge when the light changed. This was so subtle that it got me thinking.

Agreed in general that feedback is a potential problem. Just as it is with human drivers. Since I moved from the Midwest to Miami my own driving has certainly “adapted” to the local conditions.

That feedback problem might be insurmountable if the AVs can’t communicate with each other. IOW, absent intervehicle commo, we could get the equivalent of the dueling pricing bots on eBay. If they can communicate and can each say “I’m trying to be as nice as I can,” they’ll naturally de-escalate any positive feedback loop involving only AVs. And they’ll probably have the same effect when they’re in the substantial majority, even though the HDVs aren’t in on the communications.
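A toy model of that de-escalation argument (the update rules are invented for illustration): without communication each AV defensively matches the most aggressive behavior it observes and the whole fleet ratchets upward; with a shared “I’m being as nice as I can” signal, everyone converges down to the gentlest driver.

```python
# levels: each AV's aggressiveness on an arbitrary scale. Update rules are
# made up to illustrate the feedback-loop argument, nothing more.
def step(levels: list, communicating: bool) -> list:
    if communicating:
        floor = min(levels)                 # everyone trusts the nicest car
        return [l + 0.5 * (floor - l) for l in levels]
    peak = max(levels)                      # defensively match the worst car,
    return [l + 0.1 * (peak - l) + 0.02 for l in levels]   # plus a little extra

def run(levels: list, communicating: bool, steps: int = 50) -> list:
    for _ in range(steps):
        levels = step(levels, communicating)
    return levels
```

Starting from mixed aggressiveness, the non-communicating fleet escalates past its most aggressive member, while the communicating fleet settles at its least aggressive one: the dueling-pricing-bots dynamic versus the de-escalated one.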

As ZonexandScout says just above, drivers now do a lot of subtle communicating with vehicle “body language” above and beyond brake lights and turn signals. Plus of course meat body language like eye contact and arm waves. Not to mention various hand gestures. :slight_smile:

We may find we need to add some extra signaling capability to AVs so they can do the equivalent of waving somebody ahead. As well as add features to the machine learning to observe, recognize, and account for the vehicle “body language” of HDVs.

IMO an AV driving in a world of 100% AVs will be trivial compared to an AV driving in a world of 95% HDVs. It’s a shame we have to go through the hard part to get to the easy part.

They’ll still get the pretty sightseeing. But they won’t get the “fun” of operating the machine.

My bet is, assuming price holds steady, the number of additional people who’d like to try the view is bigger than the number who’ll stop 'cuz they can’t have fun operating it.

For sure the two groups don’t have much overlap.
If you assume instead that new-tech autonomous electrical flying machines cost 1/10th what a C-172 does to buy and/or to operate, then all bets are off. The volume will grow like mad. And most of those will be people who don’t care about the operating fun since they never experienced it. They won’t know enough to decide if it would’ve been important to them or not.

I’ll throw in another data point here. Your statement is essentially that your driving is facilitated by what you know about the road rules and traffic patterns in the area. I would submit that the reason that a company like Google is so interested in autonomous cars is that autonomous driving constitutes what is, in fact, substantially a knowledge-intensive activity, which is a perfect fit for their business model. Specifically, it’s an activity that’s going to be heavily dependent on dynamic maps that are rich in up-to-the-minute information content. IOW, autonomous cars may perform even better than you do because they’ll have even better knowledge, they can be attuned to all the same cues, and they’ll be able to do it not just on Main Street of your home town but in every town.

It’s inevitable that they’re going to have to have this information base especially because local traffic laws are becoming increasingly complex in large urban areas in the interest of congestion control. And guess who’s incredibly well positioned to make potfuls of money providing those information-rich maps?

I have to cringe at the idea of mixing autonomous and human drivers, because of human error. Even birds and bees, as well as stampeding animals, can swarm without collision; but humans, lacking common protocols and often unwilling or unable to follow the rules of the road, seem relatively inept at it when guiding vehicles.
The benefit of a system of exclusively autonomous vehicles is that they would be in constant communication with each other and centrally coordinated along their routes by a central server, with each vehicle electronically knowing every other’s exact location, heading, and “intentions.” Human drivers, by contrast, are more than half-blind, and almost all accidents are caused by human error: blind spots, traffic-law violations, miscommunications, etc.
So my answer would be: make all vehicles self-driving, or at least require an override to prevent human error. I’m not worried so much about a HAL 9000 scenario as I am about the average motorist who *currently* endangers my life on a daily basis.
Also, it would be a lot safer to drive without doing so in a glass house.