Can human drivers "back up" AI drivers?

I hadn’t ridden a bike for maybe 20 years when we went on a bicycle trip down a steep hill in Alaska. I had no trouble with it after the first 20 seconds or so.

If people had to be ready to back up their cars at any moment, no one would be buying self-driving cars. The value for most would be the ability to do something useful while driving to work without putting other people at risk. If I had to be alert all the time I’d just drive the car, since that would help me stay alert.
We have AI backing up humans already in collision avoidance systems, and we’ll get more. But if I can’t read while driving down I-80 in the middle of Wyoming, my self-driving car is useless to me.

Did they do that experiment after the actual accident happened?

I keep expecting to hear an uproar when people figure out that an AI-driven car will be programmed to follow laws, including sticking to the speed limit. Between cities, at least around here, traffic seldom moves at the speed limit. I can visualize the early car reviews now, as the first self-driven cars get passed by everyone else on the road and an hour+ gets added to the drive time. Personally, I’d gladly accept a longer drive time if I could sleep or read during the drive, but I think a lot of people will be POed.

Oh yeah. It had to be, in order to know how to program the flight simulator.

I found where I saw this: a YouTube video (cued up to the proper time).

TL;DW - The test pilots have alarms going off all over the place. They know their first priority is to keep the plane flying. Since they have little to no reliable instrumentation, they set the plane into a very particular configuration of engine power, elevators and so on. With that set, the plane should keep flying with little trouble. They can then start trying to figure out what the hell is happening.

That did not happen on Air France 447. The junior pilot aboard the plane had pulled back on the control stick, keeping the plane in a nose-up stall. The plane just fell out of the sky.

Due to how the Airbus cockpit is configured (differently than Boeing’s), there was no visual cue to the other pilot that the other guy had the stick all the way back. Moments before they crashed, the junior pilot asked why this was happening when he had the stick back. The other pilot clued in to what was happening at that moment and took control, but it was too late to recover the plane.

OK, thanks for answering. Yep, I read about the technical problems that led to the crash. Don’t you think that everyone in aviation became really alert to this problem after the event happened? So wasn’t any later test kind of flawed?

Sure.

People bitch about regulations but one of the cool things is we have been relentless in discovering why an accident happened and then worked diligently to make sure it didn’t happen again.

The investigators go to incredible lengths to determine what happened. Kind of remarkable.

The end result is safer planes which benefits everyone.

Aviation will change for the better because of the flight 447 accident.

To the OP’s point, it is a case where automation may have been an indirect cause of the accident.

But that’s the problem. With a minimal amount of practice anybody can drive a car down a city street at a sedate 35 mph; so can the AI. But what you’re asking for is something beyond that. Using the Uber incident as an example: determining whether there is enough room to brake safely or whether it is better to try to dodge the sudden pedestrian. And this doesn’t even consider the question of how long it would take to recognize the hazard in the first place, how long it would take to look up from your book or even break your reverie and take action, especially when the hazard was caused by the AI not recognizing it and thus not doing anything it hadn’t been doing for the last half hour.

Finally, as you said earlier, you have no trouble driving only a couple of times a year when you rent a car. Now imagine you don’t have years of driving practice, but only a few weeks, and it hasn’t been several months since you last had a wheel in your hand, but ten years. Do you really think you would still have the mad skillz to drive yourself out of a jam the AI got you into?

I have very bad night vision and would have surely hit the woman if driving alone. (In fact, I never drive at night anymore without someone in the “shotgun” seat under orders to watch the road for me).

But shouldn’t the machine have had an advantage here? Even without infra-red sensors, shouldn’t dim motion have been perceived, and speed therefore reduced?

ISTM the human is not an adequate emergency backup for the AI when it comes to things like sudden evasive-maneuver decisions, as it becomes a split-second thing, and by the time the human notices the AI is not doing what *he* would do, he has lost additional tenths of a second. What IS more likely is, as Whack-a-Mole pointed out, a category of dual-control-input vehicle with a “manual mode”(*) that is switched to for, like, going offroad or parking in private nonmapped spaces, and automatically invoked as a “restart in Safe Mode” limp-to-safety option if the sensors get damaged or some other part of the automation is well and truly on the fritz.

(*In reality still an electronic-control mode, just responding to a “dumb” controller in the human’s hand, rather than to the AI)

Sort of like what happens already with a manual shift transmission, for many American drivers.

I never said you could outdrive the AI, and I have agreed with everyone else that being a backup to the AI for emergencies is a bad idea and would not work.

You are likely incorrect, both that self-driven cars will follow all traffic laws and that people would be outraged.

AI driven cars will be programmed to fit safely in the flow of traffic. Very early on in the development of AI cars, I remember a story about how they had to tweak the algorithm to be more aggressive at 4-way stops. Because the AI was trying to wait until all other cars had fully stopped before proceeding. That’s what the law says, but that’s not how humans drive. Humans start moving slowly, look to see if someone else is going, and react.
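
Here’s a toy sketch of the kind of tweak I mean; the 0.5 m/s “effectively stopped” cutoff, the function name, and everything else here are made up for illustration, not anything from an actual self-driving stack:

```python
# Toy illustration only: a made-up decision rule, not real autonomous-vehicle code.

CREEP_SPEED_MPS = 0.5  # hypothetical threshold for "effectively stopped"


def may_proceed(other_car_speeds_mps, strict=False):
    """Decide whether to start rolling into a 4-way stop intersection."""
    if strict:
        # Literal legal reading: every other car must be fully stopped.
        return all(v == 0.0 for v in other_car_speeds_mps)
    # Human-like reading: treat slow-rolling cars as stopped and start creeping,
    # staying ready to yield if one of them actually accelerates.
    return all(v <= CREEP_SPEED_MPS for v in other_car_speeds_mps)


print(may_proceed([0.3, 0.0], strict=True))  # False: someone is still creeping
print(may_proceed([0.3, 0.0]))               # True: everyone is effectively stopped
```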

The Uber that hit the woman was driving over the speed limit when the collision occurred. So they aren’t using speed limits as hard limits either.

And the reason people won’t care is exactly the same as the reason you don’t mind: taking a bit longer to get somewhere by car doesn’t matter as much when you can do other stuff in transit. Your reaction is the common one, not the outlier.

The other saving is that freeway drivers often aren’t zooming along at 5 mph over the speed limit. Often they’re stuck in bumper to bumper traffic. Eliminate that through efficient packing of cars onto freeways and you’ve eliminated 90% of the hassles of commuting. Even if you have to go 65 instead of 70.

That’s already happening.

My daughter and son in law live and work in significantly populated areas and at 30 don’t drive at all.

Lyft, etc., plus public transit and delivery options are enough to make car ownership costs and logistics unattractive.

I’ve seen two separate posts on a Tesla forum from people who have gotten into an accident (rear-ending another vehicle) while driving on autopilot. Both described the last thoughts before impact as being something to the effect of “Is autopilot going to handle this or not?”

They obviously noticed something was wrong with their approach speed ahead of time, but whereas a normal person (without self-driving vehicle functionality) would immediately use their foot to brake, their first thought was to hesitate, unsure of whether or not the autopilot had recognized the issue and was going to brake for them. That split second was all it took.

That Uber accident wasn’t about reaction time. I suspect that this case shares something in common with the Tesla self-driving accident - they were both failures of pattern recognition by the AI.

In the Tesla case, the car hit a white van silhouetted against a light overcast sky. My guess is that the AI simply couldn’t recognize the van as a van in that situation. Humans are still better at generalizing and figuring out what something is when lacking a lot of visual cues.

In the case of the woman crossing the street, I’ve heard that her bicycle was festooned with shopping bags. A person walking a bike covered in shopping bags may have simply confused the AI to the point where it just decided there was nothing worth worrying about. It also happened at night, so the AI would have had a shorter amount of time to figure things out once she appeared in the headlights.

Pattern recognition will get better, but until we have AI capable of general intelligence and with the ability to use pattern recognition along with context and judgement, it’s still going to make mistakes.

As for a human driver just taking over in a split second - not a chance. It takes a long time for someone to develop situational awareness if they haven’t been paying attention all the way - and it’s nearly impossible for people to pay attention constantly for hours if they aren’t taking active part. This is a human factors issue that’s been worked on for decades in aviation and on assembly lines and other places where humans supervise and sometimes have to override a process.

Airplanes are actually an easier environment for handing control from an AI (autopilot) to a human, because rarely do such situations happen in the air where the difference between life and death is measured in milliseconds. A typical situational awareness problem in the air might be a guy flying along on autopilot with GPS, and the plane suffers a catastrophic electrical failure and the panel goes blank. Suddenly you have a pilot who isn’t sure where he is, what the mechanical and fuel situation in the airplane might be, etc. It takes time for that person to get back up to speed.

Now imagine you are sitting in your car, trying to pay attention, but after two hours you find yourself thinking about a problem at work and not really paying close attention to what’s going on. Suddenly an alarm goes off and the car says, “YOU HAVE CONTROL!” Uh, what? What’s going on? Am I going to hit something? Is the road ending? Did the car’s computer fail? Where am I? What are the road conditions like? Can I brake heavily?

When we’re driving, we’re internalizing a lot of this data and it’s floating around in our brains helping us to make decisions. If you aren’t already engaged, it takes time to get your head into the game. And in a situation where the AI suddenly can’t deal with an impending crisis, you don’t have that kind of time.

If the plan is to have humans take over when the AI gets confused, it’s going to lead to a lot of accidents.

No they won’t. Not if the flow of traffic is 20 km/h faster than the posted limit. Which is pretty common around here.

You just touched on the next problem coming for self-driving cars - what happens when other humans start gaming them. Driving is partly a social activity - we behave the way we do in part because we don’t want to incur the wrath of other drivers, or have them think less of us.

Take that 4-way stop: That’s essentially a social negotiation. Why do people dutifully wait their turn, instead of trying to jump the order for their own benefit? One reason is because of the fear that another driver will do the same and won’t see you and there will be a collision. But another is that you don’t want to look like an ass.

What happens when we add autonomous cars to the mix? If someone sees an autonomous car driving up to a 4-way, all those reasons for behaving reasonably go out the window. There’s no one in it, so no social pressure. You know it will stop for you if you run the intersection, so no fear either.

The same goes for cutting off cars in traffic. You need to make that next exit, but there is a car beside you. If you know that car has sensors that will spot you, and the car WILL move out of your way if you cut over, and no one will chase you down and open a can of road rage on you, it’s going to be tempting for a lot of people to just go ahead and force that other car to maneuver out of the way.

This is the next level of complexity self-driving cars still have to meet - what happens when enough of them are on the road that there start to be systemic effects, such as other drivers modifying their own driving habits to take advantage. We have absolutely no idea how that’s going to shake out.

It was going a big 3 mph over the speed limit. That may have just been within the hysteresis of the car’s speed-limit range. Control systems don’t hold exact values - they have hysteresis. If your furnace always turned on at 20 degrees and always shut off at 20.1 degrees, it would be clicking on and off constantly. So instead, it waits until maybe 19 degrees before turning on, and 21 degrees before turning off. Adding that little buffer makes the system better. And so it will be with self-driving cars. The first thing I saw mentioned after the accident was that Uber felt 35 mph was the maximum safe speed for the technology, probably plus or minus a few.
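
In code, that furnace logic looks something like this (a minimal sketch; the 19/21-degree thresholds are just my illustrative numbers, not any real thermostat or vehicle spec):

```python
# Minimal sketch of thermostat-style hysteresis (a deadband around the setpoint).
# The 19/21-degree thresholds are illustrative, not from any real product.

def update_furnace(temp_c, furnace_on):
    """Return the new furnace state for the current temperature reading."""
    if temp_c < 19.0:
        return True       # cold enough: turn (or keep) the furnace on
    if temp_c > 21.0:
        return False      # warm enough: turn (or keep) the furnace off
    return furnace_on     # inside the deadband: leave the state alone


state = False
for reading in [20.5, 19.5, 18.9, 19.5, 20.5, 21.2, 20.4]:
    state = update_furnace(reading, state)
    print(f"{reading:4.1f} C -> furnace {'ON' if state else 'OFF'}")
```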

I think you are generalizing your own preferences to everyone. Not everyone has two hour commutes or sits in stop and go traffic all day. I live in a big city, and the longest commute I ever had was about 45 minutes. And the average was probably more like 15. I’m not going to be getting out a book for a 15 minute drive. And I rather like driving. For example, my wife and I shared a commute, and I HATE being the passenger. It’s much more rewarding for me to drive, because otherwise I’m bored. And I can’t read in a car without getting motion sick.

As for always taking an Uber - when you have a 15 minute commute, an Uber makes no sense. And I’m almost always stopping on the way home to pick up milk, or picking up my kid from school or the bus station, etc. A big part of the use of my car is to simply run out and get some milk from the local store before supper or similar activities, for which Uber would be crazy.

I suspect that most of the real evangelists for self-driving cars are people in Silicon Valley or LA or New York, where cars are generally a pain in the ass, commutes are horrible, there’s very little ‘car culture’, and population density is such that cabs and ubers are the most common way of getting around.

Don’t forget that large swaths of the country are not like that at all. People love their cars, use them for all sorts of things, and don’t mind driving. It’s going to take a lot to get them to give all that up. I’m not sure safety would even do it, as most people seem to have already discounted or internalized the risk, and it’s going to take a very long time before they feel that driving themselves is substantially riskier than letting a machine do it.

Ahhh, LA, where the car culture is so tiny that they made that famous song about it, “Nobody Walks in L.A.”.

I live in Boise, Idaho. Traffic isn’t that bad, and I have a ten-minute commute, but it doesn’t change the fact that driving is dumb. Half the time I zone out while I’m doing it, it’s so boring. (Fortunately I have a self-driving subconscious - though it has taken me to the wrong place once or twice.)

I would get a self-driving car in a heartbeat, presuming that that heartbeat was reasonably affordable.

Well, as I can neither sleep nor read in a moving car, a self-driving car won’t help me there. But I also live in a place where there can be extreme weather and difficult conditions. Like, I don’t see a self-driving car handling a trip up to a ski area or the trip home. Trip up: snow, slick highway, possible detours, chain rules. Trip home: really really bad traffic jams plus snow, slick highway et al. In these situations you need to be alert. This is where the AI should back up the driver and not the reverse.

And in fact one decision I sometimes make, and should make more often, is not to drive at all. Does AI handle a white-out blizzard better than a live driver? Does it handle other cars’ bad driving in slick conditions better than a live driver? I’m skeptical. These conditions can arise quite suddenly. My current car very helpfully tells me when my wheels are skidding. Thanks, car, I already knew that. What if the AI thought, “Okay, time to shut it down and not drive in this”?

I’m old enough that I should only worry about this for my grandchildren. I think their parents will teach them how to drive (one of them already does–and she drives a manual!).