Reason for wanting human-controlled vehicles = having someone to blame

I wonder just how much of the resistance to removing the person from the controls has to do with the desire to have an easily identified person to take the blame when things go wrong. We are getting to the point where cars, planes, trains, and ships can be operated without a person in direct control, and there is resistance on several fronts. The arguments tend to focus on the point at which total automation becomes demonstrably safer than human control (once we reach it and it is proven so), yet even granting that point, some people still resist.

Just how much does the perceived need for an easily identifiable person to take the blame play into this resistance?

I think this is a reason for it, yes. Nobody likes a “blameless accident.”

“Sorry, Mrs. Smith, your daughter just got run over by the car; she is dead. No, it’s not the driver’s fault. No, it’s not your daughter’s fault.”

Although, presumably the car manufacturer would be the party to sue, but that’s not the same as having one specific individual to blame.

I don’t know how common an attitude that is, but I have heard people talk about that as an actual problem so there’s at least some.

I also can’t help but suspect that a major reason for wanting an *individual* to blame is that the alternative is blaming the company that made the driving system or car. Going after ordinary citizens is always more popular with the powers-that-be than going after a corporation.

It’s also a whole lot scarier when a nameless and faceless system is to blame. Imagine an unmanned airliner crashing and 200 dead because “that’s…the system.” It creeps people out much more than if a human pilot made a big error.

From what I’ve read the legal issues inherent in “who to blame” are a significant obstacle toward this kind of automation.

But in the case of airplanes it seems to be as much a marketing problem. I fly for a living and I’ve heard a fair bit about how passengers aren’t enthused about taking pilots out of planes. They want someone with “skin in the game”, which to my mind isn’t unreasonable. I don’t have a fully formed opinion on this, but it seems to me that when you have busses and planes with dozens or hundreds of passengers, the stakes are higher than in a passenger car with four or five people max. Maybe keeping some backup humans there is not unreasonable.

OTOH, if it’s all the same automation, maybe it won’t matter. I’m actually all for fully autonomous driving. Flying, not so much and not only because it’s my livelihood. Commercial aviation already has a very good safety record without automating everything.

But it’s also a lot harder for an injured individual to recover damages against a big corporation. Besides the difficulty of tracing the actual cause of the injuries back to the corporation, the corporation will have many more lawyers & more money to fight off the injured individual.

There is also the feeling that an experienced, trained human pilot/driver will be better able to react to unforeseen circumstances than a programmed computer system. (Most computer ‘bugs’ aren’t really errors; they are situations that nobody expected and so didn’t program for.)

Personally, I wonder if a human pilot/driver:

  • could react fast enough to matter, and
  • would react with the correct action to fix the problem.

I’m fairly sure that for automobile driving, an automated system could be built with enough sensors & controls, and properly programmed, to react better to driving emergencies than I could (with my age-diminished eyesight & hearing, and delayed reaction time). It just couldn’t be done at a reasonable mass-production cost yet. Probably will be, eventually. Likely sooner for trains, but later for airline pilots & ships’ captains.

According to the Consumer Product Safety Commission, faulty or defective products cause 29 million injuries and 21,000 deaths each year. That’s roughly 10 times the injuries and about 65% of the deaths attributable to cars. We don’t need people to blame for everything.
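For what it’s worth, those ratios roughly check out. The CPSC figures are from the post above; the car-crash baselines below are my own assumed round numbers (in the ballpark of recent US annual totals) and will vary by year:

```python
# Sanity-checking the claimed ratios. CPSC figures are from the post;
# the car-crash baselines are assumptions, roughly in line with US
# annual totals, and may be off for any particular year.
cpsc_injuries, cpsc_deaths = 29_000_000, 21_000
car_injuries, car_deaths = 2_700_000, 32_000  # assumed baselines

print(f"Injury ratio: {cpsc_injuries / car_injuries:.1f}x")  # ~10.7x
print(f"Death ratio:  {cpsc_deaths / car_deaths:.0%}")       # ~66%
```

So “about 10 times the injuries and about 65% of the deaths” holds up under those assumptions.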

What’s clear for autonomous vehicles is two things:

First, the technology is shit right now. It needs years and substantial improvement to become widely adopted.

Second, the insurance business will have to adapt.

I think the main reason people aren’t rushing to adopt an immature technology is that it is new, strange, and untested. Look how many people complained about cell phones, which were not nearly as disruptive a technology! I would not overthink this issue.

This feeds into something I’ve observed: human beings have an irrational, romantic attachment to the idea that humans have something that machines don’t. We see this (kinda) in the legend of John Henry and in thinly disguised hero’s-journey narratives like Star Wars. (Luke dispenses with his fancy targeting system when it’s time to fire the shot that blows up the Death Star.)

I strongly suspect that fully autonomous cargo flights will happen sooner than most people imagine. The economic incentives are strong and the risk is very low. When autonomous cargo flights start crashing less often than human-piloted passenger flights, there will be a good reason to use fewer human pilots.

That said, I fully agree that contemporary air travel’s astonishingly safe record constitutes a big barrier to automation. To make the transition with passenger flights, the safety of automated planes would have to be not equal to but significantly better than the safety of human-piloted planes.

Still, those who enjoy curmudgeonly harrumphing and who vow never to fly without a human pilot are probably similar to those who responded to mandatory seatbelt laws by saying “I don’t wear a seatbelt because I’d rather be thrown clear.” Well, you don’t get to choose whether you’re thrown clear. And sometimes you’re thrown “clear” right into a tree or into the grille of an oncoming dump truck. But ignoring that allows the illusion of free will, and people like free will even when it’s illusory.

Having someone to blame is always nice. But I think most people who recoil at automated vehicles do so because they’re perceived to threaten the primacy of human beings.

Of course we have things that machines don’t. If I have to choose between fighting a human or fighting a machine, I would prefer the human. The human might show mercy, the machine won’t. The machine has no emotions.

If a human airline pilot notices something that doesn’t “feel” right, he might do something about it. Whereas a computerized, automated airplane that has a software flaw, has a software flaw. No amount of pleading or begging will make it change its flaw. (Look at the 737 Max MCAS crashes, for instance - no amount of reasoning with the computer, “We AREN’T in a stall!” would have changed its mind.)

Now, there are plenty of examples to the contrary, of course - for instance, Pierre-Cédric Bonin disastrously pulling back on the control stick and causing Air France 447’s fatal crash a decade ago. But humans are less rigid and more adaptable than machines.

I’m wary of completely automated transportation of any kind, and the thought that there has to be someone to blame for an accident never entered my mind.

Especially in the aftermath of the Boeing 737 Max crashes where the software led commercial flights into catastrophic situations which the human pilots were not able to overcome.

So what happens when there are a bunch of autonomous vehicles on the road and there are, um, software glitches with no humans to try to override them? Or there are casual/criminally-oriented hackers taking control? Now extrapolate that to airliners. You want to be on a commercial flight when that happens?* Have fun.

*Now I’m picturing the inflatable autopilot from the movie “Airplane” suddenly deflating. :frowning:
**I figure that driverless car technology will finally be reliable right around the time I’m too decrepit to drive, and if it fails then, no huge loss - for me, anyway. :cool:

I follow, but that’s pretty reductive. Given the choice between fighting Chuck Norris and a TI-81 graphing calculator, I’ll fight the machine.

If the choice is rather between fighting a drunk Michael Cera and dodging a GPS-guided missile (or a land mine or whatever), I’ll obviously take on Cera every time.

Or the human might miss something the computer would catch every time. It’s not that computers are always better than people or vice versa; it’s not nearly that clear-cut. Remember that MCAS only existed because Boeing was trying to accommodate humans: they were trying to make the handling of the 737 MAX “feel” so much like that of previous 737s that the dynamics of the new plane wouldn’t be perceptible to the pilots (who would thus require almost no training on the new plane).

Boeing made huge mistakes with the way they rolled out MCAS, and I’m not saying that the whole debacle isn’t a failure of automation. But a fully-automated 737 MAX wouldn’t need MCAS at all. And why do you blame the computer for those crashes and not the humans who failed to see the flaw they were building into the system?

That is a lovely and romantic idea. It’s even true sometimes. But machines don’t get tired or hungover. Humans can be fantastically rigid and inflexible when it suits their interests.

What resistance? For that matter, what would resistance look like at this stage in the game, when the technology isn’t ready yet? Would it be governments squashing autonomous driving testing? Because the exact opposite is happening all over the place. If governments represent the people, then the people clearly want autonomous cars, because local governments have been rolling out red carpets for years.

Just to clarify: I’m suggesting that question of humans vs. computers is a false dichotomy. Sometimes humans are better; sometimes computers are better. But humans tend to prefer humans even when humans are worse.

Also, the set of actions at which computers are better than humans is only growing. Think about whether you’d have let a robot operate on your eyes in 1975. Yet many people today get laser eye surgery to correct their vision, a procedure guided primarily by computers.

I used to design radiation oncology treatment machines—specifically, I designed the collimator to aim the X-rays. That radiation beam is guided by a computer, and that’s exactly how I would want it if I were being treated for cancer. I don’t want my oncologist to use the force—I want my oncologist to irradiate my tumor and not the surrounding tissue.

It’s true that software bugs have produced serious overdoses in machines like the one I worked on. But in aggregate, that risk is significantly lower than the risk posed by not adopting automation for radiation oncology treatments. And when automation is safer overall, switching to automation is worth considering. It may be politically or emotionally untenable, but that’s a separate question.

That will be challenging. Despite the 737 Max issues, improving on commercial aviation’s safety record would require stringing together years and years of no accidents. But perhaps it would be enough to equal it at lower cost. I couldn’t guess what replacing all the human airline pilots with automation would cost. Sounds expensive, but maybe it would be cost effective in the long term.

For this reason (and again, not because I fly for a living - I’d be willing to give it up and go back to Cessnas and Pipers for fun) I think we should go at cars first. So, so many road deaths. And we are never going to solve that problem through training and safety culture, which is how commercial aviation has achieved such good outcomes.

So I hope the lawyers, policy makers and manufacturers are able to solve the “blame problem” soon and get humans out of the driving business.

Oh, that will absolutely be challenging. We totally agree about that. I think it will happen eventually, but I have no idea when. I just think it will happen to cargo flights first.

The idea that it will never happen is silly—never is a very long time. It’s not that you’re arguing that it will never happen, but that’s the counterclaim one would have to make.

In a way, automated aircraft should be easier—yes, there are three dimensions, but it’s humans who have trouble going from two to three dimensions, not computers. And in some ways, it’s a lot simpler to teach a computer to dodge a flock of geese than it is to teach it to dodge a toddler running into the street after a ball. But there are plenty of challenging corner cases in aviation too.

On the other hand, the lessons learned from self-driving cars will certainly be applied to aircraft. So a lot of the expense—including algorithmic trial and error—will have been expended already.

Air travel is so shockingly safe that, even if I came up with a self-piloting system that somehow crashed at half the rate of US commercial airline pilots, it would be challenging to demonstrate that, simply because commercial airliner crashes due to human error have become so incredibly rare over the last two decades.
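To put a rough number on how hard “demonstrating it” would be, here’s a standard two-proportion power calculation. Every rate here is my own assumption for illustration (roughly 1 fatal accident per 10 million flights is in the ballpark for modern US commercial aviation), not data from this thread:

```python
from math import sqrt
from statistics import NormalDist

# Back-of-the-envelope: how many autonomous flights would it take to
# show, statistically, that a system crashing at HALF the human rate
# really is safer? All rates below are assumptions for illustration.
p0 = 1e-7        # assumed human-piloted fatal-accident rate per flight
p1 = p0 / 2      # hypothetical automated rate
alpha, power = 0.05, 0.8

z_a = NormalDist().inv_cdf(1 - alpha)  # one-sided significance
z_b = NormalDist().inv_cdf(power)
# Standard two-proportion sample-size approximation
n = (z_a * sqrt(2 * p0) + z_b * sqrt(p0 + p1)) ** 2 / (p0 - p1) ** 2

print(f"~{n / 1e6:.0f} million flights needed")  # hundreds of millions
```

With US carriers flying on the order of ten million flights a year, that works out to decades of accumulated evidence, which is the point.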

For what it’s worth, my 11-year-old son wants desperately to become a commercial pilot. When he was six, he used to save pennies he found on the street so he could pay for “airplane lessons.” That just melted my heart.

I’d love it if commercial pilots were going to be the standard for the next five decades, if only for selfish reasons. I don’t think things will break that way, but if they do, there’s a significant silver lining for me.

I also question whether any cost savings of fully automated commercial air travel are worth it. At most, you’re just saving the $$ spent on pilot salaries - in the context of the many billions spent on fuel, the airplanes themselves, etc., is there really demand for it?

You may be overlooking how expensive it is to crash a plane and kill one or two hundred people. Look at MH370 - the cost of the search dwarfed the cost to buy the plane and train its pilots, one of whom was likely responsible for the whole thing.

No, eliminating pilot salaries would not be the main benefit from a fully-automated air-travel system.

I doubt that wanting somebody to blame is a big concern for most drivers. Most drivers probably think that they will never have an accident. Accidents are things that happen to other people.

But that mentality is probably what produces resistance to driverless cars. People might get to the point where they accept that a computer can drive better than the average driver - but they won’t believe a computer can drive a car better than they personally can. If you asked a thousand people to rate their driving ability, nine hundred of them would rate themselves as above average.

When driverless technology reaches the point that car manufacturers are ready to sell it, they’ll enter the market by targeting the elderly. This is a significant group of people who will admit to their own limitations as drivers and who are used to having the freedom of not being dependent on other people to drive them.

Pittsburgh rolled theirs out, rolled it back up, and replaced it with a cheap little runner. In short, our mayor got enough complaints to listen, even though the tech being developed for things like this is about the closest thing to a local industry (other than schools and healthcare) we have left.

As for me, put me down as one of those who wants a human body to blame. But I still want to own one of the damn self-driving things myself.