Reason for wanting human-controlled vehicles = having someone to blame

There is another major issue, though. Imagine that your city has set up a network that allows autonomous vehicles to communicate with each other, and has established a set of automated traffic protocols such that all traffic lights in the city have been eliminated, but your car must use the network if it is inside the full-automation perimeter of the city.

You are riding down 8th, toward Trimble Ave, when suddenly your car comes to a stop for no reason. A car comes up 8th very fast, and a short while after that your car starts moving again – at least, until it stops again to give way to the phalanx of police cars going after the bank robbers in the first car, who had hacked the network to facilitate their escape.

Imagine how much trouble hackers could cause, with very little effort. A thing like MH370 could be carried out without even harming the perpetrator. So far, tricking automated systems tends to be technically easier and more reliable than tricking a human, which seems a bit worrisome.

In terms of automation under normal operating conditions, flying is simple and features large margins for error, while driving is extremely complicated and typically features very narrow margins for error.

The autoland feature on commercial aircraft sounds impressive, with its ability to safely bring a widebody aircraft down on a runway in zero-visibility conditions with no aircrew intervention…but it’s really not. It’s flying a straight line, guided by an ILS beam and a radar altimeter. It’s not trying to “see” the flying environment, and it doesn’t have to interpret complex 3-dimensional visual/radar data to identify the positions, speeds, and intents of mobile and stationary objects. It doesn’t have to deal with cross traffic, pedestrians, animals, potholes, curves, road construction, questionable traction, distracted drivers in other vehicles, disabled vehicles in the middle of the road, traffic lights that may or may not be working, half-worn painted lane markings (or lane markings completely obscured by snow/ice), or road debris. And as already noted, even on auto-piloted aircraft there’s still a meat-based pilot keeping an eye on things, ready to intervene if the autopilot starts behaving erratically, and problems are likely to be identified while there’s still adequate time for successful pilot intervention. Autonomous-car advocates are propelling us toward SAE level 4 or level 5 automation, in which vehicle occupants are not expected to maintain any kind of vigilance.
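
To put a point on how simple that is, here’s a toy sketch (nothing like real avionics code; the gains and flare logic are made up for illustration) of the kind of beam-following control loop involved. The autopilot just drives two deviation signals toward zero:

```python
# Toy sketch of why ILS-coupled autoland is a comparatively simple
# control problem: the autopilot chases two deviation signals toward
# zero. Real autoland logic is vastly more careful (gain scheduling,
# flare laws, independent monitors), but the core idea is this small.

def autoland_step(loc_dev_deg, gs_dev_deg, radio_alt_ft):
    """One control cycle. Deviations come from the ILS receiver,
    height above the runway from the radar altimeter."""
    K_ROLL, K_PITCH = 2.0, 1.5          # illustrative gains, not real tuned values

    roll_cmd = -K_ROLL * loc_dev_deg    # steer back toward the localizer beam
    pitch_cmd = -K_PITCH * gs_dev_deg   # pitch back toward the glideslope

    if radio_alt_ft < 50:               # begin the flare near the runway
        pitch_cmd = 2.0                 # raise the nose, let speed bleed off

    return roll_cmd, pitch_cmd

# The controller never has to "see" or classify anything:
print(autoland_step(loc_dev_deg=0.3, gs_dev_deg=-0.1, radio_alt_ft=800.0))
```

Note there’s no perception anywhere in that loop, which is exactly the point: the hard part of driving is everything this function doesn’t have to do.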

Incidents like this one in which an Uber test car struck/killed a pedestrian are the reason I’m wary of riding in an autonomous vehicle. You get lulled into complacency and then a situation comes up that requires human intervention, but margins are so tight that by the time a meat-based driver recognizes that there’s a problem coming up that the autodriver isn’t dealing with (e.g. we’re about to hit a pedestrian, or we’re about to submarine under a semi trailer at 60 MPH), there’s no longer enough time to act.

In theory, if the average autonomous vehicle has a better performance record than the average meat-based driver, then widespread implementation ought to reduce overall motor vehicle fatalities and injuries. I used to know a person who was a below-average driver; I rode with her a couple of times, and made it a point to avoid doing so after that because it was damn scary. She (and the people around her) would probably be safer if she used such a vehicle.

Me? Pretty sure I’m significantly better than average. My safety and the safety of those around me would suffer if I surrendered control to an autonomous car that is only slightly better than the average driver.

Most drivers believe they are better than average, even if it’s demonstrably not true. Such people will not want to surrender control to an autonomous car that is only slightly better than the average driver.

If people are going to trust autonomous cars enough for widespread adoption, they’re going to have to be really amazingly astonishingly freakishly good. The only accidents that are likely to be forgiven are the kind where no human could ever have possibly avoided the same accident, e.g. another car popped out of a blind side street at speed and it was physically impossible for your car to avoid a collision. As long as autonomous cars continue to regularly have the kinds of accidents that shitty drivers have - like mowing down pedestrians in plain view on wide boulevards with no other traffic - they won’t be trusted.

They will also target insurance companies, namely by working with insurers to offer car owners highly discounted insurance for driverless cars.

They might not believe it, but you can be sure the insurance company will believe it, if it’s true, and will price insurance accordingly. Your human-driven car will not only fail to give you the benefits of a self-driving car, but will cost more to own.

The reason I believe self-driving cars are inevitable is that it is possible to prevent them from having the same kinds of accidents as shitty drivers. Once that’s done, it doesn’t need to be done again; there isn’t going to be a new crop of shitty AIs taking to the streets every year that has to learn how to drive.

Maybe someday in the future, but not yet.

If every vehicle were self-driving, I might be OK with the idea, but as long as even one person on the road is driving, I’m reluctant to cede control of my car. And, regardless of how reliable computers may be, they can and do still fail. The one that controls my washer failed, but no one was at risk of injury or death when it happened. The one that controls my husband’s power steering failed, and it made it more difficult for me to turn the vehicle, but I was still able to get home safely. Can anyone promise that the same can be said for a self-driving car?

I’m not anti-technology, but I don’t trust in its reliability or infallibility.

Guessing maybe you’re talking about Uber? But they pulled out of everywhere after they killed a pedestrian, AFAIK that wasn’t a government action but an internal company decision. Regardless, they haven’t really gone anywhere. And autonomous vehicle development is still going strong in Pittsburgh, based on a quick read of the news:

5 companies testing 55 vehicles in Pittsburgh

[Pittsburgh based] Argo AI still testing as of this year

Pittsburgh’s rules for autonomous testing less stringent than Cali

If this is what the industry looks like when people are supposedly reluctant to adopt autonomous vehicles, then I don’t know what it would look like if they were eager.

Is it? Geese can also travel in three dimensions & in unpredictable ways, & when things go south in a hurry a human is better at coming up with a Plan B than a machine is, because many emergencies, by definition, aren’t going to be programmed into the computer’s ‘brain’ with instructions on how to handle them. Look at Sully & the Miracle on the Hudson. He had mere seconds to evaluate all possible landing spots, & he managed to pick what was likely the only one that didn’t kill his passengers or any innocent civilians below. From the tapes, he said he couldn’t make it back & he couldn’t make Teterboro (closer); he then eyeballed & picked the only open area where he wouldn’t kill a bunch of people in the biggest & most densely packed metropolis in the US.

Now think of all the airports a commercial plane might fly from & all of the runways at those airports, & all of the directions (because sometimes you get unusual weather, which means planes take off/land in the opposite direction from normal), & all the variables that a bird strike even 30 or 60 seconds earlier/later would make in alternative landing sites (there’s only the GW over the Hudson, while there are many bridges over the East River, making it a much tougher option to thread your way under them), & you’ve got a pretty big database of options that a computer has to choose from.

I [del]believe[/del] know I’m a better-than-average driver. For routine driving on a well-paved & marked road in good weather conditions, I believe a self-driving car would be better than me, as it won’t get fatigued, hungry, or distracted. However, there are too many cases where I believe I’m better than a computer, including:
[ul]
[li]Rain/flooding[/li]
[li]Snow[/li]
[li]Construction/lane shifts[/li]
[li]Pothole avoidance / squirrel non-avoidance - You’re safer making a squirrel pancake than trying to swerve around & swerve back to avoid hitting it. The same is not true if the creature is a moose/cow/bear.[/li]
[li]Any type of off-road driving, including grass parking lots.[/li]
[/ul]

One of the concepts I’ve heard of for SDCs is that when one encounters a situation it can’t handle, it pulls over/stops & says, “Hooman, take over.” If I don’t do 90% of my driving, my skills will deteriorate. A teenager will never learn those skills. A person coming home from a pub crawl won’t have those skills due to inebriation. This is what scares the 'ell out of me with SDCs: that it’ll barf & leave you stranded, most likely in severe weather, where it may not be safe to be out in the elements (i.e. walking home).

I saw video of that accident in AZ. Even if the driver had been paying attention (which is tough to do when it doesn’t matter), & even if the attendant/driver were a world-class race car driver, seeing the obstacle (a pedestrian) doesn’t mean you take action, because you’d expect the car to do so. By the time you realize the car isn’t going to act as expected (swerve or brake), it’s probably too late for the human to do so.

Typically, computers that control things with life-or-death consequences run in parallel, so that one can fully fail and the other(s) take(s) over. This has been the case in aircraft for a couple of decades now. And of course, with multiple computers, the chances of them all failing at the same moment are remote.
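
For illustration, here’s a minimal sketch of the voting idea (the names and structure are my own invention, not any real flight or vehicle computer’s; real systems also use dissimilar hardware and software):

```python
# Minimal sketch of redundancy-with-voting: run the same computation on
# independent units and take the majority result, so a single failed
# unit can't dictate the output.
from collections import Counter

def majority_vote(outputs):
    """Return the most common result from the redundant units."""
    value, count = Counter(outputs).most_common(1)[0]
    if count < (len(outputs) // 2) + 1:
        raise RuntimeError("no majority - fail over to backup/safe mode")
    return value

# Two healthy units outvote one failed unit:
print(majority_vote(["brake", "brake", "accelerate"]))  # -> "brake"
```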

Which reminds me of the Woody Allen joke: “The odds of a bomb being on your airplane is a million to one. The odds of two bombs being on your plane are a million times a million to one. Next time you travel, improve your chances and take a bomb.”
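
For what it’s worth, the arithmetic behind the joke (and why it’s a fallacy) is just independence. Assuming a one-in-a-million chance p of someone else’s bomb being on board:

```latex
% Independent events: carrying your own bomb doesn't change the
% probability of another one, so it "improves" nothing.
\[
P(\text{two bombs}) = p^2 = 10^{-12},
\qquad
P(\text{another bomb} \mid \text{you brought one}) = p = 10^{-6}.
\]
% The same independence assumption is what makes simultaneous failure
% of k redundant computers so unlikely: P(all k fail) = p^k.
```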

I gather you’re unfamiliar with machine learning and its application in autonomous vehicle development. Human programmers don’t have to anticipate every eventuality and then write code for all of them. Computers can “teach” themselves to handle situations a human programmer may not have explicitly anticipated.
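
As a toy illustration of the difference (using scikit-learn here, which is my choice for the example, not anything an actual AV stack necessarily uses), the “rules” are fitted from labeled examples rather than written by hand:

```python
# Toy illustration of learning from examples instead of hand-coding
# rules. Features and labels are made up; real AV models work on far
# richer sensor data.
from sklearn.tree import DecisionTreeClassifier

# Features: [obstacle_distance_m, closing_speed_mps]; label: 1 = brake
X = [[50, 0], [40, 2], [15, 8], [10, 5], [5, 3], [60, 1]]
y = [0, 0, 1, 1, 1, 0]

model = DecisionTreeClassifier().fit(X, y)

# Nobody wrote an explicit rule for this exact input; the model
# generalizes from the training examples:
print(model.predict([[12, 6]]))  # -> [1], i.e. brake
```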

IMHO, the “miracle” you’re referring to isn’t Sullenberger’s decision to ditch in the Hudson—that was fairly obviously the least-bad option among terrible alternatives. The miraculous part was that, with enormously high stakes and no second chances, the guy nailed the ditching on the first try—and without hitting boats or anything else in the process. It didn’t hurt that Sullenberger had plenty of hours in gliders. Naturally, I’ll defer to the board’s commercial pilots on this, but I suspect most pilots would have attempted exactly the same thing under the circumstances.

Besides, anyone developing a fully automated plane will definitely consider situations in which ditching becomes the best option. There’s nothing singularly human in the decision Sullenberger made. I don’t quite understand why you seem to believe otherwise.

Yeah, and computers are notoriously terrible at dealing quickly with large data sets. :wink:

The people developing self-driving cars are beginning to share machine-learning datasets so that each company’s algorithms can benefit from very rare or otherwise unusual circumstances. This is roughly analogous to putting every human pilot in a simulator and having them replicate Sullenberger’s ditching—except an algorithm’s memory doesn’t fail under stress or fade with time.
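
A minimal sketch of the pooling idea, with hypothetical file names and a hypothetical format:

```python
# Sketch of cross-fleet data sharing: rare events seen by any one fleet
# get pooled so every company's model can train on them. The file
# layout here is invented for illustration.
import json

def pooled_training_set(paths):
    """Concatenate rare-event records from several fleets' logs."""
    records = []
    for path in paths:
        with open(path) as f:
            records.extend(json.load(f))  # each file holds a list of events
    return records

# e.g. pooled_training_set(["fleet_a_rare_events.json", "fleet_b_rare_events.json"])
```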

I’m definitely not saying that vehicle automation is perfect or even ready for prime time. But it seems absurd to argue that it will never be ready. And if you concede that this is coming eventually, then we’re just quibbling over the schedule.

Neither do the people developing this technology. To expand on what Ravenman said, we engineers think about failure and failure modes a lot.

We anticipate all the failures that we know have happened and all the failures we can think of that haven’t happened. There’s an entire engineering specialty dedicated to just this sort of analysis.
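
I assume the specialty meant here is FMEA (Failure Mode and Effects Analysis). A minimal sketch of its core bookkeeping, with made-up failure modes and ratings:

```python
# Sketch of an FMEA-style ranking: each failure mode gets a Risk
# Priority Number = severity x occurrence x detection, and the highest
# RPNs get engineering attention first. Numbers here are invented.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1-10: how bad the consequences are
    occurrence: int  # 1-10: how often it's expected to happen
    detection: int   # 1-10: how hard it is to catch before harm

    @property
    def rpn(self):
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("radar feed goes stale", severity=9, occurrence=3, detection=7),
    FailureMode("brake actuator sticks", severity=10, occurrence=2, detection=4),
]
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(m.name, m.rpn)
```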

It’s a highly imperfect process, and Boeing’s deeply flawed MCAS implementation is an example of how failure analysis can go subtly (and profoundly) wrong. But yeah, we’re aware that computers can fail, and we’ve actually given some thought to how to deal with that. The reliability of ECUs and other critical automotive electronics is handled very differently from the reliability of your washing machine’s microcontroller.

That said and for what it’s worth, I think your general skepticism is pretty reasonable.

So I’m a child who needs to blame someone, and a curmudgeon, and a luddite because I want a meat brain in control of machinery? That’s ridiculous. How about–if it ain’t broke, don’t fix it?

I’m on board with traction control, inattention alarms, crash avoidance braking, even automated parallel parking because those are definitely problem areas in driving, and represent situations that a reasonably good driver can avoid anyway. After all, even the best driver can have a bad day so the assistance is useful or untriggered. But apart from having an electronic copilot, what is fully autonomous tech offering? A bit of inflexibility, maybe some limited human tragedy while the bugs get discovered and worked out of the completely unhackable control system? Yeah, no. I don’t believe we need another layer of tech that is only understood by the creators and by those who would intentionally crash it. Besides, good luck finding a manufacturer who doesn’t weave an absolute immunity clause into the sales contract: “You wanna use our automated car? Fine, just agree that using the automated feature is totally your option, and to hold us harmless if it crashes.” Hey, nobody’s making you buy that tech, you can volunteer not to if you don’t like the terms.

35,000+ deaths per year suggests that maybe it’s a little bit broke. It’s better than it used to be, but that’s still a whole lot of death to accept in a system that’s working just fine.

Yes, I agree 100%. That is exactly what I’m talking about. If you are seated at the controls but you’re expecting that the car is supposed to do the driving, then when the car finally does screw up you delay your intervention until it’s very obvious the car is screwing up, i.e. until it’s too late. And I am citing this as an example of why people don’t trust AVs, at least not yet.

The current alternative to that is a meat-based driver who is cognizant of the fact that he/she is responsible for operating the vehicle, and is watching for stuff like that. In the AZ Uber pedestrian crash, if it had been a meat-based driver piloting a conventional non-autonomous car, the driver would have known that they were responsible for immediately swerving/braking as soon as the ped came into view (as opposed to waiting to see whether the AV takes action, or not paying attention at all because the AV has been “pretty good” for the past 30 miles); it likely would have become a lower-speed non-fatal collision, or possibly even a near miss.

As far as I can tell, you’re the only person in this thread who is saying any of that about you.

Okay. But those things are coming whether you believe we need them or not.

Do you fear vehicular automation (or at least dislike it) partly because you don’t understand it? Your post suggests that might be the case.

If not, why don’t you feel the same way about lane keeping, cruise control and similar driver-assistance technologies? Do you understand the inner workings of those? They’re different in degree but not in kind.

For the record, I find the idea of a ‘backup’ human to be ridiculous. If a car is actually automated, it should not have human controls at all, except for potentially a ‘crawl’ mode where you can direct the car short distances at low speeds using a keypad or joystick.

For testing automated vehicles, you need to have controls and a person who is actively engaged in the entire trip. That means hands on the wheel, eyes on the road, if you need to have cameras to enforce that, have cameras to enforce that.

We are currently at that awkward stage where cars can almost drive themselves, but that’s like saying a robot can almost perform heart surgery.

Trust me, lawyers will never run out of people to blame. If a purely computerized automobile fails, then the company responsible for providing the failed component will be liable.

The potential for millions of lives saved worldwide over the next couple of decades, many, many millions of injuries eliminated, and giving people more free time to do things they choose to enjoy. Working autonomous vehicles would be as big a revolution as the invention of the car itself.

OP: Title rephrased = People want humans at the controls so there is someone to blame when something goes wrong. Blame = childish & punishment-driven, desiring some amount of suffering on the responsible party. It is why we, allegedly, want an “easily identified person” to answer for an accident. Per the OP, we want a meat target and not a machine at which to direct our disappointment. Meat and machine can both be educated to avoid repeats, but only the meat can suffer.

Plenty of references to people having an aversion to new tech = luddite

And curmudgeon was your own word, used in reference to those with an aversion to automated controls.

All terms are aimed at people not embracing this tech, and I am among those resisting it. Just because you “can’t tell” doesn’t mean it’s not true. In this case it evidently means you should read the thread before taking a swipe.

Perhaps. Perhaps not, if enough people can demonstrate to those trying to cram this murderous tech down our throats that it is not, in fact, wanted. Corps have better things to do than chase marketing dead-ends.

:rolleyes: Eh, no. I embrace plenty of stuff I don’t understand, it’s how I function in modern society.

That stuff’s cool because I can turn those features off if I don’t like how they work. And I have, because I don’t, because they do what I feel to be inappropriate & dangerous things most of the time and add another layer of crap for me to pay attention to while I’m driving.

I was trying to be polite, but maybe my phrasing was too subtle by half. I’ll restate things unambiguously:

Your earlier post was stuffed to the gills with straw men. No one—not even me—is calling you any of these names. My “curmudgeon” comment wasn’t aimed at anyone in this thread, and even if it was, I made that comment before you posted in this thread. I couldn’t have been referring to you.

Our opinions on this subject (in the IMHO forum) diverge substantially, but the name-calling you mentioned is imaginary.

This cannot be overstated. The point of an AV is not to have an electronic copilot; it’s to have an electronic chauffeur. Even something as mundane as going to the grocery store changes significantly with an AV.

No looking for a parking spot. You get dropped off at the entrance, and you get picked up at the entrance. No dodging carts, because they never enter the parking lot. No dodging cars in the parking lot, no walking in the parking lot. Perhaps no parking lot at all; parking can easily be shared by many stores at a nearby but less desirable location. No fumbling for your key in the rain; you’re picked up at a weather-sheltered location, just like everyone else.

Hell, maybe you buy your groceries online and just send your car to go pick them up for you.

The potential for technology like this is immense. World-changingly immense.