Reason for wanting human controlled vehicles = having someone to blame

Cars ARE the murderous tech.

And I should add – not all the consequences of autonomy are going to be good. People will lose jobs driving cars. It’s quite possible that traffic gets worse, if people decide that ridesharing in an autonomous vehicle is preferable to taking public transit. But to question whether autonomy would make any difference at all? Preposterous. If/when it works, it upends a lot of things about how we function in our daily lives.

Exactly. And anyone working on autonomous cars will tell you that they’re doing so partly to make automobiles less murderous than they are right now.

And Ravenman is right: there will be some negative consequences to self-driving cars. People will still die occasionally in car accidents; some people will lose their jobs and have a hard time finding new ones.

But fewer people will die than die now; new jobs will be created. The claim is not that self-driving cars are an unalloyed good with zero drawbacks. Rather, the claim is that, on balance, we as a society will come out ahead.

Well that’s not true, either. Apart from the very rare parts failure, I’d have to say the meat processor is what determines whether a car becomes murderous. Otherwise they just tend to remain motionless and quite undangerous. I dunno, maybe I’m taking a page out of the Amish playbook and just arbitrarily deciding what I think is an appropriate level of tech. I mean horns, headlights, safety belts, radial tires–that stuff’s alright. Cruise control, automatic transmissions, stability control–that stuff is highly suspicious & of questionable necessity. Automation is straight up English.

EdelweissPirate, this is escalating needlessly. I do not concede my points, but I was not using my polite voice and that was unhelpful. I still think it is irresponsible to market this until there can be a 100% certainty that it can’t be hacked remotely. Also, because I don’t see any signs of such certainty, and because auto manufacturers have demonstrated time and again their willingness to roll the dice with driver safety, I don’t think my cynicism rises to strawman level (there may have been one or two, but that wasn’t one).

“100%” is an unrealistic performance standard for anything. There must be some non-zero level of hacking risk you are willing to tolerate, if the vehicle as a whole improves your safety to a level beyond what you currently enjoy.

Exactly. People are unreliable.

Computers can be unreliable as well, but we actually have the ability to change computers, and make them more reliable. 100% isn’t going to be possible, but your meat processor isn’t 100% either, given the tens of thousands killed (and many more injured) every year. I wouldn’t expect people to switch until computers are an order of magnitude (or two) safer than humans.
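To put "an order of magnitude (or two)" in perspective, here's a quick back-of-the-envelope calculation. The ~37,000 figure is roughly the recent annual US traffic death toll; treat all the numbers as illustrative, not a forecast:

```python
# Back-of-the-envelope: what "an order of magnitude safer" would mean
# for annual US traffic deaths. All figures are illustrative.

human_deaths_per_year = 37_000          # roughly the recent US annual toll

one_order = human_deaths_per_year / 10    # 10x safer
two_orders = human_deaths_per_year / 100  # 100x safer

print(f"Human drivers:     ~{human_deaths_per_year:,} deaths/year")
print(f"10x safer target:  ~{one_order:,.0f} deaths/year")
print(f"100x safer target: ~{two_orders:,.0f} deaths/year")
```

Even the weaker 10x bar would mean tens of thousands of people alive each year who otherwise wouldn't be.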

In 35+ years of driving I’ve never had anyone sabotage my car to the point where it could not stop, or steer away from oncoming traffic or obstacles. I’m sure brake lines get cut from time to time, but I can’t say that I know anyone who has been the victim of that kind of sabotage. I think I’ll stick with a number so close to 100% hack-proof as to be essentially 100%.

That might be part of it, but the real reason for balking at driverless cars is simply that neither the technology nor the infrastructure is in place to make them work safely.

I mean this respectfully: I don’t think “100%” means what you think it means.

(Given your username, I just couldn’t resist the Princess Bride joke. I’m on-board with de-escalating.)

Your current car isn’t 100% hack proof, as you rightly point out. I think what you mean to say is “hack-proof enough that I don’t worry about it very much.” And I think that’s a reasonable expectation on your part.

Is that a fair description of your position, or have I mischaracterized it?

Regarding the OP and the concept that people want to have a person to blame, I see this as an issue that will be resolved by time and marketing, not technology. As the technology improves over time, the populace will age and die off, replaced by younger members who do not have the same biases. There are a plethora of widely held beliefs that have been abandoned over the last few decades; I feel that the predominant resistance to AVs will follow the same path. There will always be risk, and there will always be resistance. But an inflection point will be reached where the risk is minimized and the resistance is minor.

Consider these:
Old: Gays can’t marry, that’s absurd!
New: Gays can marry.

Old: Don’t share your information online, bad people will harm you.
New: Alexa, we need more cat food.

Old: Don’t get into a car with a stranger!
New: Your Uber driver will arrive in 4 minutes.

Argue technology all you want, that is not what will impact the trend of overall acceptance. Yes, individuals will still resist (Hi SDMB posters!) but their numbers will shrink into insignificance.

There are a lot of cars out there right now that can steer, brake and accelerate themselves, and nobody has sabotaged one of them to cause an accident. Thousands of cars, zero hacks. How much more hack-proof do you want them to be?

Yes; we agree about that almost completely. What makes cars dangerous is mostly the nut behind the wheel. So I’m surprised that you’re so deeply skeptical about replacing that part with another, less nutty part.

I think it’s reasonable and healthy to have reservations about driverless cars, for sure. But I’m surprised that you’re arguing against driverless cars when we agree that drivers are the problem.

Wait—you seriously believe that automatic transmissions are “highly suspicious”? That’s hyperbole, right?

For what it’s worth, stability control is a huge step forward in safety. Few people realize this, but it’s a big deal. That link shows a ~30% reduction in fatal single-vehicle crashes for cars and a ~63% reduction in fatal single-vehicle crashes for SUVs. The improvement for non-fatal crashes is slightly better than for fatal crashes. Seriously, stability control saves a lot of lives.

Given that, why are you so suspicious of stability control?

Don’t forget in-vitro fertilization. Many people were very cautious when Louise Brown was born, and more than a few worried that “test-tube babies” presented some kind of serious moral threat. Now, no one blinks.

I’m sure you’ve heard the transcripts. The controller offered him priority to come back; Sully declined, too far away. The controller then offered the next best airport option - Teterboro; again Sully decided it was too far to make it there. Trying & not making it would have been disastrous. He went for the best option, ditching (with the realization that the plane would be toast). Do you really think that some airline is going to allow that as an option on their $9-figure asset; to trash it? Sorry, but I don’t trust them to do that, & given my company has an office near Teterboro, I could very well be underneath that plane when it belly flops.
If there was a boat in his way, he could have seen that & attempted to adjust flaps to get just a little more distance. I don’t see installation of the necessary radar/sonar(?) for a once-in-a-generation event. The cost benefit isn’t there.
What if the strike had happened just a little bit earlier? The Hudson is much easier to glide into than the East River with its dozen+ bridges. Again, is the logic going to be built into a plane to go over one bridge & then under another, all at appropriate min/max heights, for something that will probably never occur?

I’m saying we’re at the 80/20 rule, or maybe even the 90/10 one, but it’s that last bit of unusual driving that confounds the SDC & that’s where it can be life-threatening. To get you there & then barf is, IMHO, worse than not taking you at all. What is a skill-rusty driver, or a teen that never learned to drive, going to do in that situation? Would they back up to get to higher ground as the waters continue to rise? I’ve done that numerous times in the ambulance: parked at the then-edge of the floodwaters, then backed up, more than once, as they continued to rise.

I live in an area that is prone to flash floods. I have driven across roads with running water in them; however, I can tell you that before I even considered it, I stopped & assessed the conditions - Could I still see the yellow line? If so, it’s only an inch or two & still safe enough to drive across. If not, it’s a no-go! What about the guardrail on the side of the road? When I saw it was getting ‘lower’ I realized that water was too high to drive into & turned around to find a different road. What about when there’s a couple of inches of snow on the ground? I can kind of make out where the curb is by the slight curve & extra height at the side of the road, or a fence/tree line that I know isn’t in the road. Even something as simple as intentionally driving over a grass field to park at some festival? There’s one festival that I go to every year that brings in a bunch of heavy construction equipment in case it rains. From past experience, the only thing that can get a stuck car free is a bulldozer. Is a SDC going to say you’re not on a road, you can’t drive here? Will it direct me to where the parking attendant wants me to park even though there are no markings in the field?
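For what it's worth, the flooded-road heuristics above can be written down as a tiny rule set; the hard part isn't the rules, it's inferring the inputs (is the yellow line actually submerged, or just worn away?). A toy sketch, purely illustrative:

```python
# Toy version of the flooded-road go/no-go heuristics described above.
# Purely illustrative -- the hard part for a real SDC is perceiving
# these inputs reliably, not applying the rules.

def flooded_road_decision(yellow_line_visible: bool,
                          guardrail_looks_lower: bool) -> str:
    if guardrail_looks_lower:
        # Water deep enough to "shorten" the guardrail: turn around.
        return "no-go"
    if yellow_line_visible:
        # Line still visible => roughly an inch or two of water.
        return "go-slow"
    return "no-go"

print(flooded_road_decision(True, False))   # shallow sheet of water
print(flooded_road_decision(False, False))  # line submerged
print(flooded_road_decision(True, True))    # guardrail 'sinking'
```

Every rule here leans on local context a human picks up for free, which is exactly the long-tail problem being described.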

Why is hack resistance the metric of interest for AV cars for you? Shouldn’t the metric of interest be whether it actually makes you safer than if you drive yourself? Hack resistance is certainly a part of that, but it’s not the only factor.

To “hack” a normal car, you have to put your hands on it–and that involves overcoming any number of obstacles like garage doors, barking dogs, nosey neighbors, etc. Then you have to do your deed (loosen some lug nuts, shave a brake line, stick a banana in the tailpipe, whatever) and get out before being discovered. And you can only do one car at a time. So no, I honestly never think about it. Alternatively, if you can convince the car to allow your special firmware update you can cause some chaos; and if you can slip a bug into a firmware push you can compromise thousands of cars, potentially getting them all to execute a 90-degree left turn at the same time. I couldn’t do that. But I’ll bet China or Russia (just name-dropping some current tech ne’er-do-wells) could do it, or a very bored teen. So given the stakes and the insidious nature of the attack, I think it needs to be better than it is now, by a lot.
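The "special firmware update" attack is exactly what code signing is meant to block: the ECU refuses any image it can't verify. Here's a minimal sketch of the idea using an HMAC from Python's standard library; real vehicles use asymmetric signatures (so the signing key never ships in the car), and every name below is hypothetical:

```python
import hashlib
import hmac

# Hypothetical sketch: the ECU only accepts firmware whose tag matches
# a key it already holds. Real systems use asymmetric signatures; an
# HMAC just keeps the sketch self-contained.

VEHICLE_KEY = b"not-a-real-key"  # provisioned at the factory (hypothetical)

def sign_firmware(image: bytes, key: bytes) -> bytes:
    """Produce an integrity tag for a firmware image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def ecu_accepts(image: bytes, tag: bytes, key: bytes) -> bool:
    """Verify the tag before flashing; constant-time comparison."""
    expected = hmac.new(key, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

firmware = b"brake controller v2.1"
tag = sign_firmware(firmware, VEHICLE_KEY)

print(ecu_accepts(firmware, tag, VEHICLE_KEY))                 # True: legit push
print(ecu_accepts(b"90-degree-left-turn", tag, VEHICLE_KEY))   # False: tampered
```

Of course, this only moves the problem: if the attacker compromises the signing infrastructure itself (the "slip a bug into a firmware push" scenario), signed malware sails right through, which is why the stakes argument above still stands.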

That’s paranoid until it isn’t. These days it’s not even a matter of “if” or “when” your data will get stolen, but how often. And that’s just for financial gain. If you could seriously mess up millions of people and ruin traffic nationwide at the push of a button, why wouldn’t you?

(And yes, hyperbole)

I suppose I just have the mind of a villain. In short, I see two things: 1) any computer system can be hacked and controlled, 2) there is no crime so heinous that no one will try it. If I’m going to get smashed by a car, I’d rather it be because some otherwise benign screwball was texting as opposed to some less benign screwball doing it on purpose. Really, I even have preferences for getting smashed by a car.

But we know for a fact that the meat processor is extremely flawed, and if it were a consumer product, would be recalled and banned for causing an extreme level of threat to everyday life. And we also know that it is cars that do the killing, because people “motoring” around by having their hands in front of them on an imaginary wheel, making “brrrrrrrr!” sounds with their mouth, and perambulating up and down sidewalks – even while drunk – do not cause traffic accidents.

Oh, sure, that’s definitely true that auto makers aren’t there to make streets safe. And yet, we don’t look at an airbag and say “WHY ARE YOU TRYING TO KILL ME WITH YOUR ROCKET-POWERED DEVICE AIMED RIGHT AT MY FACE IN MY STEERING WHEEL?” Furthermore, right now is probably the most dangerous time for technology related to autonomy: it’s immature, and relies on humans being able to jump in at the last second to avoid the inevitable errors that cars will make during this testing phase. Meanwhile, consumers hear things like, “Whoa, Tesla invented Autopilot! I’m going to take a nap on the freeway!” and we get this combination of immature technology and over-reliance on it. It’s a very dangerous combination.

I think real autonomous cars are like 10 years away, and it’s going to be a hard thing to do well. We shouldn’t bank on the technology now, but it’s hard for me to envision a world of 2035 (or some point in the more distant future) in which autonomy isn’t a widely deployed technology.

Autonomy doesn’t really add any hacking risk beyond what current cars already carry - the risk comes from computers and wireless connectivity, and those are already here. These guys wirelessly hacked a Jeep Cherokee four years ago. It was not a self-driving Jeep.

ETA: In fact it is funny: fixing the vulnerability required mechanics to lay their hands on the vehicle.

Yes, I really do. First of all, the plane is insured and the airline may not even own it (many commercial aircraft are leased). Everyone involved is fine with the plane never flying again. Passenger deaths are often wildly more expensive than the lost equipment. Do you think Sullenberger was censured for ditching his plane?

Even if the airline were run by overinsured sociopaths, they’d strongly prefer zero deaths—or absolutely as few as possible—simply because they lose more ticket sales if people die.

So yes, I really think the airline would be fine with software that can do what Sullenberger did. In fact, I imagine they’d pay handsomely for it.

You’d be right if landing were a once-in-a-generation event. But it’s not. Because autonomous aircraft will need to detect runway incursions, they’ll likely have radar just for that purpose—and a random boat on the Hudson is going to look a whole lot like a random catering truck on a runway to the plane. (Ground radar is currently used to detect runway incursions, but putting radar on a plane is hardly challenging).

You seem eager to declare these things insoluble, but they’re either solvable or already solved. For example:

Yeah, it is. Seriously, validated 3D maps are absolutely a thing. Google has some and avionics companies have their own. Why wouldn’t an automated plane use those data to put itself down in the safest way available?

Because maps are not in real time. You could end up landing at Woodstock 12 or some such event.

There is no way we are at a point where cars can reliably drive themselves. There are too many variables that the software doesn’t yet handle. What would be very useful is a system where cars link to the cars around them and augment braking to avoid pileups. THAT is a practical use of the technology available today.
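The V2V braking idea is basically about eliminating reaction-time lag: a linked car can start braking the instant the car ahead does, instead of waiting for a human to notice brake lights. A rough sketch of the difference that makes, with made-up but plausible numbers:

```python
# Rough sketch: following distance eaten by human reaction delay versus
# a V2V-linked car that brakes almost the instant the lead car does.
# All numbers are illustrative, not from any real system.

SPEED_MS = 30.0          # ~108 km/h (about 67 mph)
HUMAN_REACTION_S = 1.5   # typical perception-reaction time
V2V_LATENCY_S = 0.1      # assumed radio + actuation latency
DECEL_MS2 = 7.0          # hard-braking deceleration

def stopping_distance(delay_s: float) -> float:
    """Distance covered during the delay, plus braking to a full stop."""
    reaction = SPEED_MS * delay_s
    braking = SPEED_MS ** 2 / (2 * DECEL_MS2)
    return reaction + braking

human = stopping_distance(HUMAN_REACTION_S)
linked = stopping_distance(V2V_LATENCY_S)

print(f"Human driver:  {human:.1f} m to stop")
print(f"V2V-linked:    {linked:.1f} m to stop")
print(f"Margin gained: {human - linked:.1f} m")
```

At highway speed the delay alone is worth tens of meters of margin, which is exactly the gap that turns a near-miss into a pileup.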