Self driving cars, kill one of me or two of them?

Same here regarding the bolded. That swerving boom truck example was especially egregious – a defensive human driver has that boom truck on their radar early on. Your take on Page 1 of this thread was right on – unexceptional defensive driving (as employed by human drivers) avoids the boom truck issue pretty handily.

See my post #96 for a summary (sorry I can’t link it myself because of the type of VPN I’m using at this instant)

No one is going to argue that a self-driving car is immune to any and all threats. I’ve never said any such preposterous thing. It’s a physical object in the real world.

The point I was making was simply that all the commonly cited examples, e.g. “blind bend on a mountain road”, “cresting a hill on the freeway” or “cut up near a parked car on the hard shoulder”, implicitly rely on the self-driving car making a bad decision first in order to even find itself in a dilemma situation.

It’s fascinating to me that this dilemma is discussed so frequently, yet not one good example has come up of how it could happen to a cautious, alert driver.

I’ve seen stories like this before. Admittedly this isn’t the best source, but since the scenario is physically possible, I think it fits the discussion. Suppose a mannequin is dropped onto a highway. Current-gen classifiers that can run on practical in-car hardware are not going to reliably differentiate between a mannequin and an unconscious person; humans obviously can’t either. And because it’s dropped into a live lane, there is no possible way for the autonomous car to brake to a stop before hitting it. Assume a one-lane divided highway, with concrete pillars at the overpass.

Do we:
a. Run over the possibly-alive human, or
b. Steer into the concrete pillar, impacting at potentially 50 mph (we were going 70, and emergency braking starts as soon as the car sees the falling object)?

Both choices, in the mind of the car, will potentially kill someone.

Note that the mannequin might only have a 70% chance of being a human (the classifier might be sensitive enough to notice some of the differences in the way a mannequin falls versus a human), and there might be only a 60% chance, roughly, of a typical adult being killed if the car impacts a concrete pillar head-on at 50 mph (the car will have statistical tables it can reference).

So, multiplying out the probabilities, either outcome carries a similar expected risk of killing someone.
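To make the “multiply it out” step concrete, here is a minimal sketch of the expected-fatality comparison. The 70% and 60% figures are the hypothetical numbers from the post above; the pedestrian fatality rate for a ~50 mph impact is my own illustrative assumption, not a figure from the thread.

```python
# Illustrative expected-fatality comparison for the two choices above.
# Probabilities: 0.70 and 0.60 come from the scenario as stated;
# p_pedestrian_dies is an ASSUMED figure for the sake of the sketch.

p_is_human = 0.70         # classifier's estimate that the fallen object is a person
p_pedestrian_dies = 0.90  # ASSUMED: fatality risk for a pedestrian struck at ~50 mph
p_occupant_dies = 0.60    # thread's figure for a head-on pillar impact at 50 mph

# Choice (a): brake hard and run over the object.
risk_a = p_is_human * p_pedestrian_dies

# Choice (b): swerve into the concrete pillar (occupant takes the risk).
risk_b = p_occupant_dies

print(f"run over object: expected fatalities ~ {risk_a:.2f}")  # ~0.63
print(f"hit the pillar:  expected fatalities ~ {risk_b:.2f}")  # ~0.60
```

Under these numbers the two expected values land within a few hundredths of each other, which is exactly why the car faces a genuine dilemma rather than an obvious choice.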

There’s an AI error here as well. The AI was following too closely to stop by braking in a straight line; it had to swerve. That is a failure, not a good thing. As I said in a previous post, that sequence of near misses shows the AI driving quite poorly from a defensive-driving perspective (sitting alongside a large truck, for instance).