Unethical scientific experiment?

If memory serves, Lawrence of Arabia chose a fatal crash over running down pedestrians.

Not that “chose” is the right term for a split-second action.

Yeah, I’ve no doubt that plenty of people are discussing the problem. As I said earlier, I estimate something like 70% of discussions on autonomous cars allude to this dilemma, as if it’s something that will happen all the time.
What I’m saying is: not only are such situations incredibly rare, they are best dealt with by various rules of thumb that prioritize avoiding any collision and then reducing the severity of any impact. There simply won’t be any “should the passenger or the pedestrian die” choice in the algorithm. It’s an “angels on the head of a pin” discussion.

It’s rather like a pilot choosing whether to land in a built-up area.
Of course, in real life you wouldn’t want to land in a built-up area even if you knew no people were there, since built-up areas have buildings, (parked) cars, fences, lampposts, etc.
Doesn’t stop it being a very common moral dilemma example.

Exactly. And that’s just what I said in post #14: human drivers don’t make any “him or me” decisions because there is no time to. We just try to avoid a collision.
The difference with an autonomous car is that, first, it won’t speed, and second, it will be able to pick the safest swerve direction.
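The “rules of thumb” idea above can be sketched as a simple priority ordering. This is a purely hypothetical toy, not any real vehicle’s logic; the maneuver names and the two scoring quantities are invented for illustration. The point is that the ranking only ever asks “can a collision be avoided, and if not, how hard is the impact?” — nowhere is there a “who should die” branch:

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    collision_probability: float  # estimated chance of any impact, 0..1
    impact_speed_kmh: float       # estimated speed at impact, if one occurs

def pick_maneuver(options: list[Maneuver]) -> Maneuver:
    """Avoid any collision first; failing that, minimize impact severity."""
    return min(options, key=lambda m: (m.collision_probability,
                                       m.impact_speed_kmh))

options = [
    Maneuver("brake straight", 0.9, 25.0),
    Maneuver("swerve left",    0.2, 15.0),
    Maneuver("swerve right",   0.2, 40.0),
]
print(pick_maneuver(options).name)  # swerve left
```

Here “swerve left” wins because it ties for the lowest collision probability and has the gentler predicted impact; the ethics never enter into it.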

When you have 2 minutes left to live, let me know if you think about the houses…

…not really sure what your point is…
That I would deliberately fly into houses, killing myself, as well as any occupants in the process?

I’ll help you out here, since this is getting painful.
Imagine I had two options only; flying into a derelict building (so I know there are no people inside), or into a field where I can see a huge crowd has gathered (and there’s no way to avoid the crowd).
And furthermore I have time to fully appreciate this situation, but no time to find any other choice than these two.

I fly into the building and sacrifice myself. So sure, with enough contrivances we can create this dilemma.
But good luck finding a real life example.

I can’t imagine that driverless cars will operate at the level of sentience necessary for the ethical dilemma to make sense. They will make decisions based on a number of accident-avoidance heuristics. When they fail the outcome is likely to have more to do with the unique circumstances of the failure than any decision based on right or wrong.

Oh, and anything Clarkson says is best viewed with deep suspicion.