What’s “should” got to do with it? The reality is that laws made by actual humans do tend to distinguish between action and inaction. That’s not a definitive statement of human morality, but it is a pretty good sign that, in practice, we humans view actions and inactions differently, whether you like it or not and whether we want to admit it or not. So we may experience different emotional and stress reactions to a system of morality that, in spite of how humans seem generally to have arranged things, refuses to distinguish between action and inaction.
My hypothesis: we tend to want people who haven’t taken steps to actively harm others to be able to go about living their lives, exercising choice, and not having choices made for them by bad actors. Under your moral scheme, those bad actors’ actions compel the non-actors (people who never would have dreamed they’d have to make a life or death decision for half a dozen other people on any given day) to set aside their own agency and intervene on behalf of others, because your desired moral system seeks to compel them to. In favoring a choice-based morality, you have diminished agency for all. Now even more people’s choices get made for them by the worst actors among us.
This sounds like you’re making the decision (to pull the lever or not) based on its effects on you and not on its effects on everyone involved. You’re opting for the choice where five people die but you don’t feel any responsibility for their deaths over the choice where one person dies and you do feel responsible. I (like @Left_Hand_of_Dorkness) don’t agree with this approach to decision-making.
The trolley problem itself is contrived and artificial, but I’m sure there are plenty of real-world situations where we have to choose between taking some action that will cause some harm to one or more people (even if that harm is just annoyance or inconvenience), and not taking that action, resulting in more overall harm than if we had taken it.
Do we even know there’s an “evil villain” involved? Does the trolley problem specify how the people got on the tracks? Would it make a difference if the “trolley” were a mine car in a coal mine, and the people on the tracks had passed out from mine gas?
It’s not “analogous to code”; this is all code, and it’s written by someone. The training does not happen with a leash and dog treats. Neural networks are less predictable than other programming techniques, but the end result is still pretty certain given enough miles driven. It will end up with someone being killed sooner or later, and the programmers’ decisions directly cause the car to kill one person rather than another in a trolley problem.
That’s fairly unlikely for a human (most IRL trolley problem scenarios involve far too little time for a human to consider the ethics of what they are doing). When you can execute billions of instructions a second, it’s much more likely for an autonomous vehicle.
And that’s choosing option A in a trolley problem. If the thing you are heading towards is a speeding 20 ton semi truck, then that’s a decision to kill the driver (who is the person buying the software), not whoever is on the other side of the road. If the thing you are hitting in option B is a cardboard box or a raccoon, it will swerve to avoid the semi and hit them instead (as those would be weighted less during training). If the engineers decide to weight pedestrians and cyclists the same as a speeding semi truck, that is absolutely a programmer deciding to kill one person rather than another. I’ve no idea if that’s how they’ve programmed it; maybe they set up the weights so it will always avoid the semi truck even if that means killing other road users. But it’s definitely happened already: there are autonomous cars driving round with those weights right now, and sooner or later one of them will use them to decide option A or option B in a trolley problem.
I’d argue that if you slow to 5 mph for a gentle bend with poor visibility, you are much more likely to cause an accident by someone rear ending you than by encountering a sudden immovable object (hell, for all intents and purposes you are creating a sudden immovable object for the next person going round the bend).
I’d argue this is one of the fundamental ethical insights provided by the trolley problem. What is the logical justification for this? Why does positively acting carry a different moral weight than passively failing to act? (Given the thought experiment’s premise, where you are 100% sure that five people die in option A and only one in option B.)
Surely that’s just an arbitrary moral code no more logical than not eating pork?
The problem with the trolley problem, and with saying you can’t get rid of the problem, is that a society with lots of trolley problems sucks. That’s why we make laws: to get rid of trolley problems.
Is it? Is the idea that action is viewed differently from inaction just an arbitrary moral code, or is it simply how humans have evolved, such that we feel differently about harm caused by action as opposed to harm allowed by inaction?
Don’t mistake the absence of a considered choice as “arbitrary”. Or at least don’t understate just how hardwired our “arbitrary” makeup can be. We don’t choose what we feel, we just feel.
I’d say this is taking the thought experiment too literally. It’s meant to be about analysing ethics and utilitarianism, not guiding public policy. IRL you very rarely have the kind of certainty described in the thought experiment (even in the literal situation described in the dilemma), so the ethical dilemma is completely different.
The exception is the autonomous car decision-making discussion, and in that case, yeah, absolutely they need to be putting far more effort into avoiding trolley dilemmas.
I understand and respect that, but I have to somehow live with whatever decision I make, so it has to be based on my feelings about and interpretation of the situation. Now, if the one person was me, that would make it a lot easier because I wouldn’t be killing an innocent 3rd party, I would be making the decision to sacrifice myself.
What’s so hard with admitting that the simple statement “No-one’s ever going to write a line of code choosing who to kill” is correct? Are you dug in too far now?
Training a system is not coding, and no-one is going to train a system to deliberately kill anyone anyway (apart from military apps, obvs, we are talking in the context of self-driving cars).
If you choose to interpret it that way, then embedded software has been choosing trolley problem solutions for decades. Lots of safety-critical machines have a “when all else fails” behaviour that might still result in harm to someone but is the safest approach in the main.
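To make that concrete, here’s a minimal, hypothetical sketch of that kind of “when all else fails” fallback; the states, names, and trigger conditions are invented for illustration, not taken from any real product:

```python
from enum import Enum, auto

class State(Enum):
    RUNNING = auto()
    SAFE_STOP = auto()   # the predefined "when all else fails" state

def step(state: State, fault_detected: bool, watchdog_expired: bool) -> State:
    """One tick of a simplified safety state machine."""
    if fault_detected or watchdog_expired:
        # All else has failed: fall back to the fixed safe behaviour
        # (cut power, close the valve, stop the motor). It can still harm
        # someone, but it is the least-bad default across all failures.
        return State.SAFE_STOP
    return state

print(step(State.RUNNING, fault_detected=False, watchdog_expired=True))  # State.SAFE_STOP
```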
Yet you will not find any code that deliberately sacrifices one life for another.
How do you think neural networks are trained, doggy treats and spray bottles?
Because it’s not. It’s 100% definitely incorrect.
There are actual autonomous cars on the road right now, driven (mainly) by neural networks. Someone wrote some code to train those neural networks. They did that by assigning weights to real-life situations that decide how likely the car is to avoid them. The computer doesn’t assign those weights; a programmer does. They can write their code to do option A:
A cardboard box has weight -1.
A pedestrian has weight -1000.
An oncoming semi truck also has weight -1000.
This will mean (on average; this is obviously a massive simplification) that the code they just wrote will kill the driver when given the option of swerving to avoid a semi truck but hitting a pedestrian.
Or they can write their code to do option B:
A cardboard box has weight -1.
A pedestrian has weight -500.
An oncoming semi truck has weight -1000.
This will swerve to avoid the truck but kill the pedestrian.
Obviously that’s a massive simplification, but that code has absolutely been written and is driving round American roads right now.
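As a toy illustration of that simplification (made-up numbers and function names, nothing to do with any vendor’s actual code), here is how flipping one weight flips the decision:

```python
# Toy illustration only: invented penalty values, not any real system.
PENALTIES_A = {"cardboard_box": -1, "pedestrian": -1000, "semi_truck": -1000}
PENALTIES_B = {"cardboard_box": -1, "pedestrian": -500,  "semi_truck": -1000}

def pick_maneuver(penalties, maneuvers):
    """Pick the maneuver whose total obstacle penalty is least bad."""
    return max(maneuvers, key=lambda m: sum(penalties[o] for o in maneuvers[m]))

# Staying in lane hits the oncoming semi; swerving hits the pedestrian.
maneuvers = {"stay_in_lane": ["semi_truck"], "swerve": ["pedestrian"]}

# Option A: a tie, so there is no reason to swerve and the car hits the semi.
print(pick_maneuver(PENALTIES_A, maneuvers))   # stay_in_lane
# Option B: the pedestrian is penalised less, so the car swerves into them.
print(pick_maneuver(PENALTIES_B, maneuvers))   # swerve
```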
Yeah, those are two independent attributes. Each corner has a gentleness and a visibility. You can have very sharp corners with good visibility and very gentle corners with very poor visibility. That’s how corners work. If you slow to 5 mph on a gentle bend you are putting yourself and anyone in the car with you in danger.
You’ve used that one. Probably try to think of new jokes?
Nope. Weightings are the result of training; if a programmer was just inputting the weights there would be no point in running the training.
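To illustrate that distinction in the crudest possible way, here’s a toy one-parameter model (nothing to do with any driving stack, purely illustrative) where the weight falls out of the optimisation loop rather than being typed in:

```python
# Toy example: fit y ~ w * x by gradient descent. The weight w is shaped
# entirely by the data; nobody hand-enters the final value.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # (input, target) pairs

w = 0.0                           # starts arbitrary...
for _ in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad              # ...and is pushed around by the data

print(round(w, 2))                # about 2.04, learned rather than hand-set
```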
But sure let’s imagine a programmer manually sets the weights. In that case it would be like your option A: any injury to any human is unacceptable.
In an (extremely contrived) hopeless situation, the system should just stop the car as quickly as it can. We can make this simple: imagine we just feed in sensor data to the car to tell it that suddenly there are children / trucks 5 metres away in every direction. What should the driver do? Answer: just press the brakes hard.
What it shouldn’t do, and no licensed car AI would ever do, is choose who to hit. It’s just not how any safety-critical system is ever implemented.
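A hypothetical sketch of that behaviour (invented names, heavily simplified): when nothing is clear, the planner doesn’t rank victims, it just brakes:

```python
def choose_action(clear_paths: list) -> dict:
    """Pick a clear path if one exists; otherwise brake hard in the current lane."""
    if clear_paths:
        return {"maneuver": clear_paths[0], "brake": 0.3}
    # Hopeless case (obstacles in every direction): no ranking of who to hit,
    # just the hardest possible stop.
    return {"maneuver": "stay_in_lane", "brake": 1.0}

# Children / trucks 5 metres away in every direction -> no clear path -> brake hard.
print(choose_action(clear_paths=[]))
```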
They are also used as input. How does the neural network know it should avoid pedestrians and semi trucks, rather than attempting to run over as many as possible or crashing with the most spectacular possible explosion? It can’t work that out on its own (neural networks do not really think, or know things; it’s all just matrix multiplies); it needs a programmer to weight one outcome over another.
But that’s not an option. Hitting a semi truck will certainly injure or kill the occupants of the car (who are probably the ones paying for the software). So by selecting option A (weighting the oncoming semi truck the same as the pedestrians) you are choosing to kill someone just as much as option B.
But it does. In your example, if the children were replaced with a large cardboard box, the car would choose to swerve to avoid the semi (the sensors absolutely know the semi is heading towards them, and the computer can work out the only way to avoid it is to swerve) and hit the cardboard box instead, saving the occupants of the car.
So the programmer has decided that a cardboard box is more favorable to collide with than a semi truck or a child. The same applies when they decide a child is equally as unfavorable to hit as a semi truck (so the car doesn’t favor one or the other), or decide the child is less unfavorable (or more unfavorable, so the car will swerve out of the way of the child if it can’t brake, even if there is an oncoming semi truck it is swerving towards). That is a decision to favour killing one person over another. That decision has been made by a programmer at an autonomous car company.
I don’t know which way they went. You could argue (and maybe their lawyers did) that the moral prerogative is to keep the person buying their software alive, so it should always choose the option least likely to kill the occupant. But they made the choice. And the software that resulted is driving round American roads as we write this.
I’m sure they have a reasonable idea (not 100% accuracy, but image recognition is typically really good nowadays), and they will tend to avoid them. But less so than a child or an oncoming semi truck.
So there is some code somewhere that says:
Cardboard box: -1
Child: -1000
Oncoming semi truck: -1000 (or maybe -900 or -10000?)
Which will be used to train the neural network to avoid boxes, but not as much as children or trucks.
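Purely as a hypothetical illustration of how penalties like those could feed into training (something like reward shaping over simulated drives; the real details aren’t public), consider:

```python
# The numbers and labels are the ones from the post above, not real values.
PENALTY = {"cardboard_box": -1, "child": -1000, "oncoming_semi": -1000}

def episode_reward(collisions, progress_m):
    """Score one simulated drive: reward progress, penalise whatever was hit."""
    return progress_m + sum(PENALTY.get(obj, 0) for obj in collisions)

# A drive that clips a box scores far better than one that hits a child,
# so an optimiser trained on this signal learns to avoid the child first.
print(episode_reward(["cardboard_box"], progress_m=120.0))   # 119.0
print(episode_reward(["child"], progress_m=120.0))           # -880.0
```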