Free Will Zombies Should Be Punished

Ah, I think I understand what you mean. I think then that the difference between the river and the person is that we have to restrict the river physically, whereas with a person we can threaten to restrict them. In terms of curbing behaviour, we make the river unable to do it and the person unwilling to, with the advantage that we might not have to actually restrict their freedom. I suppose that the person’s having a mind is just another complexity in the general business of “stopping them doing it”, but I think it’s a large enough difference that we can’t really reduce it enough to say they’re just middle mechanisms. That we don’t actually have to build the levee (in effect) is, to me, a big enough difference that I would consider them not to be the same.

Mmmm, I think we were quite close for a moment - up until this point…

The existence of the levee is analogous to the existence of the penalty on the books. To suppose what would happen if the levees weren’t there is to suppose what would happen if the penalty wasn’t on the books. And vice-versa.

But that may be just a nit, and the larger point I’ve been pressing may seem too facile to be of any use. Without going too much further afield, I’d just like to say that this idea is an important foundation. That it seems simple and/or obvious to you and others is quite encouraging. To those who think of responsibility as being something akin to an objective fact in the Universe, it can be difficult to achieve even this simple common ground.

Would punishing a tree help deter other trees from falling on things? It’s not analogous. It may be true that criminal punishment of murder isn’t a spectacular deterrent, but it may be much more reasonable to speculate that a society devoid of punishment for murder might lead to FW-zombies more easily calculating that murder is the answer. After all, if it’s a question of pros and cons and there’s no punishment or consequence, then the murder in question becomes an easy sell.

Right. It was a poorly thought out and misleading tangent.

Exactly. This is the main point. Free will is not really a consideration in why we punish. A deterministic world using levee-like punishment is indistinguishable from a world in which there is a sense that some kind of free will exists and that responsibility is a kind of objective fact. This would be an important concession from most supporters of free will.

I agree. We should punish them for the same reason we punish each other. It’s a necessary part of making society run.

Why? If anything, they would be on a higher plane of value than us, being more aware. They would be justified in pointing out that WE, not they, are the zombies. We are the ones who walk around unaware of most of why we do things or think about them, or even how we do so.

This paper (warning: PDF) discusses many of the points raised in this thread in great detail. Hopefully others will find it as interesting as I did.

I’m fascinated, but a little confused, and while not wishing to hijack, I would like to ask what the PC apeman sees as the essential differences between Fred the fw-zombie’s decisions as to what weights to assign different factors in his action algorithm – decisions he may later doubt – and the decision-making process of a Free Willed individual.

If Fred can doubt his weightings, or worry that he may have missed a factor, can he not also realize that he weighted wrong or missed a crucial factor, and so realize that his course of action was not optimal? True, if asked whether he’d repeat the same action, at the exact same time (in the past) and given the same information, the honest answer is “yes”; but since his weighting and data can change, if he had considered his action for any extra time the answer might have been different.

What I don’t understand is how this is different from the decision making process of a supposedly free-willed individual (rather than a fw-zombie)… with the exception that Fred can tell you exactly how he came to the decision.

It seems to me that if you give any individual the ability to “tell you exactly the factors that went into their decision, what weights each factor was given and what the final score was for each option they considered”, then the decision making process of that individual will not be distinguishable from a fw-zombie.
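
If it helps to make that concrete, here is a minimal sketch in Python of the kind of transparent weighting Fred is imagined to do. The factors, weights, and options are invented purely for illustration and are not meant as a model of real cognition; the only point is that every factor, its weight, and each option’s final score can be reported afterwards.

```python
def decide(options, weights):
    """Score each option as a weighted sum of its factors and pick the best."""
    scores = {}
    for name, factors in options.items():
        scores[name] = sum(weights[f] * value for f, value in factors.items())
    best = max(scores, key=scores.get)
    return best, scores  # the "exactly how I decided" report

# Hypothetical decision: how to respond to an insult.
weights = {"satisfaction": 1.0, "risk_of_punishment": -3.0, "social_cost": -1.5}
options = {
    "walk_away":  {"satisfaction": 0.1, "risk_of_punishment": 0.0, "social_cost": 0.2},
    "shout_back": {"satisfaction": 0.6, "risk_of_punishment": 0.1, "social_cost": 0.5},
    "assault":    {"satisfaction": 0.9, "risk_of_punishment": 0.9, "social_cost": 0.9},
}

best, scores = decide(options, weights)
print(best, scores)
```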

Hello, Apollyon
Don’t feel even a little confused. You’re dead on. In every free will debate, if it goes on long enough, the topic of “responsibility” tends to come up. Usually it’s offered tentatively, looking for clarification. Sometimes it comes in the form of an argument from consequences: If there wasn’t free will, we couldn’t hold people responsible.

What I’ve only hinted at here is my claim that we too often reify things like right, wrong, responsibility, justice, punishment, morality, etc. and forget that these are just useful ideas we have developed. They have no other, ultimate/universal/objective/whatever justification.

Yes. I tried to make Fred as human as possible. In fact, I do think of Fred as a human (perhaps after some fantastic neurosurgery).

Absolutely. The conclusion is that free will is not a necessary ingredient for the (somewhat) ordered society we now have.

Of course this conclusion was based on the premise that libertarian free will doesn’t exist. For some people, getting them to imagine a world without libertarian free will AND that the illusion of free will is unnecessary is, I think, a triumph in itself. So now I’ve exposed the limb upon which I’ve placed myself. I think maybe I can change Fred the fw-zombie’s story to one of Liz the lfw-zombie. In this version, we do have libertarian free will but Liz does not. Do you think we’ll come to the same conclusions?

I’ve never understood the “he couldn’t control himself”/temporary insanity type pleas, or the “he couldn’t help committing the murder because of reason X beyond his control” type arguments for a lesser sentence.

I mean…if you can’t control yourself, then you need to be locked away until you die or be executed! You have no control!

If you did have control, maybe you can be reformed.

Would it depend on the flavour of lfw that we possess, and which Liz does not?

If we all have some form of non-physical mind or soul to explain our lfw (and the existence of souls is a given for rational decision making, such as in a court of law), and Liz is soul-less, would this be different from an lfw explanation of ordinary randomness for “elbow room”, a randomness to which Liz is not subject?

I can actually see an argument that if Liz is an entirely rational agent, but not subject to accidents of randomness in her decision making (and is in this indistinguishable from Fred as far as I can see), then if anyone is deserving of diminished responsibility it is us, rather than Liz. In our case, whether the gun was fired or not could be ascribed to chance (or at least, knowing that heavy punishment awaited us may or may not have changed the outcome), whereas in Liz’s case the chance of punishment and the severity of that punishment would have been a direct and rational weighting in her decision to kill.

(IANAL, but I suspect that an lfw murder defense may not elicit much sympathy: “I was intending to indeterministically either shoot or not shoot, yer honour… it wasn’t my fault that his probability waveform collapsed that way!”). :slight_smile:

On the other hand, if we argue that Liz is soul-less (and that souls are the source of free-will) would we not also have to argue that Liz is not a rational agent?

That’s why it’s called temporary insanity. If you’re permanently insane, then yes, you have no control, but if some fleeting circumstances caused you to lose control for a moment, then that doesn’t apply.

If we consider the sentence a deterrent, then it makes sense to reduce the sentence in cases where there is no deterrent effect. That is, if cosmic rays strike your brain and cause you to momentarily lose the capacity for rational thought (which leads you to commit a crime), you’re not thinking about the possible jail time you might do, and neither will anyone else in your situation.

Sending you to jail won’t prevent anyone else from committing the same crime if they get struck by cosmic rays, and since your affliction was only temporary, there’s no other reason to lock you up either. All it would do is increase the amount of suffering in the world, for no benefit.

Yes, I think this is a good point. LFW would have to be something that is, at least conceptually, a separable feature. It must be possible for Liz to have all the characteristics of being human except for LFW. If it is separable though, I don’t think it matters if it is, or comes from, a different metaphysical substance.

Since I don’t have an idea of how LFW can exist, I fear it may be hard for me to paint an adequate picture. But here’s a quick sketch…

We’ve already imagined the compatibilist world where we have the experience of feeling we could have done otherwise but Fred does not. What if our experience is not an illusion? What if this feeling of being able to have done otherwise arises because we actually could have done otherwise? But in this sequel, Liz could not have. She, like Fred, not only lacks the experience of free will, she lacks actual, libertarian free will.

Just like Fred, Liz knows murder is wrong and feels it is immoral. Actually, there is no substantive difference between Liz and Fred. Like Fred, she too values her life and not being restrained from acting on her desires. (I’ve avoided the word freedom here for obvious reasons.) She knows there is a death penalty for murder (in her jurisdiction) and that is a significant factor in her calculations. If Liz murders Vicky, Liz should still be punished - even in this Libertarian World. The punishment must be in place and carried out in order to keep the negative weighting for murder high in the calculations of other lfw-zombies.

Now if I’ve been convincing with the Fred and Liz stories, I’ve demonstrated that neither libertarian nor compatibilist free will is a necessary ingredient in diminishing an undesirable behavior. Penalties combined with deterministic processes can do, or may actually be doing, all the work. In considering a deterministic versus a libertarian free will scenario, the former strikes me as the more parsimonious but maybe not the more obvious.
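
To put a throwaway number on the levee idea, here is a purely illustrative sketch. The figures are invented; the only point is that the penalty does its work by changing the weighting in other agents’ calculations, with no appeal to free will anywhere in the arithmetic.

```python
def murder_score(gain, penalty_weight, chance_of_being_caught):
    # Expected value of the act: the benefit minus the expected cost
    # of the imposed negative consequence.
    return gain - penalty_weight * chance_of_being_caught

no_penalty   = murder_score(gain=5.0, penalty_weight=0.0,  chance_of_being_caught=0.7)
with_penalty = murder_score(gain=5.0, penalty_weight=20.0, chance_of_being_caught=0.7)

print(no_penalty > 0)    # True: with no consequence, murder is an "easy sell"
print(with_penalty > 0)  # False: the enforced penalty keeps the weighting negative
```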

I’ve given it a good think, and all the arguments I thought up against this I ended up disagreeing with. So I think I’m with you on that - though I still think the difference between physical and mental deterrents is significant. But thanks for giving me something to think about! I’ve been puzzling over it for a good while today. :slight_smile:

Wow. Thank you for saying so. Right or wrong, I’m flattered you considered it worth considering.

I’m not sure whether this hasn’t become a zombie thread (pun intended) within the meaning of the SDMB’s rules, but I got here from the free will thread linked in the OP, which is still active, so I’m going to post. If everyone has moved on to other things, so be it.

IMHO, the determinists (and compatibilists) who have responded so far are answering the wrong question. If the contention were made (which I’ve not seen done) that there’s no point in punishing criminal behavior, since the behavior is already determined, it would of course be valid to argue that deterrence has value. This, however, isn’t the issue. The question is whether it’s fair and whether it’s just.

Here’s the rub. The person we’re punishing has wronged notwithstanding the prospect of penalty. By hypothesis, Fred could not have acted otherwise. How, then, can punishing him be justified? (In fact, society relies on free will conceptions, but whether these are valid is the point of the thought experiment.)

If the argument is “that it works,” consider this alternative. Suppose that we say, “If you break the law, we will punish not only you but someone close to you (e.g., a parent, a spouse, a sibling or your child).” It seems to me this would be a very effective deterrent. Much more effective, in fact, than punishing only the “perp.” But would it be fair and would it be just?

Common sense tells us that, whether or not it works, such a system would be neither fair nor just. How, then, is punishing only the “perp” different? Unless we assume Fred could have acted differently (the very thing excluded by the OP), punishing him is indistinguishable from punishing his parent, etc. If neither he nor they caused the behavior complained of, mere utility cannot justify punishment.

Actually, the reason for this thread was to deflect the inevitable “free will is necessary for responsibility” from the Free Will - Does it exist? thread. But you are correct in that the position being advanced here is not that there’s no point in punishing behavior. (I omitted the word criminal there. Hopefully it will become clear why.) The contention is that punishment is not a result of transgressing universal/ultimate/objective standards of what is fair and just. Establishing that there are such standards is, to put it mildly, problematic. Punishment is more simply explained as a means of achieving subjective desires.

What does it mean that a person has wronged notwithstanding the prospect of penalty? If you mean wrong vs. a standard we’ve created I would agree. But why did we create those standards? If you mean those ultimate standards, you have your work cut out for you proving they exist.

It would be an effective deterrent. And it would affront our sense of fairness and justice. Again, are those based in objective standards? It seems more likely to me that our current balance between punishment and pursuing our desires is one that has evolved with us as social animals. It didn’t have to turn out this way.

If punishing the parent, etc., were as effective (and had similar or fewer side effects), we very well might have developed that system. And it would then feel like common sense to punish the others.

Here’s the bottom line. If someone (or a group of someones) wants less murder, can* he (they) do something to cause there to be less murder? Is having punishment more likely a result of those subjective desires, or are there objective standards, along with an objective responsibility that we must* enforce them? If the former, then consider such a system: so unconsciously evolved, and held for so long, that things like right, wrong, justice, responsibility, punishment, criminal, etc. start to seem like the latter. It’s not that these things don’t exist. It’s that they aren’t what we may think they are.

*Which view is more like the kind of free will we’d like to have?

I’m sorry, I’ve read that through several times - and reread the whole thread - but I can’t figure out your point. So, I’m going to make a few comments, partly to note points of agreement and partly to expound a little on my views. Then, I would ask you to restate your position. It would help if you stated it in narrative form, rather than as point-by-point refutation.

Let’s start with where we agree. Of course there are no such things as objective standards. Not sure I’ve ever seen anyone argue there are in this context. Laws are things crafted by people, to further ends they want furthered. Second, I agree that laws (and customs) have evolved over time, largely because people found them to work, or at least to work better than other systems. Which is not to say, however, that they always work well, nor that they are necessarily the best systems. That’s why we have debates about these issues.

The funny thing is that this last concession is the only reason you have a horse in the race. If we stipulate there are no objective standards and look to those which have evolved within society as the best answer we’re likely to get, it follows that LFW is more-or-less true and your compatibilist hand-wringing is out on its ear. To my knowledge, no society has ever established a legal system based on those premises. In the West, philosophers have been railing at us since at least Hume that free will is incoherent and the court of public opinion (e.g., as reflected in the laws it has passed and how criminal trials are conducted) has done nothing but yawn.

Let me pause to explain briefly my point of view. I’m a materialist, an atheist and a believer in volition (though not LFW). I’m also a believer in soft (or psychological) determinism, meaning I think people have less control over their behavior than the current (American) legal system assumes. For example, the Mr. Puppet and Boys From Brazil problems discussed in the paper linked above by marshmallow resonate with me. The first time I read Helter Skelter, my main reaction was that, given his upbringing, no wonder Charles Manson was so messed up. In the real world, I spend little time arguing with determinists and compatibilists. In fact, I hardly ever run into them. Rather, I spend most of my time arguing with libertarians and hard-ass conservatives, who I think have an overly simplistic view of human behavior.

That said, I do, as mentioned, believe in volition. And I believe further that the current system could not be justified without it. As discussed in the paper mentioned above, desert is bedrock in criminal law. As in, the perp deserves punishment. Or, stated a little differently, deserves to be or is legitimately used as an object of deterrence. Given my soft determinism leanings, I would change the system in several ways - for example, making it more rehabilitative than punitive - but I don’t dispute the fundamental premise.

Since you do dispute the fundamental premise, it seems to me that you should be disputing the whole system. How you come to a different conclusion eludes me. Perhaps you can explain.

PBear42,
Thank you for the sketch of your point of view. It was quite helpful. We seem very similar in our beliefs. So I’ve gone back and reread your [post=9602709]previous post[/post] in that light. I’d like to back up and examine where we aren’t communicating.

When you said “The person we’re punishing has wronged notwithstanding the prospect of penalty” it struck me as odd. It sounded like an assumption of objective right and wrong. When I read it now, in the light of your disbelief in objective standards, what am I to conclude? That our laws could have evolved meaningfully without accompanying penalties? Surely not. They would have had no effect. My position is that having wronged is an ultimately meaningless concept. It’s only our label for those actions we want fewer of and are moved to reduce. Inherent in all this is my position that penalties necessarily accompany laws. (“Laws” being a broader metaphor for any prohibitions, not just human codified law. “Penalties” means, most generally, imposed negative consequences.)

You also proposed that the wrong question was being asked - the question should be “whether it’s fair and whether it’s just”. This too struck me as an objectivist position. My apologies if I misread it. How do you define fair and just? I see them as merely our measures of how satisfied we are with the effectiveness of our laws and their accompanying punishments. They are evaluations of desirable results as well as undesirable side-effects. They are advanced, later parts of the evolutionary process of selecting and refining the means of achieving what we want.

It is probably best if I refrain from my habit of overloading posts with too many ideas. If you wish, tell me how your understanding of these ideas differs and we’ll go from there.

PC

Moving one or two steps at a time works for me.

What did I mean by “The person we’re punishing has wronged notwithstanding the prospect of penalty”? Two things. First, that Fred has broken the law. No implication of objective standards intended. Second, that deterrence has in this case failed, and that punishing Fred won’t change that. You are defending punishing him on utilitarian or consequentialist grounds. I have problems with that, for the reasons mentioned, if we accept your premises. As further mentioned, I think the solution is to reject the premises, which the status quo indeed does not accept.

As for how we figure out what is fair and just without objective standards, the answer is that we have to muddle through somehow. The problem doesn’t go away because we don’t have them. Generally, we proceed (as you have) by positing hypotheticals and testing whether the results of applying one rule or another seem fair and just. It’s all pretty ad hoc, but there’s no other way to go about it. You’ll note that the Greene and Cohen essay linked above by marshmallow proceeds the same way.

Here, I have challenged whether utilitarianism alone can justify punishment and have posed a counter-hypothetical, i.e., punishing someone close to the actor. You seem to agree this would be effective, but unacceptable. If I misunderstand, please explain. You could argue this isn’t analogous to punishing Fred, but haven’t yet. All I will add at this point is that I sincerely think it is, if we accept your premises. IOW, I’m not just playing word games. Whether I’m right is, of course, an altogether different question.

Finally, as regards your assumption that “penalties necessarily accompany laws,” I mostly agree (the mostly having to do with the soft determinism issues mentioned previously), but I get there on different assumptions. I assume we own our decisions, i.e., have volition. Fred, by hypothesis, does not. My inability to make sense of such a system is one of the things which causes me to doubt your premises.

PBear42,
I think I may see where we are at cross purposes here. I have the impression that you are thinking along the lines of what the consequences for our transgressions ought to be. This is an interesting and worthy topic on its own but not terribly important to the reason for this thread. I’m using our system of prohibitions and penalties, as it now stands, as the foundation of a different argument.

It is true that Fred has broken a law and deterrence did not work in his case. That is beside the point. Here is the foundational idea: Deterrence may still work for other potential murderers. In order for the deterrence to have any weight in their calculations, the penalty must be invoked on Fred.

What we have done is created “responsibility” and assigned it to Fred. The tricky part (for some) is that the term contains no grand truth value. It’s just shorthand for “you’re the one who will be punished”. “Punishment” encompasses no grand truth value either. It’s just an imposed negative consequence designed to reduce unwanted actions (by others).

It is my claim that this (perhaps unorthodox) view describes our system as it has actually come to be. Though this is not the case with you, words like responsibility and punishment are often reified, without warrant, as objective truths rather than as mere labels for evolved mechanisms. If my description is a workable explanation, it shows why punishing Fred, the free will zombie, has the same effect as punishing someone who does experience a feeling of free will. If successful, this argument deflects from the other thread the inevitable objection that responsibility requires free will.

PC