Is it wrong to not believe in free will but still believe that evil should be punished?

No, our (assuming US, but this applies universally to any modern civilized society) justice system is insufficiently fair. “Inherent unfairness” is part of the fundamental nature of the universe. The mere presence of such unfairness is absolutely irrelevant, as it exists in all things. The only measure that matters is the degree of fairness. Our justice system has serious issues, no doubt, but it offers a degree of fairness so astronomically higher than your head-in-the-clouds alternative that what you are suggesting doesn’t warrant anything but absolute ridicule.

What a bogus question. Our justice system does not make me “feel better in the moment” and that is not why I support it over your extremely naive and completely unworkable alternative.

I notice also that you’re trying to gloss over blunt criticism to maintain your world view.

Address blood feuds.
Address revenge-based violence.
Address the historical reduction/elimination of the above by the implementation of centralized justice systems based explicitly on punishing wrongdoing.

Then address how your proposed alternative doesn’t facilitate more of these problems.

I mean something that is actually entirely predictable. Like a stapler. Or even something more complex, but still predictable, like a computer program.

No one is confused over whether or not those objects have free will; it’s only when we get into much more complex objects, like things with wetware between the ears, that we start assigning ideas of free will.

I think I agree, assuming that your last line is meant as a joke.

Anyway, the OP was musing on why we should punish someone “even if it has no rehabilitative or deterrent effect,” which is the situation given in my example.

I disagree that the following premise is valid: that “the likes of Hitler, Ted Bundy, Josef Mengele, Osama Bin Laden etc can’t genuinely ‘choose’ to be evil and perform horrific acts”. With or without “free will” there is some sort of agency to these characters, and that agency can be modified by society’s incentives to not do evil shit.

OTOH, if we caught Ted Bundy, found a giant tumor in his brain, and once removed, he was remorseful and aghast at what he had done, then I don’t know that any further punishment is necessary.

The alternative that was being suggested was a justice system that was more fair. Not sure why you feel that that warrants ridicule. Criticising the unfairness of our current system (or as you say, insufficient fairness, as though that makes a difference) is not a head-in-the-clouds alternative.

That’s exactly why you said that you support it. Maybe not you in particular, but the people you are worried will start blood feuds if they do not have a justice system that makes them feel as though vengeance has been wrought on their behalf.

Those are both forms of “justice” that are only removed from our current justice system by the state acting in lieu of the injured party.

Where “justice” works, sure. But there are many places where justice doesn’t work. People are punished for crimes they did not commit, or they are punished overly severely for the nature of their infraction. There are others who deal great harm, and are not punished at all.

Could you first explain what it is that you think his proposed alternative is? It sounds like it is simply making the justice system more fair.

Exactly, and if justice is not fair, then it is actually a destabilizing force in society.

I don’t think that you actually understood what it was that you were responding to here. Yes, punishment is inherently evil: you are causing someone harm, and that is evil. There are times when there is a choice of the lesser of two evils, in which case punishment is merited in order to prevent the greater evil.

However, if punishment becomes the reason for “justice”, then that is evil.

I think the OP is coming at this from a very typical, and frankly American, way of looking at things: either we “punish” criminals, or we do nothing. However, there are many reasons to imprison or restrict the freedoms of offenders that have nothing to do with punishing them per se, such as:

  • Rehabilitating offenders so they are less likely to commit further crimes
  • Deterring others from committing crimes
  • Protecting the public (keeping the most dangerous off the streets)
  • Compensating for losses (most obviously in the case of fines or damages)

Notice that all of these reasons still make perfect sense absent magical free will.

In fact, I would say the situation is somewhat the opposite of the OP. That is, if we still consider law and order to be about punishment, then we may have to let criminals go once we find neurological or neurochemical explanations for specific behaviours. Whereas, if we throw out that notion, then we don’t need to make any changes, other than sociologically forgetting about “eye for an eye”.

Yeah, I think the most important function of a legal system is order, not punishment, although punishment is part of the equation.

If there’s a serial killer in my hood, I want him arrested, convicted, and imprisoned – whether he feels like he’s been punished isn’t my immediate concern at that point.

If you’re not confused about whether a computer can have free will, you’re not reading the right fiction. But suffice to say that there is a definition of free will that is completely compatible with agents that are actually entirely predictable. It’s called compatibilist free will. And after spending way, way too much time thinking about this subject I’ve concluded that compatibilist free will is the only type of free will that is not completely, totally, and irredeemably moronic.

This is keeping in mind, of course, that such agents are only entirely predictable if you not only know their current brain state, but also the entire state of the entire universe and how all of physics works, allowing you to accurately predict everything that will be influencing or interacting with the agent in the future. If this does not obtain, you cannot know how their thought processes will determine how they act in the future. And of course let’s not forget that the kind of predictability in question here is the “I know he would never do that; he’s not that kind of person!” type of predictability, not the “you will do it whether you want to or not, because it is your fate” kind of predictability.

Justice systems rely on the “not that kind of person” kind of predictability; everything about them (except vengeance) is predicated upon the idea that people are predisposed to certain kinds of behaviors, and the whole point is to get it so that as many people as possible can be confidently stated to be the type of person who will color between the lines. You remove people from society who you predict will keep committing crimes if left free, you try to make other people predictably obedient by demonstrating what happens to bad actors, and rehabilitation is when you change the type of person they are until you can confidently predict they won’t commit more crimes. The entirety of the justice system relies upon the idea that people are basically predictable in how they will act. Of course it’s compatible with a universe where people actually are predictable.

If humans are meat machines, then isn’t part of the calculation of whether you should do something or not based on risk versus reward? If it is, then it seems to me adding risk (i.e. punishment), should be part of the calculus, for the good of society.
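To make that calculus concrete, here is a toy sketch (all numbers invented for illustration) of how adding a risk term changes whether an act “pays”:

```python
# Toy expected-value calculation for a "meat machine" weighing an act.
# All numbers are hypothetical; only the structure of the calculus matters.

def expected_value(reward, p_caught, penalty):
    """Expected payoff of an act, given a chance of being punished."""
    return reward - p_caught * penalty

# No justice system: no risk term, so the act always pays.
print(expected_value(reward=100, p_caught=0.0, penalty=0))    # 100.0

# With a credible punishment, the same act becomes a net loss.
print(expected_value(reward=100, p_caught=0.6, penalty=500))  # -200.0
```

Raising either the odds of being caught or the size of the penalty is enough to flip the sign, which is the whole point of adding risk to the calculus.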

Without free will, isn’t any “should” an illusion? What does it mean to say that anyone should do something other than what they do do?

I’ve read it, I just don’t take it seriously as anything that will be applicable within my lifetime.

Yes, but a “hello world” program, or even something as advanced as an AI bot for Unreal Tournament wouldn’t even be considered to have that much.

Right, and there is a difference, in perception if not in reality, between something whose entire “thought process” state we can determine, and something whose state we cannot. With a computer program, no one has any question as to what free will means for it, if it is simple code that can be changed to reflect the wishes of its writer. The matter in our heads, OTOH, is a bit more complex, and we do not, and maybe never will, understand it well enough to make accurate predictions as to its behavior.

But let’s also not forget that that is a statement oft given right after finding out that someone did in fact do the very thing we never thought they would.

Even a well trained dog can surprise you sometimes, and do something you never thought they would do.

It does boil down to that, assuming a deterministic universe.

I’d say that that is more about civilization in general, of which the justice system is a part.

I’m glad that we don’t do that, but the alternative, which we do do, means waiting until someone actually does commit a crime, and then using that as a justification to make the assumption that they would continue to do so if left unchecked.

The fact that that barely keeps crime in check demonstrates just how unpredictable people really are.

People are predictable in the same way that nuclear decay is predictable. If you have a lump of uranium, you can predict that a certain number of atoms will decay over a certain period of time. If you have a single atom, you have no idea what it will do.
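A toy simulation makes the analogy concrete (the per-step decay chance is invented): the lump is reliably predictable, the single atom is not:

```python
import random

# Toy model: each atom independently has a 10% chance of decaying per
# time step (a made-up rate, purely for illustration).
P_DECAY = 0.10

def decays(n_atoms):
    """Count how many of n_atoms decay in one step."""
    return sum(random.random() < P_DECAY for _ in range(n_atoms))

# A "lump": the decayed fraction clusters tightly around 10%, run after run.
print(decays(1_000_000) / 1_000_000)  # ~0.100

# A single atom: all you can say is "10% chance"; on any given run it
# simply decays or doesn't, and nothing tells you which in advance.
print(decays(1))  # 0 or 1, unpredictably
```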

No it doesn’t, and failure to understand that renders you unable to understand not only this discussion but all discussions about free will.

Here’s how cognition works in a deterministic universe. There’s this brain thing. It is a computer. Specifically, it’s a computer that maintains a continuous database of things it’s aware of that influence how it reacts. Knowledge, theories, thoughts, opinions - these are all data encoded inside the brain in a way that allows it to understand them, relate them to one another, and draw conclusions based on internal heuristics.

These heuristics are the brain’s preferences. The outcome the brain chooses is always exactly what the brain wants to do, because that’s an accurate description of how the brain does the calculation of decision-making - it assesses the options, rates them based on various factors, and then whichever one rates highest, whichever one it prefers, is the one it chooses.

That’s how decision-making works in a deterministic system. It also happens to be how it works in real life, because even if the universe isn’t deterministic, cognition clearly and demonstrably operates in a mostly or entirely deterministic manner.
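A minimal sketch of the decision procedure described above, with hypothetical options and preference weights standing in for brain state; “choosing” is just picking the highest-rated option:

```python
# Stripped-down model of the decision-making described above: rate each
# option against internal preferences, then pick the top scorer.
# Options and weights are hypothetical stand-ins for brain state.

PREFERENCES = {"tasty": 5.0, "healthy": 2.0, "cheap": 1.0}

OPTIONS = {
    "cake":  {"tasty": 0.9, "healthy": 0.1, "cheap": 0.5},
    "salad": {"tasty": 0.3, "healthy": 0.9, "cheap": 0.4},
}

def rate(option):
    """Score an option against the agent's current preferences."""
    return sum(PREFERENCES[k] * v for k, v in option.items())

def choose(options):
    """The chosen outcome is, by construction, the preferred one."""
    return max(options, key=lambda name: rate(options[name]))

print(choose(OPTIONS))  # 'cake' -- deterministic given this brain state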

So. If you are building your notion of deterministic free will based on what Greek philosophers were talking about when they were pondering whether gods were literally reaching in and controlling people like chess pieces, you are wildly out of date and have no idea how actual compatibilists are considering the situation. The notion that you might be forced to do what you do not want to by some external force or “fate” is definitely not what is under discussion.

A disagreement on a philosophical point does not render me unable to understand anything at all. It just means that we have a disagreement on a philosophical point.

If you are saying that I must agree with your perspective before a discussion may be had about it, then let me know, and I’ll let you talk to yourself without interruption.

Which I am not. That’s a pretty out there assumption on your part.

That’s not what I said.

I suppose your “whether you want to or not” bit is part of it, but if the universe is deterministic, if we are simply experiencing the slice of the bulk we are in as we experience it, then what you are going to do is actually predetermined.

It’s like asking if a character in a book or a movie has free will. They may act like they do, they may even think that they do if they have enough self awareness to ask the question, but you can skip to the end, and see that all of their actions were predetermined.

If you are saying that your statement means that we have predetermined fates that are known by some entities but not by others, then I agree that that is of course not true. But if so, I don’t see how that is even close to relevant to the discussion, and I thought that you meant your statement to be relevant.

Anyway, I note that you latched onto a gross misunderstanding of my reply in order to give me a lecture, and completely ignored the rest of the post. I’ll admit the misunderstanding may have been partly mine, in that I assumed you meant your statement to be relevant to this discussion; I had no idea that you planned on bringing Greek philosophy in as a red herring.

Do you think that the wetware between our ears has the same type of compatibilist free will as a computer program that displays “hello world”? Do you see a difference between the free will of something whose behavior you can predict, and something whose behavior you can only guess at?

What kind of predictability are you calling it when you say, “I know he would never do that”, when you actually know nothing of the sort?

Do you actually think that people are truly predictable, on any level?

Sure, if the risk is fair and inevitable.

People risk death jumping out of airplanes or skiing down slopes, but the reward that they find in doing so is greater than the risk.

If the justice system is inefficient or arbitrary, then it has little deterrent value.

The element that gets elided here is biochemistry. The brain’s electrochemical operation is fundamentally indistinguishable from a sophisticated computer, but that model ignores the effects of mood-altering (natural) substances. Your brain has minimal direct control over your alertness/torpor, your hunger, your horniness, et al, and biochemical signals can vary in unpredictable ways.

Your forebrain may tell you that schtupping with that other person right there is inadvisable just now, but the flood of arousal chemicals generated in response to that person’s particular appeal is stronger than rational wisdom, so the schtupping happens. Or you might get in an argument with someone and feel the anger and tension escalate, become aware that this fight is getting out of control, but the adrenaline has gripped both of you, putting reason at least momentarily out of reach.

Sometimes biochemistry is highly predictable, sometimes not, and how a person will respond to its influences cannot be determined except empirically. This is the one thing that makes the meat-computer significantly different from silicon and may be the main reason we perceive the existence of “free will”.
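In terms of the meat-computer analogy, biochemistry acts like an unpredictable scaling factor on the decision weights. A hedged sketch, with all values invented:

```python
import random

# Extending the brain-as-computer analogy: biochemical state acts as an
# unpredictable multiplier on the "rational" weight. Values are invented.

def decide(arousal):
    rational_weight = 1.0            # "this is inadvisable just now"
    urge_weight = 0.4 * arousal      # chemical signal, varies run to run
    return "refrain" if rational_weight > urge_weight else "act on the urge"

for _ in range(3):
    # The forebrain doesn't get to set the biochemical input; it just arrives.
    print(decide(arousal=random.uniform(0.0, 5.0)))
```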

Might as well address this above all.
Hello World lacks a few things.

  1. A continuous processing loop.
  2. Inputs.
  3. Internal data storage.
  4. A system that analyzes things and selects options based on preferential heuristics.

Using Hello World as your example of a program would be like using a statue of a person as your example person when discussing free will.

You bring me a program that operates in the manner I’ve described human cognition, and then I’ll ask you why it doesn’t have free will. (While reflexively rejecting the supernatural, of course.)
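For contrast with Hello World, here’s a toy sketch of a program with all four ingredients on that list - a processing loop, inputs, stored state, and preferential option selection. Everything in it is invented for illustration:

```python
import random

# A toy agent with the four things Hello World lacks:
# 1. a continuous processing loop, 2. inputs, 3. internal data storage,
# 4. a system that selects options via preferential heuristics.

memory = {"danger_seen": 0}                   # 3. internal data storage
preferences = {"explore": 1.0, "hide": 0.5}   # baseline heuristic weights

def sense():                                  # 2. inputs
    return random.choice(["quiet", "danger"])

def decide(observation):                      # 4. preferential selection
    if observation == "danger":
        memory["danger_seen"] += 1
    # Stored experience shifts the heuristics: a burned agent hides more.
    scores = {
        "explore": preferences["explore"] - 0.3 * memory["danger_seen"],
        "hide": preferences["hide"] + 0.3 * memory["danger_seen"],
    }
    return max(scores, key=scores.get)

for step in range(5):                         # 1. processing loop
    print(step, decide(sense()))
```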

And I DO think that humans are truly predictable at some level. You just have to pick questions where you have actual data about their preferences. So if you have a regular person’s knowledge regarding whether a person likes eating a specific flavor of cake, and whether that person likes eating a specific flavor of human feces, I think that in many cases it is possible to be certain which they would choose to eat, if they were compelled to eat one or the other.

I consider the chemicals in you to be part of “you”. And while part of “you” (say, your bladder) would prefer to just unleash and let flow on the spot, the system as a whole might prefer to take slightly different action.

The same thing is going on when that small rational part of your brain is hoping to stay calm, but the system as a whole is voting to punch a jerk in the face.

Right, that’s a start, but that’s the simplest example I gave. I also pointed out that an AI written for Unreal Tournament is completely predictable, and that does have all of those things.

You will ask me why my Unreal Tourney bot doesn’t have free will? Because I can choose to alter its behavior at any time. I can tell it to stop targeting enemies and start targeting allies; I can tell it to stand still and do nothing. I can even make it jump into lava or toxic waste.

Maybe you can explain to me why it does.
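In code, the objection looks something like this hypothetical bot (illustrative only, not actual Unreal Tournament code): its “preference” is just a parameter sitting there for the author to overwrite:

```python
# A hypothetical game bot whose targeting "preference" is just data
# that an outside author can overwrite at will.

class Bot:
    def __init__(self):
        self.target_team = "enemies"   # the bot's standing "preference"

    def pick_target(self, enemies, allies):
        pool = enemies if self.target_team == "enemies" else allies
        return pool[0] if pool else None

bot = Bot()
print(bot.pick_target(["enemy_1"], ["ally_1"]))  # enemy_1

bot.target_team = "allies"                       # the author reaches in...
print(bot.pick_target(["enemy_1"], ["ally_1"]))  # ally_1
```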

I’d say that they are mostly predictable at some level. In your scenario, where you severely limit options and agency to little more than a statuesque “hello world” situation, you will probably find that 99.9+ish percent of people follow your prediction to eat cake rather than the feces, but if you do that to enough people, you will be surprised by that one fellow that elects to counter your expectations.

Your experiment in removing agency from a person does very little to predict what they will do when they actually do have choices. I can put someone’s hand on a hot stove, and predict that they will try to remove it. That has as much predictive power or relevance as your experiment.

You have been given the ability to change its preferences. I can change somebody’s preferences by saying, “if you eat that cake I will torture your family to death in front of you.”

Is that an abrogation of the person’s free will? If you say yes (and I would say yes), then I would say that when you act as an external force to alter the bot’s preferences, you are abrogating its free will.

The question is, what if you have a program where you can’t change its preferences? A sim who is dedicated to jumping into that ladderless pool and drowning no matter how you try and stop him. What will is it acting on? This can be an interesting philosophical question.

Interesting that you had to change the scenario I proposed to sound right. I said:

So I was talking about knowing a specific person’s preferences. In your altered example, I would have to know each person’s preferences - even the preferences of the one who likes feces/hates cake. There would be no surprises.

You can predict with absolute certainty whether I will pick the spicy beef sticks or the regular beef sticks, given a free and informed choice. Not because there is a general rule that people hate spicy food, but because I hate spicy food.

This is not a controversial idea. And yes, it destroys the stupid idea that a person has to be 100% unpredictable to have free will.

Not seeing it. A specific consequence to the action is presented, which “you” must weigh against the desire to eat that cake. “Free will” does not mean that actions are devoid of consequences.

Well, that’s a matter of opinion. I suppose a better example is if I get you drunk. (Intravenously, if necessary.) This is a way to directly modify your brain chemistry and alter your preferences, and thus your choices.

The point is, having an outside agent reaching into your head “God hardened Pharaoh’s heart”-style is the classic definition of an abrogation of free will, Greek-philosopher style. It was literally what they were worried about. So since you’re doing that to the bots in question, I’d say that whatever free will they might have had has certainly been compromised.

You can create new incentives, but you haven’t actually changed the preferences. I still prefer cake, I just like my family a bit more.

Or do I?

No, I have a choice. If I am starving to death, and you tell me that you will kill my family if I eat, I may still eat. I may let myself starve to save my family.

I wouldn’t say yes, and that is because I am not creating new incentives, I am fundamentally altering its code.

It is acting on the will of the last person to write its code before it was locked out. It may “seem” as though it has free will to someone who doesn’t have access, but it won’t.

It will always react the same way to the same stimuli.

Interesting that you think that, as I did no such thing.

There still may be surprises. Someone may choose to do something other than their preference.

And that’s another thing that is different from a computer program and a wetware based individual. I can choose to make a choice other than my programming, other than my preferences.

And this is with the most extreme example, tapping into instinctual response that is pretty close to being preprogrammed like a bot.

If you know that I prefer carrot cake to chocolate cake, do you still think that you can predict which one I will choose?

So you are pinning all of free will on aversion to things that an entity finds disgusting? What if your choice is between teriyaki beef sticks and pepperoni?

Not all of experience is about avoiding disgusting or distasteful scenarios. Most of the time, in fact, it is choosing between competing desires. And your absurd black-and-white condition involving pain, disgust, or fear - the only case where you think you can accurately predict an action (and even then not as certainly as you would think) - does not relate to the vast majority of decision-making that most entities with agency encounter on a regular basis.

Yes, that takes down that strawman quite well.