Is it wrong to not believe in free will but still believe that evil should be punished?

In theory, “determinism” is an entirely valid and sensible premise upon which to base (the non-retributive part of) the punishment of criminals. Punishment is intended to modify behavior: to introduce a causative element that is supposed to result in the subject becoming disinclined to repeat the offending behavior.

If the universe were truly deterministic, punishment should be a highly effective corrective measure, at least when properly applied. That it is not highly effective does not disprove determinism, because the factors that guide and influence behavior are broad, complex and varied. It does, however, suggest that mechanistic determinism is a somewhat inaccurate and/or incomplete model.

Just jumping in here for the first time, so I apologize for any discontinuity…

I’m not sure of the name for it, but I believe that everything a person does is determined by
- your current brain state on the processing side (programming)
- your current brain state on the inputs side (inputs from senses and hormones, which are driven by inputs and program outputs)
- quantum randomness in how your hardware actually works (variations in neuron firing, etc.)

None of these are controlled by you. I think that you can use your programming to change your programming, but the inspiration from that change is still just what is determined by the three elements above. So by standard definition of “free will”, you have none.

So to the OP - If I am thinking of stealing, and my inputs tell me that people who steal often go to jail, or are otherwise treated badly by society, then I am less likely to steal, vs. if I had no such inputs.

Public punishment changes inputs for everyone, so that their deterministic programs are more likely to output desired actions.
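To make that concrete, here is a toy sketch, nothing like real cognition: the function, the weights, and the numbers are all invented for illustration. It just shows a purely deterministic “agent” whose output changes when its inputs change:

```python
# Illustrative only: a deterministic decision rule whose output
# depends entirely on its inputs. The 0.8 discount factor and the
# comparison rule are made up for the example.

def decide_to_steal(temptation, perceived_penalty):
    """Steal only if temptation outweighs the expected cost.
    Same inputs always produce the same output."""
    expected_cost = perceived_penalty * 0.8  # assumed discount factor
    return temptation > expected_cost

# With no knowledge of punishment, the program outputs "steal"...
print(decide_to_steal(temptation=5, perceived_penalty=0))   # True
# ...but public punishment raises the perceived penalty, and the
# very same deterministic program now outputs a different action.
print(decide_to_steal(temptation=5, perceived_penalty=10))  # False
```

No free will is needed anywhere in that picture: changing the inputs changes the output.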

Vengeance is counterproductive.
Keeping someone off the street if they are likely to repeat the offense is common sense, but rehabilitation should solve that if a real effort is made.
Deterrence is the reason for punishment.

To paraphrase (please correct me if I am misinterpreting)

  1. If Premise A is correct, Action B would have Effect C when properly applied.

  2. If Effect C does not occur, it does not disprove Premise A, because complexity.
    I’m with you so far.

  3. But Effect C not occurring does suggest that Premise A has a flaw.

Nope.

Item 2 acknowledges the non-ideality of Action B.
But Item 3 ignores the possibility that such non-ideality can cancel Effect C even if Premise A is completely correct.

Premise A: Gravity makes things fall
Action B: Drop an object from a height
Effect C: The object falls

If the object does not fall, it does not necessarily mean gravity is a somewhat inaccurate and/or incomplete model.

It may be that an updraft or magnetic field is preventing the fall. The total system may be inaccurate in that it did not account for that complexity, but you can’t out of hand blame the simple premise at the beginning.

This is indeed the standard framing.
My position is that the standard definition of “free will” is self-inconsistent and meaningless. The fact that free will doesn’t exist, is not some limiting factor in our universe, it’s a fact in any universe because the concept itself is silly.

And you alluded to some of the reason for this yourself. The notion that you can predict someone’s actions based on a god-like knowledge of the universe is supposed to be crushing to free will…why didn’t the discovery of quantum mechanics change that? Also, free will is still often defined as “could have chosen differently”, and quantum indeterminacy, with a side helping of chaos theory, implies that this is indeed true. Why is this not free will?

It’s not considered free will, because what we think of as “choice” is a reasoned decision based on our past experiences and preferences. That is, the very definition of choice hinges on us being causally connected to the universe. Quantum fluctuations are not “reasoned” in this way.

But what does that imply about free will? Because it seems we’re looking for a choice that isn’t causally connected, yet at the same time, by definition, it must be.

And finally an exercise I mentioned upthread. Forget about our universe. Imagine a cartoon universe of magic and souls and free will. How do the entities in that universe make decisions?
That is, how can free will even work in principle?

Quantum randomness/chaos eliminates predictability, but does not create free will.
It just adds a random number generator to your completely deterministic program. The random number generator is tipping the balance, not “you”.
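The point can be sketched in a few lines of toy Python (purely illustrative; the option names and preference scores are made up). Adding a random number generator makes the output unpredictable, but the noise is what tips a perfectly balanced decision, not the agent:

```python
import random

# Illustrative only: a deterministic preference-maximizer, and the
# same program with random "quantum" noise bolted on.

def deterministic_choice(preferences):
    # Same preferences in, same choice out, every time.
    return max(preferences, key=preferences.get)

def choice_with_noise(preferences, rng):
    # Add random noise to each option's score before selecting.
    noisy = {opt: score + rng.random() for opt, score in preferences.items()}
    return max(noisy, key=noisy.get)

prefs = {"steal": 0.50, "refrain": 0.50}  # perfectly balanced

print(deterministic_choice(prefs))               # always the same answer
print(choice_with_noise(prefs, random.Random())) # varies from run to run
```

Unpredictable, yes, but the random number generator is doing the tipping; there is still no third thing called “you” in the loop.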

I thought I could answer that, but it seems I can’t. The best I can do is define a soul as the thing which gives us free will, and do a lot of hand waving, which obviously is not an answer.

Anyone know how it has been defined by others?

Which is an important realization.
I think you’ll find no one will be able to improve on your definition, so I would urge you to consider the possibility that the famous philosophical debate over free will is actually kinda silly, and born of two factors:

  1. The psychological feeling that our internal mind is separate from the environment. Somehow it feels constraining to conceive of our decisions being a product of our experiences and neurology, but as I say, what else can a reasoned choice even be?

  2. Religion. Religions such as Christianity require spooky free will. So that god, our omnimax creator, can somehow be 0% culpable for evil human actions. This doesn’t make a lick of sense, so it’s handy that this term “free will” is completely inconsistent and meaningless. It essentially drops a big curtain down over the stage where an explanation is supposed to reside.

What is “quantum randomness”? Are you interpreting the Uncertainty Principle as “randomness”?

Sorry, “quantum randomness” was sloppiness on my part. This is shorthand for the unpredictability of physics at the lower levels.

Congratulations! You’ve reasoned yourself into a position where “reason” becomes both irrelevant and meaningless. Without free-will (or something functionally equivalent) you can’t have agency. Without agency, there’s literally no point to anything. The best you can do is describe that things happen. The marble falls under the influence of gravity; the person kills another under the influence of its environment. Neither requires “reason” or contains moral content.

Maybe you should be open to the possibility that in order for the universe to be consistent and complex enough for us to live in, it must contain things that are true but unprovable within our universe.

Whether it requires reason is a sleight of hand here. The simple fact is, when I reason out something in my mind, that reasoning is how I come to that decision. My brain is a machine for processing information, for creating internal models and finally bringing all that information together to make decisions. Yes ultimately my brain is physical but so what? Whatever substrate my mind is made of it has to be causally connected to the universe otherwise how would it be more than just a random factor?

I’m open to the possibility of there being true but unprovable facts. And I’m open to there being psychological misconceptions.

So how do we proceed?

If I were an advocate of the free will concept, I would start with defining what is meant by free will. And, given what we know about the brain, I might try to speculate on why a wholly brain-based deterministic model is inadequate, or frankly on any way that any form of free will might manifest at all. I would not just insist it must exist because otherwise the universe would not be consistent or complex (or whatever on earth it was you were trying to say there).

Ok, but so what? So the marble fell through a complicated path, like a pachinko machine. If you’re an automaton then no “agency” can be assigned to “you”. Reason is irrelevant; morality is irrelevant. (Communication and discourse would also be irrelevant, yet here I am still trying. It must be my programming.)

For the record, I would define “free will” as the ability to make choices. And I’m defining “choice” as selection from an option-space that may be determined algorithmically but is not predetermined by the state of the universe before the choice. (There’s a lot more to unpack there, but I’m trying to be brief.)

Without “free will” or something like the ability to make (more or less) unconstrained choices, how do you define reason, morality, agency, or any of the other things we debate in this forum?

I’m saying it “must” exist because otherwise “I” don’t exist. If there’s no free will, no ability to exercise relatively unconstrained choice, then the term “I” is robbed of any content. To the extent that “I” am responding to you and trying to prove something, there must be free will that allows me to choose what to do and how to do it. Otherwise, you’re just arguing with a bunch of pachinko balls that happened to fall on a keyboard and spell out this reply.

Then what, precisely, determines the selection outside of a predetermined algorithm and the inputs thereto?

“I” do. → What is “I”? → The thing that chooses.

Circular? Yep. But we all behave as if it were true anyway.

No, not really. “I”, the perceptual singularity (“soul”) has not been conclusively established as an agent of effect (or even of affect). The rational brain appears to calculate with little or no direct influence from “I”, which is primarily an observer. “I” is the embodiment of the survival imperative, for which no dynamic function is evident.

Actually I’m redefining the words into usefulness. The moronic version of free will literally has no sensible definition at all. Compatibilist free will, on the other hand, at least has a definition, whether or not you agree that it’s happening.

But I don’t have to redefine “choice”. It already has a fine definition. According to Google the definition is “an act of selecting or making a decision when faced with two or more possibilities”, which suits me fine. All I’m doing is being intelligent while considering what’s doing the choosing, and why they’re choosing what they do.

Yes you can rewind time and replay it, and in a fully deterministic universe all the agents in the universe would proceed to make the same choices a second time because their thoughts, reasons, knowledge, and circumstances are the same the second time around.
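A toy “rewind and replay” might look like this (illustrative only: the seeded RNG stands in for the fixed starting state of a deterministic universe, and the left/right options are invented):

```python
import random

# Illustrative only: replaying a deterministic run from the same
# starting state reproduces every "choice" exactly.

def life(seed, steps=5):
    rng = random.Random(seed)  # the fixed "state of the universe"
    return [rng.choice(["left", "right"]) for _ in range(steps)]

first_run = life(seed=42)
replay = life(seed=42)
print(first_run == replay)  # True: same state, same choices, every time
```

The replay is identical not because the agents were coerced, but because their reasons and circumstances were identical the second time around.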

You need to keep in mind that the goal of choice is not to be unpredictable - that’s a red herring. The goal of choice is to make a selection. When agents make selections, they do it based on personal reasons. That’s the difference between an agent and a rock or coin - agents react to stimuli at an internal level to select reactions based on internal state and inclinations. Contrast this with a falling rock or a coin flip, where the object’s internal state has no awareness of or reaction to things.

That’s what agency is - making decisions based on internal state and preferences. I defy you to come up with another definition of agent or agency that withstands cursory examination.
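The contrast can be sketched like so (the classes, stimulus strings, and preferences are all invented for illustration; this is a cartoon, not a theory of mind):

```python
import random

# Illustrative only: a coin's outcome ignores both the stimulus and
# any internal state; an agent's outcome is a function of its own
# internal inclinations reacting to the stimulus.

class Coin:
    def outcome(self, stimulus):
        # No internal state consulted; the stimulus is ignored.
        return random.choice(["heads", "tails"])

class Agent:
    def __init__(self, likes_tomatoes):
        self.likes_tomatoes = likes_tomatoes  # internal inclination

    def outcome(self, stimulus):
        # The same stimulus is evaluated against internal preferences.
        if stimulus == "offered a tomato":
            return "eat it" if self.likes_tomatoes else "decline"
        return "ignore"

print(Agent(likes_tomatoes=True).outcome("offered a tomato"))   # eat it
print(Agent(likes_tomatoes=False).outcome("offered a tomato"))  # decline
```

Two agents given the identical stimulus act differently because their internal states differ; the coin’s result has nothing to do with anything inside the coin.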

Bolding mine. This condition seems tacked on and rather arbitrary, and is not included in other definitions of choice I’ve seen. Why do you include it?

From the perspective of the actor, the “state of the universe” at any given time cannot be determined (time itself is, overall, not discrete). There is no question that the actor’s understanding of the prior state(s) of the universe has a significant influence on the actor’s choice-making. What is not clear is whether a thing (the universe) that is indeterminable can itself effect predetermination.

It certainly feels to me like I have the ability to make choices that are not predetermined by the state of the universe. For instance, right now I have a choice whether to write and post this reply or not, and I could take either option by exerting my free will—it’s not predetermined before I actually make the choice.

This feeling may well be an illusion. But (as I have said before in other threads) my experience convinces me that I have either free will or the illusion of free will—and I suspect there’s no way to be sure which it is without making some question-begging assumptions.

All of which is, I think, beside the point of the thread title’s question. Either people have the ability to make choices or they don’t. If they do, then it’s reasonable to hold them responsible for those choices. And if they don’t, then we have no choice whether or not we hold them responsible.

Note how you’re drawing a distinction between “the state of the universe” and the “I” that “[has] the ability to make choices”. This is important! When you talk about an agent that makes choices, you necessarily have to define what that agent is, and draw a distinction between it and the stuff that you don’t consider to be part of the agent. If you choose to go through the right-hand door because you don’t want to go left, that’s a choice. If you only go right because the left-hand door is locked, that isn’t one.

So if the “state of the universe” (not counting the agent) is what’s causing the events, that’s not a choice the agent makes. But if the agent themself makes the decision, then the agent is the one making the choice. And this is true no matter how deterministic the agent’s decision-making process is. If I really, really, really prefer one option over the other, to the point that I would never choose the other one, I’m still making the choice myself.

People clearly have the ability to make choices, because we watch them doing it. What we can’t know is whether they are making these choices for purely mechanistic reasons - perhaps they’re philosophical zombies or something. But that part doesn’t matter - they clearly make decisions, and mostly in rational-ish ways to boot, so it is reasonable to react to their behavior as information about their future behavior, and to attempt to change their future behavior by changing the reality they are making decisions in.

Note - Actually people totally can’t be philosophical zombies, but that’s a completely different discussion.

I would say that this is backwards.
Reason is the “why” of doing an action and necessarily involves talking about past experience and personal characteristics. Even in a non-deterministic world of souls and whatever, when it comes to the reason for doing something, we’re going to talk in terms of those two factors.

Why did I eat the tomato? Because I need to eat to live, and I enjoy tomatoes. But neither of those reasons are chosen by me – I didn’t choose to be mortal, nor did I choose to like tomatoes.

How exactly would this be different in a non-deterministic universe of souls, and free will? What’s the reason I ate the tomato?

So OK, the first part concedes determinism for determining the option space. But again, for what reasons would you choose one option over another? Regardless of what kind of universe you live in, surely you will describe such reasons in terms of personal experience / knowledge, and personal characteristics, no?

And the second part, as begbert2 correctly points out, is trying to wedge your conclusion into the definition. It would be like saying that the ether must exist, because my definition of light includes that it moves through a material called the ether. You don’t get to do that.