Determinism vs. Free Will: why care in the everyday world?

Mijin,

I can understand why you’d think that … depending on the meaning you attach to the phrase Free Will. Hellestal, I believe, was quite correct to focus on determinism as the basis of discussion rather than on FW, as it is a much less fuzzy phrase. (Given that I understand FW as part and parcel of my “I” being, my gut reacts like monstro’s dad did - without the experience of FW, “I” cease to be, so OF COURSE I have FW.)
So how about if we revisit the OP, restricting ourselves to the practical implications of a belief in a causally deterministic world (as rigorously and precisely defined by H)? What logically follows?

Does such a belief obviate moral responsibility and culpability for our own actions?

Does it preclude punishing others for transgressions against the rules of the society (such as rules against murder, rape, child abuse, theft, or simple rudeness)? Does society have no basis for deciding guilt and punishing those so found?

Should it be a cause that results in a greater embrace of a stoic philosophy?

Should it be a cause that results in a greater embrace of empathy for those who behave in ways that seem deviant from typical societal norms?

Should it be a cause that results in a greater willingness to ignore inequality in our world (especially by those who are the haves)?

Of more fatalism?

Of more feeling of responsibility as your current actions are the cause for others even as they have been caused by others?
Is it logical that a belief that your choices are exclusively the result of events not really under your control at all should cause you to choose differently?

Do any of those, or any others that anyone can come up with, more rationally or more logically follow from a belief in an H-defined causally deterministic world than any other?

And on preview again I see Hellestal has written! I shall return!

Aaand, I’m not seeing a very clear definition offered up there, I don’t think. Nor any reason to object to the simple one I had proposed.

More a suggestion that we come up with the meaning by some inductive process, using the example of a game program. But the rigor we are trying to create is more that of a deductive system.

It seems to me that what you are attempting to do is a bit backwards: here is what I want the word to apply to, so I will convolute a definition that fits.

You want “decision” to require: “a map” (which could be one-dimensional or n-dimensional, with any number of inputs greater than one); a goal; and an optimization algorithm.

Not quite sure how you came up with that definition, or that such would be a standard understanding.

But let’s go with the examples approach. For each of the following, is a decision being made, and why?

  1. A ball is placed at the top of a hill and rolls down, settling into position A. The “goal” is to get to as low an energy state as it can. Its algorithm is following the rules of physics. Okay, no map.

  2. The E. coli with its comparison of the concentration of the food source at time t+1 to what it was at t (a map, minimally, of concentration over time and possibly over space), flipping the fraction of movement spent running versus tumbling accordingly. Does the bacterium have a goal? Not in some mind that it possesses. The result will be getting it to an area of greater food concentration, but it is not thinking of that; it is just that the behaviors that result in that have been selected for. But still the algorithm is one of maximizing food source concentration.

  3. A computer algorithm that monitors a defined two-dimensional grid of one thousand boxes, divided into four quadrants, in which boxes randomly turn black or white. It can, at some slower rate, turn a box black or white too. Its goal is to keep the quadrants as close to the same number of black and white boxes as possible, and it is programmed to compute which quadrant has become most out of line and change a square in it each turn it has. A pretty simple optimization algorithm: it will change a color in whichever quadrant is furthest from the mean of the four quadrants. Count and flip a box, repeat. (A toy sketch of this one appears below the list.)

  4. My hypothetical human with no new memory formation who lives in an experienced sentient world completely of the distant past and immediate present who is presented with waffles to one side and eggs with bacon to the other and who either reaches for one, or the other, or neither, depending on various states he is experiencing at that moment that he has no inclining of the causations of. No updating map. Maybe a goal. No real algorithm. But in that moment he experiences volition.

(Inkling, not inclining. I knew that looked wrong.)
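To make example 3 concrete, here is a rough sketch of the sort of program I have in mind (the grid size, flip rates, and names are placeholders of my own, and this is only one way it might be coded):

```python
import random

# Toy version of example 3: a 1000-box grid split into four quadrants.
# Boxes flip black/white at random; each turn the program flips one box
# in whichever quadrant has drifted furthest from the mean black-count
# of the four quadrants, nudging it back toward balance.

QUADRANT_SIZE = 250  # 4 quadrants x 250 boxes = 1000 boxes

# Each quadrant is just a list of 0 (white) / 1 (black) values.
quadrants = [[random.randint(0, 1) for _ in range(QUADRANT_SIZE)]
             for _ in range(4)]

def random_flips(n=5):
    """The environment: n random boxes change color each turn."""
    for _ in range(n):
        q = random.randrange(4)
        i = random.randrange(QUADRANT_SIZE)
        quadrants[q][i] ^= 1

def balance_step():
    """The 'decision': count, find the most out-of-line quadrant, flip one box."""
    counts = [sum(q) for q in quadrants]
    mean = sum(counts) / 4
    worst = max(range(4), key=lambda q: abs(counts[q] - mean))
    target = 0 if counts[worst] > mean else 1   # flip toward the mean
    for i, box in enumerate(quadrants[worst]):
        if box != target:
            quadrants[worst][i] = target
            break

for turn in range(100):
    random_flips()
    balance_step()
```

Count and flip, repeat; there is nothing more to it than that.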

BTW, nice witticism there, but I would argue that even in your example the etymology informs some. Realizing that the work-done-by-hand is specifically in reference to cultivating the land (by hand) helps one appreciate the difference between manure and simple bullshit or any other plain crap. Bullshit ain’t manure just because it is feces, and an action that was the only one ever possible to make is not really, strictly, a decision. We pretend it is, just as we pretend that a formula for maximizing the value of X has “a goal.”

Seriously, stating that of course a deterministic world has decisions because I will define decision based on what is seen in the real world is … less than a rigorous approach.

It is a “decision” because we perceive there to be freedom to make other choices. That perception is what FW is all about: the perception we experience as part of being a self, and, when considering the actions of others and their moral responsibility for those actions, whether or not we believe that they perceived that they had the freedom, competence, and opportunity to do otherwise.

And a Merry Christmas to all. (My family celebrates commercial/family get-together Xmas, so here I am, watching Christmas movies - not an act of Free Will ;))

I see Hellestal building a Bottom-Up model showing how layers of Deterministic “programming” lead to a complex experience.

I see DSeid posing Top-Down thought experiments to frame scenarios.

I think my OP, and the Human condition, exist in the fuzzy overlap. I definitely think I am little more than a rock or an E. coli bacterium in how some of my actions play out, while there are others where I am decidedly choosing. I also see how, when we have time to noodle through it and break it down, the “layers of programming” model holds. Yet I see all of DSeid’s questions of agency and accountability holding in Real Life.

Now what?

Meaning: I still feel Free Will and I still want, from my Conscious Agent perspective, to “feel better” about my choices over time. Again, as we have discussed, that is more of a Locus of Control issue vs Deep Philosophy.

I guess my point is: I love this topic and this noodling. I see it as essential. And I see limits to what is comprehend-able given our limitations and the Paradoxes at the heart of Reality. When is it okay to pause, step back and recognize that to a certain extent we can get caught in an infinite loop?

ETA: so Grasshopper, what is the sound of a Decision? :wink:

Well that is the crux of it isn’t it? Are you decidedly choosing just because it certainly feels that way? And does the answer to that really matter?

Let us be clear - there is solid evidence that what you think are “decisions” made in your conscious mind are at least heavily influenced by beliefs that you are not conscious of. One easy example is the case of implicit biases. We most certainly are not creatures fully of conscious Free Will. But here is where the “why care in the everyday world” comes in. We each can, by (perceived) conscious choice, will ourselves to become more aware of our implicit biases and to make an effort to alter our behaviors accordingly. I believe that perceiving that conscious experience as a choice, as a decision, as an effort that we have willed ourselves to make, is necessary.

The fact that the choice had its own causes and will be a cause in its turn, that to at least some major degree it is part of a causally determinist chain that may go back to the Big Bang, is, as a matter of the everyday world, of no meaning, IMHO. As a matter of everyday practicality we function, and to a large degree have no choice (heh) but to function, with the belief that multiple actions are in fact possible for us to make, dependent upon what we think, what we value, our various states, and how we analyze the circumstance, and that we are morally and ethically responsible for those decisions. For how we decide and how we will it. People who do not do that are labelled sociopaths and end up in jail (or as CEOs :)).

I disagree strongly with Hellestal’s proffered definition of “decision”: no map, no optimization algorithm is required. As long as we accept that different actions are indeed actually “possible,” a creature with neither map nor algorithm that experiences hunger, perceives food ahead, and experiences increasing levels of pain the closer it gets to the food source is making a decision about continuing to move forward or reversing course. All it is doing is weighing the level of pain against the level of hunger and then taking one of the possible actions according to the result of that consideration. But that disagreement ends up being of no meaning. Because, causal determinism or not, we perceive that the rock rolling down a hill has no choice, that the E. coli is making a decision of sorts, and that human intelligences considering their options consciously, attempting to factor in subconscious biases that they may have become aware of, are making more willful decisions. Ones that they will judge themselves by and that will impact how others interact with them going forward.
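To be concrete about the bar I am setting, even something as crude as this would count for me as that creature’s “decision” (the numbers and names are invented; and yes, writing it down at all makes it look like an algorithm, which rather illustrates how little machinery is needed):

```python
def step(hunger, pain):
    """The map-free creature: weigh hunger against pain, then act.
    Both inputs are just current felt intensities, with no model of the world."""
    if hunger > pain:
        return "move toward food"
    return "reverse course"

# The closer it gets to the food, the higher the pain; whichever
# felt intensity dominates in the moment settles the action.
print(step(hunger=7, pain=3))   # -> move toward food
print(step(hunger=4, pain=9))   # -> reverse course
```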

Every piece of it is well defined.

Deterministic system, correlation, objective function, optimization algorithm. The only potentially fuzzy word is “map”, but that’s why I spent extra time trying to lay out what that meant. Now, it’s possible that some people might not be familiar with these terms. In that case, the definition will definitely be “not very clear” because they simply don’t have the background familiarity with each individual piece on its own to handle the combined definition. But anyone who knows about the different pieces will know this definition.

It would hardly be a sin to be unfamiliar with the pieces. I’m not personally familiar with general relativity. But the underlying point here is that that doesn’t mean that general relativity is unclearly defined. I just don’t understand the pieces that make it up. Our days are busy. These aren’t things that most people need to be familiar with.

I’d agree with WordMan that our objectives are different. My goal was to define “decision” in the context of a deterministic system.

I’m not going to try to defend determinism any longer, nor try to argue whether this world might be deterministic, nor deal with definitions that wouldn’t work in a deterministic system. That’s a dead end, as I’m finally beginning to understand. What I want is for people who might not have otherwise had the opportunity to do so to consider what one particular deterministic type of world might look like, and how we might choose to describe the events in that world using everyday language.

The benefits to this approach I’m leaving up in the air for the moment.

If you’re not interested in the exercise as I have narrowed it, that is of course completely natural and understandable. Family arrived a little earlier on Christmas Eve than expected, so although I felt nearly finished with this post, I couldn’t complete it. Looking again now, I see from the new posts that you are apparently uninterested in the question I’m putting forth here. Which is, of course, fine. It just happens to hold my personal interest, but I’m weird. No one else has to be captivated by this sort of inquiry.

WordMan has it right here. Absolutely, totally right.

Complexity arises from simplicity.

The human mind tends to believe that complex outcomes must necessarily stem from complex inputs. That does not have to be the case. The Mandelbrot set is seemingly complex in form. But that surface-level complexity masks a fundamental simplicity that is almost absurd to look at. I want to investigate a bottom-up conception of a world in order to explore those sorts of features. Not necessarily our world, but a world.

But since the criticism has come up, I do for a moment want to discuss the “rigor” of definitions. Because of this comment.
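First, though, a quick aside to make the Mandelbrot point concrete. The entire set falls out of iterating one line, z = z*z + c, and asking whether the result stays bounded. A throwaway sketch (the window, resolution, and iteration cap are arbitrary choices of mine):

```python
# The whole generating rule is the single line inside the loop: z = z*z + c.
# Everything visually complex about the set comes from asking, for each
# point c, whether that iteration stays bounded.
def in_mandelbrot(c, max_iter=100):
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# Crude ASCII rendering over a small window of the complex plane.
for y in range(21):
    im = 1.2 - y * 0.12
    row = ""
    for x in range(61):
        re = -2.1 + x * 0.05
        row += "#" if in_mandelbrot(complex(re, im)) else "."
    print(row)
```

The rendering is crude, but every bit of the familiar filigree comes from that one line in the loop.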

There are basically two approaches for a “proof” of the Intermediate Value Theorem. One way is by saying, look, if I take a pen on graph paper from PointA located at 0 to PointB located at 5, then at some point, I’m going to have to hit all the values between 0 and 5 if I never pick up the pen from the paper.

Or I can point to the “rigorous” proof using the formal notion of continuity. When I was younger, this formal proof used to piss me off. This was something that was obvious. We have the idea of the thing just from the pen on the paper, so what’s the point of this extra nonsense? But as I eventually came to see, there were sometimes certain advantages from putting definitions on a formal basis. It leaves no room for the ambiguity that can sometimes derail discussion. Without that formality, people might put another piece of paper down and say the pen never left “the paper” even when it did not travel the points between 0 and 5. Or they might fold the paper, and say that counts. This stuff is very clever. It takes an active imagination to work through the puzzle in different ways.
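For anyone who hasn’t run into them, the formal pieces are short. Paraphrasing the standard textbook statements (nothing here is original to me):

```latex
% Continuity of f at a point a (the epsilon-delta definition):
%   for every eps > 0 there is a delta > 0 such that
%   |x - a| < delta  implies  |f(x) - f(a)| < eps
\forall \varepsilon > 0 \;\; \exists \delta > 0 : \; |x - a| < \delta \implies |f(x) - f(a)| < \varepsilon

% Intermediate Value Theorem:
%   if f is continuous on [a, b] and y lies between f(a) and f(b),
%   then some c in [a, b] has f(c) = y
f \text{ continuous on } [a,b], \; y \text{ between } f(a) \text{ and } f(b)
\;\Longrightarrow\; \exists\, c \in [a,b] : f(c) = y
```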

But it can also comprehensively miss the point.

One way of defining a word is: I know it when I see it.

“What’s green?”

“It’s that color right over there.”

You can point at the aggregate state of the world, the “emergent” properties as we perceive them. But there are potential problems with this. What if you’re talking with a blind man? Then they can’t perceive what you’re pointing at. What if you’re talking with someone with much finer sensory skills? Then you’re clumping together features in your definition that can be readily distinguished by the person you’re talking with. It can take a lot of conversation, and potentially a lot of confusion, for the other person to finally figure out that your word for “green” does not distinguish superGreen-A from superGreen-B, not because you’re uninterested in separating the two, but because you are physically incapable of perceiving the difference with the current set of eyeballs that you have.

This isn’t some random complaint here.

When people in the past have described their own sensation of eff-doubleyuu, it sounds absolutely nothing like my own personal sensation of eff-doubleyuu. I’m not going to deny people are experiencing… whatever it is that they claim to be experiencing. I’m not going to say people are feeling an “illusion”. Maybe they have the ability to see more shades of green than I do on a fundamental level, and I can’t recognize what they’re talking about because I don’t have the apparatus for it. Or maybe I personally have the ability to see more shades of green, and so when they point to their experience of eff-doubleyuu, I can’t recognize what they’re talking about because they’re conflating together internal experiences which I have the depth of awareness, or some shit, to recognize as several different experiences that would deserve several different words if I were to try to discuss the topic at all.

But what I do know is nobody uses the sensation of eff-doubleyuu to inform them of how close they’re standing to the walls when their eyes are closed.

WordMan is exactly right that I favor a bottom-up approach. Why am I doing this?

For the same reason I wrote the first post in this thread. One or two people were having trouble understanding the definition and implications of deterministic systems. Determinism is highly related to bottom-up thinking. Two sides of the same coin. In the definition I’ve outlined, and in a possible simulation that uses such a definition, we have agents wandering through the world and learning about it, who use their updated map and some optimization process before they take action, and so can take different actions from one day to the next precisely because their view of how the world they inhabit works has actually changed from one day to the next. One day they run from the bunny, and the next day they attack the bunny. And what’s important (to me personally): we can point to particular parts of the code to understand all of these pieces.
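To show what I mean by pointing at the code, here is a stripped-down sketch of that loop. Everything in it (the map, the payoffs, the bunny, the learning rate) is invented purely for illustration, and a real simulation would be far richer, but the shape is the same:

```python
# A deterministic agent: it keeps a "map" (estimated payoff of each action),
# updates that map from what actually happened, and picks whichever action
# its current map says is best. No randomness anywhere: run the program
# twice and you get exactly the same history both times.

ACTIONS = ["run from bunny", "attack bunny"]

# The world (also deterministic here): attacking the bunny turns out well,
# running away costs a little energy. These payoffs are pure invention.
TRUE_PAYOFF = {"run from bunny": -0.3, "attack bunny": 1.0}

def choose(value_map):
    """Optimization step: take the action with the highest estimated payoff."""
    return max(ACTIONS, key=lambda a: value_map[a])

def update(value_map, action, payoff, rate=0.5):
    """Learning step: move the estimate for the taken action toward what was observed."""
    value_map[action] += rate * (payoff - value_map[action])

# Start with an optimistic map so the agent is willing to try things.
value_map = {"run from bunny": 1.0, "attack bunny": 1.0}

for day in range(4):
    action = choose(value_map)
    update(value_map, action, TRUE_PAYOFF[action])
    print(day, action, value_map)

# Day 0 it runs from the bunny (both options look equally good on its initial
# map, and ties go to the first action). Once experience has changed its map,
# it attacks the bunny from day 1 onward. Every step was fully determined.
```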

The underlying point here is that the system is extremely complex, not predictable except by literally computing it out, and the “decisions” that the agents can make can be extremely powerful within the context of the system.

Given enough sophistication, the internal computation of an agent might eventually become so powerful that we would not be able to tell the difference between a “decision” in the deterministic system and a “decision” in our everyday world as we believe we understand the word.

And that is the point that I have been trying to make. A deterministic system can potentially give us the pieces we need. The question is where we apply the word “decision”. Do we apply labels to our ignorance, or do we apply labels to our knowledge? Do we say the proof of the theorem is with a line drawn by a pen on the paper, or do we say the proof is something deeper than that, and that the line on the paper is a visual analogy of that principle? (And definitely not a perfect analogy. Most of the paper and ink is actually empty space between atoms, not a “true” continuous line.)

We have “decisions” as we think we see them in the real world, but we’re not entirely sure how they work. And now we have “decisions” as potentially defined inside a program that will run the same way every time it is started. One system we understand very well, because it is deterministic and therefore well defined. The other, we understand less well. Our brains are a very small part of this universe, and even if this universe happens to be deterministic, we have no way of actually “proving” that. Instead of using the same word for the two scenarios, we might consider using different words. If anyone wishes to use “pseudo-decision” for the deterministic definition above, then that is completely understandable. It draws a distinction between the well-defined toy system we have imagined, and the poorly-defined everyday experiences as we live them.

But for me?

The lack of any way to distinguish one from the other is the dividing point. I put my label on the system that is well defined. I do this because I will be able to see more clearly whether this definition falls short. This is actually the same sort of procedure human beings normally rely on. When people try to define their own “decisions” based on their personal experience of eff-doubleyuu, they’re pointing at their own head, similar to noticing the color green. But when they point at other people’s decision-making, they don’t have access to that. They’re just assuming that other people feel the same sorts of things inside that they’re feeling, and then they claim that both kinds of experiences are “decision-making” even though we have no direct access to what’s actually going on inside other beings’ heads. And this is exactly the right way to do things! People can look at their housecat, which seems to be going through some sort of deliberative process before it decides to jump on the desk and lie down on their keyboard as they’re working. The people don’t “know” that the housecat is experiencing volition. They’re taking their internal experience and extrapolating it to what they believe other beings must also be experiencing. They are extrapolating their own mental process to the cat in a limited manner. This is the correct approach.

I’m doing an exactly analogous thing here.

We have this deterministic system that we can imagine, and we have agents that are doing things inside this system that I am personally choosing to call “decisions”. I’m using the word here because it is the most clearly defined place. I am not saying that it is certain that all of our real-world decisions must necessarily follow this path. I don’t know that. But I’m attaching the label to a piece of knowledge, rather than a piece of ignorance, and I’m saying that although I’m ignorant of the world, I can not see any fundamental philosophical distinction between the “decisions” I see in the real-world vs the “decisions” I see playing out on the computer screen. I don’t need two words to describe events that have all appearance of being identical. I don’t need more words for “green” than my current perceptual toolkit is able to perceive and process.

The agents inside that deterministic system make decisions.

I’ve got a fundamental handle on what they’re doing. I don’t necessarily have a fundamental handle on what real-world people are doing. The question at this point is where we give priority to labels. Do we label our understanding with our clear definitions, and then try to apply those clear definitions to situations that seem to match them in all aspects? Or do we give the label to our ignorance? There are, I believe, certain benefits that accrue from putting important labels on our positions of deepest understanding, and working out from there. Because the definition is clear, it’s easier to see the places where it doesn’t seem to fit.

[QUOTE=Hellestal]
WordMan has it right here. Absolutely, totally right.
[/QUOTE]

Hey - could you call my wife?

;).

I think the biggest problem with the dogmatic extremes (caricatures that few if any subscribe to) of both sides is that they view the individual as distinct from the environment. Really, we are entirely part of the environment, not spherical chickens in a vacuum, so the environmental boundary ought to be obfuscated out of the analysis as much as possible. “Environmental Synthesis”, as it were, which tends to fall much closer to the realm of determinism.

The problem with absolute determinism may be a conundrum tangentially related to QM. It is difficult or impossible to observe small things without also affecting those things. Hence, establishing a deterministic model cannot be done because the very measurements needed to calculate the model actually change it. A “Deep Thought” cannot calculate the universe because it is part of the universe (at least in literature), so its own calculations constitute a part of what it is calculating. Similarly, I suspect, exploring determinism may affect enough of the determinants that the result may not be meaningful. Which one of those is not the reflection of you in this hall of mirrors?

Where I have a problem with free will is: where does it start? Take that choice you made and stack up all the factors that led you to it. If you can find the part that originated solely within you, then you can identify your free will. But good luck finding it. And if it means the basis for your choice was wholly unpredictable, how then do you distinguish that from a coin toss? Which is to say, if free will is indistinguishable from randomness, what exactly is it useful for?

I think free will makes use of randomness, the same way a chess-playing computer program makes use of randomness. The program doesn’t just make absurd moves completely at random…but it also doesn’t always open with the exact same move, or always make the same move from the same board-configuration.
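In toy form, something like this (the moves and scores are stand-ins I made up; the point is only where the randomness enters):

```python
import random

def pick_move(legal_moves, evaluate, tolerance=0.05):
    """Score every legal move, then choose at random among the moves
    within `tolerance` of the best score. The randomness never produces
    an absurd move; it only breaks ties between near-equal ones."""
    scored = [(evaluate(m), m) for m in legal_moves]
    best = max(s for s, _ in scored)
    near_best = [m for s, m in scored if s >= best - tolerance]
    return random.choice(near_best)

# Hypothetical opening position: several near-equal first moves.
openings = {"e4": 0.30, "d4": 0.29, "c4": 0.27, "a4": -0.40}
print(pick_move(list(openings), openings.get))
# Prints e4, d4, or c4 on different runs; never a4.
```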

Hellestal,

I will leave it at this: indeed we are apparently not interested in discussing the same thing at this point. My request to clearly and precisely define the word “decision” was prompted by the reaction you had that “Of course there are decisions. How could there not be decisions?” If “decisions” does indeed mean choosing between possible alternatives after consideration, as I believe it does, then in a strict and precise sense a world in which there is only one possible way out of every box means that there never really was any possible alternative, and thus no true decision was ever really being made. We call it a decision, though, when we perceive that alternative actions were possible.

Now you have brought up a concept specifically to apply in a hypothetical deterministic system, one that you say others may, completely understandably, prefer to call “pseudodecisions”, but that you are “personally choosing to call ‘decisions’” … to illustrate how complex, decision-like, and meaningful actions can arise from simple inputs in a programmed deterministic space. You have shown that you can create a personal definition for the word “decision” that applies within that hypothetical programmed deterministic system in which a designer defines goals and creates optimization algorithms. Okay. And I accept that you completely understand why some of us would consider those to be “pseudodecisions.”

But we were trying to work our way back to addressing the OP about the implications for the everyday real world. It seems that that question is not of interest to you, and that is fine.
I’ve no more to add to the subject of the OP myself, so I will offer up a riff from part of your post that may be tangentially relevant to what we have been discussing. No doubt complexity arises from simplicity. In that regard I will offer up what was the last board game to have human masters fall to an AI: the game of Go. An extremely simple game. A 19x19 grid and just a few rules. But the levels of complexity that result? Much more than chess. From a SciAm article about the success of AlphaGo.

That last bit makes me go hmmm.

:slight_smile:

I am reading a book I asked for as a gift, Time Travel by **James Gleick** (who has visited the SDMB in the past when we started a thread about an earlier book of his - come on back James and join this thread! ;)).

The book is about the emergence of time travel as a concept, citing HG Wells’ Time Machine as the first popular “science-y” instance of it, vs various prophets or getting bonked on the head or falling asleep. He is using the book to explore how it has manifested in global culture.

The chapter I just finished is on Free Will vs a particularly specific example of Determinism called Fatalism, which has been around for some time but was championed by Richard Taylor in an essay in the ’60s. He used Logic and Semantics to “prove” Determinism.

Gleick then cites a favorite author of mine, David Foster Wallace, and his senior thesis at Amherst (and he also wrote his first novel then, too - argh!). After a lot of digging and explaining, DFW basically says that Fatalism is internally consistent as a Semantic argument, but that Free Will is a metaphysical concept and Semantics can’t be used to explain something metaphysical.

Interesting. This feels like my attempt to wrap my brain around things, where Thought and Forms are their own, real, components of our Reality, and focusing exclusively on the Material aspect limits the range of inquiry.

It seems like a pretty interesting question: if prior-causal time travel is shown to be possible, would that be strong evidence for a fully deterministic reality? Because, if there is genuine randomness, it would seem like a universe that allows time travel like that would be rather unstable.

Thanks to everyone in this thread for their contributions. Been reading along in the peanut gallery.

And I am just trying to be the best talking peanut I can :wink:

Interesting thread. I have pondered this one on my own for many decades, since I was a young teen making bad decisions and then as an adult making increasingly better decisions. I am of the belief that determinism would be the final call as to how we operate, but the method we develop for processing becomes so finite, and is based on so many criteria that are often extremely close calls that could go either way, that it effectively becomes free will, more so as we gain more experiences.

Now this thread has been reopened, I see I never replied to DSeid’s points to me. I may as well do so now, as they are fairly standard questions that often get asked in this context.

No. Remember, Determinism is not Fatalism. When I weigh up the pros and cons of some action and make a decision, that decision-making was not an illusion. My decision was not already made.

And that decision-making process is me. Even if it’s a static structure within a fixed, 4D crystal matrix of cause and effect, that bit is still me.

That said, from a God’s Eye View it presents problems; God would have set up the whole universe, including my inner predilections. He would know everything I’m ignorant of, and how that ignorance would affect my decision-making. It doesn’t make sense that God can just shrug and say “free will”.
Not being religious though, it’s not a big issue for me personally.

If, like the US, you have a criminal justice system based upon punishment, then yes it’s a problem. We already know some neural pathologies associated with criminal behaviour but they are rare, or only affect the very old. But it’s only a matter of time before we find structures heavily correlated with criminality.

But, such findings are far less of a problem for justice systems based on deterrence, public safety and reforming offenders. All these things make sense in a Deterministic universe.

I would be largely repeating myself if I were to answer DSeid’s other questions.

I don’t know why people think Determinists don’t believe in punishment and criminal justice and personal responsibility.

Most people tend to think young children lack free will. Yet we have no problem yelling at them when they do something wrong. We even punish them sometimes! We do the same thing to our cats and dogs, even though most people think only humans have free will.

In psychology 101, everyone learns about operant conditioning. Operant conditioning is an effective way to program an individual through the use of associations. Do a “good” thing and get a cookie. Do a “bad” thing and get a shock. Want someone to do a “good” thing? Promise them a cookie. Want someone to not do a “bad” thing? Warn them about the shock they’ll get if they don’t obey. Free will isn’t involved in this in any way.

I’m guessing if we didn’t have things like fines and prisons, a lot of us law-abiding folks would be criminals. It’s not our desire to be good for goodness’ sake keeping us on the straight and narrow, but rather our fear of consequence–a fear that’s programmed in us at an early age. A criminal is simply someone who has been programmed a different way. Maybe a theoretical concept of prison isn’t scary enough for them; they actually have to experience it firsthand to learn that “bad” behavior isn’t worth it.

It’s just that the Determinist doesn’t think punishment alone is sufficient to address criminality. There must also be some attempt made to reprogram law-breaking individuals so that they become more flexible in their decision-making. Like, people who have problems managing anger tend to make the same bone-headed decisions–ones that often lead to trouble. So why not teach anger management skills to the incarcerated? Maybe some of them don’t need it, but what’s the downside? Free Willers tend to think that criminals are criminals because they have chosen to make bad decisions, and they will continue to do so until they have a moral epiphany (a “come to Jesus” moment). But Determinists would say nope, criminals haven’t “chosen” anything. They are victims of maladaptive learning, and they will continue to act on that learning until they are forced by some external factor to re-evaluate their schema.

FWIW, I believe that dogs have pretty much the same power of choice, volition, and decision-making that people do. That dogs can be trained to do jobs – guide dogs for the blind, or herding sheep – suggests that they are able to understand “yes” and “no” instructions and to internalize them into behavior and even character.

Right.

I think the thing with free will is just based on our intuitions and what feels comfortable to us. It feels like we conjure ideas from nothing. The idea of our outputs being a function of our inputs and neurology makes us feel uncomfortable, but it shouldn’t; it’s the only kind of “decision” that makes sense.
(I’m not sure if this point is related to what you were saying at all, but your point made me think of this phrasing for some reason :))

This is a common belief, particularly in the US, but I don’t agree, unless “consequences” includes empathy, guilt etc.
There are countless misdemeanors I could get away with day to day. The reason I don’t do those things is the same as the reason I try to do some nice things and help people out: it’s part of my identity. I want to feel like a good guy.
And yes, if I’m honest, it’s also because I know if I’m a dick to someone it weighs on my mind for a very long time, maybe the rest of my life.
But rarely if ever is there some X I would do if only there wasn’t a cop standing there.