Another "Free Will" debate.

First, I don’t think omniscience is relevant to the discussion.

Second, if we were omniscient (in a strong sense–i.e., we were capable of evaluating the astronomically huge number of choices available to us), we’d be able to choose exactly the course of action that resulted in the maximum possible happiness in our lives. Whether we took that action would depend on the strength of our meta-processor, I suppose, but it seems likely that we’d end up making ourselves extremely happy with such knowledge.

Well, yes–but if I offered to bet you a million bucks based on the outcome of a rerun game, I bet you’d watch it then, wouldn’t you? And that’s the difference: the omniscience you’re proposing would allow us to make unbeatable bets. It would be an amazing stimulus pushing us toward good behavior–because we’d be aware of exactly the good behavior that we were going to engage in and the happiness that would result.

You may be proposing an omniscience that is not itself a stimulus that changes behavior–that we’re somehow made aware of the effects of our choices, but our processing module isn’t allowed to take that awareness into consideration in making those choices. That’s a contradictory scenario, and not how our brains work. We make choices based on the knowledge we have.

Again, though, I don’t think omniscience is that relevant to the discussion, since none of us have it or have any idea of how we could get it.

Because you’re human, and you’ve probably got an empathy processor in your head, because that’s a pretty strong adaptive trait for a social species with a long childhood. You care about the child because you’ve evolved to care about the child.

And there’s nothing wrong with that. We evolved to care for those that we consider to be part of our circle. That circle may include our immediate family, our tribe, our nation state, our species, or every living entity on the planet. And this caring for our circle results in greater happiness for everyone, and the more that we expand this circle, the more pleasure it creates in the world.

Why would knowledge of this adaptive trait cause you to reject it?

Daniel

I agree with Diogenes and Sapo on this.

No, because the concept of free will doesn’t make sense; how can there be a third category? How can anything be neither random nor determined? The concept itself makes no sense.

Of course it does. If one side is right, it’s right whether or not it has free will, or is even conscious or alive. An argument in a book has neither free will nor life nor consciousness, yet it can be valid.

My opinion: because the language and concept of free will are much more efficient, even though it’s nonsense. As we always see in threads like this, it’s much easier to couch things in terms of free will. In my view, the sensation of free will is the brain’s shorthand for all the many, many processes and inputs that go into making any decision.

Oh, there is an “I” all right; it’s just not some little causeless homunculus in my head. It’s a variation of “I think, therefore I am”: I know that I exist; the real question is what I am. My answer: an evolution-derived, self-referencing information-processing system. “I” am a conscious network distributed throughout my brain, which is why close examination makes “I” appear to disappear. Look too closely, divide the network too much in space or time, and you are no longer looking at a person, but just a chunk of a person. Rather like how individual atoms in a cell don’t look alive; they aren’t, but the cell is.

Moral judgements are still valid. “You are evil” isn’t really different from “You have bad moral programming.” And we can blame the criminal; he is a conscious, self-correcting system, while a flood is not. This is just an example of what I just said, that we use the language of free will because it’s more efficient.

Except that there doesn’t appear to be such a module. The conscious portion of our minds doesn’t seem to do much; it’s just aware of the decisions and perceptions made by the other, non-conscious modules (at least, we think they aren’t conscious). Personally, I think that our consciousness is probably two things: the sum of the communications between at least some of the more important “modules” that make up our minds, and an “official face” we present to the world; a social construct. It’s easier to deal with a unitary person than a collection of modules. In essence, I think our conscious minds are self-writing fictional characters. We treat ourselves like one, certainly; we try to adhere to our self-image, our “character,” and we tend to ignore or forget anything we do that isn’t in character - rather like fanboys trying to explain away behavior that doesn’t fit their image of their favored character.

Huh? Dogs eat carrion. What purpose does evolving the ability to eat carrion serve? Horses don’t, and they are a successful species.

There’s no rule of evolution requiring that every successful species evolve identical traits. We happened to evolve with an ability and desire to analyze our surroundings and ourselves to a very complex degree, compared to other organisms. Our ability to do so meant that our ancestors were able to make predictions, invent new technology and processes, and manipulate their surroundings in a way that enabled them to survive and reproduce more effectively than their relatives who didn’t evolve with such an ability.

We didn’t specifically evolve an ability and desire to analyze the question of free will–that’s just an offshoot of the adaptive traits.

Of course life isn’t pointless. But the only point to life is the point that we give it. And we’ve evolved to give life a point. Do you not get great pleasure from hearing a great symphony? Do you not think the composer got great pleasure from composing it? Why do you ask for more of a point than that–isn’t that deep satisfaction and pleasure enough?

Daniel

I read a wonderful science fiction story a few months ago based around this premise. It’s pretty interesting, but my understanding is that the jury’s still out on it–cognitive science isn’t yet advanced enough to settle the question of whether the conscious part of our brains makes choices, or whether there’s even an unconscious central processor in our brain that makes choices. Folks are divided on either side of the debate. Me, I’m undecided, but since I can’t really wrap my head around the lack of a “decider,” I’m tentatively in the active central processor camp :).

Daniel

Maybe. But I’m really thinking more basic than that: like the fish in water, why should the subject of believing or not believing in free will even come up? Why shouldn’t we just reach computationally favorable ‘decisions’ by whatever algorithm is programmed into us, and may the most successful algorithm reproduce more extensively?

Psst… dude. (Dude-ette?) I’m on your side. I’m arguing in favor of free will. There are those arguing that if you knew everything about me, and everything about the situation I am in (including how the other folks are choosing to present me with my circumstances), then you could predict with 100% certainty that I will dive in front of the bullet.

That degree of certainty can be used to make decisions to shape the future. But again, with that degree of knowledge, you know (with 100% accuracy) what will come of it. Hence, they say there is no free will; we are just chemicals reacting to our environment.

To know with 100% certainty what our actions will be is kind of like knowing the game beforehand. Was my analogy that bad?

“Pretty certain” is not 100% certain, which has been stated as possible by others. They claim that by knowing everything that will happen to you, you can predict what your “choice” will be with 100% certainty.

Then, with that knowledge, you can predict what comes next (and what comes next for the others around you), and so on. Life becomes a scripted play.
Sigh. I must confess. I appear to be unable to communicate effectively. I guess I will have to bow out now. sad panda face

I agree, but retribution and moral judgements aren’t unreasonable. They too would be the results of behavioural programming, making them just as neutral as the criminal behaviour itself is with regard to fault.

I’m sorry, but that analogy doesn’t make any sense to me. A fish lives in water and knows nothing except water (and is really, really stupid), so it doesn’t think about water. What are you comparing to water in your analogy, and how are we (pretty damn smart creatures) comparable in the analogy to something as stupid as a fish?

Because evolution isn’t teleological: it’s not directed by someone who knows what the best decisions will be. We weren’t equipped with a map for the future. We evolved with an ability to look at the things around us, including things that hadn’t been looked at before by our species, and evaluate them.

Given that highly flexible piece of equipment we evolved with, why wouldn’t we look at our decision-making process? Why would we evolve such a specific exception to our ability to evaluate the things in our environment? Why would such an exception lead to greater reproductive success for the humans that came with it?

We look at our will because we look at everything. That desire and ability to analyze everything is what’s made us such a successful species.

Daniel

You’re trying to make an argument for “me” as a unique entity which will make unique choices, but that’s still not an argument for free will. You still can’t determine your own will.

I’m quite sure I’ll never know the answer to the big “why” question. But if you’re asking “how did humans develop self-awareness” then I suspect Darwinian logic might help provide an answer. Keep in mind that characteristics are not goals of evolution, just the result of it. And not all characteristics play a role in selection either. It could be that the mental ability to avoid being eaten happens to also carry with it a seemingly useless side-effect of self-awareness. Did you know that some other species (apes, elephants, dolphins) are also self-aware?

Just marking time? We may be tiny cogs in a tremendous mechanism but we’re amazingly, fascinatingly wondrous cogs. And the mechanism is even more amazingly, fascinatingly wondrous. (At least that’s the conclusion we’ve been given. :wink: )

Why shouldn’t it?

And “society” is just another environmental factor that ‘causes’ our choices, correct?

I think I have two problems with saying that morality is defined by society. I’m just kind of thinking aloud here.

First, how do we then judge a society? We say that slavery is wrong, immoral. But many societies in history supported slavery and rewarded slaveholders for doing so. So by your argument, we cannot say that slaveholding is immoral? Or at least, we cannot say it was immoral at the time that it was condoned by society? If, today, I lived in a country where slavery was commonplace and generally encouraged, would choosing to own slaves not be immoral because my society has said so?

Related to this, would it be immoral to fight for emancipation, since society has defined slavery as okay?

So, getting back to the free will debate - it seems to me that if there is no free will and everything is determined by stimuli, then we 1) can’t judge the morality of any action and 2) can’t hold people responsible for the “immoral” decisions they make, since no real choice was involved.

Quite possibly that’s precisely what has happened, and that’s why we are wired to believe in free will; believing (or acting like you do) in free will may, as I’ve been saying, simply be more efficient than not believing.

That’s why I always define “morality” as a personal aesthetic rather than an objective absolute.

Essentially, “morality” describes individual emotional responses to human behavior – not only to the behavior of others but to our own as well. No two individuals have exactly the same “moral” responses, but some of them are common enough or similar enough that they can affect social contracts. Humans evolved to survive in populations, not as isolated individuals. Therefore, some of our emotional programming (by which I only mean evolution) has developed in a direction which helps to protect entire communities rather than just serve the interest of the individual. For instance, most of us are equipped with an empathic response. We feel anxiety, distress, and anger when we see others of our own species (especially of our own “communities”) suffering. Since this response is near universal, and since we are culture-bearing animals, we are able to agree to certain kinds of parameters to behavior and sanctions for those behaviors which cause distress to the majority or destabilize the community.

In other words, choices are not morally meaningless, it’s just that morality is only meaningful within a certain context. If it’s meaningful to YOU, it’s meaningful. Taste in beer has no objective meaning but that’s not the same as saying that Guinness doesn’t taste good. Human choices can be defined as “moral” as long as it’s understood that “morality” is defined by the emotional responses of other human beings.

It might still be argued that individuals cannot be said to have any ultimate moral accountability for their choices, and I would agree, but they still have cultural accountability, and cultural sanctions for behaviors which adversely affect communities are one of the tools by which we can alter or deter those behaviors. In other words, cultural “morality” can actually affect individual will.

Correct. The many limits that society wishes to implement are the stuff of its morality.

“Choosing” to own slaves would not be immoral in your society’s mind.

It is possible that slave holding and fighting for slave emancipation are both morally acceptable in that society. But if that society held that such a fight was immoral, then yes, such a fight would be held as immoral by that society.

We can’t judge and hold people responsible based on an absolute morality. But societies (and groups, and individuals) can do so based on their own morality as a means to affect the programming of others.

The mirror effect (and any other effect) in our brain is (as far as scientists can tell so far) due to the activation of the neurons, electrical signals, chemical signals, etc.

Yes it is complex, very complex, but it is still just a physical process with cause and effect.

Seems to me that “free will” is just another way of describing the state of “consciousness.” And if it seems that it’s a logical contradiction for “free will” to exist, I’d say that the complexity of the mind (brain) is such that it’s close enough as to be indistinguishable from a state where “free will” does exist. This reminds me of the thread wherein posters claimed that some things exist that cannot be empirically observed. That, to me, is indistinguishable from non-existence.

What’s “free” about it?

My “will” is determined by me in relationship to my context. The context can’t do it without me.

Perhaps you could try to formulate an example of what free will would be if it existed, so I can better understand what it is that you’re saying doesn’t exist?

Free will would be the ability to choose an option in spite of what you would prefer to do.

Say you’re given a choice between chocolate ice cream and vanilla. Let’s say that you really love chocolate, but hate vanilla. There’s no reason for you to pick vanilla. Free will would be picking (or potentially picking) vanilla anyway. It’s going directly against what you’d like. To carry it on, if you say “Ah, but I want to pick vanilla anyway, just to show you’re wrong” then that would be the option you’d most like to select, and free will would be your ability to not do so. It’d be the ability to completely ignore all the influences affecting you.

So you can see why I think a reality with free will would be a bad idea. We’d be choosing to do things that we don’t actually want to do. And personality still exists, so not only would we be picking the things we don’t want to do, we’d still be upset about it.