Would you give up free will for world peace?

That's very well possible, of course! That's why I keep trying to find simple, streamlined scenarios that present the arguments in their most basic form, like giving the logical form of the argument, the coin toss analogy, and the post with the diagrams above, so that people can easily point out where they believe those go wrong. So far, however, I haven't gotten any takers, just plenty of all caps and rambling about how I'm too dumb to understand natural selection, so my understanding is all I have to go on.

No, again, that’s not what I’m saying. I’m saying that if vision and echolocation have the same adaptive benefit, natural selection can’t distinguish between them, and thus, if we end up with vision, it’s fallacious to say we did so because of the adaptive benefit it provides.

This is not true, unless vision and echolocation are mutually exclusive traits. Bats are not, in fact, blind.

All that needs to be true is that vision is better than “no vision” or other mutually exclusive traits that preclude the presence of eyes; and that echolocation is better than “no echolocation” or other mutually exclusive traits that preclude echolocation.

Yes, that's right, I was just using the example I was given. It's just a minimal application of the principle of charity to account for that, but of course I should know better than to expect such.

It's not a question of being uncharitable; I simply have no idea what point you're trying to make, so all I can do is point out what errors I see.

So you would agree that all that is required for “sense of free will” to be adaptive is that it be better than MUTUALLY EXCLUSIVE traits that include “no sense of anything much”, “sense of compulsion” and maybe a couple of other things. But not all traits that might ultimately lead us to obtain (say) more food.

Again, that’s why I posted the detailed and simplified scenario above, where it’s clear that, e.g., you can’t simultaneously both love and hate salad.

And it appears to me that perhaps the problem is with the idea that there could be a necessary causal connection between the mental state “like salad” and the action “eat salad” that would not exist between the mental state “hate salad” and the action “eat salad”.

I think that's worth discussing, but it has absolutely nothing to do with any adaptive hypothesis or natural selection. It's about what subjective mental states are and why they exist at all; it's neuroscience.

Ok, explain this to me: how could the sense of free will be ‘better’ from an evolutionary perspective than a sense of compulsion, provided the latter leads to all the same behaviors (without any additional downsides)?

It couldn’t. The hypothesis, as has been made abundantly clear to you, is that it does NOT lead to the same behaviors. And that any explanation for why this is the case has nothing to do with natural selection, it’s neuroscience.

Because you are presenting two scenarios for comparison.

In the first, we have a subjective experience and a sense of free will because this provides some kind of benefit to our fitness. For example, it lets us simulate future scenarios, or aids our reasoning, etc.

In the second, we have a subjective experience that's quite complex. It is capable of doing everything our subjective experience does as humans. But it serves no purpose, because despite being capable of wanting something, some external force compels it to want something else instead.

Our subjective experience could be described as a machine. Given a certain set of inputs, it outputs a result. In the first model, where our subjective experience and sense of free will work together, our subjective experience has a role to play. It takes various inputs and outputs a result which is acted upon. We don't have any control over this process, which is why this sense of free will is an illusion; but we can see the psychological or neurological function our sense of will fulfills.

On the other hand, in your proposed scenario, our subjective experience is just as complex and just as capable of desiring actions. But instead of using this machinery, your alternate mind comes up with actions through some other mechanism entirely, a black box, and then imposes that will upon the subjective experience.

This is hideously complex and means that the subjective experience’s ability to make decisions - to “want” A while it is compelled to do “B” - simply makes no sense at all.

I simply cannot imagine a scenario in which such a being evolves. So as much as you say that we should assume that either path is open, that's not how evolution works. One path posits a mental structure that evolved to do something that helps the creature survive. The other posits a mental structure that evolved to give a creature an existential crisis that it cannot even act upon.
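
To make the contrast concrete, here's a minimal toy sketch of the two models (the names and the silly "sum the inputs" rule are placeholders, not a claim about how real minds work):

```python
# Toy sketch of the two models being compared; everything here is
# illustrative only, not a claim about real neuroscience.

def black_box(inputs):
    # Stand-in for the unexplained external mechanism.
    return "act" if sum(inputs.values()) > 0 else "refrain"

def model_one(inputs):
    # The subjective experience IS the decision machinery: it weighs
    # the inputs, and its output is what the creature acts on.
    want = sum(inputs.values())
    return "act" if want > 0 else "refrain"

def model_two(inputs):
    # The experience still computes a full-blown "want", which is then
    # simply discarded; the black box's output is imposed on it instead.
    unused_want = sum(inputs.values())
    return black_box(inputs)
```

Model two contains all of model one's machinery plus an extra mechanism, and produces exactly the same output; that's the sense in which it's hideously complex for no gain.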

Now, one could imagine a creature with no subjective experience, which simply acts in response to stimuli. I don't think you need to try very hard to imagine such a creature; they are all around us. Plants and microbes, for instance.

But a creature isn’t going to evolve a structure (even a mental structure) as complex as a subjective experience unless it either serves a function or is a prerequisite/side effect of another trait that does.

Further to this, to go back to @Half_Man_Half_Wit's toy "salad" model of mental states…

It is extremely difficult for evolution to "hard code" all the complex behaviors, with all the contingencies that arise when negotiating a complex environment. The solution is to define reward functions that incentivize broad goals like "have sex" and "find food", while leaving the exact details of how to attain those outcomes to higher cognition, which can modulate behavior to take current environmental conditions into account.

I think subjective emotional states are what those reward functions feel like.

So to take the toy “salad” model, suppose that a positive reward function is associated with the action “eat salad”. The subjective mental states “like salad” and “hate salad” are not independent things that each could be associated with the reward function. The subjective mental state “like salad” IS the subjective experience of the reward function.

So it’s meaningless to ask why subjective mental state “hate salad” couldn’t just as easily be causally linked to “eat salad”. If it were, by definition we’d call that sensation love, not hate.
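
To put the toy model in toy code (purely illustrative; the numbers mean nothing):

```python
# Toy code for the toy model: the reward signal and the felt state
# are one thing, not two separately wired things.

def reward(action):
    # A positive reward function associated with the action "eat salad".
    return 1.0 if action == "eat salad" else 0.0

def felt_state(action):
    # The subjective state just IS what the reward signal feels like:
    # a positive signal is "like salad"; a negative one would be "hate salad".
    r = reward(action)
    if r > 0:
        return "like salad"
    if r < 0:
        return "hate salad"
    return "indifferent"
```

There is no further wire connecting "like salad" to the reward that could be re-soldered to "hate salad"; a positively felt signal is what liking is.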

What hasn’t been made clear to me, however, is why anybody should accept this hypothesis. Suppose somebody comes to you with the hypothesis that the best explanation for the way things are is that God created the universe last Thursday. You point out that it also might have been created 13.8 billion years ago using the mechanism of an inflaton field decaying to a stable vacuum state. But then they say, but well, the hypothesis is that God created it last Thursday, so that doesn’t count. Would that hold any water with you?

Likewise, I'm not terribly impressed by the hypothesis that the illusion of free will leads to a unique behavioral fitness, in the face of the fact that there is a multitude of simple scenarios that seem to confer the same benefit; and no amount of insisting that, by hypothesis, this isn't the case is likely to change that.

On this, I agree. But again, the point is still finding a reason why we have an experience of freedom, if we’re not actually free. To just posit the hypothesis that it’s due to adaptation simply doesn’t do the work.

This is tendentious. Equally well, one could say that in the first, we have a complicated simulation apparatus that generates elaborate projections of alternatives that it presents to us as if we could opt for them, while in fact, none of that does any work and what we’re going to do just follows from a deterministic calculation. On the other, we simply feel a compulsion to act according to the outcome of that calculation. There is no need to have any will to do otherwise to feel compelled to do something.

Besides, I have given other options, like the awareness of the calculation itself. And I can't say I've tried especially hard to come up with more; there are certainly other options out there. To claim that they're all reproductively inferior to the free will illusion simply pretends to a level of knowledge we can't possibly have.

I don’t see where there’s any use for a subjective sense of having been able to do otherwise at all. Why don’t we just happily take the one road before us? What use is it to me to pretend to myself that I could’ve had peanut butter instead of jam on my morning toast if I actually couldn’t? Why bother with all that? Why not just make me aware of my only course of action?

But that’s again just begging the question. There must be some benefit to it, else we wouldn’t have evolved it. Hence, we evolved a sense of free will because it has evolutionary benefits!

So you’re saying that there must be a one-to-one correlation between mental states and behavior? I.e. that mental states are only distinct in so far as selection is able to act differentially on them?

I’m saying that the subjective experience “love salad” IS the subjective experience of a positive reward function associated with the action “eat salad”.

I don't know what you mean by one-to-one. The whole point of the reward function model is that there can be a very large number of reward functions targeting potentially conflicting goals, like "get sex" and "don't die", operating simultaneously, and that we integrate them in our actions at any given moment.
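
Schematically, and again with everything invented purely for illustration, the integration looks like this:

```python
# Many reward functions targeting conflicting goals, integrated into
# a single action. Goals, actions, and weights are all made up here.

def score(action, context):
    rewards = {
        "find food": 2.0 if action == "approach" else 0.0,
        "get sex": 1.5 if action == "approach" else 0.0,
        "don't die": -5.0 if action == "approach" and context["predator"] else 0.0,
    }
    return sum(rewards.values())

def act(context):
    # Integrate every reward function and take the best-scoring action.
    return max(["approach", "flee", "wait"], key=lambda a: score(a, context))

print(act({"predator": True}))   # "flee": the "don't die" term dominates
print(act({"predator": False}))  # "approach": the food and sex terms dominate
```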

Well, exactly this. This is what I've been trying to get across with all my talk about necessity, which you opposed so loudly: the only way to keep the argument from being a trivially fallacious affirmation of the consequent is to stipulate that there is effectively no distinction between the mental state and the reward-generating behavior, to have one determine the other and vice versa, putting them in one-to-one association.

However, this seems deeply implausible. At the very least, you must agree that an organism that shows the same behavior while having different mental states is possible, can be consistently imagined. If this were a simulation, one could certainly code up such a being. But then, at best, it might be highly unlikely that such an organism could come about through natural selection. It might require a spectacularly unlikely leap in the genetic landscape.
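
Indeed, coding up such a being takes one line of difference (a deliberately silly sketch, with the "mental states" as mere labels):

```python
# Two simulated organisms with different internal states but identical
# behavior; any fitness defined on behavior alone scores them equally.

def organism_a(stimulus):
    mental_state = "I could have done otherwise"    # internal label only
    return "eat" if stimulus == "salad" else "ignore"

def organism_b(stimulus):
    mental_state = "I am compelled to do this"      # different label...
    return "eat" if stimulus == "salad" else "ignore"   # ...same output

assert organism_a("salad") == organism_b("salad")
assert organism_a("rock") == organism_b("rock")
```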

Furthermore, it’s clear that there are differences in genotype that don’t lead to differences in phenotype. And it’s a long causal chain from DNA to behavior. It’s certainly possible for there to be differences along that causal chain, that still don’t yield differences in phenotype. Anything else would be a bit like saying that, while microevolution occurs, there’s no such thing as macroevolution: how would these differences know not to propagate beyond the set boundary?

So there are differences in mental state that don’t lead to reproductively relevant differences in behavior (nobody’s a behaviorist anymore), and there plausibly are differences beyond the base genetic level that don’t lead to phenotypically relevant differences. So why shouldn’t the two connect? Why shouldn’t there be differences in genotype, that yield differences in mental state, that don’t yield differences in behavior?

I can see absolutely no reason that should be the case. On the contrary, I can see lots of just-so stories that end, instead of us having a sense of free will, with us just experiencing the one inevitable outcome of our subconscious calculations as the only course available for us to take. These don’t prove anything, of course, but neither do those in the other direction.

So I see no reason, at all, to believe in a one-to-one mapping between mental state and behavior. It would seem an almost conspiratorial situation if that were the case, almost enough to make one suspect a high degree of fine-tuning, which would then itself be in need of explanation.

Except that this is patently absurd and provably false.

If I start thinking about sad things, my body will experience a physiological response. In other words, under this model, despite the fact that I can't actually choose other than I do, the modeling is impacting the outside world. If I weren't doing the modeling, or if the modeling had gone differently, I would have acted otherwise.

My lack of control is due to the fact that I cannot control the simulator's inputs, so it will always give the same results. This lack of control does not mean that I have no impact on what happens downstream, only that I can't control the inputs to my mind, and thus can't control the outputs, even though I certainly influence them.

I can imagine a creature that just feels compulsion and acts on it. I just can't imagine why that creature would have the ability to perceive the world nearly as we do, with a subjective experience that is overridden by some external force.

I doubt the sense of having been able to do otherwise is beneficial in and of itself. It’s a consequence of how we perceive the world and ourselves within it.

I think there are different genotypes that yield differences in mental state that don’t yield differences in behavior.

For example, we could imagine two creatures, A and B. A really loves other creatures and gets pleasure from helping them. B doesn't feel pleasure when it helps others, but feels immense guilt if it doesn't. Both creatures end up behaving altruistically.

However, the fact that both of these creatures can exist doesn't mean that PhilosophicalZombieCreature can exist. PhilosophicalZombieCreature is physically identical to creatures A and B, but instead of determining whether to behave altruistically through a reward function created by emotional states, it randomly decides how to behave and is then compelled to do so. It is capable of having emotions, and of feeling good or bad about its actions. But it cannot control them.

Just because Creature A and Creature B exist, and we can imagine PhilosophicalZombieCreature existing without changing the observed behavior, does not mean that PhilosophicalZombieCreature is equally likely, or even possible.

Further, if you have the subjective experience of being Creature A, you could reasonably theorize that creatures with the mindset of Creature B exist; their minds would require only a minor modification from yours. But PhilosophicalZombieCreature would be something completely alien, and you’d have no reason at all to postulate its existence.

Sorry, perhaps I should have specified that I was speaking of the Christian God. Generally, this being is spoken of in terms of being male. I was raised as a Christian, so Yahweh is my concept of God.

Perhaps I should have said "appeared", or used another word indicating that God is visible, rather than saying "revealed". Sorry, in the future I will try to be more specific with the terms I use. My intention was to state that God will be clearly visible and can be heard by everyone on the planet; it's not just an auditory thing.

Nope. I don't know why you think this makes rational sense. A trait is selected for if it is beneficial to the organism or species; it doesn't matter at all what can be said of other traits.

From a god's-eye view there are probably infinite solutions to any problem; that has no bearing on how evolution and adaptation work.

Being charitable (since you decided to use this word): if an organism already has a trait A, and there is also a trait B that is no better or worse, then there is no selective pressure towards B, though of course a population may arrive at B through genetic drift. I don't think anyone has suggested that we began with a subjective feeling of compulsion and evolved across to another subjective feeling, though.

Also I should add, this is all an aside for me. I don’t subscribe to the theory of subjective free will having adaptive benefit, I think it’s more likely to be epiphenomenal.

That's begging the question. Suppose you could walk through any of three doors. There is, you propose, some calculation that you could do to choose one in such a manner as to, in the aggregate, obtain a certain selective benefit. One way is to do that calculation and include some virtual reality display presenting options as if you could choose between them, then selecting one, while instilling some sense that you could have equally well chosen one of the others. Another way is to do that calculation, skip the VR theatrics, and present you the outcome as the only possibility. Yet another would be to do the calculation, include the VR, and then inform you that there is only that one possibility to take. As long as all of those end with the same door being chosen, there's no difference, reproduction-wise. And once again, it's not obvious to me that there is any reason to choose the first possibility over any of the others.
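
In toy terms (the "calculation" below is just a stand-in), all three architectures wrap the same computation and end at the same door:

```python
# Three presentations wrapped around one and the same calculation;
# reproduction only ever sees which door gets walked through.

def calculation(state):
    # Stand-in for whatever computation confers the selective benefit.
    return state % 3

def free_will_illusion(state):
    feeling = "I could have picked any of the doors"     # the VR theatrics
    return calculation(state)

def no_theatrics(state):
    feeling = "here is the one door"                     # outcome as sole option
    return calculation(state)

def vr_but_honest(state):
    feeling = "options displayed, but only one is open"  # VR plus disclosure
    return calculation(state)

s = 7
assert free_will_illusion(s) == no_theatrics(s) == vr_but_honest(s)
```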

Maybe, but why does this lack of control need to be papered over by an illusion of control? That’s the question we’re talking about!

Well, there’s no reason it should. It just feels compulsion and acts on it!

Then the argument from selection doesn’t work. Its purpose is to argue that we do have that illusion because it is beneficial.

Again, this isn’t something I feel at all confident to judge without just resorting to vague handwaves and appeals to intuition. Maybe, maybe not. But of course, my argument doesn’t depend on that particular alternative to the illusion of free choice. I’ve given others, and there are many others that are imaginable.

But we know of limited cases that are just like that, so it only needs a simple extrapolation. If I hold my breath for a while, the compulsion to breathe becomes insurmountable. If I am really hungry, I’ll be compelled to eat whatever’s in front of me. This doesn’t entail a wanting to do otherwise: I’d certainly also want to eat that, but I won’t have any illusion of being able to do otherwise. There’s no reason, it seems to me, that this sort of situation couldn’t extend to any choice.

Or think of something like ChatGPT. Do you think it experiences any illusion of having been able to say anything else than it did (if it has any experiences at all)? Do you think that its capabilities would be in any way enhanced if it did? If it has any experience, it might be that of a set of possible tokens to emit, with some likelihood attached to each. It samples from these according to the probability distribution, and that completely explains each choice it makes. Why would it need any tacked-on module that tells it, you could've totally chosen differently?
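
Schematically, its choice step is nothing more than this (the tokens and probabilities are made up; real models add temperature, top-p, and so on, but the structure of the step is the same):

```python
import random

# Schematic next-token choice: sample from a probability distribution.
# Nothing in this step requires, or would benefit from, a tacked-on
# "could have done otherwise" signal.

tokens = ["yes", "no", "maybe"]
probs = [0.7, 0.2, 0.1]

next_token = random.choices(tokens, weights=probs, k=1)[0]
print(next_token)
```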

But that doesn’t do jack for the explanation of why we have that trait, if there are other (mutually exclusive, as @Riemann pointed out, so the example isn’t great) traits that yield the same reproductive benefit. It’s again the coin toss example: suppose if the coin comes up heads, I give you a million dollars. But if the coin comes up tails, I also give you a million dollars. Now, the coin does, in fact, come up heads: does that mean you can say that you got a million dollars because the coin came up heads? Obviously not: you can vary the way the coin comes up (perform an ‘intervention’ in the lingo of causal modeling), without varying the outcome. So what the coin comes up as is completely decoupled from the outcome.

Likewise if two traits confer the same fitness benefit. If you do actually get that benefit, it’s meaningless to say that you got it through trait A being selected, because you would’ve gotten the same benefit from trait B. Natural selection just doesn’t differentiate between them.
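
To put the coin example in causal-modeling terms (toy code; the numbers are arbitrary):

```python
# The coin example as a toy causal model: "intervening" on the input
# never varies the output, so the output is decoupled from the input.

def payout(coin):
    return 1_000_000 if coin == "heads" else 1_000_000   # same either way

def fitness(trait):
    return 10 if trait == "free will illusion" else 10   # same either way

assert payout("heads") == payout("tails")
assert fitness("free will illusion") == fitness("sense of compulsion")
```

Since no intervention on the input makes any difference to the output, attributing the outcome to one particular value of the input picks out nothing.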

That’s not my word—the principle of charity is a simple guideline in debate that ensures you don’t inadvertently attack strawmen. Following it is to the benefit of the person attacking an argument, not the person making it and having it interpreted charitably.

And if an organism neither has A nor B, there is no selective pressure towards any specific one of these, but just towards the fitness benefit either confers equally well. If a hairless creature moves towards colder climates, there’s a fitness benefit to be gained by evolving fur, but not through evolving brown fur as opposed to white (supposing there are no other environmental advantages due to either). If the population then acquires white fur, it’s meaningless to say that it did so through the survival advantages this produces; the evolutionarily relevant distinction is just fur vs. no fur, not white fur vs. brown fur.