Does it violate free will if the person was never given the desire to do something in the first place?

While I do respect a solid sense of self-importance, unfortunately, what gets demonstrated to your satisfaction still isn’t the arbiter of what is or isn’t possible.

Sure. They just don’t.

And if it just keeps on doing so, what distinguishes it from an omnibenevolent one?

And so does an omnibenevolent one. They just don’t do it. Still not getting what’s difficult here.

If omnibenevolence is a subset of omnipotence, as you claimed, that’s not logically possible, because then every omnibenevolent being would necessarily be omnipotent.

But since I don’t hold to that, there is, of course, no contradiction between being omnibenevolent and not omnipotent. Omnibenevolence is just a description of how one acts; both omnipotent and non-omnipotent beings are able to follow this course.

These were, by and large, people who either wrote directly in Latin or were fluent in it. They absolutely used the word according to what it means; otherwise, if they’d meant to write ‘always acting for the good’, they would have done so (omnibenefacient, maybe). But one doesn’t have to dig far to find that we’re indeed talking about what God wills—read the first three articles of Aquinas’ Summa Theologiae, for instance. He also quite explicitly rejects your idea that because God’s will is the same for every act, it must somehow be constrained:

From the fact that God wills from eternity whatever He wills, it does not follow that He wills it necessarily

Why would I ever be required to eat a banana, if I don’t like bananas? If I don’t like them in every situation, I don’t eat them in every situation. There’s no valid inference from there to me somehow being unable to eat bananas. I could if I wanted to—I just don’t ever want to.

They can always choose to do evil; they just never do. Still not getting how that’s difficult.

I do, which is why your continual insistence that one must sometimes do a thing to prove that one can do a thing—that there must sometimes be instances where I eat a banana, or else I’m unable to eat bananas—is so puzzling. You want two contradictory things: on the one hand, you affirm that not doing x does not entail an inability to do x; on the other, you want to say that not doing evil entails that one can’t do evil. Both can’t be true simultaneously. Hence, my attempt to tease out exactly why you would hold to such a contradiction.

I mean, just proceed by induction. An omnipotent being is faced with a choice between e and g. They choose g. That choice, by your own admission, doesn’t impact their ability to choose e, and hence their omnipotence, in the slightest. So we’re back in the same situation as before: the omnipotent being is again faced with the choice of e or g. Again they choose g. Again their omnipotence isn’t lessened. And so on: if after the nth choice their ability to choose between e and g hasn’t lessened, neither has it after the (n+1)st. Hence they never lose any omnipotence, always choose g, and are thus both omnipotent and omnibenevolent.
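To spell that out (a minimal formalization; the predicate O(n) is just my shorthand, not anything previously defined in this thread):

```latex
% Let O(n) stand for: after the n-th choice, the being still has the
% power to do either e or g (i.e., remains omnipotent in this respect).
% Base case: O(0) -- the being starts out omnipotent.
% Step: O(n) -> O(n+1), since by assumption choosing g over e
%       does not diminish the ability to choose e.
O(0),\;\; \forall n\,\bigl(O(n) \rightarrow O(n+1)\bigr) \;\vdash\; \forall n\, O(n)
```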

And again, I need not settle whether a human is a featherless biped to argue that all humans are mortal. It’s not necessary to settle every detail of free will before we can intelligibly talk about it.

This analogy does not work. Fictional worlds are not morally relevant domains. Evil in fiction is an aesthetic device, not a real harm. There are various reasons why we can’t compare fiction with real life and draw a conclusion about the nature of God. The most important, in my opinion, is that fiction is governed by aesthetic logic, not moral logic. Fiction is self-referential and intentionally contradictory. It is just an aesthetic artifact, not a moral environment (as the real world is).

The classic theologians (operating outside the framework of utilitarianism) characterize evil acts as the result of deficiencies which a perfect being would not suffer. The power to act only results in evil acts when exercised imperfectly, and God by His nature would never act imperfectly.

Summa Theologica I, Q25

To sin is to fall short of a perfect action; hence to be able to sin is to be able to fall short in action, which is repugnant to omnipotence. Therefore it is that God cannot sin, because of His omnipotence. Nevertheless, the Philosopher says (Topic. iv, 3) that God can deliberately do what is evil. But this must be understood either on a condition, the antecedent of which is impossible—as, for instance, if we were to say that God can do evil things if He will. For there is no reason why a conditional proposition should not be true, though both the antecedent and consequent are impossible: as if one were to say: “If man is a donkey, he has four feet.”

~Max

There are multiple fictional allegories and parables in the Bible which may be compared against the real world to learn moral truths and thus the nature of God. Unless I am deeply mistaken, Jesus made up the Good Samaritan, the lost sheep, etc.

The takeaway from my analogy is that the existence of evil within a universe does not logically necessitate that the creator is evil, even by the moral standards internal to that universe. The creator may have more perfect knowledge which justifies his acts.

~Max

If I am not mistaken, the traditional response is that Pharaoh acted of his own free will despite divine intervention to “harden Pharaoh’s heart”. So that would be the compatibilist, not libertarian approach.

~Max

In traditional Jewish theology, since the sixteenth century, the yetzer hara (evil inclination - not an evil entity, but the capacity to choose evil) is a fundamental part of human nature - existing even before partaking of the forbidden fruit. Creation as described in Genesis was a chaotic process of constricting light, during which God’s infinite light exploded all over the place. Now sparks of holy light are trapped in the material and spiritual world, and God needs our help reaching into the dark places to free His sparks. The Torah and 613 mitzvot (commandments) exist to guide humanity to that end. When a human goes into a dark place, he is tempted by the yetzer hara. Only by overcoming the yetzer hara to perform meritorious acts (the mitzvot) can he elevate the divine sparks and repair the world. Without overcoming the yetzer hara, there is no merit and thus sparks are not elevated. The struggle to overcome the yetzer hara in pursuit of tikkun olam (repairing the world) is the central purpose of humanity.

~Max

Parables recommend attitudes and behaviors, falling under normative ethics where values and motivation matter rather than data and rigorous explanations. These stories cannot be used as evidence about the physical world or divine nature.

Firstly, where did that second premise of necessary harm come from? It’s sneaking in part of the conclusion.

Secondly, speaking of what I know is once again slipping into claims of knowledge or proof. I don’t think it was intentional, but I’ll reemphasize: I am talking about confidence and doubt, the terms in which we make claims about objective reality.

Just as each act of kindness would give us reason to gain confidence that it is a kind host, so the opposite goes. None of this would be contested outside of this thread.

I’ll accept that it wasn’t intentional, but your example here is terrible. Because your conclusion is about your own action, not the thing that is uncertain (the weather). If I were to conclude that it will rain, that would be flawed logic, right?

No, I haven’t made either claim. I have simply doubted the claim of perfect kindness with each additional data point.

Again, this reasoning from empirical observation is something we do every day. The “set of universes” logic would leave us paralyzed to evaluate claims.

In fact, now that I think about it, the set-of-worlds logic is still a problem here, even within this new argument about needing bases of comparison. Because, e.g., having the experience of visiting a million homes would still mean being in a set of universes where the host of the million-and-first home has any of an infinite number of reasons to spit in my face while still actually being perfectly kind. Why would being spat in the face make me doubt that it’s a kind host?

It’s just what makes this scenario analogous to the God/evil one. You know that a ‘lovely host’ prefers no harm come to their guests, but you likewise know that there are cases where such harm can’t be avoided. This can be argued for, e.g., by pointing to the possibility of having hot coffee spilled on you. So cases where harm comes to a guest despite being hosted by a lovely host exist. This also makes it clear that there is no positing of external facts: it’s merely inferring from a particular case, an example, to a general one, just as discovering a specific black swan licenses the conclusion that black swans in general exist.

Sure, but your knowledge is simply anything you use to come to any particular judgment—that doesn’t entail that those judgments are intended to be certain. In general, they won’t be, because your knowledge is incomplete. Knowledge just influences the way you bet, and in saying that you probably won’t come to significant harm upon visiting a lovely host, you’re making a bet that’s informed by your knowledge. The question is whether the knowledge you have actually provides rational grounds for the bet you’re placing.

Your argument is essentially that knowing your host is a perfectly lovely one, and thus, wouldn’t want you to come to harm, allows you to conclude that you should expect minor harm at most, even knowing that sometimes harm is inevitable. Thus, if you experience major harm, this constitutes evidence against the thesis of a perfectly lovely host. However, this isn’t true: the information you have fails to support this inference, because it doesn’t tell you anything about the amount of harm you should expect, beyond it being non-zero on average.

That holds only if you expect exactly zero harm to come to you; then, every instance of harm moves the needle further towards ‘probably not such a lovely host’. If you have any positive expectation of harm (as you should), then harm exceeding this lowers your confidence in the loveliness of your host, while harm below that threshold increases it. And that’s the problem in a nutshell: while you know that you ought to expect some harm on average, the knowledge you have doesn’t allow you to formulate any expectation of the amount of harm. If you expect it to be small, then that directly means you’re making additional assumptions.

But this doubt is only rational if you have grounds to assume that the average amount of harm, given that you’re visiting a lovely host, is exceeded. But you don’t have those grounds, unless you make extra assumptions.
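To make concrete what I mean by updating against an expectation, here’s a toy sketch. Every number in it, and especially the assumed average harm under the ‘lovely host’ hypothesis, is an illustrative assumption of mine, which is exactly the extra assumption at issue:

```python
# Toy Bayesian update for the 'lovely host' hypothesis.
# The means below are ASSUMPTIONS: without some assumed expected harm
# under each hypothesis, the update isn't defined at all.
from math import exp, factorial

def poisson_pmf(k: int, mean: float) -> float:
    """Probability of observing k units of harm if harm ~ Poisson(mean)."""
    return mean ** k * exp(-mean) / factorial(k)

prior_lovely = 0.5      # prior credence that the host is lovely
harm_if_lovely = 1.0    # assumed average harm under a lovely host
harm_if_not = 5.0       # assumed average harm otherwise

observed_harm = 3       # e.g., hot coffee spilled on you

# Bayes' rule: P(lovely | harm) is proportional to P(harm | lovely) * P(lovely)
like_lovely = poisson_pmf(observed_harm, harm_if_lovely)
like_not = poisson_pmf(observed_harm, harm_if_not)
posterior = (like_lovely * prior_lovely) / (
    like_lovely * prior_lovely + like_not * (1 - prior_lovely)
)
print(f"Posterior credence in 'lovely host': {posterior:.3f}")  # ~0.304
```

Change harm_if_lovely and the direction of the update changes with it; with no principled value for that parameter, the observation alone doesn’t tell you which way to move.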

This ‘set of universes’ logic is exactly the basis of the sort of inferences we make every day. We usually do so intuitively and without examining our premises, but this is merely a formalization of what it means to rationally update your beliefs given new data, and what sort of conclusions are licensed by the information we have. This is necessary to interrogate whether our conclusions are actually rational.

Because having visited a million homes, you’ll have a pretty good estimate of the amount of harm you should expect, thus you can gauge whether this instance exceeds it, and update your beliefs accordingly.
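As a sketch of that (the data here are made up purely for illustration):

```python
# With many past visits, the expected harm can be estimated empirically,
# and a new observation compared against it (illustrative numbers only).
import statistics

past_harms = [0, 0, 1, 0, 2, 0, 1, 0, 0, 1]  # harm seen on earlier visits
expected = statistics.mean(past_harms)        # ~0.5
spread = statistics.stdev(past_harms)         # ~0.71

new_harm = 4
# Only harm well beyond the estimated expectation licenses an update:
if new_harm > expected + 2 * spread:
    print("Exceeds what experience leads us to expect; lower credence.")
else:
    print("Within the expected range; no grounds to lower credence.")
```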

I’m not saying it’s necessary for your argument. I’m saying that the topic of whether having a desire removed/prevented means one doesn’t have free will is literally in the title of this thread.

If you had said “That’s unnecessary for the FWD,” I wouldn’t have said anything. But you said it was beyond the scope of this thread. That just isn’t true.

As for the argument you two are having, let’s look at the definition of benevolence.

Google AI says in part:

Well-meaning is about intent and desire.

The Merriam-Webster online dictionary has this comment:

The Latin word comes from bene, good, and vol, i.e. volition, which is “the power of choosing”.

Benevolence is all about choice, not a limitation on power to act.

So the argument has to do with omnibenevolence, and whether or not always choosing to do good is a limitation on the ability to do evil.

@Voyager is arguing that a being who by nature always desires and chooses to do good is unable to do evil.

@Half_Man_Half_Wit is arguing that always choosing to do good does not mean the ability to do evil is not there, just that the wish to do evil is not there.

That feels like an ontological disagreement.

@Half_Man_Half_Wit is saying there is a difference between power (ability) and will (desire, choice).

@Voyager is saying that a perfect desire to do good means there’s no ability to do evil.

That feels like an impasse to me.

It is true that the strict outcome of the FWD leaves the quantity of evil allowed unlimited.

However, @Mijin is discussing the likelihood, not the possibility.

So is @Mijin using additional data to make a determination? Yes. Does it negate the FWD? No. But it does reduce @Mijin’s level of belief in the tri-omni god.

I think you are both in agreement here but stuck on the framing of what is being said.

@Half_Man_Half_Wit called it the “depraved world” argument.

@Mijin is using everyday reasoning of comparing experience to expectation.

Where that expectation comes from is what @Half_Man_Half_Wit is calling extra assumptions, or data beyond the FWD.

To me, it makes a difference whether a desire is interfered with or whether it’s not there from the beginning, and the implications of the latter for free will aren’t necessarily the same as those of the former. Also, my own angle on the OP was rather that the options between which one freely chooses—whatever that may mean—may play a role in determining the overall moral value of the world, so whether or not such a limitation has influence on freedom is not necessarily the only point under consideration. But I recognize that this is just my approach, so I’ll take your point.

That’s not quite the right way round: I’m arguing that an omnipotent being could, by virtue of omnipotence, just choose to maximize the good in all their actions, and hence, be omnibenevolent as a result; since this is at least possible, the two notions aren’t logically in conflict. Otherwise, if choosing a particular way of acting limits omnipotence, then there’s just no such thing as omnipotence.

Sure, that’s all I’ve been saying: @Mijin is making an entirely valid evidential argument against the existence of God. However, all such arguments, by necessity, need to appeal to additional assumptions. It’s @Mijin’s denial of this, and in particular the idea that simply observing a given magnitude of evil just argues against the existence of God on its own, that I’ve been arguing against.

I’ve never said that omnibenevolence requires there be no evil - just a minimal amount of evil.
There are two types of fiction writers - ones who plot everything in advance, and ones who just write and hope it all works out in the end. If you, as author, know exactly how your book is going to turn out in every detail, and are basically just writing it down, can you modify your plot in the middle? If you can, you did not know your book as well as you thought. If not, your writing process is just transcription from the story in your brain, and you do not have the ability to change your storyline.
I don’t know what type of fiction you are hypothetically writing, but I blow up planets in mine, so I would hardly be considered omnibenevolent if my characters were real in some universe.
And if you could make the mountain float, it wouldn’t really be too heavy to lift, would it? That’s a standard argument against an omnipotent god, and I agree it isn’t a good one.

Actually I’m saying that an omnibenevolent god is constrained to do minimal evil, not no evil. And I tried not to use the term evil, which is not well defined.
I’m confused about omnibenevolence, to be frank. Is an omnibenevolent being able to increase suffering? If not, how not? If it does increase suffering, does it expire in a puff of illogic or does it just become non-omnibenevolent? An omnibenevolent being seems not to be doing anything, just not doing some things.
Certainly just benevolence is a characteristic of a thought or action, and not an absolute.
A doctor who gives a child a painful vaccine to protect them from a disease is certainly being benevolent. But if there is a less painful way of accomplishing the same goal, the doctor is not being omnibenevolent.

I’m not sure anyone is arguing that an omnipotent being can’t choose to do good. That’s in the definition of omnipotence. The question is whether it can choose to do bad. Can, not does.
One can never prove omnipotence through testing, since there is an infinite number of things you could ask the supposedly omnipotent being to do. You can disprove it, though. Disproving it by showing it is not logically possible for an omnipotent and omnibenevolent being to do evil (or rather, increase suffering) is what I’ve been talking about.

I just don’t see why that would be in question at all. Of course they can—they’re omnipotent, after all.

You’ve been affirming two contradictory things—that choosing A over B doesn’t mean one couldn’t choose B, and that always choosing good means one couldn’t choose evil. If I can choose good over evil once, I can just do it again, and nothing at all changes across repeated instances. So why shouldn’t it be possible to just continue doing so? Again, take the argument from my last post:

Where does omnipotence suddenly go poof? It’s just always the same situation: they have the power to do either e or g, they do g, but could’ve done e just as well—they just didn’t want to. None of that entails any limits on their ability. It always remains true that if they want to do e, they can; but the antecedent just never is true. This must certainly at least be possible; otherwise one has simply excluded omnibenevolence by stipulation, holding that any being must at least sometimes want evil—which is then circular.

It depends on how powerful it is. An omnipotent being cannot create suffering and still be omnibenevolent, since the amount of necessary or unavoidable suffering for an omnipotent being is zero. They can simply make suffering not exist at all, and achieve any goal whatsoever without it.

The only reason for an omnipotent being to inflict suffering is therefore sadism, which certainly isn’t benevolent.

Also, it would take much less than omnipotence to completely fool a human being. Once an entity is powerful enough to perfectly fool you, then it becomes impossible to verify their claims since by definition you can’t tell if they are faking it.

Not at all – we don’t know that there’s such a thing as necessary harm to an omnimax god. It absolutely is sneaking in the conclusion.

Nope. Not claiming knowledge and not asserting anything about the host. The point is simply that each data point of harm gives me more reason to doubt that it is a loving host.

Firstly, I note that you don’t want to talk about whether apparent acts of kindness give us reason to think the host is kind; because to deny this would be yet another absurdity stacked on top of those already in play.
But secondly, no, there is no “threshold”. The expectation is simply that a good host is seeking zero harm to come to me; therefore every bit of harm that does happen gives me reason to doubt that it is a kind host.
It might well be that some harm is unavoidable; sure, but I didn’t claim certainty. My doubt might be misplaced if it turned out to be unavoidable. But that doesn’t make the doubt irrational; it’s just normal inductive reasoning.

No we don’t, because when it comes to empirical claims, the relevant set of universes is almost always infinite. It doesn’t work as a basis for reasoning.

Lol, as if this has been your argument all along. You only brought in this claim of “additional assumptions” two or three posts ago because you didn’t like the entailment that your actual argument implies – e.g. that we can’t even doubt a perfectly loving God if we were having our skin peeled off every day.

And if they choose to do evil, are they still omnibenevolent? The real issue is whether omnibenevolence is a characteristic versus just a description of the being’s actions up to this time. Not that any god existing today has acted like it is omnibenevolent, but that’s a different matter.

‘Necessary harm’ is your formulation; I said that there is sometimes harm that can’t be avoided—which is exactly the conclusion of the FWD. There are some cases in which there is evil, despite there being a tri-omni God; thus, evil on average is non-zero.

That is claiming knowledge, whether you like it or not: if you are saying that you can rationally decrease your assessment of the likelihood that your host is lovely, you’re saying that the harm you’re experiencing exceeds what you ought to expect in that situation, which is an item of knowledge. Those are just two ways of saying the same thing. All of this is just bog-standard updating of expectations upon being met with new information, i.e. Bayesian reasoning. (And before you complain about this just being an academic issue, this is just what you get if you make your reasoning logically compelling.)

The case of kindness functions exactly analogously. You have an expectation of the amount of kindness consistent with the hypothesis of a lovely host, and if what you receive falls short of that, you lower your credence in that hypothesis. And honestly, you shouldn’t really be the one complaining about letting points fall by the wayside.

No. Again, you know that there is some non-zero average harm you expect. Whether you increase or decrease your confidence in the hypothesis of a lovely host is then a question of whether that threshold is exceeded. If you receive more harm than you should expect, given that your host is a lovely one, then you can conclude that your host probably is not a lovely one. If you experience just the amount of harm you should expect, on average, from a lovely host, you have no grounds to conclude that your host isn’t lovely. This is just basic sound reasoning—everything else simply is fallacious.

Whether your doubt is rational is decided by whether you update your beliefs correctly on the basis of your initial knowledge/beliefs and the evidence you receive. The problem is that your prior knowledge suffices only to determine that there is some threshold of harm to expect, not where that threshold is. Thus any probabilistic determination of the likelihood of your host being a lovely one is unsupported by the information you have. Nothing here is about certainties, just about how one rationally assigns likelihoods.

In the end, all of this can be understood just by thinking about how one best places one’s bets, given the information one has. Failing to do so just means you’ll lose.

There’s absolutely no problem with infinite sets of worlds; it’s the relative proportions that matter. These yield empirically relevant likelihoods. In the example I gave, for instance, the size of the set G ∩ E in proportion to that of E yields the likelihood that God exists, given a world with evil. There may be infinitely many elements in each set, but such relative sizes are well-defined.
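In symbols (a sketch; μ here just stands for whatever measure one uses to compare the proportions):

```latex
% Likelihood that God exists, given a world with evil, as a relative
% proportion of (possibly infinite) sets of worlds:
P(G \mid E) \;=\; \frac{\mu(G \cap E)}{\mu(E)}
```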

I first referred to that possibility 150 posts ago, and explicitly put it to you in post 218:

Besides, this is just a different way of saying that since evil and a tri-omni God are consistent, there can’t be sound reasons to think they are inconsistent.

I have no problem at all with that entailment; that I should is entirely your invention. (And again, the right way to say this would be that we don’t have any additional reason to doubt the existence of God just from the mere severity of evil; there are always good reasons for such doubt, but the reasoning of the FWD shows that if we want to doubt God based on the amount of evil we witness, we have to make additional assumptions/arguments about what amount of evil would be consistent with God’s existence.)

No; they are omnibenevolent because they don’t choose evil. The question makes as much sense as, would your wall still be green if you chose to paint it red? Each of the infinitely many points on your wall is green; by virtue of that, the wall is ‘omni-green’. If you paint some points red, it no longer is. But that doesn’t mean you couldn’t have painted it red; you just didn’t.

The real issue is what you think happens to omnipotence if one always chooses good. E.g., where does it go in this setting?

Not for an omnipotent there isn’t.