Late reply, but I hope it’s still relevant.
I was never wholly in one camp, but I definitely used to lean closer to utilitarianism when I was younger, and definitely lean closer to the deontology camp these days.
A few things:
Utilitarianism is, presumably, fact-based. It’s the science-y nerd perspective. That’s what used to draw me to it. Why do all that boring philosophy when you can just calculate the right answer? Best outcome for the most people, what’s the problem? But it’s naive for a couple of reasons.
One, morality is not fact-based. It’s entirely subjective. Science can, presumably, tell you what will happen if you do X, and what will happen if you do Y, but you have to make a subjective, personal decision about which outcome you prefer. So you still need guidelines for deciding which is the morally preferable outcome, and “greatest good for the greatest number” is awfully vague, and it kind of falls apart on closer examination, rather than firming up. Examples abound.
Here’s one of mine. If preventing suffering is how you interpret “maximizing utility” or whatever, you could make a very good case for mass human extinction, if not sterilizing the planet entirely. Any suffering caused today would be balanced by untold infinities of lives that were prevented from experiencing any suffering at all. If you say “death counts as suffering, and obviously nonexistence is as bad as death,” you’re making a rule to judge the hypothetical by. You have to twist definitions to get it out of the trap.
Of course if you’re less worried about preventing suffering, and more about maximizing pleasure and happiness, that could easily lead to Brave New World, or routine lobotomies, or whatever. You’d have to make rules to exclude those kinds of options from the calculation.
And two, science can’t actually tell you what will happen. Outside of controlled laboratory experiments or cyclic astronomical phenomena, science can’t actually predict much with any certainty. It’s all about statistics. And anyone with any experience with statistics will tell you it’s extremely complicated and mysterious, that it’s easy (even for PhD scientists and mathematicians) to be misled or drawn to false conclusions, and that in general it’s a pretty terrible way for average people to make personal decisions or understand the consequences of their behavior.
There are gazillions of examples of unintended consequences I don’t need to go into. And in practice, the average person needs rules to go by, even if they’re informed by statistics and utilitarian considerations. They can’t run these numbers themselves.
Also, I think deontology leads to different kinds of decision making. I think of recent incidents where innocent minorities were murdered, by police or civilians, and they claimed they feared for their lives. I’m as big a proponent of the right to self defense as anyone, but I can’t see myself gunning down a man carrying a toy gun in Walmart, even if I thought it was a real gun. I can’t imagine shooting a kid with a BB gun in the park, even if I thought he had a real deadly weapon.
That’s not enough for me to fear for my life, even if my puny racist brain couldn’t fathom a black man openly carrying a weapon in a public place. It needs to be a legitimate fear, and an imminent threat. You can’t just say “he might have killed me!” But that’s all statistics gets you: what might happen. It’s just not capable of resolving most moral questions, in my opinion. Certainly not in the moment. You need rules for that. Self-imposed rules, or rules from an outside authority like the church or your parents or a public figure you look up to, but rules either way.
And in general, if it’s a risk one way or the other, I’d much rather risk my life and avoid being a murderer, than risk another person’s life, and become a murderer. I’m going to err on the side of not becoming a murderer, even if that increases the risk to my own life. I’d just rather be dead than a murderer, you know? To me, that line of thinking is definitely deontological.
I don’t think either is sufficient on its own; you need both to make an informed decision. But in practice, I think it boils down to personal rules we place on our own behavior. Statistics, utility, or a consideration of the consequences simply helps guide our own personal rule set, ahead of time, before the actual decisions are made.