The train example has an interesting implication. We have a set of moral axioms, but nowhere does it say that these moral axioms are consistent. When they aren’t, we get into these quandaries.
In this case, I suspect we have an axiom that saving a life is good, that saving n lives is about n times as good as saving a single life, and that maximizing lives saved is good. So if we could either pull one person out of the way or n >> 1 people, we’d probably do the latter.
But we also have an axiom that we should not actively cause anyone to be killed. If we have to actively kill 1 person to save n people, these axioms clash.
I don’t think we have one about letting people die through inaction, or at least not a very strong one. If we did, charities would do much better.
Now a set of moral axioms supplied by God would have to be consistent, wouldn’t they, since they’d have to be perfect. If we can find that religious axioms are inconsistent, then we might safely say they either didn’t come from God or have been so badly distorted as to effectively not come from God.
I wonder if it would be possible to play moral Godel and prove that no consistent set of rules is possible.
Perhaps you mean something else, but coming up with a consistent set of rules seems no harder than obeying one. Other than the laws of the nation we live in, there’s nothing preventing someone from living in accordance with the rules of Deuteronomy, exactly as they are written.
I think you mean “consistent and complete.” Complete meaning that all ethical statements are provably true or false, given the axioms.
Goedel’s theorem applies only to “sufficiently powerful” systems, and requires the axiom of choice in order to be proven. The axiom of choice only applies to infinite sets. And of course, it applies only to formal systems. My guess is that practical ethical formal systems are possible that are complete and consistent, but not sufficiently powerful to do certain kinds of advanced number theory (or answer ethical questions involving certain kinds of infinite sets). That said, applying formal systems to real questions can be tricky, even without infinities.
I’m being more Talmudic here. I agree that you can follow the rules without contradiction, but I’m talking about concluding, from the moral axioms, whether an action not covered by the rules is moral or not. Lighting fires, for instance, is specifically covered, but turning on a light switch is not, and rules about that have to be argued from the principles that led to the fire rule.
The train situation, as well as many other cases, is not covered explicitly by the rules.
I know what completeness means in math, but I don’t understand what it means in ethics. I don’t think we have to worry about infinite sets to deal with the problem I pose. Completeness was important for Godel because Russell and Whitehead’s goal was to prove everything; I don’t think anyone has a goal to prove all ethical statements true or false.
I’m thinking more along the lines of ways of showing that certain sets of axioms may be inconsistent in math, the easiest I can think of is that x / 0 = 0. We all know the contradiction you get from that one. Can you show certain sets of moral axioms similarly inconsistent, in particular ones we might accept on their own?
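For what it’s worth, the contradiction from that axiom can be made explicit in a couple of lines, assuming the usual field rule that a / b = c implies a = b · c:

```latex
% Assume the field rule: a/b = c \implies a = b \cdot c.
\frac{x}{0} = 0 \;\Longrightarrow\; x = 0 \cdot 0 = 0 \quad \text{for all } x,
```

so in particular 1 = 0, contradicting the axiom that 1 ≠ 0. The moral analogue would be deriving both “A is permitted” and “A is forbidden” from a single set of moral axioms.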
By proof I assume you mean ethical statements logically derived from certain sets of axioms. The problem with ethics is that many of the axioms are unstated, so the cause of conflicting conclusions (two people proving both A and ~A) is unclear.
I think we also have the expectation that killing a person should be done only as a last resort, and tweaking the (rather elegant) constraints of the original dilemma significantly weakens the argument that no other course of action was possible. I mean, seriously - we couldn’t come up with ANYTHING else to throw on the tracks?
After reading several replies I realized that when religious people say “faith based” they mean religion based. Atheists don’t base their morality on a Supreme Being, but on how it affects other humans. Religious people base it on the particular God they believe in, when it is really another human’s decision to call it a God.
No one can truly say what God did or said or inspired anyone to do; that’s just a belief held by that person!
Maybe, maybe not. It’s easy to conceive of unchanging divine meta-morals, but changing mores. There’s no reason why different cultures should have laws identical in every detail: in fact one would expect the opposite. This is easiest to see in the case of utilitarianism. The greatest good for the greatest number will result in different ethical rules in different societies past and present.
It’s not too hard to conceive of shifting meta-morals as well. You get a flavor of that in Christianity. It might be that our understanding of an unchanging G-d shifts over time. But G-d himself may evolve as well: the Old Testament God is more keen on fire and brimstone than the New one, according to some standard Christian interpretations.
Indeed, even if G-d is an entity outside of time, there’s no reason why He necessarily may not see it fit to apply a vector of morality across the sample, rather than have it an unchanging constant. Some of this might turn on the cost of creating a universe. If it’s easy, just keep re-running the algorithm keeping the parameters constant within each one. But if it’s expensive, then you might want to segment out the sample a little. Just be sure to get the project cleared by the committee on conscious subjects.
It means the same thing in ethics that it does in math: that every ethical proposition has a well-defined (truth) value. I.e., it would mean that there are no ethically ambiguous situations.
I agree, but without infinities, you don’t get a Goedel theorem. So, barring infinities, I doubt it’s possible to Goedelize ethics and prove that no ethical system is consistent.
My point is that Goedel didn’t prove math was inconsistent. He proved that it was either inconsistent or incomplete. Most people accept the incompleteness and cling to consistency. After all, if it’s inconsistent, it’s worthless.
Any formal ethical system shown to be inconsistent would be pretty much dead in the water by now. I wouldn’t be surprised to find that some were proposed and rejected on this basis. But we can’t justify the hypothesis that every moral system must be inconsistent.
Goedel’s theorem applies only to formal systems.
The real problem is that it’s hard to apply any formal system to real-world ethics. The easiest ethical system I know of to define is utilitarianism, in which the right choice is to maximize happiness for all people. As it turns out, it’s impossible to actually translate into reality (just one of many objections). For example, how do we balance misery of one against happiness of another? If we make a small number miserable to make a large number a good deal happier, does that really balance out? Even if we could measure happiness and misery, how would we compare the two? (A bigger objection is that maximizing total happiness in the short term doesn’t maximize it in the long run, so how do we factor time into the equation? But that’s beside the point here.)
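To make the aggregation objection concrete, here is a toy sketch (the numbers are invented, and no claim is made that happiness is actually measurable like this) showing how the “right” choice flips depending on how individual utilities are combined:

```python
# Toy illustration: under utilitarianism, the "best" option depends on
# how individual happiness values are aggregated. All numbers invented.

# Option A: two people made miserable, ten made much happier.
option_a = [-50, -50] + [30] * 10   # total = 200, worst off = -50
# Option B: everyone modestly content.
option_b = [10] * 12                # total = 120, worst off = 10

def total_utility(outcome):
    return sum(outcome)             # classical (total) utilitarianism

def maximin(outcome):
    return min(outcome)             # Rawlsian flavor: judge by the worst off

best_by_total = max([option_a, option_b], key=total_utility)
best_by_maximin = max([option_a, option_b], key=maximin)

print(best_by_total is option_a)    # True: summing favors A
print(best_by_maximin is option_b)  # True: maximin favors B
```

Same “facts,” opposite verdicts, purely from the choice of aggregation rule — which is exactly the balancing problem described above.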
Philosophically speaking, unstated axioms are anathema. That is, if someone proposes an ethical system and it is shown to rest on unstated axioms, we either add the axioms or abandon the hypothesis.
In terms of “real world ethics” like those taught in business school, well, those are collections of good advice, and not really what philosophers would call “ethics”. It’s like the relationship between metaphysics and epistemology on one side and science on the other: there’s a big gap there. “Proofs” only happen on the philosophical plane. Experience shows what works and what doesn’t, but isn’t amenable to proof.
Meanwhile, people arguing practical ethics frequently try to show opponents that their beliefs are contradictory. It’s not surprising to find contradictions, as the moral sense that many of us base our ethics on need not be consistent. As I said above, it’s the outcome of an evolutionary experiment, and merely confers some survival advantage (or did, in ages past). Logical consistency is not required, especially for rare cases.
Just to clarify, there may be situations where we can’t calculate / prove the ethical result, but theory would tell us that it has a solution. This is what I mean by “unambiguous”. Not that we necessarily know, we just know that there is a unique solution.
And I do realize this is a pretty big tangent. But you asked whether we might be able to prove like Goedel that any possible system is inconsistent, and the answer is a clear “no”, because (a) Goedel didn’t even prove that for really powerful systems and (b) there’s nothing that says that an ethical system has to be anywhere near that powerful.
If some math geek can show that I’m wrong on the second part, I’d love to see it! I think I’m correct, but I’ve been wrong about simpler stuff.
What would be the equivalent of a Godel number or Turing machine tape in an ethical system? In those cases we can (theoretically) construct a complete set of statements within the system. I’m not sure how you can construct a complete set of moral statements.
We’d have to specify the domain of the moral system. That’s well defined in Godel and Turing, but I’m not sure what it is here. If you think about the implications of actions spreading far enough into the future, you may get effective infinities.
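For comparison, the Godel-number machinery mentioned above can be sketched in a few lines: a toy encoding in which a sequence of (positive) symbol codes becomes a single integer via prime-power exponents, so every statement over a fixed alphabet gets a unique number. (This is a simplified illustration, not Godel’s actual construction.)

```python
def primes():
    """Generate primes 2, 3, 5, ... by trial division (fine at toy scale)."""
    found = []
    n = 2
    while True:
        if all(n % p != 0 for p in found):
            found.append(n)
            yield n
        n += 1

def goedel_number(symbol_codes):
    """Encode a sequence of positive integer symbol codes as one integer:
    the product of the i-th prime raised to the i-th code."""
    g = 1
    for p, code in zip(primes(), symbol_codes):
        g *= p ** code
    return g

def decode(g):
    """Recover the symbol codes from a Goedel number via unique factorization."""
    codes = []
    for p in primes():
        if g == 1:
            break
        exponent = 0
        while g % p == 0:
            g //= p
            exponent += 1
        codes.append(exponent)
    return codes
```

So `goedel_number([1, 2, 3])` is 2 · 3² · 5³ = 2250, and `decode(2250)` recovers `[1, 2, 3]`. The trick works because arithmetic statements are finite strings over a fixed, well-defined alphabet; the question raised above is exactly what the analogous alphabet and domain would be for moral statements.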
So one solution, if you accept the axioms I mentioned as part of an ethical system, is that it is not complete and the train problem is not a part of it. That might actually be a good answer.
And, to get back on the topic of this thread, wouldn’t a divine moral system be complete? Or, if you ask God what to do about the train, would he say “I dunno. Flip a coin.”
How many practical (in the sense that it produces answers to real situations) formal ethical systems are there?
I agree. The problem you mention is common to any optimization heuristic. It might not just be a problem with time; it might be a problem of inadequate information also. To get back to religion again, a deity should be able to see the global maximum. That’s part of the argument for omnibenevolence: we are seeing local maxima and missing the global maximum. However, there’s not a lot of evidence that religious morals provide anything like suggestions on how to get to the global maximum.
In formal mathematical systems, we want a very small set of axioms which can be demonstrated to be consistent. I’d suspect that an inconsistency in an ethical system could be traced back to inconsistent axioms, one of which could be removed. The problem is, which one? If 50% of mathematicians said that x * 0 = 0 and the other 50% said that x * 0 = x, we’d be in a hell of a mess. (Let’s ignore the problems the latter axiom causes.)
Aside: I’ve never taken a business ethics class at a university, just the ones we are required to take online every couple of years. You are right about it being advice. Also threats. And many of the ethical principles described in the classes are flouted at high levels to make money. If some low-level person gets in trouble, the case might be used as an example. At high levels it is like it never happened.
I guess that is a clash between stated ethical axioms and unstated but more important ones.
I think I said Godel-like because I understand that it doesn’t apply directly. However I think the real answer to my questions is that any system based on a set of consistent axioms, even if complete, is not going to be interesting. Ethical dilemmas at their heart stem from inconsistent axioms or inconsistent sets of axioms. The examples are legion.
I’ve been thinking about responding to marshmallow’s suggestion (post #59) that god is “just another guy with an opinion” but this sums it up quite nicely.
Maybe he’d say, “Ok, let’s look at the tape… See that? That’s where Mr Engineer was supposed to check the brakes.” But of course, that raises the unanswerable question of why an omnibenevolent god would let bad things happen. Specifically, why do the passengers have to bear the consequences of the engineer’s malfeasance?
That argument assumes that “God” is concerned with good and evil as humans define the terms; that this god had something to do with designing humans; that, even if intelligent intervention was involved, it came from “God” and not something else; and that there aren’t perfectly reasonable evolutionary explanations for why we’d have ideas about Good and Bad.
Or that God is either not good, or not perfect. That of course is a central problem with trying to base morality on what God supposedly wants; it simply *asserts* that God’s morality is a good one, and just pushes the question of how you know whether a moral code is right back one step.
A society running on the moral code of the Greek gods would certainly be an interesting one. Kind of like the Playboy mansion but not so subdued. And watch out for swans!
Not unreasonable. Tautological for certain versions of Christianity.
Entirely reasonable, albeit entirely tautological. But then we run into your first point.
…which were most likely not around when C.S. Lewis wrote. Sociobiology, aka evolutionary psychology arose during the 1970s.
Tradition.
Yes, many of these arguments have lost their traction today.
Moving on, we’ve established that atheist and theist morality are both reliant upon unprovable assumptions. The atheist is free to trace through the implications of his axioms, and make adjustments to them based on aesthetic judgment or what feels right. But so is the theist, to a certain extent: both can be philosophers, though the latter may lean towards apologetics. Certainly large aspects of Christian tradition approve of moral intuition or sentiment; otherwise much of Paul’s imploring in his letters wouldn’t make sense and would have no motivation.
The difference of course is that the traditionalist theist must accept in large part the bequeathed machinery of religious dogma, while the non-theist can be a free-thinker (as the phrase went a century ago). Two comments. 1) Some believers respond that without historically proven moral standards anything will go and we’ll rush towards an earthly hell, or at least a Stalinist one. So the religious machinery is a feature. I find this assertion convincing in a 19th century context, less so now. 2) But actually the modern Christian doesn’t have to accept the gospels’ claims of miracles, any more than Thomas Jefferson did. So for a Deist-leaning Christian I’m not seeing bright lines between his faith and that of an atheist who espouses, say, equal treatment for all humans. As Voyager noted upthread, this is not an ancient idea: it wasn’t practiced in the days of the Pharaohs, for example. It’s just an assumption, albeit one that marks you as not sociopathic in today’s world.