This is something I’ve given a great deal of thought to in recent months. I’m not really sure how it lines up with other perspectives, but here’s how I’ve broken it down.
We can model our decisions as a tree, with nodes representing the states in which one faces a moral decision and edges representing the choices that lead to the next such decision. It’s not a perfect model, but it’s good enough that we can apply concepts like minimax from state-based games. In chess, the best move is the one that maximizes your chances of winning against the opponent’s best responses; by the same token, the most moral choice is the one that does the same for a particular moral goal.
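To make the analogy concrete, here’s a minimal sketch in Python of minimax over a toy decision tree. Everything in it (the tree, the numbers, the very idea that an outcome can be scored with a single value) is invented purely for illustration, not a claim about how moral value could actually be computed.

```python
# Illustrative only: a tiny decision tree where each node is a "state" and
# each edge a choice, evaluated with plain minimax. The structure and the
# terminal values are made up for the example.

def minimax(state, maximizing=True):
    """Return the best value achievable from this state, assuming the
    other side (an opponent, or an indifferent world) responds optimally."""
    if isinstance(state, (int, float)):  # leaf: an outcome we can score
        return state
    values = [minimax(child, not maximizing) for child in state]
    return max(values) if maximizing else min(values)

# A toy tree: two choices now, each leading to two possible responses.
toy_tree = [
    [3, 5],  # choice A: the response can force 3 or allow 5
    [2, 9],  # choice B: the response can force 2 or allow 9
]

print(minimax(toy_tree))  # -> 3: choice A guarantees at least 3
```

The part that matters for the analogy is that the “best” move only means anything relative to the scoring at the leaves, which is exactly where the question of a moral goal comes in.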
Unlike a state-based game, however, there is no single objective goal state, no equivalent of checkmate. Fortunately, that’s not much of a concern, because even in a game like chess, which does have a definitive goal state, it’s usually not possible, at least in interesting mid-game positions, to calculate paths from the current state to all possible leaves and then pick the best one. Instead, we’re stuck looking a certain distance ahead, using heuristics to estimate which states look good and calculating only those paths. These estimations and heuristics are the analogue of the moral guidelines we put in place for ourselves, guidelines that are ultimately derived from that end goal.
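Continuing the sketch above, here’s a hypothetical depth-limited version: instead of searching all the way to the leaves, we stop after a fixed number of moves and fall back on an evaluation function. The `successors` and `heuristic` parameters are stand-ins I’ve made up for the example; in chess they’d be the legal moves and something like a material count, and in the moral analogy they’d be the options available to us and our rough rules of thumb.

```python
# Hypothetical sketch of depth-limited search with a heuristic evaluation.
# `successors(state)` and `heuristic(state)` are assumed stand-ins, not any
# real engine's API.

def best_value(state, successors, heuristic, depth=3, maximizing=True):
    """Estimate the value of `state` by looking `depth` moves ahead,
    then trusting the heuristic for whatever lies beyond that horizon."""
    children = successors(state)
    if depth == 0 or not children:
        return heuristic(state)  # can't (or won't) look further: estimate
    values = [
        best_value(child, successors, heuristic, depth - 1, not maximizing)
        for child in children
    ]
    return max(values) if maximizing else min(values)
```

The shape of the calculation is the point: how far you can afford to look ahead and how good your heuristic is determine how well you play, which is the parallel I draw next.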
As such, our moral choices can be likened to the range of skill levels we see among chess players. A novice understands checkmate, but probably just assigns rough values to the pieces and doesn’t look very far ahead. A more advanced player has more sophisticated ways of evaluating positions and looks farther ahead. And a theoretical machine with sufficient processing time and memory could calculate an optimal path: an objective morality for a particular goal state.
So, really, I don’t think it’s all that interesting to discuss the rules themselves. They necessarily derive from a chosen goal, from our ability to understand the consequences of our choices in future states, and thus from our skill at coming up with effective estimations, rules, for approaching that goal. It’s tempting to pick a moral-sounding goal state like maximizing happiness or minimizing suffering, but those goals are themselves derived from our innate morality, which evolved as a way of preserving our species and our societies.
And the conclusion I’ve reached is that morality, like evolution, has no “goal” per se, yet we end up with apparent rules nonetheless. The best candidate goals I can come up with are something along the lines of survival or, more interestingly, freedom: fewer restrictions on future states and more options at each moral choice mean that, essentially regardless of what the “goal” is, or even whether there is one, you will typically end up with a better approximation of it.
So our morality evolves, not unlike the way we do. This explains why morals and ethics continually change, why some things may have seemed fine at a given point in the past but are morally reprehensible today, and how we can break new moral ground as technology and culture introduce us to situations we’ve never faced before. In the long run, it always seems to trend toward greater freedom, both in our choices and in our lives.