If you will allow me to digress here just a bit to follow you, I agree to an extent. That is why the idea of an absolute democracy seems foolish to me. A democracy wherein the simple majority rules in any and all decisions seems unnecessarily dangerous. I have a preference for the Constitutional Republic. It allows us to define the principles under which the government should operate without giving that government too much power or too much of an opportunity to defy those principles. At the same time, it allows us to consider the widest freedom possible as one of those operating principles.
I'm not really sure that democracy is necessarily relativistic. I can see several arguments along the lines of maintaining freedom to act being made to defend democracy from an absolutist perspective. They fail to some degree unless the democracy in question is also limited to certain aspects of life. If, however, any and all questions are open to a simple majority vote (the rule of the mob, as it has been characterized), there are unnecessary dangers to freedom.
Again, this is one of the problems I have always had with relativism. It seems perfectly valid under that theory to put any harebrained scheme to a vote, or to poll the opinions of enough people to prove that it is, in fact, a perfectly valid way to proceed. We could get into a whole thing about the evils of the populist movement in America. As passionate as you are about noting that relativism does not say that all moralities are good, I am just as passionate about breaking the myth that America is (or even should be) a Democracy. Just one of my pet peeves.
Ok, it's my turn to bow out. I have to get some work done before I can enjoy my turkey. I'll check in later tonight or maybe not until tomorrow afternoon. Thanks all for a very thought-provoking thread! And Happy Thanksgiving!
Sure. The purpose of moral systems is to help guide the moral agent to make decisions which further the moral agent’s goals. This is a general statement. Note that the moral agent need not have survival or even improving his/her life as a goal.
Even if you’re using an expanded version of living, I see it as still too limiting. Survival, or even the improvement of life in general, need not be a goal of a given moral agent.
I didn’t specify rich, and I am not suspecting anything. I made a statement. Consider a happy and productive dictator vs an unhappy and unsuccessful saint. It doesn’t matter how likely you think this is. Those are the conditions of the thought experiment that I proposed.
But any given moral agent is a single entity. You stated that the suitability of a moral system was based on how a moral agent lives a life on earth. Bringing in humans in general is an additional requirement. In other words, you’ve moved the goal post from what you originally stated.
You’re being inconsistent. Immediately above you said that you were talking about the life of humans in general. Here, you say that we haven’t focused enough on the life of the individual practitioner. Which is it?
I didn’t say that a moral system requires that a moral agent no longer be a moral agent. All I said was that how long a moral agent remains a moral agent isn’t relevant. I’ll state my definition of the purpose of morality again: A moral system’s purpose is to guide a moral agent in making decisions such that the moral agent reaches his/her goals. Whatever those goals may be.
As far as I’m concerned, selfishness is perfectly acceptable. I have nothing against it, and indeed, I do see it as a virtue in many ways. However, from where I stand, you are letting your evaluation of selfishness as a virtue color your analysis. Because your moral system values selfishness, you are assuming that such selfishness ought to be part of the yardstick for measuring moral systems.
I disagree. Humans need to take actions for many reasons. Some humans need to take action to end their lives. Some humans need to take action to sustain the lives of other humans at the expense of their own.
Already done. But I’ll state it again: Sustaining life does not need to be a goal of a moral system.
Let me come at this in a different direction. A moral agent needs to be alive to make a moral choice. A moral agent needs to be able to choose to make a moral choice. These statements I agree with. However, I do not agree that a moral system needs to perpetuate these factors in order to be a moral system. A moral system needs to provide a method for the moral agent to reach his or her goals. You are assuming what those goals are.
What you see as a failing I see as a strength. Given my definition of the purpose of morality, I think you ought to be able to see why.
Just to clarify, I did not say that humans "need" fairness. I observed that humans strive for fairness, similar to observing that humans breathe and make choices. I also did not say that humans need or want to survive. I said that humans survive because they strive for fairness (and because they breathe). In other words, if humans did not strive for fairness, they would not survive (unless, maybe, they became asexual and/or immortal, in which case they wouldn't be human).
I am not saying that survival is better than non-survival in any absolute sense. I am observing humans. Humans are present to be observed because, since their "arrival", they have survived. They have survived because they have made choices that enhance the chances for survival. The system they use to make choices is a morality. They have many moralities. When selecting between moralities, they choose the one that is more "fair". They are not omniscient, and since different people work from different "facts", they may rate the same "morality" as having a different amount of fairness.
I agree. Freedom of choice is important because opportunities to choose are scarce. We got side-tracked on the kangaroo bit, and ignored that since I don’t have the option to choose to be a kangaroo, options are not infinite, and since my exercise of an option limits your choice, options are scarce. When one opportunity to choose exists and there are two potential choosers, a dispute resolution is required. If we cannot resolve the dispute as to whether to grow wheat or potatoes, we will have no food and we will die.
I think that the heart of the whole debate is right here. If there is a potato and me, and I am hungry, I can choose to eat the potato. There is one opportunity and one potential chooser and I can make a decision based on an absolute morality. However, if there are two potential choosers, there is conflict. There is one potato (and it can’t be split) and you and I are both hungry. The fairness requirement provides a way to resolve an unresolvable dilemma. Even if we don’t admit it, we both recognize that there are two relative moralities in play and that we can find no objective way to prefer one. You are not more entitled to make this choice than I am.
If God splits the potato in two, suddenly there are 2 choosers and 2 opportunities. We recognize that each chooser has half of the moral authority required to make the choice. In essence–and descriptions are getting tough, here–I trade my half of opportunity 1 for your half of opportunity 2. I will be the absolute for opportunity 1, you for opportunity 2, and the only absolute morality is that of “fairness” or “even trade”.
And we all answer to it. If you ask a dictator by what moral authority he dictates, he will try to craft an argument based on fairness. “If it weren’t for me, these people would fight all the time and not get any work done. They would die younger and have fewer opportunities to choose. They are trading opportunity to choose fighting to me in exchange for the opportunity to choose food.” Or some such.
The problem is when we start trying to rank opportunities to choose. Backers of universal healthcare start trying to rank one person’s choice to buy healthcare higher than another person’s choice to buy ice cream. In other words, they are trying to justify taking an opportunity without a “fair” exchange by using their personal “absolute” morality–that is, they are trying to corner the market on “fair”, and their arguments fall apart when you make them acknowledge that person 1 cannot “measure” person 2’s opportunities using person 1’s value system. They cannot win without admitting an “objectively” “immoral”, i.e. unfair, act.
Rhetorical question: Have you ever made a choice that you thought was truly unfair (in the broadest sense you can imagine as a human) and moral at the same time? I would suggest that you have not. Not because being "fair" is objectively "right", but because to be human is to seek fairness. If this were not so, there would be no "continuing existence" of humans, right or wrong.
it turns out that my folks started getting high speed internet access again, so i might just post more while i’m here.
i haven’t had the time to read all the new responses since last night, so forgive me and ignore me if you have already answered my concerns.
in order to give you a satisfactory answer to your question, i think i must require you to tell me what you think makes a moral system suitable. if you have specific tasks in mind, such as “allowing humans to function as humans” (or perhaps something more meaningful and specific than that–continuation of the species, perhaps), then i can easily create meaningful systems that don’t perform well at that task. if you mean suitable in that it allows them to adequately distinguish right from wrong, i don’t think i can come up with a meaningful example, but i can note that “knowing right from wrong” is itself subject to the relativistic critique.
This is precisely where moral relativism fails IMHO. I understand the general principle of not ascribing any characteristics to “moral agents”, but I don’t think such a definition exists in nature (or can exist for that matter). Imagine if you will a moral agent without goals. This is perfectly acceptable under your formulation. But can you describe such a thing to me? Can you tell me what it means even?
Possibly, but not if the moral agents are limited to humans living on earth.
No, I haven't. We are talking about an absolute system. That means it has to apply equally to all humans in the same way. Note the exchange between me and erislover about non-ambiguity.
Both. This may sound weird, but I mean the individual in general. That is, a rule which applies equally to each particular individual.
Right. But a living entity has, at least to some extent, the built-in goal of staying alive. I'm not claiming that this is the only goal; I'm not even married to the idea that it has to be the most important. But I have no idea what it means for a living being to have no goal whatsoever regarding continuing to be a living being. That simply does not make sense.
Not at all. You brought up selfishness. I simply responded with my opinion in that regard. I have never said that selfishness needs to be anywhere in the absolute moral system at all.
Yes, but not in a vacuum. Those that need to take actions to end their lives need to do so because the quality of their lives has so deteriorated that it is no longer the life they want to live. That is, it is no longer their life. Those that need to take action to end their lives for others need to do so for others whom they value. That is, for others who are important to their lives. You cannot remove the goals of the individual human in question from the equation. Just because a person chooses to end his life does not mean that his "life" is not important to him. It certainly does not mean that his morality can ignore it.
But this is only true for some kind of bizarre unliving moral actor. Life is a series of actions. That is the quality of a moral actor IMHO which requires that they have a morality.
No, I am not assuming them, I am proposing that the nature of the moral agent demands them. At least one or two of them. Again, I am not trying to define a complete absolute moral system. I am merely suggesting that unless the moral agent has no inherent goals whatsoever, then those goals which are inherent suggest an absolute way to rank moral systems.
Yes, I do. Do you understand why I do not see your definition of morality as applying to human beings living on earth?
As I said before, if you remove all qualities from the moral agents then you can postulate any thing at all as a perfectly acceptable moral system. A string of meaningless nonsense characters qualifies. Even if you only include the one character of “makes moral decisions” you can get all sorts of moral systems which do not apply to human beings. We might want to pat ourselves on the back and say that we have discovered a truth about morality divorced from our limited perspective. But I think instead, we have simply removed meaning from the terms morality and moral agents.
But again, the survival of each individual is the reason they choose to cooperate. Cooperation, or conflict resolution, is more conducive to the survival of each than a resort to force. Again, "fairness" moves down the hierarchy of morals from individual survival.
You said it yourself this way:
In other words, fairness is a means to the end “survival”.
I took just this bit out.
Allow me to suggest that you have proposed an absolute framework to judge that portion of an individual’s morality dealing with the treatment of other people. As I said above, I think that the moral system in toto needs other items above this part. But I think your formulation may be another way to attack the moral relativist position.
I am working on a definition for “living life on earth” now. I will try to post it tomorrow afternoon.
In the meantime,
Ok, please explain this then. I have suggested in the past that if we expand the definition of moral agent to "anything which can choose based on a morality" then perhaps moral relativism has a point. But even then, we have adopted a built-in task, an inherent goal for the moral agent. Imagine a moral system which does not permit the moral agent to choose any action over any other. I think I proposed such a system in a previous post. Allow me to repost it only because it is so much fun for me.
hjfiopanfoweancjowph juncawo;cfn aweui9fn awefn aweilbf iawfn wen fupwn io awen9pfbcv aeriu pn fonuip wernof neui bi binfuo nui9pernu9pcvn ofn ion fu9werfuipwer nu9 nfui pui nuio up
Or something to that effect. Under moral relativism there is no way to devise a framework, preferable to any other framework, which disqualifies this as a usable moral system. But if that is the case, then a moral agent employing even this system has to have some meaning. What is that meaning? What can it possibly mean for a moral agent to employ the meaningless string of characters above as his moral system?
Keep in mind that for the purposes of this discussion I have relaxed entirely my previous restriction on moral agents. They don't have to be alive, they don't have to be on earth, or have any other characteristic except that they have to be able to use the above moral system to make choices.
I am stumped entirely as to how that last sentence means anything at all.
note: i am not entirely caught up on this thread, so it is possible that i am rehashing things that have already been touched upon.
i think the problem with this approach, and a problem pervert will eventually encounter, is that it seems to require that there is some purpose to reality itself. if morality has a purpose where reality doesn’t (in the context of what you state here, and pervert agreed with), it is merely by definition, and that definition would have to be considered subjective.
now, just for kicks, let me propose something that just occurred to me, and let’s see if anyone finds it useful:
proposition a: a moral agent can not possibly (and non-arbitrarily) choose an option that he knowingly prefers less than another possible option.
proposition b: a moral agent’s happiness (as in, the degree to which his preferences in the outcomes of his decisions has been satisfied) can be objectively quantified (on a neurological basis, perhaps).
i feel one might be able to disagree with either of these propositions, and i welcome anyone’s challenges to them, though i admit at the moment (and it is 4 am) i can’t see any holes for myself.
now, i think the problem with pervert's definition of "morality" is that it suffers from the undefinable nature of "right" and "wrong" without a subjective system of assessment. given the propositions i enumerated above, if we take them as fact, we can give these terms some objective meaning. that is, if happiness is quantifiable, and a moral agent can not possibly (non-arbitrarily) choose an action that would make him less happy (and arbitrary decisions could not really be subject to moral scrutiny), i have created an objective definition of "right" and therefore "wrong". i believe, if we assume the moral agent has all the relevant information, that this could satisfy the requirements for an absolute moral system.
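just to make that decision rule concrete, here is a minimal sketch in python. it is purely illustrative: the happiness_score function and the scoring table are hypothetical stand-ins for whatever neurological measurement proposition b imagines, not anything that actually exists.

    # hypothetical decision rule: "right" is whichever available option
    # maximizes the agent's measured happiness; anything scoring lower is "wrong".
    def objectively_right_choice(options, happiness_score):
        return max(options, key=happiness_score)

    # toy usage with a made-up scoring table (assumed values, not real measurements)
    scores = {"eat": 7.0, "sleep": 5.5, "argue on the internet": 9.3}
    print(objectively_right_choice(scores, lambda option: scores[option]))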
i would note this is a very ironic approach to finding an absolute system, as its objectivity is based wholly on subjectivity. however, if we recognize the subjective nature of decision-making as fact, we can consider it objective. what says our apparently ever-growing (welcome all the new voices!) crew to this?
I can see where that would possibly be acceptable under what I laid out. I would not contend that such a thing really exists, however. I can’t describe it to you, because I’ve never encountered it. I would not object to saying that a moral agent must have goals of some kind. The reason I prefer the more general definition is that I cannot know, a priori, what the goals of a given moral agent are. That’s why I object to the idea of picking a particular yardstick, whether it is survival, life improvement, freedom of choice, wealth, or anything else. Without knowing what the goals of the moral agent are, there doesn’t seem to me to be a way to evaluate a given moral system that would be applicable to the moral agent.
I believe that humans are different enough in their motivations at different times that no single rule will apply to all of them at all times.
You can’t conceive of a human consciously giving up the goal of continuing to live? What about martyrs? Humans who pursue a goal that is actively improved by the death of the moral agent? What about soldiers who will accept death to achieve their goals? If you’re going to say that these people still have “life” as a goal, then your definition of life is far too broad for me.
If your definition of life is so broad that it includes everything from increased longevity through improving one’s fortunes through helping family through making a political point through suicide, then I think you’re never going to have a standard that will result in a meaningful answer. Moral systems that are suited for making a political point through suicide are diametrically opposed in at least one major particular to moral systems that are suited for increased longevity.
Well, I guess this is where we will have to disagree. I feel that assuming all moral agents, even all human moral agents, will have a given goal is a mistake.
I did not say that. Don’t make me jump up and down and stomp my feet. I said that the “inherent” striving for fairness is the reason that humans, collectively, survive. Individuals do not survive. None of them.
That has gotten me thinking, though. Is there something that moral agents inherently strive for by nature of being moral agents? See if this makes any sense to you: Moral agents strive for opportunities to make choices–which is why sometimes they choose against survival. Fairness is a means to the end of negotiating opportunities to choose with other moral agents.
No, survival of humans collectively is an “unintended” result of the moral agents’ pursuit of fairness, or something like that.
Actually, what I’m thinking is that it is more about dealing with “scarce” opportunities. There are virtually NO choices that do not impact other potential opportunities, either for the moral agent or for other moral agents. If the choice only affects the chooser, the moral agent will go by its unique “absolute” morality and seek to maximize future opportunities to choose. If the choice affects more than one moral agent, then the dispute resolution kicks in.
I THOUGHT I was pursuing the line of thought that you started, focusing on objective qualities of the agent. I keep hoping you’re going to jump in here, because I’m starting to confuse myself.
How about the proposition that all moral agents prefer to have as many opportunities to make choices as possible; that is, they prefer preferring. Even though they may be mistaken in their analysis, they will always choose to “prefer” the choice that seems to give them more subsequent chances to do more choosing.
Which I guess could be described as, they want control.
Fairness kicks in when dealing with other moral agents. If you choose in a way that is unfair, other moral agents will band together to limit your future choices.
The needs of people vary wildly; the systems that are most functional for my needs are clearly not the ones that are most functional for your needs.
I can rank what suits me best entirely nonarbitrarily; however, the choice of me as a ranker is pretty arbitrary (for situations in which I am not personally involved).
I cut to the chase:
A logical justification for a system is only as good as its axioms.
Axioms are, by nature and by definition, not provable.
Your absolute standard only functions for those people who share your axioms; for those people who do not share those axioms, it is circular and backed up only by your assertion that your axioms are true.
You’ve demonstrated this axiom lock repeatedly in this thread – I’m most aware of it in your discussion with Teine, as I’ve been discussing this thread with him off-board – and you will not find a logical means of convincing people that your axioms are superior, any more than mathematicians were capable of proving the Parallel Postulate using the other axioms.
Different axioms will lead to different moral geometries.
As I said earlier, the only one that I can see is "You gotta make choices". I hold that those choices include not only choices of action and inaction, but also choices of when to adopt modifications to the system I use to make those evaluations in general, so as to better suit my needs.
i think your formulation might be encompassed by mine. that is, if an agent prefers to create choices or prefers fairness, they do so because it will increase their overall happiness. my propositions are more or less an attempt to define preference, or at least quantify it. i’m saying that an agent will prefer something iff it will increase his happiness, and that that happiness can be measured.
No dispute; I think I'm trying to carry it a little farther. That is, each moral agent's "happiness" is directly tied to that agent's understanding of what will provide the greatest number of opportunities to be the chooser. In most cases, this would be choices that increase the chances of survival and/or produce healthy effects in the body being occupied (in this case, a human body).
Unfortunately, I don’t think I’m deeply grounded enough in the philosophical underpinnings of the argument in general to carry it too far or to know when I’ve completely wandered off the map. That’s why I’m kind of hoping someone here will either slap it down or try to make it work in a more rigorous way.
As an aside, one of the other posts referred to soldiers and got me thinking. It seems that moral agents may even try to “make choices” after their existence ends. An example would be a will. A person leaving a will is trying to make choices in advance of the time when he/she will be able to make them personally. And people trying to get to heaven are really trying to extend the choice-making horizon.
For all I know, I've completely wandered out of the appropriate territory, but I'm finding this line of thought really interesting as a way to toy with other ideas. Like this, for instance: As humans, we can't ever "know" anyone's moral framework but our own. Often, though, we try to guess at others' frameworks using empathy. If we posit a God, and we posit omniscience (or something close to it) for that God, then a distinguishing feature of that God would be the ability to actually know and understand ALL of our moral frameworks. If we further posit that God had a purpose in causing all of this reality to be, the only way we could guess at God's purpose would be to try to guess at the thoughts of God.
In other words, if there is a God, and if we would know God’s purpose, we must practice empathy. In fact, the practice of empathy would be the most productive activity to engage in if we wanted to be “close to God”.
I know there are probably lots of places in there where someone can go hog-wild shooting it down. Still, I think it is an interesting line of reasoning. And I like the result, because I am becoming increasingly convinced that empathy is what we need more of, and desire to control each other is what we need less of.
Anyway, food for some pretty interesting thought, no?
No, I don’t think so. I think most of us agree that reality does not have a purpose. It simply is. However, some things in reality act in purpose (or goal) driven ways. That is, some things in reality have a purpose, but the purpose does not go beyond themselves. They are an end in and of themselves.
This is, if I may expound my opinion a bit, the moral agent. A human life is not the means to some other end. It is an end in itself.
I’m not sure how a definition can be considered subjective in the relativist sense. Are you saying that we could simply redefine “morality” to mean something which does not have any purpose whatsoever and that this new definition would not be preferable over the standard one? Doesn’t this mean that all words are up for grabs?
I'm not exactly sure what you mean by "non-arbitrarily" in this sentence. Are you saying that a moral agent will not choose a lesser value? Or do you mean that he is incapable of doing so?
Again, I'm not exactly sure I am following you, but are you proposing that the happiness of a moral agent is entirely derived from the fact that his decisions were carried out? If so, this seems like a very narrow definition of happiness. Some of the happiest moments of my life have been when I was denied the outcome I strived for. Also, I should note again that you may be ignoring simpler and more essential characteristics of a moral agent. How happy can a moral agent be if he is dead? Yet if he decides to eat poison instead of food, then, his decision being carried out, he should be very happy.
I think you are basically formulating a moral system based on the idea that whatever makes the moral agent happy is good and everything else is bad.
I’m not convinced that happiness qualifies as sufficiently objective and unique for the requirements that erislover laid out. I’m not entirely sure you are on the wrong track, but I’m not sure you’ve found it either.
I understand and sympathize with this entirely. I agree entirely that if there are no goals which humans have in common, we have no framework (to use the relativist term) from which to evaluate moral systems. However, I do not think it is true that we cannot construct some set of goals which are common to all humans. As I have said a couple of times, if you expand the definition of moral agent more widely (including aliens, robots, or even rocks, for example), then I agree that moral relativism applies.
However, I do not think that such a situation occurs in reality. I’m going to go into more detail with a response to Lilairen below. Suffice it to say that I think there are indeed certain goals that each and every human has in common.
I disagree. Even if there is no way to formulate a specific rule that applies to all humans (everyone must wear hats, for instance) there are certainly generalizations which do apply to everyone. We are all animals of a certain species, after all. That means that at the very least we have certain physical abilities and limitations in common.
No, I did not say that. I said I cannot conceive of a human living his life with the goal of ending his life as the highest good he can think of. I have no problem with martyrs or soldiers. It simply seems self-contradictory to have a method for choosing right and wrong which precludes him from choosing at all.
Well, yes, perhaps, but not according to relativism. Moral relativism states that there is no way to prefer one system over the other except by adopting one of them.
Doesn’t that depend on the goal? After all, our similarities are far greater than our differences.
But “humans” don’t survive unless individuals do as well. I agree that individual humans don’t survive for all time. But then, neither do groups. We are a fairly young species, after all.
Again, I quite agree that fairness is a useful simplification of the sort of moral system we are looking for as far as interaction with other moral agents goes. However, I still feel that it is not sufficient for the highest of highs in such a moral system. If fairness is the only measure of moral systems, then a person cut off from all human contact has no need of moral systems.
Yes, I got the feeling that this was the formulation you were striving for. I think this is backwards, however. What does it mean to seek fairness alone on a deserted island? Do morals only apply to the interactions of moral agents? Or do moral agents use their morality to decide on actions that only apply to them?
Yes, but I think that unique (in the sense that you mean it) and absolute are contradictory here. Additionally, even if you only use "unique", that morality has to be the framework from which the various "dispute resolution" methods can be evaluated. Morality is a method for moral agents to make choices. It is not a method (exclusively) for moral agents to resolve conflicts. Now, since resolutions to conflicts require choices (by any involved party), morality naturally plays a role. But morality itself is a much broader concept than this.
I agree. You were. As I said, I quite agree that fairness is a useful simplification for that aspect of morality dealing with other moral agents. But I don't think it is sufficient for defining a moral framework.
I think it would be much easier to defend the inclusion of “fairness” in an absolute morality based on the survival of individual agents rather than the reverse.
Yes, indeed. This is an interesting formulation. It is essentially the same one taken in the article I linked to the other day. I find this intriguing, though I am not sure there isn't a hole in it. I think we could derive a desire to survive from a desire to continue being a moral (choice-making) agent. I also think we could derive the concept of fairness from it.
Is there some thorn in your paw I refused to pull out?
I have to disagree with this entirely. Remember, please, that we are both human beings. We both have a need to breathe, drink, eat, sleep, and a whole host of other physical needs. We also both have a need to think. Thinking is the only faculty we have which allows us to meet those other physical needs. The need to think comes with a whole host of other psychological needs I don't think I have to elaborate on. However, one of them could be called, perhaps, the need for contentedness. Ramanujan might prefer to call it happiness, but I think contentedness describes it better. It is the need to "feel" that all is right with the world. It is the best description I have for what I feel is a very common human spiritual need.
So, we have physical, mental, and even spiritual needs that every human has. I agree that many humans will satisfy these needs in very different ways. This is why I have said all along that if we go too far down into the moral hierarchy, we will get bogged down in the minutiae of whether or not to wear hats. But we can agree, I think, that we all have heads. And that keeping them attached and healthy is a common goal for all humans. Even for those who do not recognize it as such a goal.
This is the basic argument that nothing is provable. It is quite easily dispensed with thusly:
Very well, I do not accept these axioms. Prove them.
Again, with the veiled insults of some sort. I have to ask again if there is some way that I have offended you? I truly have never meant to do so. Did I ever say something (in another thread, perhaps) that offended you?
Goodness gracious. I am most certainly not trying to prove that "my axioms" are superior, any more than I am trying to prove that there is an absolutely correct religion. Read my posts again. I am always careful to couch my suggestions of any part of such an absolute morality in a way that indicates I am not proffering it as such. For instance, I have talked quite a bit about survival as one way to evaluate moralities. I have never, on the other hand, suggested that the survival of the moral agent is the highest of the highs in an absolute moral system. I have hinted that it needs to be near the top. And I have hinted that I think it can be used to differentiate in an absolute way between moral systems which place it at the bottom. Or perhaps those which place its opposite at the top. But I have never said that it is the principle on which all others must be based.
Quite. And some of those different moral geometries may not be suitable for humans living on earth. Go back and look at the argument I made to Teine that such an axiom includes strings of nonsense characters as moral systems. I am not disagreeing with this axiom. I am simply pointing out that there is a certain set of axioms which may indeed apply to all human beings. I am not proposing any of them. I have laid out a couple of possibilities. I am more interested (in this thread) in demolishing the moral relativist position that no such axioms can or do exist.
I believe that you are correct. The purpose of a morality is that we (humans in particular, but moral agents in general) have to make choices. This may also be a slight ray of hope. Could you sign on to an absolute method of evaluating moral systems on the basis of their power to make choices? Can we at least eliminate the sequences of nonsense strings absolutely?
I think that if we can, then the most egregious aspects of moral relativism are abolished.