Is morality a human construct?


You’re right, I did not. I was invoking a self-referential aspect of the ideas we are talking about. Let me answer some of your other questions, and then I will sum up.

I agree that some would argue they are not basic components of reality. But I would argue that to deny abstractions as basic components of reality is to deny sentient beings as basic existents. I’m not sure how you do this from your current position. I agree that we can talk about “all reality except for sentient beings”, but then we could also talk about all reality except for stars, or galaxies, or anything else.

I understood.

I agree that consciousness making a choice is better. What I was trying to avoid was a formulation which allowed choice to be something separate from consciousness. I did not think you were trying to do that, so I called it a nitpick.

Ah, but morality is a property that applies to conscious agents as well. In fact, it is a property which applies to conscious agents making choices. Purpose is probably too anthropomorphic a word. I did not mean to imply that morality had desires. I meant that the conscious agents which employed morals did so because of a need, or to fulfill a purpose. That need, that purpose, is to have a system with which to judge the value of choices at hand in the context of life as a whole. Specifically, given a particular choice in a particular moment, a conscious entity needs a method of determining how that choice fits in with its long term values. My contention is that morality serves this purpose.

That is correct. One could claim any particular set of values as a moral system. The idea I was trying to suggest is that they are not all equally valid. That is, morals serve a purpose. Any particular set of morals can be judged as serving that purpose well or not.

Well, be careful. My formulations are crude, but I did not mean to suggest that your life, for instance, could be put in a higher place than mine in my moral system. I hope this is not what you meant by absolute good. What I mean by placing the life of the moral actor at or near the top of any moral system is that without that moral actor the moral system is meaningless. If we postulate a moral hierarchy of values for Tom, for instance, what does it mean to not include Tom’s life near the top of such a hierarchy? What I am saying is that any set of morals (which Tom needs to put choices in the context of his life as a whole) which do not include his life near the top will necessarily not serve Tom’s purpose well, and thus will not be realistic.

Agreed. Allow me to sum up.

Morality does not exist in reality in the same way as electrons do. It is not a characteristic of any inanimate material (excepting that sentience is). A particular morality is a hierarchical set of values for guiding the choices made by a particular conscious being. As such, its purpose is to serve as a guide for making choices. I realize this is somewhat circular, but I am ignoring aspects of consciousness such as the lack of omniscience. Unless I am much mistaken about what we mean by “universal morals”, the term means some moral which can be said to exist in all rational conscious beings. You are correct to raise the issue of certainty. I am proposing that the essential nature of the existents we are talking about provides this.

  • A rational being must have a tool for making choices in order to make choices.
  • Morality in general provides such a tool.
  • Specific moral systems can be judged on the basis of how well they serve this purpose.

I am proposing that the value of an individual’s life amounts to the ultimate value which must be held by that individual for his moral system to be considered to serve this purpose well. The conscious being cannot be said to make choices if he is no longer alive.

How did I do?

Did I really post exxential? I meant existential. And I meant it in the sense that abstractions which only apply to sentient beings do in fact apply to aspects of reality.


Yes, but would you be so kind, please, as to point me to the one (or several) that you feel explicitly make the case for explaining the origin (as opposed to the propagation) of morality.

I will focus upon whatever you point me to that defends your assertion. In your last post, you mentioned game theory and evolution. You haven’t specifically mentioned it, but your discussion obviously references theories of complex adaptive systems as well. Again, I am open to whatever interdisciplinary approach you wish to bring to the table to support a theory for the origin of morality, I’m just waiting for you to present the actual argument.

Okay. What tools, exactly, does one need to synthesize in order to probe the origin of morality? An emergent phenomenon in a cas is nothing more than a descriptive label for aggregate behavior that is determined by the lower-level behavior of the agents (or meta-agents aggregated from a still lower level). Saying “morality is an emergent phenomenon” is well and good, but it doesn’t really answer, or even necessarily ask, any of the interesting questions. Now, if you can show me exactly what agents and flows and mechanisms etc. go into the model from which morality emerges, then we might have some very interesting things to talk about.
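For illustration only, here is the kind of toy model I have in mind (the agents, the rule, and the topology are all invented for the example, not drawn from any particular cas in the literature): binary agents on a ring, each updating by a purely local majority rule. The “emergent” description (“the population settles into stable blocs”) is just a convenient label for what the low-level rule produces in aggregate.

```python
import random

def step(states):
    """One synchronous update: each agent adopts the majority state
    among itself and its two neighbors (ring topology)."""
    n = len(states)
    return [
        1 if states[(i - 1) % n] + states[i] + states[(i + 1) % n] >= 2 else 0
        for i in range(n)
    ]

def run(n=60, steps=30, seed=0):
    """Random initial states, then repeated local updates."""
    rng = random.Random(seed)
    states = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        states = step(states)
    return states

# Nothing in `step` mentions "blocs"; that label belongs to the
# aggregate description, not to the agent-level rule.
```

The interesting questions (which agents, which flows, which mechanisms) are exactly the ones answered by writing `step` down explicitly; the emergent label alone answers none of them.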

Certainly, though one should be careful in the introduction of non-deterministic variables if the agent’s rule set is designed to describe a real-world element operating above the quantum scale.

Sure. Again, this tells us very little about the origin of altruism. It tells us only whether altruism confers a competitive advantage in a given system. (And even then is limited in scope according to the balance of other behaviors modeled within the system.)
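To make that limitation concrete, here is a minimal sketch of such a system (the payoff structure and the assortment parameter are my own invented assumptions): it can tell us whether altruists outscore defectors under given mixing rules, but nothing about where the altruistic disposition came from.

```python
def average_payoffs(frac_altruists, benefit, cost, assortment):
    """Expected one-round payoffs in a simple donation game.

    Altruists pay `cost` to confer `benefit` on their partner.
    `assortment` is the extra probability of being paired with
    one's own type (0 = random mixing, 1 = perfect sorting).
    """
    p = frac_altruists
    meet_altruist_given_altruist = assortment + (1 - assortment) * p
    meet_altruist_given_defector = (1 - assortment) * p
    altruist_payoff = meet_altruist_given_altruist * benefit - cost
    defector_payoff = meet_altruist_given_defector * benefit
    return altruist_payoff, defector_payoff
```

Under random mixing the defector always comes out ahead; with enough assortment toward one’s own type the ordering reverses. The verdict is entirely a function of the balance of behaviors and pairings built into the system, which is the point above.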

Of course not. Emergent phenomena are by definition the result of the selective processes of an adaptive system.

The real question (in the context of this thread, at least) is whether morality can be demonstrated to emerge from a system whose design is sufficiently close to a “primordially conscious world” that we have confidence that the model is telling us something significant. Even then we would be wise to remember that the rules of a cas are simply convenient ways to describe the behavior of the agents. There is no implication that we would find a similar “rule” if we were to “dissect” the real world element whose behavior we are modeling.

Not precisely. We could argue that the behavior of societies, cells, and humans can be modeled by the same class of systems (or alternatively that they can be described fully by referencing the same set of properties), but rule sets are specific to each cas.

The key word in your sentence, though, is if.

Oh, and I might pick a nit that behavior at any particular “agent level” of description is basic. What emerges is always an aggregate of behaviors that we consider similar enough to feel comfortable describing with a single label. It is true, though, that any particular agent-level might itself be an aggregate of some lower-level behavior.

So, you are prepared to assert that evolutionary fitness is the only possible moral standard?

I am a stronger advocate of science than many, but even I am not prepared to declare that science is the only worthwhile method to apply to any question. I’m afraid I simply cannot see the justification in throwing away millennia of human thought on aesthetics, metaphysics, epistemology, logic, theology, etc. simply because the studies happen to fall outside the realm of science. If it’s not scientific, there’s no hope of studying it scientifically.

Well, that is a question of ethics, I would think. Of course, how your scientist is supposed to answer such a question is a bit of a mystery.

For myself, I think that a human being should seek truth in any manner that seems promising.

It’s been a long day. I’ll have to answer you tomorrow, I think.

Tell you what: I picked a bad page, in that it doesn’t reference some of the game theory stuff I’ve read about. Instead, as the reference “The Evolutionary Origins of Morality: Cross-Disciplinary Perspectives” clearly states, there’s an interdisciplinary approach to be taken to, well, the origins of morality.

Having said all that, I’ve been bored of Huxley for 15 years. You should be familiar with Pinker, who is referenced on the page, and who I like a whole lot. You’ll also know Chomsky, I’m sure, who Pinker helps to deconvolute somewhat. Frankly, I love Chomsky, but he gives me a headache; not for mere mortals such as I. Again, where’s the fun in all that? All have discussed the pros and cons of these arguments, and some a long while ago at that. This is not a new argument.

But I like new information. So here’s an idea…

Let’s both agree to buy Evolutionary Origins of Morality, and then discuss it. I’m going to buy it anyway, because this discussion has gotten me in the mood. I cannot find a more concise title that addresses your specific arguments. There is even an entire chapter on “Game Theory, Rationality and Evolution of the Social Contract”.

I think this book may contain about as much as my poor brain could handle between two covers, and is about as comprehensive a compendium as I can dig up. Thirty bucks; I’ll do it. If you demur, I’ll understand, because it’s certainly not free. But for me personally, I should put my money where my mouth is, and learn or re-learn as much as I can on the subject.

I’ll be back!

The book looks quite interesting, but I don’t know that I can commit to buy it right now. I have added it to my “wish list”, though, and I will consider moving it to the very top. I have some business travel coming later this month that might provide me the opportunity to read it (or at least a good chunk of it.) As an aside, I have never read Pinker, and I know Chomsky primarily from his work in linguistics and his political activism. Have you read Holland? If not, you should check him out. His work on complex systems is fascinating.

I do have a concern, though, that perhaps we are talking at cross-purposes, here. I have stated that in my evaluation the element upon which a moral valuation “acts” is “a consciousness making a choice”. Now, that clearly implies consciousness as a meaningful referent. So, for me, any treatment of the origin of consciousness that attempts to derive it as an aggregate from “pre-human” evolutionary forms must meet one of 2 criteria:[ul]
[li]Establish or postulate that the precursors under consideration were conscious.[/li][li]Establish the origin of consciousness somewhere in the chain of development.[/ul][/li]Now, I personally have some sympathy for the idea that morality was, in fact, “instantiated” in the instant that an agent made its first conscious decision, an idea that might flow well from one of the essays in the book you found for us.[sup]1[/sup] However, all that really does is turn the question “how did morality originate?” into the question “how did consciousness originate?” That’s a fun question, too, of course. But I keep getting this feeling that you view the context of this question quite differently from myself.

Do you, in fact, accept the existence of consciousness as a meaningful semantic referent? You have not, I think, plainly taken a position on either side of that question, but some of your earlier comments make me suspect that you do not ascribe a semantically meaningful (or existent, if you prefer) referent for consciousness. If that is the case, then I think we will spin our wheels fruitlessly in discussing the origins of a phenomenon for which we apply contradictory definitions.

So, let me ask you plainly these 2 questions:[ul]
[li]Do you believe consciousness exists and has an influence (not necessarily deterministic) on the behavior of the conscious agent?[/li][li]What is your definition of morality?[/ul][/li]
[sup]1[/sup][sub]There’s also the possibility that the essays take no account of consciousness at all, restricting their analysis to behavioral models that gain no complexity in the lowest level [though I am sure they develop hierarchical complexities] as the system progresses “up” the evolutionary ladder. In that case, I would find myself having to say, rather disappointingly, “what is modeled is not humanity.”[/sub]

I think we are in accord about cityboy’s position. I will let him drive the debate on where we draw the “abstract line” to delineate basic elements of reality. (If he chooses to do so, of course. :wink: )

Yes, but the relationship is not transitive. [ul][li] Morality is a property of conscious agents. [/li][li]Purpose is a property of conscious agents. [/li]BUT
[li]Morality is not a property of purpose.[/li][li]Purpose is not a property of morality.[/ul][/li](I apologize if drawing it out like that is too pedantic. I just find it helpful to establish such points with as much clarity as I can muster.)

Okay, but I still am concerned about a couple of things. Originally, you said, “What we need is one more step along the path of identifying moral actors . . . That step is the purpose of morality.” From that it appeared you were pointing to another element present in the moral valuation itself. From your more recent post, though, you appear to be referencing some functional description of morality as an element of consciousness. I am not certain I understand the appropriate context for the concept you are trying to convey, so I fear I have little hope in grasping the concept accurately. Can you clarify for me the sense in which you want purpose? (or a less anthropomorphic word if you prefer–I am happy enough to adapt to your usage once I understand it.)

Second, if it is indeed a functional description that you are pointing to, can you tell me what relationship, if any, you postulate between a moral valuation and the functional description of how/why a conscious agent enacts a moral valuation?

No worries. I was not going down that path. My objection, quite simply, is that I do not know why I should place a particularly high value upon my own continued existence as a moral agent, unless (as loopydude seems to argue) continuation itself is asserted as the final arbiter of morality. To give just a few alternatives:[ul]
[li]I can place the continuation of 3 brothers, 5 uncles, 9 cousins, etc. above my own (genetic altruism)[/li][li]I can place the continuation of 2 strangers with capacity for good as great as my own above my own continuation. (utilitarianism)[/li][li]I can place cessation of my suffering above my own continuation. (ethical suicide)(several branches of Buddhism)[/li][li]etc.[/ul][/li]

Only if we assert that every moral system is tied inextricably to the individual conscious agent and cannot be meaningfully communicated to any other conscious agent and is meaningless once it is no longer actively practiced. If those conditions don’t hold, then all Tom’s extinction means is that Tom no longer practices that moral system.

For myself, I agree with the first assertion but not with the second or third. Since you are arguing for the existence of at least some universal moral principles, I have to think that you would deny the first (at least).

Well, here we get back to my questions about “purpose” as a functional description (or not). I will wait to address this fully until after I gain a better understanding of your position. For now, I will simply repeat that I see no reason to necessarily make Tom’s continuation the ultimate arbitrator of Tom’s morality.

I don’t think you are mistaken, but I think the term can be used in a number of ways.[ul]
[li]As a descriptive (and as you described), a moral valuation active for all moral agents[/li][li]As an imperative, a moral valuation that should be active for all moral agents. (where should can be invoked in a number of ways.)[/li][li]As an external absolute, a moral valuation that exists external to the agent and against which the agent’s valuation can/will be examined for conformity.[/li][li]As a utilitarian ideal, a moral valuation that would result in a greater good if it were to be active for all agents.[/ul][/li]There are probably more, or at least variations on the above. The descriptive sense is pretty common, though, and I am happy enough to settle on it as the one appropriate for our discussion.

  • Well, so long as we accept a very loose standard for “tool”.
  • I agree, so long as we understand it is not the only such tool.
  • I don’t see how. The only need established above is the need to make choices. I suppose that standard would allow us to “disqualify” a morality that prevented an agent from making any choices, but beyond that we haven’t established any measure against which to prefer one set of choices over another.

I agree with the second sentence (with the caveat about biological association). The first sentence, however, I can find no reason to accept as axiomatic.

Pretty well, I think. I feel we are approaching an understanding (if not an agreement.)

I can’t resist disagreeing with the majority opinion. Morality is universal and constant and therefore not a human construct. Morality is as existent, where two sentient beings are present, as gravity, where two bodies of mass are present. If one does harm to another through purposeful action in any degree they may not understand their lack of morality. However, the one to whom harm is inflicted will be aware without exception that the action is wrong.

Human beings get twisted up, thinking about what is right and what is wrong, taught by culture and dissolute in religion, but morality is simply defined: do not harm another.

Like gravity, morality does not exist in a vacuum, but it exists.

However, I do enjoy Pervert’s definition of morality:
“• A rational being must have a tool for making choices in order to make choices.

  • Morality in general provides such a tool.
  • Specific moral systems can be judged on the basis of how well they serve this purpose.

I am proposing that the value of an individual’s life amounts to the ultimate value which must be held by that individual for his moral system to be considered to serve this purpose well. The conscious being cannot be said to make choices if he is no longer alive.”

My only disagreement is that I believe a specific moral system can only be judged by how well it avoids doing harm to others. We cannot judge our own morality as individuals unless we observe the effects of our own behavior on others.

As a Popperian utilitarian, my proposed “morality” (i.e. the “law of suffering”) is that suffering should be minimised overall, i.e. minimal suffering = “good”, maximal suffering = “evil”. (I am, incidentally, fairly sanguine about not arbitrarily restricting morality considerations solely to the consequences of the decisions of sentient calculating machines, but allowing ‘natural’ events to be described in such a manner also. Thus one might say the calculating machine called Hitler was “evil”, as was the Bam earthquake.)

From an enlightened self-interest viewpoint, a minimisation of suffering would reduce the probability of me suffering in future. If we can all agree that we don’t enjoy suffering and all wished it didn’t happen to us, and that those of us without a condition known as “psychopathy” feel some generalised empathic discomfort when we witness the suffering of others, I would suggest that this is a reasonable justification.

We can’t. Of anything. Descartes’ Devil might be delighting in our confusion as he feeds the nonsense called logic and maths into his brain-filled jars. Certainty is a myth. One can only agree with another what is certain beyond reasonable doubt. All moralities and philosophies have their “ah, but what about…” points. Utilitarian physicalism is the only one without fundamental weaknesses, IMO.

Well, firstly, I don’t deny “consciousness” is a necessary semantic referent, though what it is exactly I’ve never been able to define well for myself. I guess I would take the lead of the cognitive neuroscientists who do observe physiologic or pathophysiologic conditions that alter consciousness, as well as perform experiments using agents (like drugs) that are known to alter consciousness. Such work reveals the changes in brain activity correlating with changes in consciousness, and allows one to hypothesize what regions of the brain talk to what other regions, from the cellular level on up, to bring about the conscious state. I guess these hypotheses get lumped together as “globalist theories”, generally postulating the existence of a complex neural network linking limbic and cortical functions, etc. to yield the phenomenon. I guess I prefer these approaches/theories simply because they can be tested rigorously. I don’t think a concise and unambiguous definition of consciousness can be derived yet from such work, but there is the assumption, I think, that with greater understanding of cognitive neuroscience, consciousness itself might be reverse-engineered.

I think I would take the simple dictionary definition of morality: A system of ideas of right and wrong conduct. Of course, that’s just a bland description of the phenomenon of morality. I don’t think a deeper definition could be obtained without discussion of morality’s origins, which would then reveal what kind of phenomenon morality is in a reductionist manner. I think I can better provide such a definition with more reading, so as to make certain I’m not talking out of my arse.

If you don’t mind, I’m going to concentrate on the discussion of the summary. I think it will avoid my posts becoming unwieldy. If any of the other questions you asked are skipped or not addressed directly enough, please ask them again and I will address them.

Of course. Consciousness itself could be described as such a tool. Morality is the tool which allows a consciousness to evaluate choices in a context larger than the immediate moment.

Also agreed.

Perhaps I brushed over it. The function of morality is to provide a valuation tool for choices made by a conscious being. One can postulate all sorts of moral systems. But unless they serve the function of a tool for making choices by a conscious being, they are less valid, less realistic than moral systems which do serve this function.

To take an extreme example, I could postulate a moral system based on the moral supremacy of flipping a coin. I could build an entire system designed to reduce any choice to 2 options and select from them by flipping a coin. So, whenever I had the choice to eat or not, I’d flip a coin. Whenever I had the choice to drink water or poison, I’d flip a coin. What I’m suggesting is that because this system is demonstrably not suitable for long term survival, it is therefore demonstrably not suitable as a moral system for a conscious being.
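A toy simulation makes the point (the five-missed-meals rule and the one-year horizon are numbers I invented purely for illustration): an agent whose choice procedure is a coin flip almost surely hits a fatal streak, while a survival-directed rule never does.

```python
import random

def survives(days, choose, seed=None):
    """Run `days` daily eat/don't-eat choices.

    `choose(rng)` returns True (eat) or False (skip). Five skipped
    meals in a row is fatal; any meal resets the count.
    """
    rng = random.Random(seed)
    hungry_streak = 0
    for _ in range(days):
        if choose(rng):
            hungry_streak = 0
        else:
            hungry_streak += 1
            if hungry_streak >= 5:
                return False
    return True

coin_flip = lambda rng: rng.random() < 0.5   # the coin-flip "moral system"
always_eat = lambda rng: True                # a survival-directed rule
```

Over 365 days the chance of never producing a five-tails streak is well under one percent, so nearly every coin-flipping agent dies, while the survival-directed agent trivially does not.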

I agree with the second sentence (with the caveat about biological association). The first sentence, however, I can find no reason to accept as axiomatic.

What I mean is that a tool for making choices which makes it harder to make choices is not a very useful tool. A moral system which makes it harder to remain a conscious choice making being is not a very good, realistic, or useful tool.

If we agree that a conscious being must be alive to make choices, and if we agree that a moral system guides those choices, then a moral system which places the life of the individual low enough is counterproductive to its own function. That is, the function of morality provides a basis of evaluation (how well does a particular system fulfill this function) and at least one universal moral (the life of the conscious being).

I should point out that I am not necessarily saying that the life of the individual has to be at the very top. I think it should be, but for the purposes of this discussion I am trying to grant some wiggle room in this. Also, I feel I should point out, again, that I am not talking about life in the strictly biological sense. One could be hooked up to machines in a chemically induced coma and be said to be alive in that sense. Such a person, however, is no longer a choice making conscious being. I’m talking about life in the sense of an active choice making conscious being.

I would have to quibble a bit here. People can be convinced that many kinds of harm are not in fact harm. People do all kinds of self destructive things without realizing they are self destructive. I’m not really sure that everyone will realize that harm is being done every time. Additionally, harm can be a very long-term or wide-ranging phenomenon. Something which feels very good can in fact be deadly poison.

Well, I disagree. I think it can only be judged by how well it serves the individual. However, I can address your concern.

Individuals do not live in vacuums either. Humans live in societies. A moral system which exalted the benefit of an individual regardless of, or to the detriment of, the benefits of others would not be a very good system for the individual. That is, in many instances, harming others can be said to be harmful to oneself. The noted exceptions would be when oneself is directly threatened. So, self defence is allowed, but murder is not. This goes beyond the scope of the OP; I am merely trying to demonstrate that having the life of the individual as the end in itself at the top of a moral hierarchy does not necessarily imply the sort of thing we often associate with selfishness.

As mentioned above, it is trivially easy to assert a moral principle and claim it is universal. It is considerably more difficult to present a convincing argument for why this assertion should be given serious consideration by rational beings. Are you up to that challenge?


The contrapositive of Mill? Okay. Have you given any thought to the metric by which suffering can be quantified across moral agents? (For that matter, do you extend the metric to non-conscious agents capable of sensation? It would seem that you might, given your willingness to consider non-conscious agents as “carriers” of morality.)

Am I correct, then, in stating that you consider outcomes to be the appropriate element upon which to exercise a moral valuation, and thus you consider morality to be a property of state rather than a property of action?

I don’t think you can make that case unless you posit a uniform distribution of suffering through the population of “sufferers” (in which case it is trivial). I think my own experience with beings that can feel pain would lead me to reject any probability model of suffering that is perfectly uniform across the range of all beings.

I think you would extend the label “psychopathy” to every human being on the planet under that definition. (Slapstick and sports alone knock out a large percentage.) Or else you are drawing some line between “important suffering” and “unimportant suffering”.

Personally, I doubt even that much, but I don’t want to twist this into a debate upon specific epistemologies. I just wanted to highlight the inescapable element of uncertainty in moral valuations (at least those made by human beings.) In other words, I think that any treatment of morality that does not address the question, “What makes us confident that this answer is correct?” will necessarily be incomplete.

Any argument to that effect, of course, can be quibbled to death over epistemological uncertainty. There has never been a rational refutation of nihilism beyond: “I choose to proceed otherwise.” But without descending to those depths of futility I think it is still useful to understand the foundation upon which an assertion of moral principle is built. Case in point . . .

I would argue that it is one of the worst contexts possible from which to evaluate morality. A few of my concerns are:
[ul][li]Arbitrariness: If moral valuations are applied to a state, then what privileges one time of evaluation over another? How does one answer the classic “would you shoot Hitler as a child?” dilemma? At what point does one declare the “ultimate result”?[/li][li]Inutility: If a moral valuation is applied to a state of affairs, then conscious agents with limited knowledge can never know whether their decisions will result in “good” outcomes. Since we lack a perfect knowledge of the future, Utilitarian physicalism provides us no reliable guidance in choosing to effect a moral outcome. “The road to Hell . . .”[/li][li]Imprecision: If valuation depends upon a summation of all suffering across the field of all “sufferers” affected by a decision/state transition, then we are handicapped by our inability to accurately or precisely aggregate that value. In effect, even if we restrict ourselves to a relative measure we will never have a meaningful level of certainty in declaring our moral valuation. Only in the most extreme examples can we feel comfortable asserting, “State A causes less suffering across all sufferers than State B”, and even then the comfort is based more upon a faith that we have identified all interconnected consequences than on any precision in our measurement.[/li][li]Projection/externalization/impracticability: Suffering is an internal property. When we attempt to aggregate suffering across the total population of sufferers, we are blocked by our inability to apprehend or measure the internal state of any other “sufferer”. We are left, then, with only three options: Projection, in which we generalize our own internal sense of suffering to the total population of sufferers; Externalization, in which we declare an objectively measurable property to be a sufficient metric of suffering; or Polling, in which we ask each sufferer in the population to provide their own subjective measure of suffering for the state(s) under consideration.
The first of these answers is flawed by unjustified egoism. The second is flawed by an inherent inaccuracy. The third is both wildly impractical and restricts the field of sufferers arbitrarily to those who are able/willing to respond to our poll.[/ul][/li]I could go on, but I think that covers the major territory. I leave open, of course, the possibility that your particular flavor of Utilitarian physicalism is phrased in a manner that addresses one or more of the above points.
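The imprecision and polling points can be sketched numerically (representing each sufferer’s report as a low/high interval is my own assumption for the example): once honest uncertainty bounds are attached to each report, most pairs of states simply cannot be ranked.

```python
def aggregate(reports):
    """Sum per-sufferer suffering reports given as (low, high) bounds."""
    low = sum(lo for lo, _ in reports)
    high = sum(hi for _, hi in reports)
    return low, high

def strictly_less_suffering(state_a, state_b):
    """We may assert "A causes less suffering than B" only when A's
    upper bound falls entirely below B's lower bound."""
    return aggregate(state_a)[1] < aggregate(state_b)[0]
```

Ten sufferers reporting (1, 3) against ten reporting (2, 4) cannot be ranked at all, even though every interval in the second state sits higher; only extreme separations, such as (0, 1) against (5, 6), permit a confident valuation.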

Folks, I can’t keep up with the pace of responses right now. I’ll respond to the more recent posts tonight (I hope).

Your entire post was pretty good, but I wanted to say that when I read this I thought ‘why didn’t I ever look at it this way?’.



I probably should have been more specific in the question. Let me try again. What I am trying to determine is whether you agree or disagree with the proposition: A decision made by a consciousness can alter the observable behavior of the conscious agent. In other words, do we make conscious decisions and then act upon them, or do physical reactions take place that fully determine our behavior and create an illusion within the emergent consciousness that a decision has been made?

I ask because the issue to me is central to the idea of morality. In fact, absent a mechanism by which conscious agents make choices I would say that the word ‘morality’ is semantically empty.

Sure, and a transition between media can alter the velocity of light. The ability to alter a thing is not the same as the ability to generate a thing.
(I probably agree with you, though, on almost all details regarding the emergence of consciousness from a neural framework. To me the very key and unanswered question is whether the flow of “control” in a conscious meta-agent is one way (as in a traditional cas) or whether the phenomenon of consciousness can have a real effect upon behavior (as opposed to the illusion of will).)

Certainly we are a long way from it at present, though we know vastly more than we did a few decades ago. I am still stumped by the idea of finding an adaptive systems model with simple low-level agent behaviors that will result in an emergent consciousness that “thinks” to itself – When the clock reads ‘6:30’ I will call my wife and ask about picking up dinner. Absent a mechanism by which the thought really can affect my behavior at 6:30, it seems a very difficult epiphenomenon to explain through aggregation.

Well, I’m afraid I see a circle in your approach. If we do not define morality precisely, but instead do a study, see what we can delineate according to our model that seems something like morality, and then declare that morality is what we have found . . . well, I fear images of Columbus finding Asia spring to mind.

If you would rather not settle on a definition, perhaps you could at least tell me what type(s) of element(s) you see as basic to a moral formulation?


Okay, so it does indeed seem to be a functional description that you are referencing. I might quibble just a bit with the phrasing, then, if we can both agree that morality functions whether or not the conscious agent makes the choice morality values most highly. (In such a case, the “defect” lies in the decision making process, not the moral valuation.) With that in mind, I would suggest: "morality provides input into the decision process of a conscious agent."

Does that seem to accurately capture your position?

Here, again, I cannot follow you. Your only functional test is that a morality must assist in decision making. Flipping a coin demonstrably meets that standard. The idea that the decisions must result in the continued existence of the conscious agent is both itself a value judgement (and from what can that judgment be derived?) and a test that every conceivable moral system will fail at some point (at least until all conscious agents become immortal).

I disagree. Or, more precisely, I see no reason to agree that your statement always holds. One could postulate, for instance, a tool that made it harder to arrive at a decision but which provided greater certainty that the decision would be correct. In fact, we have several such tools in our epistemological toolbag: logic, empiricism, etc.

I disagree. Your statement would hold if we had postulated that the function of a morality is to allow a conscious agent of choice to continue being a conscious agent of choice. We did not. We agreed (I think) that the function of a morality is to inform the decisions of a conscious agent.

Your test appears ill-matched to your functional description.

As a hypothetical, this test would appear to determine that a morality that impels a person to sacrifice their life to save 10,000 others is inherently flawed, while a morality that impels a person to steal, pillage, and plunder to gain the means for prolonging his existence is functioning well.

Is that in accord with your idea of a universal morality?

I think I made clear above why I think this conclusion is unfounded. If not, can I ask whether you are comfortable with the symbols of formal logic? I think if we were to structure your argument in formal symbols, you would see that there was a term in your conclusion that cannot be derived from your premise.
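To make the gap concrete, here is a rough, hypothetical rendering of the argument form in formal symbols (the predicate names are my own invention, not a faithful transcription of your wording):

```latex
% Hypothetical formalization of the inference under discussion.
% Premise: every morality informs the choices of a conscious agent.
\text{Premise:}\quad \forall m\,\big(\mathrm{Moral}(m) \rightarrow \mathrm{Informs}(m,\ \mathrm{choices})\big)
% Conclusion: every morality sustains the existence of the agent.
\text{Conclusion:}\quad \forall m\,\big(\mathrm{Moral}(m) \rightarrow \mathrm{Sustains}(m,\ \mathrm{agent})\big)
```

The predicate $\mathrm{Sustains}$ appears nowhere in the premise, so the conclusion cannot be derived without an additional premise linking the informing of choices to the continued existence of the agent.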

Perhaps a rather silly example will help illustrate. The purpose of a hammer is to drive nails. Imagine a homeowner who uses his hammer to drive every nail in his house so thoroughly that he never need pick up his hammer again. Has the hammer failed in its function?

That is close enough for our purposes. I would add that it provides a specific kind of input. That is, it provides input into the decision making process of a conscious being in order to allow momentary or short term choices to serve longer term goals.

No, you cannot drop the context of the conscious being.

No, they are a result of the fact that I want the tool to be a tool for choice making by conscious beings. I am not judging the value of the life of the conscious being separately from its attachment to the moral system in question.

I think perhaps you are confusing my selection of an example value with my idea of what values are for. I do not mean that a moral system fails unless it prolongs a being’s life. I mean that a moral system fails unless it aids that being in living said life. Here again, I am not using life in the strictly biological sense. I can imagine many scenarios, for instance, where I would gladly sacrifice my life to save or enhance that of my children. But notice that this is true almost exclusively for my children. Replace them with some random stranger from the other side of the world and my conclusions will be different.

The word “harder” is probably not sufficient for my meaning. I did not mean simply that decisions would be more difficult to make. I meant more precisely that decisions might not get made in accordance with the purpose of the being making them. That the tool might not be serving its function.

Right. But not any choices at any time in any way. Morality is a tool for providing the context to immediate choices. That context is the life of the conscious being.

No. Go back and look at my post to ericboyd. It is somewhat outside the scope of this thread, but I suggest the life of the conscious being needs to be near the top, not necessarily at the top. For instance, if we are talking about a system where you propose that any individual’s life should be sacrificed to appease the gods so that some random group of strangers might have better crops, then I might agree that the system is indeed flawed. If you mean a soldier giving up his life to defend his country, on the other hand, I would suggest that it is not flawed. But notice that what I did was introduce the individual back into the system. The soldier defends his country.

Well, I am familiar with some of it. It has been a long time. I’m not sure that we could show the conclusion to be either true or false. I’m not sure that the premises and relationships can be easily represented.

Well, it no longer serves the function of hitting nails. If the function is to hit nails in this one house, then no, it has not failed. The hammer has hit all the nails which needed hitting; I’d say it served its function quite well. If, however, the hammer somehow destroyed the house, I might say that it failed miserably. In both cases no more nails (in that house) need to be hit, but in my example they do not need to be hit for an entirely different reason.

Now, if you are suggesting that a conscious being could make choices which allowed him to stop making choices and yet not end his existence as a choice making conscious being, then I may have misunderstood you.
Let me try and sum up again.

What I am suggesting is that morality fulfills a function as a guide to the decision making process of conscious beings. Specific moral systems which do this well I will call realistic, or based on reality. My contention is that one feature of such systems is that they have the well being of the conscious being at or near the top of the hierarchy of values. I do not mean that they are exclusively tailored to benefit the individual at the expense of everyone else. I mean merely that they are designed to serve the purpose for which the conscious entity makes choices. And that this means they cannot be directly harmful to that being.

I suppose I have to add some stuff about why a conscious entity makes choices to sustain his life?

Spiritus: I hadn’t planned on this becoming an ‘Ask the negative utilitarian physicalist’ affair, but I’m game for a little fencing practice I suppose (although you might well find some of my parries a little clumsy and unconvincing for your tastes!).

If suffering is a physical entity which may be defined by neurophysics and pain receptors etc., I would suggest an essentially medical metric across agents capable of suffering. Just as professionals at a hospital must judge how to minimise suffering (including mental illness) most effectively given finite resources according to specific medical criteria, one could define the moral good to be that which minimised suffering according to those medical criteria overall. This, at least, seems to be vastly preferable to the nebulous ‘happiness’ espoused by classical utilitarianism. I would suggest that happiness is what is left when all reason to truly suffer is removed.

Given that animals possess largely similar pain receptors and nerve systems there seems no reason why a dog fulfilling those same medical criteria as me when I suffered should not suffer also, so yes. (I would deal with the question of relieving animal suffering in the wild by proposing that attempts to minimise wild animal suffering might well actually increase it overall since it might disrupt some finely balanced ecological equilibrium - in any case, as I’ll explain, my position is strongly dependent on addressing certain cases first and leaving such hypotheticals to the far future.)

Outcomes? Yes. “State versus Action” is a little simplistic; a state of minimal suffering is good, as is an action which brings about such a state overall.

There is a distribution of suffering among potential sufferers (ie. everyone). Given that I am one of those potential sufferers, any distribution engendering less suffering (be it one person’s suffering diminishing or a quantum of everyone’s suffering diminishing) makes my suffering less likely overall (since I might be that person in the former case).

Well, we could perhaps start with a certain agreed threshold, at least. I think we would find very few beings without a mental illness (which I categorise as suffering in itself) who would not call torture-to-death or advanced stomach cancer “suffering”.

And here we diverge. Could one engaged in sport or being hit with a custard pie really be said to be suffering? As I say, I raise my threshold far above this, at least to begin with - considering voluntarily risking certain physical pain comes far below trying to eliminate involuntary and acute pain requiring medical attention.

I cannot agree more. We can never be certain of the moral consequences of a given action regarding whether it will increase or decrease suffering over all time, any more than we can be certain that a given zephyr will produce weather comprising more or less rain. Yet I can still describe a sunny day.

One can always find reasons for one’s confidence to falter in any subject - morality is merely so complex a consideration that those reasons are rather easier to find using the standard “ah, yes, but what about…” tools. One must simply weigh up which general model one feels most confident in. Given a climate model I am confident in, the question “ah, but what about ball lightning?” does not lead me to discard it completely. So it is that my model of morality requires amendment for particularly tricky conundrums, which I will endeavour to address here:

Never, as we can never declare the definitive effect on the weather of that zephyr. All we can do is make a ‘best guess’. I contend that that ‘best guess’ involves an intent to minimise suffering in our actions. I tend towards rule utilitarianism rather than act utilitarianism, and so I would suggest that justifying shooting people on the basis of a suspicion that they will cause future suffering will cause more suffering than waiting to see if they did. So no, I wouldn’t shoot Hitler (indeed, Germany might have won without his incompetence - we might as well discuss the weather here on this day next year).

“Reliable”, or “perfect”? One might as well say that predicting the weather is impossible. I contend that reasonable reliability can be attained in both weather prediction and outcome prediction. One might hypothesise two towns, one in which nobody’s actions were intended to minimise suffering and the other in which everybody intended so. Can we predict with certainty which town engendered more suffering in the long run? Of course not - we must again ask ourselves what is reasonable.

Agreed. Let us first concern ourselves with extreme examples and work from there.

“Inherent inaccuracy” in the second? Sadly so, but is this a reason to dismiss it out of hand? Does the hospital administrator simply discard the medical judgement of those professionals regarding how to address the most medical need with those resources because they are ‘inherently inaccurate’? Again, the perfect is becoming the enemy of the [sub]ahem[/sub] ‘good’.

I am happy to further explore my personal choice of moral framework, although some of my justifications appeal to some vastly hypothetical consideration of generalised suffering over long timescales (particularly in regard to things which I feel retard ‘progress’ and thus leave eg. future sufferers of a medical condition, which might otherwise have a novel treatment, in unnecessary pain), and I’m sure one as erudite as yourself could have me perched on the horns of a very uncomfortable dilemma in no time, Spiritus. However, I’m a physicalist first and a negative utilitarian second - I’m perfectly willing to entertain other models of morality and change my mind.

I’m not sure I understand this. Are you saying that the voluntary actions which brought about the suffering change the suffering in some qualitative way? Or have you introduced another element into moral judgements?

Not qualitatively, no. I’m just proposing that genuine ‘suffering’ exceeds some notional quantitative threshold such that no human being of sound mind would genuinely ‘volunteer’ for it. (ie., if it’s mild enough for someone to endure willingly, one couldn’t really call it ‘suffering’ at all). Admittedly, such threshold placement is ultimately arbitrary. However, I consider all models of morality to essentially boil down to some arbitrary ‘it’s good because I say so’ element.

This may be why you and I have gone round and round on this notion. I’m not sure I have understood this portion of your position in the past.

For me, I can conceive of no such level of suffering. I certainly don’t see any such level being easy to define. Human beings have an almost unlimited ability to interpret sensory input as enjoyable or not. I’m sure you could look into some very odd practices concerning pain and humiliation on the web which would demonstrate this.

Well, this is the essential question of this thread, isn’t it? If there is no moral statement at all which you would say is not arbitrary, then you are answering the OP that morality is simply a social construct with no basis in reality.