Yes, I think that is possible. I can see situations where two people attempting to maximize the common goals of their nature come into conflict. However, I think that any morality based on reality in this way would need a recognized (and recognizable) way to resolve such conflicts. If they both had such a mechanism, I think the conflict would not be as diametric.
I agree with this approach. I think you need to spend more time defining some possibilities of what this quantity is, however. Allow me to elaborate.
I think this proposes a characteristic of moral agents that is not in evidence. The definition that I offered suggests that moral agents use morality to decide if things are right or wrong. It does not require that moral agents are unable to make choices based on other things. We may have to prove such a thing.
This is where we are running into trouble with the word happiness. Its standard definition is getting in the way of the way you are trying to use it here. I understand exactly what you are saying here if by “happiness” you mean happiness: “state of well-being characterized by emotions ranging from contentment to intense joy”. If, however, you mean some undefined thing which all people seek to maximize, then I am not entirely sure what you mean. Why is it possible for this undefined thing to be in such a state of conflict?
How is it that person A does not recognize the potential for cooperation with person B? How can the minimizing of person B’s moral quality provide an increase to person A’s?
Do you see the problem? If we are talking about happiness then this is a perfectly valid formulation. But if we are talking about “happiness” then it may not be. We simply do not know enough about the characteristics of “happiness” to judge.
But your formulation suggests that moral agents cannot act against their moral systems. This means that if a person does something that another person judges as bad, he is also judging the moral system as bad.
I think this is the first step in the oversimplification that is getting you into trouble. Moral systems cannot be thought of as maximizing a single thing. They are complex hierarchies of values which are interdependent. In the context of this thread, we can legitimately talk about moral systems which put one principle or another at the top or bottom. But I think it is a dangerous oversimplification to suggest that they can be reduced to a single value.
Unfortunately, when you choose to use “happiness” as the label for this value, the oversimplification becomes even more pronounced. I think you may have to re-examine this part of your formulation.
Don’t ever stop doing this. It is important to note the limitations of it as you have. But it is also important that a theory be translatable into some context which makes more common sense. I like to say “If we cannot state the idea in a plain English sentence, then we do not really understand it.” It is all well and good to point to a foot-thick manifesto and say “I agree with that.” But if we cannot summarize it, or apply it to other contexts, then we do not really understand it.
This is not to say that simplifications are always perfectly accurate. But something about understanding a complex concept is lost if we cannot simplify it at least for a specific context.
FTR this is not true. Your positing of choice maximization is a very good generalization.
Ah, but only if it can do so based on a morality, no? One could propose an agent who continually sought out pleasure. Such an agent could be thought of as a kind of choice-making agent. But I’m not sure it would really fit the definition of moral agent. Perhaps I am using the wrong definition. Remember, my definition is "Morality is a complex of concepts and philosophical beliefs by which an individual determines whether his or her actions are right or wrong." It requires the moral agent to be able to process complex concepts and philosophical beliefs, choose based on applying them, and understand the ramifications of right and wrong based on them. This to me implies a greater capacity for concept formation than kangaroos (to my knowledge) possess. I am more than willing to see some experiment which shows kangaroos, or any other animal for that matter, capable of such concept formation.
I think this conversation is a demonstration of that.
Yes.
Well, be careful. It would have to include this ability. But it does not have to “be” it. It is not equivalent to “whole life”.
See, this is where we fall into trouble again. You are assigning “intention” to the actions of the single cell. I’m not really sure that this is appropriate. The cell takes those actions which its DNA has programmed it to take. Over the millennia, those DNA codes most conducive to survival have in fact survived. I’m not really sure it is proper to suggest that this is an “intention”.
This is one of the confusing parts of my attempt to attack the relativist position. I am attempting to formulate a framework which is itself not a moral system (and so cannot make moral judgements) but which is capable of judging (in a non moral way) moral systems. This seems contradictory to some. But it is necessary to avoid the moral relativist claim that there is no way to preference moralities except from within moralities. It is also necessary to avoid seeing reality itself as a moral agent.
No, that is not what I meant. I meant that a living organism wants to change the world such that it lives. It does not live in order to change the world. I agree that making changes is tied to survival. It is inherent in the idea that life requires constant action.
But some choices reduce the number of choices. When I study physics, medicine becomes less of a possibility. At some point you have to value certain choices over others. You cannot simply count the number of choices and say whoever has the most wins.
Well, certainly in some contexts the differences are only of degree. But from that perspective the difference between a living person and the 200 pounds of elements that make him up is only a difference in degree. At some point a difference in degree becomes a difference in quality.
Yes, to some extent what I was talking about can be explained this way. But it is also true that the experience they feel is one of being choiceless. Being totally taken care of. Like a baby, if you will.
Yes, but the syndrome can go on for long after this changes. The only thing I am suggesting is that human brains are very adaptable. We are able to program ourselves (not easily, mind you) that pleasure is pain, war is peace, and all sorts of reality contradicting things.
Right. So we have to mean “happiness” in some sort of “whole life” way for it to work as our standard.
I was talking about external effects as those seen by others while internal effects are those only seen by the agent in question. I think I was misunderstanding what you meant by affecting reality. You are including what I was calling internal effects.
I did not say “nothing can be proven because axioms cannot”. I said that the conclusions that one comes to from a logical progression depend on the axioms, and unless people agree on the axioms, their conclusions are going to differ.
“Axiom lock” is a personal insult? The fact that you and I are operating from different basic assumptions is a personal insult? And nonsense?
It looks like a factual description of the obvious from here.
Do you truly believe that the participants in this thread are all operating from the same basic assumptions about reality, or, if they are not, that those differing assumptions are irrelevant to the intractability of the argument?
You started with a moral system, one that you evolved and developed from basic protomorality using your own preferences and experiences. You can choose to continue to use that one, or you can modify it according to your preferences, or you can chuck it entirely and pick a new one if you prefer that.
My choices are not arbitrary to me; they are based on the system I use. My choices are as likely to be entirely arbitrary to someone else as they are to appear sensible.
A need to eat is not a moral choice by my book. The goal of eating may lead to moral choices in the long run, choices of how to go about it, what foods to prefer, and such things, but hunger itself is not a moral choice. If you disagree with this, we have further evidence of incompatible axioms.
But that’s presuming that both parties have the goal of eating the potato.
My brother can’t derive any nutritional value from potatoes (they go right through him), and in fact eating potatoes causes him tremendous pain and emotional distress in addition to its providing no benefit. (Celiac sucks.) If it were you, my brother, and the potato, there would be no conflict.
A Jainist (if I’m remembering correctly) is morally opposed to eating root vegetables. Again, no conflict, if the Jainist holds to their principles.
Someone who simply hates potatoes would have to weigh their hatred against hunger, and without knowing the system they were using for that, it’s not possible to judge which way they would choose.
Right. But you went on to say other things closer to my characterization than this one of yours.
Can we at least agree that standard English definitions are a good starting point? I’m not sure I heard you admit as much.
Yes.
Again, yes. Do you really not see the insulting attitude dripping from this part of our conversation?
Yes, or rather they are operating with assumptions which are close enough. I think this because we are not considering all of reality. Only a very small subset. We are not debating whether or not logic is a useful tool for our purposes. We are not debating whether or not humans exist on earth. We are only considering the moral relativist position regarding the existence or not of an absolute morality.
You seem to be the only one here with an intractable difference of opinion. And that seems only to be with regard to whether or not reason has any place in this discussion.
I have a moral system. I most certainly did not start this discussion with one. At the very least, I have been very careful to point out (for my own benefit if no one else’s) which portions of my arguments depend on accepting my morality. I most certainly have not started my argument with the sort of logic you are accusing me of.
You see, you know nothing about me or where my morality may have come from. As far as you know I received my principles and duties from a talking stone while fasting in the forest for 40 days and nights. This assertion on your part is simply you assuming facts not in evidence and attributing the consequence of them to me.
Again, this is your picture of reality. I tend to agree generally with your description of where morality comes from. However, you have offered no proof other than your assertion that it is actually correct.
Ok, so now arbitrary and non arbitrary have only subjective meanings? We cannot agree that there are other non subjective meanings to these words?
Of course not. I never said it was. A need to eat is simply a common goal, something entirely different from a moral choice. Deciding to eat or not, or deciding to ingest poison or nutrition, on the other hand, are moral choices (or at least choices; whether or not they are moral may depend on your morals). My only point is that needing to eat is a common need amongst humans. We don’t have to decide to eat the same food in order to recognize that we all have to eat.
Of course. And this is the crux of my argument. The need to eat, among other things, is a common need which is indicative of common goals amongst all human beings on earth. I am proposing that these common goals may allow us to recognize a framework which is not itself a morality, unique, absolute, and objective, and finally knowable by human beings. This framework would reduce the moral relativist claim that there is no such framework to falsehood.
Again, for clarity, I am not proposing that an absolute morality exists. I have admitted that I think one does, but I am not arguing such a thing in this thread. I am most certainly not arguing that any particular morality is in fact this absolute morality. For the purposes of this thread, I am only interested in the removal of the moral relativist critique as a roadblock to discussing morality.
As you see, I disagree only with the part where you suggested that I equated the need for food with a moral choice. I agree entirely with your characterization that the need for food may lead to moral choices farther down the conceptual hierarchy. That such a thing exists, in fact, has been my only point throughout this entire thread.
This is pretty interesting. The dilemma is that I may assume that a rock is a choice-making agent that always chooses to obey the laws of physics. I think the line definitely has to include some type of discernment by the decision-maker. This is what I am thinking: There is no way for the rock to choose badly by, say, falling up due to a misunderstanding of the laws of physics. On the other hand, your pleasure seeking agent could order a potato not realizing that potatoes are NOT pleasing to his body’s tastes. See what I mean?
Right, and this definitely cannot apply to a rock, probably can apply to a kangaroo, and I’m not sure if it could apply to an amoeba, but I think it could. The amoeba could envelop something that appeared to be food that, in fact, kills the amoeba. Thus, it made a bad choice that it “believed” was a good choice. The obvious question, to me, is, does an amoeba need a brain in order for us to describe this mistake as being based on a belief?
I’m not sure about this. Does the choice have to be complex? Does the chooser have to “understand” that it is making a choice, or is it enough that, say, an amoeba’s “beliefs” about how to make choices are completely encompassed in the rules of functioning that the cell follows?
I think you are assuming a “higher-order” moral agent, one that is able to consider much more complex scenarios involving changing reality. I’m not convinced this is necessary. The fact that the amoeba is not able to extrapolate from “opportunities to choose are good” any farther than “survival for another second is good” doesn’t mean, to me, that it cannot be a moral agent; it just can’t be a very subtle one.
I can see where my descriptions make it sound that way, but that is not exactly what I mean–at least not in a way that assumes “survival” is good or that “opportunities to choose” are good.
Let me try it this way. I am saying that all moral agents have an absolute morality that says that making choices that “change” reality is good. I am not assuming that this is good, I am observing that, when you get to the very top of the chain, all moral agents do. Let me make an analogy with a silly argument. Instead of moral agents and such, I could be saying, “All Italians like pizza; you can’t claim to be Italian without liking pizza.” You might tell me to be careful that I’m not assuming that pizza is good or that liking pizza is good. And I would say, “No, I am saying that I am unable to think of any example of an Italian that does not like pizza, and I cannot imagine an entity that could both be Italian and not like pizza. I am suggesting that liking pizza is a definitional requirement of being Italian and that if someone claimed to be an Italian and not like pizza, I would assume they were lying about one or the other.” See what I mean?
I think the distinction here may lie in whether it is a conscious intention.
I really think that the stuff I have been describing meets these requirements. Primarily because rather than arbitrarily claiming that trying to affect reality is good, I am suggesting that all moral agents believe it to be good. If my suggestion is correct, it is both objective and, since all moral agents “believe” it, absolute (you cannot find or posit a moral agent that would “disagree”).
Unless you can find such a “base” absolute morality that they all “believe”. In which case, you can compare to what extent different ones reach their unique version of the same goal (and the uniqueness is solely the result of them being separate individuals).
I agree. I am suggesting that reality is a measuring stick that all moral agents “agree” on.
Here, we seem to be in opposition. I am suggesting that moral agents “want” to change the world and that, in most cases, survival is a crucial means of achieving this. In higher-order moral agents such as ourselves, we might be able to perceive and act on a choice that we believed indicated that survival was not the best means to achieve our goal.
I don’t know that life requires constant action, but I submit that being a moral agent requires constant decision-making.
Yes, but if you believe that physics gives you better tools for changing the world than medicine does, the choice is understandable.
Agreed. But if we follow the value “chain” to its origin, I think we will find that the root value is this goal of affecting reality.
Also agreed. How 'bout this though: While I have opportunities to choose, I am winning, and the more of these I have, the more I am winning. Makes sense?
No doubt. The question is, is THIS particular difference in quality the key one in defining moral agents? Self-awareness is clearly a powerful tool in evaluating and making choices, but is it a requirement?
I think you are making an implicit value judgment here about what kinds of choices are better than others. Your brain might conclude that this was a situation of very limited choices while my brain might conclude that this was a situation of limitless choices–look at the poor sap having to go to work every day and having to provide for his family; all his choices are made by The Man. I got somebody providing for me so that I don’t have to do anything except what I want to.
We can program ourselves in a lot of ways, but I would not say that we could decide that pleasure is pain. We could, however, decide that pleasure is bad. Seems like, in fact, many people have.
I think I have suggested a way to do this. I am becoming increasingly convinced of it–if for no other reason than that none of these other guys have jumped in and pointed out a fundamental flaw. (If anyone has noticed one, this is your cue to jump in and stop me from wandering any further down a blind alley.)
When you are talking about evaluating a “whole life”, I think it is clear that I cannot know everything a moral agent knew, and I am biased, so there are definite limits on how well I can evaluate him.
Let’s say that you’ve spent your life lusting after 8 year-olds but have never once acted on this desire. When we talk about “whole life”, which of these two facts is important? Is this the type of distinction you are talking about, or are you referring to something else?
Right. I was giving an example where they are using the exact same framework. I agree with all of your examples to an extent, but I don’t believe that they use the exact same framework. Your brother’s framework assigns the value of “bad” to potatoes, primarily as a result of the effect they have on his body. What I am suggesting is that if you start “digging down” through all the complex “goods” and “bads” present in his framework, you will find the same “root” morality as the two parties in my example, i.e. changing reality is “good”.
The point was that even if everything in the entire framework (all the way up and down the chain) was exactly the same, the two would be unique solely by virtue of being separate individuals.
I have seen your attempts to characterise my responses as hostile, personally focused, and emotionally laden; I have tried to avoid reacting emotionally to them, because that is not productive or appropriate.
But I don’t agree that we’re operating from the same axioms. (Lock.)
We can start with your apparent belief that relativism claims a blank moral slate (the whole “How do I choose a system without already having a system” issue), which I do not share, and which seems to have been one of the major points of discussion. Or my belief that it’s important to distinguish between a morality (a system for making choices between good and bad) and a metamorality (a system for describing moralities).
And so we continue to go around in circles.
And you don’t know whether I got my principles of moral systems from a talking kangaroo; it is no more relevant to the conversation than whether you got your system from a talking rock.
I believe that you, like other human beings of my acquaintance, started out as an infant with a set of instincts and preferences, and developed an adult moral system based on those instincts (some of which you have likely in part significantly modified your responses to, with the addition of concepts such as “time”), your preferences, your upbringing, and your surrounding culture. That adult system is, I presume, a fully realised system, capable of handling moral choices that include whether or not to evaluate another system as superior and choose to adopt it as a better fit for your needs.
And here is the core of the axiom lock between relativism and absolutism.
Can we agree, at the very least for argument, that an absolute morality must not depend on a specific human being as its referent, that it must transcend individuals and encompass all people? (I will grant, for the moment, that we are only talking about moral systems as practiced by humans, and not get into the question of whether kangaroos have moral systems.)
In other words: looking at the question from the need to pick a universal, any particular choice of human to exemplify the system is arbitrary – it requires circular logic. (This person’s morality is the best known because it is chosen according to the rubric of this person’s morality, which is the best, because this person exemplifies–) (Or randomness, which is not really an improvement.)
However, at the same time, the systems that people have are not arbitrary to them; they are built up from their core understandings of the world, their impulses, and their instincts, to fit their needs. For that to be arbitrary, they would have to have a system without having a specific reason to have that particular system – which is not readily compatible with a presumption that adult humans have moral systems which they have developed to meet their needs.
Trying to talk about morality in general is operating at a different scale than talking about the specific moral choices of specific humans. Both relativism and absolutism, in their abstract senses, are trying to talk about morality in general – they’re both metamoral systems, describing the nature of moralities, rather than moral systems, addressing evaluations between good and bad.
But that is not an attempt to describe an absolute moral system – it is an attempt to describe an absolute metamorality, which is a different question.
An absolute moral system provides input about what choices to make; an absolute metamoral system describes what choices must exist, but not how to make them.
I would argue that if one thinks the other is behaving immorally, they are not operating under the same system (“A moral agent should strive to have sole possession of the potato”) but rather differing ones (“Person X-meaning-me should have sole possession of the potato” and “Person Y-meaning-me should have sole possession of the potato”). Inconveniently, yes, and acting as a direct threat to the goals by being so, but if the moral good is defined by both as “an agent should seek the potato”, not immorally.
If there IS a personal dispute here, I definitely don’t want to be on either side of it. Nonetheless, pervert, I am wondering if you are being ironic here. I have read Lilairen’s statements over several times, and while I can definitely see disagreement, I am not getting the personal insult part at all.
I think it is a problem throughout this thread that we are all working on some assumptions that we don’t realize aren’t agreed upon. From what I can tell, Lilairen is merely pointing out that you each appear to have some assumptions that are incompatible, not that there is anything wrong with it, or with you.
Language is a problem here. I tried to communicate the idea by using “I”. Suppose I take your sentence, “an agent should seek the potato”, and change it to “this agent should seek the potato”. This, to me, sort of communicates the point I am making, which is that a certain amount of subjectivity is imposed by reality or by the nature of being one moral agent among many. The fact that we are separate moral agents imposes a kind of uniqueness. In most examples, I’m not sure if I can describe this without making it sound like selfishness, but that isn’t what I mean. What I mean is more along the lines of “point of view” imposing limits.
In other words, a high-order moral agent could analyze reality and determine that moral good is represented by “an agent should seek a potato”, but the “base” morality that is inherent to the moral agent is in fact centered around that moral agent. This makes the agent’s opinions subjective, but if I am right that all moral agents are subjective in exactly this way, then this is an objective claim that could, theoretically, be verified or refuted.
Does that get us any closer to the same page, or farther away from it?
We are really going to get bogged down talking about kangaroos again.
No. I don’t think even the simplification of choice-making agent applies to rocks. Choice making requires actions. Actions require internal stimulus. Rocks cannot act alone. No?
No, look again at the complex of concepts. It implies at least the ability to form complex concepts.
No. Some choices are simple. But moral choosers have to have the ability to make complex choices.
Yes, and no. The chooser has to understand that it is making a choice. Otherwise right and wrong disappear. The choices cannot be “encompassed in the rules of functioning” unexamined because then “choice” disappears. Choices imply actions. Actions do not prove choices.
But the amoeba cannot even conceive of survival as anything but an automatic response to the next perceived stimulus. Note that the rock does not roll down the hill until pushed. Neither does the amoeba “seek out food” in any but a random stumbling-about way. Again, action does not prove that a choice was made.
Let me stop you right there. I don’t think you can show that all moral agents have one particular moral system or another. They may have a need of one. They may have a need of a particular one. But that does not mean that they have adopted this one. Again, I think this relies on you offering proof that every moral agent is indeed incapable of making decisions against this principle. That seems too difficult to me.
Except that this analogy is not complete. You are claiming not simply that all moral agents need a freedom of choice. I agreed with you on that. You are claiming that all moral agents construct their moralities based on this principle as the highest of highs. The most moral of all principles. It would be like you claiming that liking pizza is what makes Italians Italians. That if I simply began liking pizza, I would be more Italian than otherwise.
Indeed. I get “an act of intending; a volition that you intend to carry out” for intention. This seems to me to imply consciousness of some sort. One could describe the actions of amoebas as purpose-driven, or even as leading to an “intention”. But I don’t think you’d be meaning it in the same way we do when we say that a person picked up the ball “intentionally”.
I agree entirely that your formulation would seem to fit this description. It’s just that I’m not sure we can prove that every moral agent does indeed follow this particular morality. We can assert it. We can even make a pretty good argument for it. But I think that all you have done is discover a way to interpret the actions of agents in a certain light. As above, I agree that choice making is an essential part of being a moral agent. I am simply skeptical that you can prove it is the highest part of all moral agents’ moral systems.
Yes, or some other framework from which to so judge moral systems. I’m not convinced that moral agents have to adopt any particular moral system. I am simply convinced that the reasons they do adopt moral systems provide a framework from which to privilege those moral systems.
Right. But you are going farther and suggesting that a specific principle is necessarily adopted by all moral agents. This seems difficult to prove.
Those two sentences of mine are badly worded. I agree that this is the crux of our disagreement. Everything else is just fluff by comparison.
How about I rephrase them this way: I am suggesting that living moral agents need to live their lives. In order to do this they need a mechanism or tool which allows them to choose from the unnumbered choices available to them. Morality is the tool which allows them to make choices in the context of their “whole lives”. Therefore moralities can be judged based on how well they fulfill these needs.
Physical survival and freedom of choice seem to necessarily be a part of such a morality judging framework. However, I am not sure exactly what relationship they have to each other. I simply think it is a much simpler formulation (no need to prove that all moral agents have adopted a certain morality) to say that the life of the moral agent provides a framework to judge moral systems. That way we do not need to show whether survival, freedom, or any other particular principle is indeed the highest of highs. We only have to show that a particular principle can be used by all moral agents to judge moralities. We don’t have to propose the most important of these. We only have to show that one exists which is objective, absolute, unique and knowable. If I am not mistaken, your formulation has the most difficulty with the last bit.
Yes, we agree that moral agents need to make decisions. But does that mean that people need to make decisions? Are people born moral agents? What I meant by saying that all life requires actions is that all life requires actions to sustain it. Do you know of any life forms that can sit completely inactive forever?
Yes, but here we are talking about quality of changes instead of amount of changes. What I am saying is that some choices remove other choices from the realm of possibilities. Studying physics removes not only medicine, but all other specialities (ok, not all of them, but the vast majority) as well. Judging by a “maximizing choices” standard, I could be said to be reducing my choices. No?
Yes, you are clearly postulating that this is the absolute morality that all moral agents have to adopt. I simply think it will be an uphill struggle to prove it.
Again, yes, it is internally consistent. But I do not think it covers enough ground. It is still susceptible to the argument I gave above. Sometimes the quality of the changes an agent wishes to effect is more important to him than the number. He is willing to give up vast numbers of choices in order to have one particular one. For instance, I am willing to give up the chance to sleep with vast numbers of women in order to have the chance to sleep with my wife. It seems to me that from a strictly “number of choices” framework this would be a bad decision. I don’t particularly know if it provides proof that I have not adopted your moral system, but it might provide evidence that your moral system is not robust enough to cover some pretty common behavior.
I’m not sure. My instinct is to say yes. You have to have a certain amount of self-awareness in order to understand right and wrong in anything like the moral sense. Amoebas can understand them in the pain/pleasure sense, but I don’t think this is the same thing.
No, I am not trying to do that.
Ok, I was avoiding this because it will undoubtedly result in unwanted jokes at my handle. The example I was thinking of was more specifically those devotees of the domination and submission lifestyle. As I understand it, submissives are willing to give up virtually all control over their actions, submitting to the will of the dominant (or at least that is the fantasy which the “scenes” are aimed at living out). My only point is that some people may actually prefer choicelessness to choicefulness.
I’m not sure I suggested that people can redefine pleasure and pain. I merely suggested that people are capable of experiencing pleasure through pain. Some people, in fact, seem to need pain in order to experience pleasure.
You have indeed suggested one way to measure this. I am not sure that you have hit the nail on the head, so to speak, but you have definitely suggested one way to measure happiness in other than a disconnected moment-to-moment experience.
Quite. This is the biggest problem I have with your formulation. It requires that we know something which may be unknowable. Namely that all moral agents do indeed adopt your choice maximizing morality.
I don’t think either of them matters when we are trying to define what I mean by “whole life”. Clearly the only thing that others could notice is that no act was committed. But I’m not sure that would enter into any such evaluation. I’m not sure, therefore, that it is relevant.
I’m not talking about “whole life” to mean only those things that others can see. I only brought up the idea of evaluating a life because that is a context in which we use the word “life” to mean something other than a collection of disconnected experiences or effects. Somehow, on the other hand, when we use it in discussions like this, it seems to mean exclusively that disconnected set of experiences or effects. I’m only suggesting that “life” could mean something more expansive in the context of moral decisions as well.
The preceeding post was brought to you by “What I Came up With.”
Ok, maybe it’s just me. I have not meant to offer any offense to you either. I have, I think, apologized for any perceived offense on two occasions. Allow me to offer another and say that I will simply ignore this perception on my part that you are insulting me. I will simply accept that it is my perception and not actually real.
Yes, it is a lock. But it is you who is closing it. I am willing to discuss any of the positions I have taken in this thread. For instance, which axiom in the paragraph you quoted do you not agree with?
That the people in this thread are operating on axioms agreed to or agreed to “close enough”?
That we are not (in this thread) considering all of reality, but only a small subset?
That we are not debating whether or not logic is a useful tool for our purposes?
That we are not debating whether or not humans exist on earth?
That we are only considering the moral relativist position regarding the existence or not of an absolute morality?
I honestly cannot imagine that you are disagreeing with any of these. I think I may be misunderstanding you again. Can you elaborate on the axioms you are disagreeing with?
This is outstanding. I would be very willing to be proven wrong on this point. Can you provide a description of relativism that is different than my characterization? Provide examples from dictionaries or encyclopedias if you can. I don’t mean this to be snarky. But I spent the first 2 pages of this thread trying to come to an understanding of relativism. I think I have done so with the help of my friend erislover. I don’t presume to speak for him in any way. But I think I have understood very well that moral relativism holds that the only way to make moral judgements is from within a moral framework. And that no moral framework can be privileged above others.
"Moral relativism is a view that claims moral standards are not absolute or universal, but rather emerge from social customs and other sources. Relativists consequently see moral values as applicable only within agreed or accepted cultural boundaries. Very few, if any, people hold this view in its pure form, but hold instead another more qualified version of it."
And
“Moral relativism stands in contrast to moral absolutism, which sees morals as fixed by an absolute human nature (Jean-Jacques Rousseau), or external sources such as deities (many religions) or the universe itself (as in Objectivism). Protagoras’ notion that “man is the measure of all things” may also be seen as an early philosophical precursor to relativism.”
Please let me know how my characterization of relativism is incorrect.
I agree. Forgive me, I do not want to sound insulting, but in your last post to me you made several assertions about me and my morals. This was the source for the paragraphs you quoted. I would like to note that I have never once made any assertions at all about where or what your morals are. I would also like to point out that you are making assertions about where my morals come from again in the next paragraph. I don’t mean that accusatorially, I’m just mentioning it.
I may believe you as well. However, I believe that some of the concepts that I used to adopt my current moral system came from outside of the “default” moral system I had adopted.
Well, I don’t think that relativists and absolutists disagree on the meaning of arbitrary.
Of course. I said that way back on page 1 or 2. I will agree to it again.
I am not at all sure what you mean here. Didn’t we just agree that a universal morality would have to apply to all humans? It seems to me that you are now saying that we have to use some particular person’s morality. This does not seem circular, it seems self-contradictory. I would point out that I have never once suggested that any absolute morality exists, much less that it is being practiced observably by any particular person.
I am at a loss to understand what “not arbitrary to them” means. My dictionary suggests that arbitrary means “based on or subject to individual discretion or preference or sometimes impulse or caprice”. This does not seem to indicate that a thing can be arbitrary to one person and non-arbitrary to another.
I agree with this thought. I have been saying all along that if moralities are used to fill needs common to all people, then they are not only objective, but absolute. This is the crux of my argument. I understand that you are not claiming that there is any need common to all people, but if we agree with this point of yours, we can move on to the proposition that there may indeed be some needs common to all people.
I agree with this as well. I believe that the definition of moral relativism is that moral systems cannot be judged except from arbitrary frameworks, none of which can be privileged over any others. Meanwhile moral absolutists claim that there is indeed an absolute framework from which to evaluate moral systems. There are many flavors of moral absolutists. They all differ as to the nature of this absolute framework. I have not been trying to convince anyone of any particular framework except in so far as to convince everyone that some framework exists.
Of course! Thus my denial that I have been attempting to describe an absolute moral system all along. I have denied trying to do so in posts to you and several others. I have admitted that I believe one exists, but I have not made that part of my arguments here. In other words, the metamoral question is the only one I have been addressing.
I was going to agree with the first part, but I think I won’t. It seems unnecessarily vague. Any moral system provides input about choices, sure, but they do so in a particular way. They do so in the context of placing those choices (or perhaps only the outcomes) within a hierarchy of values. An absolute system claims that its value hierarchy is true for all people. The relativist claims that there is no set of values that applies to all people.
I’m not convinced that an absolute metamoral system describes what choices must exist. That is the angle I am going for, I agree. Or at least I agree that I am attempting to describe some choices that I think do exist for all people. But it seems that several other absolute systems have been proposed which go at the issue from another angle. Religion, for instance, claims a moral system given from on high. I don’t think they acknowledge any particular set of needs (beyond those of the creator) which are being met. I agree, however, that you have made a start at a definition for the sort of absolute metamoral system that I am looking for.
Also, for the sake of clarity and because it has changed again, allow me to restate my thesis from post #153.
This is the theory which is mine, by me, ahem.
Given - Moral Relativism holds that moral judgements are relative to some particular framework or standpoint. And that no standpoint is uniquely privileged over all others.
Given - “Morality is a complex of concepts and philosophical beliefs by which an individual determines whether his or her actions are right or wrong.”
Given - A framework from within which moral systems can be “privileged” violates the second tenet of relativism with regard to morality if it:
is not itself a moral system;
is unavoidable, objective, and unique;
is knowable but not necessarily known.
I am proposing that the nature of the “individual” in the definition provides just such a framework.
There are some necessary truths about this “individual”. The definition of morality assumes that such a being is capable of actions and determinations, among other things (the capability to deal with a hierarchy of concepts and abstractions strikes me as one further assumption, but it is not necessary for this thread). This limits our set of possible existents to living beings at least (it limits us even further, but living beings is good enough for now). Without an individual who conforms to these truths, morality has no meaning. Therefore, a morality which does not put the life of this “individual” pretty high up in the “complex of concepts and philosophical beliefs” risks becoming meaningless.
This suggests that moral systems may be privileged according to the relative position within them of the life of the individual. The only frame of reference necessary to do so is this definition of morality. In fact, this frame of reference is the only one suitable for such a task since it is implicit in the definition of morality.
This framework is unavoidable (suggested by the definition of morality itself), unique (the only one suggested by the definition of morality), objective (relying only on the characteristics of the individual), knowable (we can certainly know the definition of morality and that moral agents are alive), and therefore absolute. Additionally, it is not itself a moral system, so it does not violate Relativism qua Relativism while at the same time violating relativism as it relates to morality.
This is a place where we can get in trouble with similar, but not equal, definitions. When you say rocks cannot act alone, I think this is risky. I am more comfortable with specifying that rocks cannot act based on volition. The rock can “act” in that it can fall to the ground, but it does so because you let go of it. And once you let go of it, the rock cannot choose not to fall.
I don’t object to the distinction you are trying to make, per se. I am just not convinced that this distinction is makeable.
Another troublesome statement. The choices are simple and straightforward. However, if you are trying to choose among them, the implications may be complex or difficult to evaluate.
I appreciate that this is sensible, but I think you are in the area of things that are unknowable and/or unprovable. When speaking of anyone but yourself, how do you determine what the chooser knows? Or, more to the point, how do you determine that the chooser knows. Is there any objective way of saying that the “knowing” going on in my neurons is any different than the “knowing” that is encoded into the amoeba?
Well, complicated notions of right and wrong (like “fairness”) disappear, but the simple base notion of right and wrong that I have described does not.
This is the crux of it. You are describing an amoeba as merely reacting to stimuli. As far as I know, every neuron in your brain can be described in the same way. Put them all together in a brain, and you still have something that reacts to stimuli. You can’t point to a single neuron that fires because it chooses to. Just to be clear, I am not denying that the whole can be more than the sum of its parts. What I’m saying is that the dividing line is unknowable. We suspect these things of each other because we can communicate a great many of our thoughts. We don’t know what to make of an amoeba or a kangaroo or a tree. To make this distinction, I think you are signing yourself up to the task of proving free will, and I don’t think this can be done (which is not the same as saying I don’t think it exists).
Right, but the rock does not “seek out” opportunities to roll down the hill in any way at all, that we know of.
For the sake of argument, what DOES prove that a choice was made?
I’m not sure that there is any requirement for this. In fact, if I could prove that they were incapable of choosing against this principle, then I might have just proved that nothing makes choices, in which case our notion of a “moral agent” is just a misunderstanding of physics.
Also, when I assert that all moral agents have a base absolute morality that requires them to “strive” for opportunities to choose, it is clear that I can’t point to any sort of logical proof of this assertion. I think of it as more of an assertion in the way that the law of gravity is an assertion. That is, we cannot prove the law of gravity, but as we keep trying and failing to find examples that disprove it, we become increasingly convinced that it is, in fact, a law. In the same way, I believe that, if we were capable of examining the entire moral “chain” for any moral agent, we would find this “absolute” morality at its base. This exercise would be damn near impossible in humans, but might be doable for amoebas, and possibly even for kangaroos.
I agree with your assessment except that I don’t know that there is a possibility to have degrees of “moral-agentness”. Either you are a moral agent or you are not, in the same way that either you are pregnant or you are not.
Well, I think you want it to, and it does seem intuitive. But, from the outside looking in, how on earth do you define consciousness? I am imagining an alien looking down on the earth and not seeing much difference in the buildings we make and the labyrinthine constructions of fire ants.
I don’t object to this, I just don’t see any way for you to objectively define the difference. Basically, I would be rooting for you even while your definition was being ground into the dirt with reductionist arguments.
No, but we could disprove it if we found an exception. I don’t think that we will. I think that moral agents, on some fundamental level, “like” choosing and observing that reality changes as a result. I think it is analogous to, but not quite the same as, the way that humans “like” to continue living. When moral agents stop choosing, they stop being moral agents. If it were not a fact that moral agents like to make choices, there would be no moral agents (and, no, I’m not saying that this would be good or bad).
I think we’re agreed here. I am only suggesting that the “reason to adopt” is based on something that they all “prefer” because it is in their nature to prefer it.
I think you are hunting for something like a “higher purpose”, like, for example, peace on earth or loving your neighbor. I would suggest that these “higher purposes” are, in essence, rules of thumb adopted by moral agents in their pursuit of opportunities to choose. The fact that two might have chosen different approaches does not mean that they don’t have the same “deeper” goal for themselves.
I would say that what you are pointing toward is approximately equal to “the net results of all the choices made by the agent”. The length of its life is one of those results, and whether or not peace on earth was achieved is another.
I can’t imagine any way of showing that this principle exists without positing what it might be and looking for violations of it. Otherwise, you have no real hypothesis to test.
My rejoinder would be, do you know of any moral agents that do not require “life” in order to act as moral agents?
Absolutely. What is important, though, is that, judging by the chooser’s understanding of reality, the chooser is in fact trying to maximize choices. The fact that the chooser’s brain is “entertained” by physics problems most likely plays into this.
Not exactly. I’m not saying it’s something they have to do. I’m saying it is something that makes them what they are.
I will note that the “quality” of a change is subjective. For instance, if there is one potato and two of us, my choice to grab the potato away from you and eat it has a profound effect on the realm of choices you have. If there are a million potatoes and I grab one and eat, you probably wouldn’t care at all. If we were looking at one potato and had no idea whether or how many other potatoes there are, we would not be able to “evaluate” the decision very well. In any case, I would suggest that this “placing of significance” on particular choices is just a result of our personal calculation of the effects the choice will have on subsequent choices, either for ourselves or for other moral agents.
Not necessarily. It opens up the choice of living with your wife, having children with her, raising them together. You have the choice of having sex at home without having to go hunt down a partner. Either choice eliminates possibilities and opens up others. The key is your perception of which of those choices are more preferable, and, ultimately, which of those open up the most future opportunities to affect reality. See what I mean?
I would be very interested in examples of common behavior that could be shown unexplainable by this principle.
I think it could be argued that this “sense of right and wrong” is a personal set of generalizations based on what the agent has learned (or been programmed to know by instinct) about the consequences of various choices.
Are you aware that submissives always have a “signal” they can give if the game is going too far? It’s like a code word, so that I can beg you to stop, and yet you will keep going, but if I give the signal, you will stop immediately. The presence of this signal means that I am, at every moment, actively choosing to submit.
I understand what you mean. What I am suggesting is that if we can examine this preference in its full context, we will uncover a lower-level belief that it is a choice that opens up options of some kind.
This is where I got the idea that you were speaking of some “higher purpose”. I am thinking of the “higher purpose” as being a generalized solution strategy–e.g. seeking peace on earth is a good strategy for opening up options, or some such.
Note: I skipped some stuff that seemed to be restatements of things that were already covered.
That’s fair enough. I meant that the rock cannot act without being acted on in a direct physical way from another thing. That is, it does not “react”, it is acted upon.
I’m not really either, and it is not a very important one. I’ll drop it.
Yes, perhaps. But the dividing line is not important. We can still look at the qualitative difference between the actions of amoebas and fully formed human brains. We do not have to be able to say that the 1 billionth brain cell is the one which causes the quality to change.
This is a good question. I’m not sure how I would prove this objectively. Intuitively, I think introspection and observation of humans and other animals is sufficient to claim that humans operate by choice while most other animals operate on instinct. I think it is sufficient to claim that instinctual behavior is qualitatively different from choice- or decision-driven behavior. But if I had to give a sound objective definition I might fail.
I think I have already proposed two. They may need further examination, however.
No, not at all. In any reasonable measure, the number of choices has diminished. Can you explain the math being used by the physics specialist which suggests that his choices have increased? I understand that he can now entertain problems of physics, but he has chosen to ignore problems from a far greater number of disciplines. This seems to me a pretty straightforward reduction in his choice opportunities.
No, not at all. I understood everything until that last phrase. Just because the choices represented by my wife are preferable to me, how is it that they are greater in number than those represented by every other woman on earth? I could conceivably keep my options open and have children with every other woman on earth, or even just 2 of them. This alone seems to be a greater number of potential choices than monogamy would allow.
Yes. In fact it is said that the submissive actually controls the scene. However, the experience they seek is one of submission. One of giving up choices. They do not achieve this in real life by and large. It is impossible to do in our society. However, it does point to a desire not to maximize, but to actually minimize choices. I agree that it is illusory in this case. I don’t think it is in the others, however.
I think we can concentrate on these three common behaviors for now. I think they show choices which in fact reduce the number of choices but which are in fact preferred quite often by people.
I’m not sure that you can get away with this. What it leaves you with is a definition akin to the famous pornography “I know it when I see it”. I don’t think you can move forward with what you are trying to achieve without getting these “definitional” issues nailed down. Otherwise, everyone argues in circles based on dueling assumptions.
Can you say that “introspection” and “observation” are not instinctual behaviors by humans?
I think you are confusing the results of the choice with the limiting nature of choice itself. In other words, you are acting as if it were possible to eat your cake and still have it.
This is true, but he did not have the option of studying physics, medicine, philosophy, and literature–at least not at the same time. Every choice eliminates all the options not taken. The key issue is the resulting set of choices once the step has been taken down the path. At the same time, not selecting physics is, in effect, not choosing at all, which eliminates all the other ones and eliminates physics as well. The simple answer I would give is that the person believes that physics is the kind of knowledge that will enable him to do more choosing in the future, once he has mastered it.
If I choose not to eat the potato, nothing changes immediately. At some point, I starve and no longer have any choices. Once I eat the potato, the choice no longer exists, but hopefully it keeps me alive long enough for another potato to come along. Is this helping?
You can’t choose every other woman on earth. You can only hold open the “possibility” of every other woman on earth. And until you choose one of them, there are many options not available. In the same way, if you choose to randomly impregnate women, you are probably losing the option to “control” how the resulting children are raised. In one case you are choosing to change reality by populating it with children. In the other, you are choosing to change reality by populating it with fewer children of a more specific type (raised by you). I would suggest that raising the children allows you to make changes to reality that are more “controlled” and, depending on your viewpoint, more far-reaching.
I think you are saying you could choose not to choose. While you wait, choices are becoming unavailable (women are getting married to other men) and you aren’t changing reality much.
I’m not convinced that this is so. Without getting too wrapped up in the details of human sexual cravings, I don’t see this as much more than choosing one kind of sex over another. And which kind a person finds exciting is wrapped up in the way their brains are wired and how they have grown to view “being in control” in human interactions. And I do definitely note that there is potentially a difference between “being in control” in the sense of a human exchange and in the sense of “changing the world” in general. Oddly enough, I am reading a sci-fi book right now in which the heroine is a “born” submissive who, while choosing times and places to behave as one, is having profound impacts on the world she inhabits.
I think we need to distinguish between narrowing down a current set of choices and producing future opportunities to make choices. If I am at a fork in a labyrinth, I have two choices. One way takes me into a cul de sac. The other takes me to an intersection with 4 exits. I think you are confusing the current choice (eliminating one of two choices) and future opportunities (cul de sac or 3 new possible choices). Note: I could also stand still, in which case my future choices are not really changing (except that I am getting older and increasing the chance that I might die before I get out of the labyrinth).
This is fine, but I think it’s important to confine ourselves to the expected results of the choice. I think that the labyrinth example is the most clear in terms of what I mean. If this is clear to you, I would suggest that explaining the taste for bondage would merely be a matter of explaining why the person thinks that bondage is the path that leads to the intersection and non-bondage is the choice that leads to the cul de sac. This, in fact, is what I have been trying to do.
Yes. But as I said, it is not necessary for my argument. We just got a little sidetracked with this subject.
Sure, I can even say them on one foot.
I did say I cannot prove the assertion. I also said I was not holding it as proof of much in this thread. Just MHO, if you will.
No, I think I am accusing you of this.
I agree entirely. I think your formulation requires that the choices opened be greater in number than the choices closed off. I am suggesting that there are qualitative differences which must be taken into account. Specifically, such qualitative differences may disprove that everyone acts to maximize his choices.
But not “more” choosing surely. More than if he studied nothing, I grant you. But not more than if he studied a couple other things. And probably not more, even, than if he studied something else altogether. My point is that there is no good way to define this “more choosing” in cases like this.
No. I think we are getting our specifics and generalities mixed up again. I agree that eating the potato can be described as increasing one’s choices. I disagree that this fact can be generalized to every moral decision a moral agent makes.
I can’t certainly. (See, honey, I admitted that was not an option.) But just as in eating the potato, not getting married at a specific time might allow for more choices in the future. Getting married, on the other hand certainly disallows those other choices. Judging from a strictly number of choices basis seems to suggest that getting married is a bad choice.
Yes, but the more controlled part is a qualitative difference.
If you don’t mind, I’d like to drop this line of reasoning. We can concentrate on the others.
Ok.
No. I am not. Imagine, for instance that the cul de sac is instead a 10 way branch. Your system suggests that all moral agents take the 10 way branch portion of the fork, no?
I’m suggesting that some moral agents may still take the 4 way branch because they like the lighting in there.
Agreed. If we are willing to make explicit statements about what sort of generalities specific labyrinth characteristics are supposed to represent. I agree entirely that the expected results of the choice are what we should concentrate on. I would argue, for instance, that if the 10 way branch looks like a cul de sac from the fork, your system still suggests that the 4 way branch is the one we take. Is that right?
I understand that this is what you have been trying to do. I have been trying to suggest that some people may, in fact, be happier, more comfortable, feel safer, or perhaps even simply prefer the cul de sac.
I’m going to jump back in here to answer this particular paragraph from my point of view. Hopefully it will shed some light on this point.
I do not believe that there is any goal that is common to all people at all times. Even breathing. Or eating, living, feeling happiness, having choices, etc. A person who has decided to drown themselves does not have a goal of breathing, they have a goal of not breathing. Less drastically, a person who is holding their breath also has a goal of not breathing. A person who is most comfortable following someone else’s direction does not have a goal of maximizing their personal choices; they may, in fact, want to limit choice as much as possible.
At any given point, if you attempt to state that a given goal is common to all people, I can find an exception. To me, for an absolute morality, or even an absolute meta-morality, to be valid, it has to be able to be applied to all people at all times. And I do not believe that this is possible. There are just too many factors to take into account.