This may be true in most cases. But I’m not sure it is necessarily true. I have known people, for instance, who are only happy when they have no choice at all. It seems comforting to them.
I have been hinting that I am using the term “life” in a way that is not limited to biological functions. I have been promising to provide a clearer definition. I have been thinking about this for the last 2 days. I am ready to give it a try. Please be warned that I am not suited to this task. I think it requires a much wiser head than mine. Perhaps with your help I can finish it.
The life of a moral agent is not merely a collection of isolated events, situations, and decisions. It is a whole in and of itself. When we say that a person lived a good life, we do not mean that he lived a life completely free of badness. We mean that on balance, taking his whole life into account, it was a good one. This is a hint at the way I mean living a life on earth. It means living each day of that life as if it were a part of something larger than that day, hour, minute, second, or thought. It means living each moment of your life as if your whole life were an end in itself. The end, or purpose, if you will, of each of those moments.
Implicit in this use of the word “life”* is the inclusion of the biological function of the agent in question. A life spent trying to end itself seems a contradiction in terms to me. However, the mere biological portion of such a life is also not the end or purpose of said life. Biological functioning is only a means to that end. That end, again, being the “whole life”* of the agent.
Take, for instance, a person who dedicates his “whole life” to the defence of his country. I can see instances where such a person could give up his biological life in order to carry out that purpose. And as I said in an earlier post, this does not seem to be a giving up of one’s “whole life”. Such an act, in fact, could be construed as serving the purpose to which said life was dedicated. It could be said to be in service of that “whole life”.
At the same time, however, it does not seem to me that all forms of suicide would qualify under this standard. Randomly killing oneself, for instance, would not seem to be in any way a furtherance of one’s “whole life”.
I am not sure I am totally satisfied with that formulation. Allow me to offer it on a provisional basis for now. I think it communicates the gist of what I mean.
It is in this sense that I think the “whole life” of the moral agent might serve as part of an objective, unique and absolute framework from which to evaluate moral systems. I am not suggesting that it be the primary part of such a framework. And, in fact, I am only suggesting it because I think it is something all moral agents must necessarily have in common. As always, my main purpose in this thread is to reduce the barrier that moral relativism poses to a discussion of morality.
- can I use “whole life” to delineate the way I mean life?
** Yea, I think “whole life” works quite well.
yes, i think so.
i’m not convinced this is true in general. one thing to keep in mind when assessing the “happiness” of an individual is context. i think in certain contexts, your formulation of happiness might work out, but in others i think it fails. for example, some people (myself, for instance) are more or less amenable to a number of different options, and we’d prefer that someone else choose. this seems to clearly limit the choices we have, and by a conscious effort. this seems more of a discussion of what causes one to be “happy”, though, and might provide an objective measurement of “happiness”, but i still think it is a possible cause of “happiness” and not its equivalent.
once a moral agent dies he is no longer a moral agent, in my opinion. a will is something that will cause happiness during the life of the agent by giving him the knowledge of what will happen after his death. some people don’t believe in wills (or life insurance) because they don’t care what happens after they die. others look for “peace of mind” from these things.
when i was thinking about my attempt at a formulation of an absolute morality, i initially had a problem with the conflict resolution part of decision-making. i thought it would be ambiguous given that different people have different things that make them happy, and that it doesn’t provide a way for resolving the problem you (i think) presented with potato division. however, it could be considered unambiguous if we define a “right decision” to be the same in the same context. since everyone is satisfied to different degrees by different things, they are in a different context. also, knowledge that someone else wants your potato is also an addition to your decision-making context.
you seemed to agree that morality had a purpose to serve in reality and since reality was absolute and objective, morality should be absolute and objective. i said that in order for that “ought” to hold, it would seem to require that the purpose of morality extend to reality. since we, the moral agents, define morality, it seems as though it can have no purpose with regards to reality (it certainly doesn’t further the [nonexistent] purpose of reality) and it seems that the only purpose it has is the purpose we give it. thus, there is no reason to believe that the objectiveness and absoluteness of reality has any bearing on what objectiveness and absoluteness morality ought to have. they are of different kinds.
what i mean by “non-arbitrarily” is that all bets are off if the choice is randomly made. if we make a choice without thinking about it, we’ll hardly wish to debate whether it was a moral choice or not.
also, i am suggesting that a human moral agent will not choose the less-preferred option because he is incapable of doing so. if he chooses something that makes him less happy than another option might, it is because of imperfect information.
no, i am suggesting that his happiness is defined as the satisfaction he derives from the outcomes of his choices. granted, his happiness may be derived from other areas that have nothing to do with the choices he made, but i think we can agree that this should have no effect on his morality–it is simply further information to consider. the purpose of my proposition is not to define happiness, but to say that that definition of happiness (a sort of “happiness for the purposes of evaluating moral decisions”) is quantifiable.
that is more or less what i am saying. in my formulation, the only way a moral agent can perform an immoral action is due to a lack of information.
given a set of moral agents each with perfect information about the outcomes of their decisions and the degree to which they will be satisfied by them, each will behave morally and we will achieve some sort of equilibrium of happiness. some moral agents might think others behave immorally, and i’m not sure that destroys my theory as an objective and absolute morality. i think the different natures of moral agents makes it impossible that agents will all view each other as moral, even if they are following the same morality.
however, some people think money and only money will make them happy. they eventually find that this isn’t true, and they behave in many immoral ways to achieve that end. i think if people spent more time considering what it is that makes them happy, we’d have a much more empathetic (people make people happy) and content (if a bit sociopathic) world.
i too think i may be on the right track, and i appreciate any opportunities to make my theory more robust. i think it more closely approximates my own morality than anything yet discussed.
This is something I have been thinking about also. I don’t think you can describe a rock as a moral agent. If you observe 4 entities, 2 of which are rocks and 2 of which are kangaroos, you find that the kangaroos are doing things by choice but the rocks are not. If the kangaroo swipes the rock with its tail, the rock rolls some distance. There are no cases where the rock rolls without being swiped, or is swiped and refuses to roll.
In this case, I would say that 2 of the entities are moral agents and two of them are not. The rock is not a moral agent because it is not capable of making choices. The kangaroo is a moral agent because, having the ability to make choices, the kangaroo cannot decline to make choices (choosing inaction is a choice).
Given this, all moral agents share an absolute morality: it is good for them to have opportunities to make choices that impact reality. At the same time, each of these absolute moralities is unique in that it is centered around a different moral agent, with a different existing set of available choice opportunities. The moral agent is continually sifting and choosing different “subsidiary” or “relative” moralities based on the agent’s perception of how well they satisfy the absolute morality that it is good for them to have opportunities to make choices that “control” reality.
So far, I am unable to think of any examples that disprove this. As you mentioned, in most cases, the best “relative” morality for the agent is one which keeps its organism alive. However, in the case of a soldier, by striving to choose the outcome of a war, the soldier believes he is making more far-reaching choices (winning freedom for his offspring; i.e. choosing what kind of choices they will have). It all translates into an absolute morality that approximates to “I want reality to reflect choices that have been made by me.”
And, in fact, would seem to be the ultimate immoral act.
Yes, the more I think about it, the fairness thing is just a consequence of trying to maximize opportunities to choose. It seems to be the only conflict resolution approach that moral agents would ever be able to agree to, since conflict sets their absolute moralities against one another. By making a trade, each is able to move forward without having to “lose” a net opportunity to choose.
And, at least for humans, I think being set apart from human contact would actually limit a lot of opportunities. For instance, there would be no opportunity to choose to have sex and no opportunity to choose to eat food that someone else made–the human would have almost no choice but to make food for itself.
I think I’ve answered this, but if not, can you ask it differently?
The way my thoughts are heading, I would now say “moral good is a goal for moral agents to seek when making choices. A moral system would be a set of tools to evaluate choices and determine which ones are better than others so that the goal is most likely to be achieved. And the goal is to make choices that affect reality.”
I think this fits right in: All is “right” with the world when the world reflects choices that have been made by me.
When they make no choices or when they make no survival-related choices? Suppose they have determined that they are not good at making choices related to survival and that the better option is to “copy” choices of another moral agent. For instance, I might choose to live according to the Bible because I believe that this will get me to heaven and thus guarantee an ability to keep making choices all through eternity? Or I might choose to live with my parents and devote myself to choosing what TV channel to watch.
Does this not mean that he made choices similar to the ones I think I would have made if I were him? And, to some extent, that he made choices that led to outcomes that I preferred?
Would it not be simpler to say that the life of a moral agent is the time during which the agent is able to make choices? Would it not also make sense to say that the “result” of that life is the degree to which reality reflects choices that were made by that moral agent? Is this what you are getting at when you talk about a “whole life”? In this sense, I would say that the measure of “success” of that life would be how much the moral agent was able to affect reality, and the measure of “goodness” would be the extent to which I approve of these effects.
Would you say that this is coming together with what you were aiming for, or is it diverging in another direction entirely?
-VM
No, this is not quite right. What I’ve been saying is that morality is a system of beliefs used by moral agents to decide right and wrong. I’m suggesting that since the beliefs, the moral agents, and the actions they take have to exist in reality, reality might contain a framework from which we can judge moral systems as appropriate or not to the task of being a morality. Furthermore, I am suggesting that since the moral agents have certain characteristics in common, they may in fact have certain goals in common as well. This might also lead us to a framework from which to judge moral systems.
What I’m saying with regard to the purpose of reality and morality is that while all of reality itself may be purposeless, the definition of morality may impose a purpose on it. At least as it is used by real moral agents.
I’m not certain they are of different kinds, but you are right here for another reason. I am not claiming that reality’s objectivity or absoluteness imposes an absolute or objective character on morality. I’m simply suggesting that some characteristics which exist in various entities (moral agents, beliefs, actions) may suggest a framework from which to judge moral systems.
Mostly, but I think some hard relativists might insist that “non random” is itself an arbitrary framework from which to judge morality.
Wait a second. Are you simply proposing a way we can measure the choices of an agent from the outside? As in if we see an agent choose one thing over another, we can assume that absent randomness the thing he chose makes him happier than the other? Are you not saying anything about the morality at all with this statement? Rather you are saying how we might measure the morality of others?
See there it is. I would be happier with a word besides happiness. Satisfaction is closer, IMHO. Google gives me “state of well-being characterized by emotions ranging from contentment to intense joy” for happiness. My problem with it is the emotional part. I’m not certain that the emotion of happiness is useful as the highest measure of what is good for a particular person. Emotional states are far too transitory for that sort of thing. Choosing something which makes you happy now over something which could make you happier later seems like a valid choice under that formulation. I don’t think that is what you are shooting for, however. Is it?
If you are simply trying to propose that we can measure (to some degree, not necessarily completely) what an agent thinks (or at least a portion of his morality) by his actions, then I agree with you. I’m not sure it is necessary that we be able to tell very precisely at all what another’s morality is for the purposes of this thread.
That is, if we were to propose a framework for evaluating moral systems which was not itself a moral system, unavoidable, objective, unique, and knowable, and we were to develop a morality based on this framework which we felt was absolute, then I’m not sure that the complaint that we could not measure how well certain agents (or any agents) had adopted it would be valid. I’m not sure an inability to measure the depth to which a moral agent held a particular morality provides much insight into that morality’s absoluteness. I suppose if we were to propose collecting evidence from agents who were using it we might need to know how well they were using it. But I think that may be outside the scope of this thread.
Did I understand your point about only taking actions which make an agent happy, or did I miss it again?
But isn’t it possible for an agent to choose to disobey his morality out of spite, or some other emotional state? Or does this count as non arbitrary?
That’s the utopian ending to the story, yes.
No, not at all. I’m not sure it does, unless you are saying that some agents view others’ morality as bad even if they acknowledge that it is the same morality they hold. Then I think it might. If I hold a particular action (even “choosing my own happiness”) to be moral, I cannot hold it to be immoral when others do it without becoming subjective. That moral system becomes simply “The happiness of pervert”*, rather than the happiness of the moral agent.
No, one of the conditions that erislover laid out earlier (post #110) is that any morality should judge itself as good. If the moral agents recognize that the other moral agents have adopted the same morality, they should judge each other good.
Be careful about this track. Money does in fact make some people happy. Being alone does it for others. “Some folks want to abuse you, others want to be abused” as the song goes. I think if you found another word, like benefit, or profit (mental or spiritual, not material) you might be closer to a satisfactory formulation.
I’m having fun with this. I have not thought about morality this hard since high school. (long story)
*I hope none of you mind if I drift off into a fantasy about this particular moral system for a while.
Doesn’t this depend on how you see the scope? For instance, I support libertarianism, which promotes individual choice. A short-sighted view would be that I should want to be a dictator. However, my view is that, if there is a dictator, it is not likely to be me (and if it is, people will be trying to kill me); whereas, a more even distribution lessens the need for me to find ways to overpower other people in order to get choices for myself. Also, by proposing something that seems fair, I think it should be more likely that others will agree. Obviously, you can see how my knowledge and interpretations play into my theory as to what is the best way to maximize my opportunities to choose.
Does this address your comment? If not, could you give me a specific example that I can think about in this context?
The more I think about it, the more I think it may well be the equivalent of happiness. I think it would be interesting if you could think of some examples where this would not be the case so that we can explore them.
I agree, but reality may yet reflect choices made by him and, while alive, he may get “happiness” from contemplating the future changes in reality as a result of his choices.
I think this may actually be more related to the biology of the agent. My brain may not work in such a way that I am likely to think of my death as probable. Or it may be that it doesn’t work in such a way that I try to “see the future” in this way. For example, a mentally retarded agent will be very limited in some of the kinds of choices that are available for him to make, or that he is able to recognize as choices.
If I have understood this correctly, I would say that we are in agreement–or very close to it.
-VM
I’m not sure you have gone far enough. But this is certainly a step in the right direction. I’m not sure you can call a kangaroo a moral agent. I think its instinctual actions remove choices more than not. I don’t have a study or anything to back this up, and I’d be willing to learn it was not true, but life alone is not enough for an entity to be a moral agent.
This also is a step in the right direction. I agree that moral agents want to make choices (what you say as “control reality”). But I think there is more. Moral agents, all moral agents, want to control reality for specific purposes.
Let me say that I am not convinced of this yet. I see the internal consistency of it though.
Possibly, just above or below preventing another from choosing.
I would go a bit farther. Cooperation has the ability to magnify the physical output of the participants. That is, cooperation can lead to a greater result (a larger impact on reality) than any action alone.
Nope. You answered it just fine.
This is a good step. I don’t disagree with this at all. I’m still not certain that “making choices that affect reality” is the highest good absolutely. But I agree entirely that it needs to be at the top of any absolute morality.
Further, I appreciate the formulation of a morality judgement engine based on choice maximization in that it does not require the moral agents to be humans. This seems to generalize it more than my formula.
Or when my choices allow me to fit in the way I want. Yes, something like that.
Yes, when they make no choices. Obviously, they make the choice to give up the right to make choices, and to some extent they continue to choose that lifestyle with each and every choice. But people have an astounding ability to experience happiness at almost any sort of stimulus. Kidnap victims identifying with their kidnappers, masochists enjoying pain, that sort of thing. The only thing I was saying is that I think tying morality to the emotion of happiness seems a bit transitory.
No, forgive me, I am dropping the context again. Yes. When person A literally says that person B lived a good life, they mean that person B lived a life that person A judges to be good. That is, from the framework of person A’s moral system, person B’s life was good. The only thing I was getting at with this statement was that the “life” in question was not considered as some homogeneous collection of independent actions, beliefs, or even outcomes. When we talk about a life this way, we talk about the life as a whole thing. A story perhaps, or a sequence. But something whole and different from a simple collection of events.
Well, this defines it in time, yes.
This is certainly one way to measure the result of a life. It does not get at the experience of that life, however. It only measures the external effects of it.
If you mean the “results”, then no. That is not exactly what I am talking about. It is part of it certainly. But I think it is not enough. The outward effects of a well lived life may not tell the whole story. I certainly would not want to say that the purpose of a whole life is to leave as many lasting effects as possible. Would you?
There are still differences certainly. However since we seem to have agreed on quite a few things over the course of this thread, I think we are coming closer together. Do you agree?
And someone who finds contentedness in inflicting pain and harm on nonconsenting entities and does so, do they have the same moral system as someone who finds contentedness in healing sick puppies?
Do these people have the same moral system? Do they share axioms in any meaningful way? One places causing pain very high; one places easing pain equivalently high.
If you’re not willing to use the English language as defined and used, you will have to teach me the language you prefer before we can continue. If you’re not willing to accept the basic premises of logic systems as they are taught, you will have to go back and teach the logic systems you prefer.
Given that I am not willing to go to that level of effort to argue on a message board, however, I think it would be a wiser expenditure of both of our resources to use standard English and standard logical procedures.
Ye gods and little fishies. I am sorry that you consider discussing a shared interest with my husband to be some sort of personal insult. Good grief, man.
But I still do not accept your axioms; they strike me as self-evidently wrong. The fact that you consider them self-evidently correct strikes me as evidence for my position. And again, we have axiom lock – I don’t accept yours, you don’t accept mine, thus we don’t get anywhere.
I don’t consider people who set cats on fire for fun to be operating from the same set of moral axioms as those people who spend their time volunteering in soup kitchens for fun, even if both claim to be motivated from an urge to personal satisfaction. You seem to be arguing that “personal satisfaction” means that they have, at root, the same absolute system, even if it means that they always make opposing choices.
Moral systems don’t make choices. People do, using tools that include moral systems. Moral choices are rooted in what particular actions mean to people, and meaning is a process that requires the subjective workings of a brain.
Look, this is where people get their moralities from, as far as I can tell:
They start out with the basic instinct-driven things, you can see this in babies, the rawest roots of protomorality: some form of “avoid pain, seek pleasure”. But even in babies you can find differences in preferences, differences in tolerances, pain tolerances, rejections of particular forms of food. There is no universal preference; the choices of infants reject foods, toys, activities, sounds, based on nothing other than the subjective preferences of the baby involved. Some are particularly prone to some sorts of behaviour that others never show; some have more needs in some areas than others; some just don’t have resources that others have.
Then you have the layers of upbringing, experience, developing psyche, all of which are influences on the chaotic system that’s developing to encompass choices that are more complicated than pain this next moment or pleasure this next moment. Delayed gratification and a sense of time, empathy, sense of self, comprehension of consequence, and a myriad other things appear over time, shaped by the raw personality, the developing brain, and the experiences that feed into the system.
At some point, this system of choice-making developments is deemed sufficient by surrounding society to make those choices without supervision from people with more complete systems – “the age of reason” is one formulation of this, as are age of consent laws, ages at which people can enter into legal contracts, and other recognitions of passage into adulthood. Typically, the moral system of someone at that age has modified significantly from the protomorality they started evolving their own system from, because that protomorality does not serve as a basis to make adequate choices – it is not sufficient to form a moral system of itself.
And at any point along that process, you can find someone who has made a different choice or evaluation of principle, either consciously or subliminally, to better suit their personal preferences or beliefs. Some of them are genuinely broken people making broken choices, but not every differing choice is defective. The end result is that you have moral systems that are assembled by individuals based on a refinement of their raw natures and preferences in the crucible of experience and culture.
These are the only absolutes I can see: that meaning is constructed by human brains, and that when the interpretation of a human brain is involved, subjectivity happens.
If you disagree with that, then we, again, have incompatible axioms, and there is nothing to productively argue about, so thrashing for another three pages will not be any more productive than the past three-and-change have been.
So…
Can I get a consensus definition of what an absolute, objective morality actually is? What sort of thing is it, exactly? To put it another way: how about a consensus on what it would mean for such a thing to “exist” (like the same way an apple exists?)
Most intriguingly: isn’t it more of a verb than a noun (in the sense that it isn’t a static thing out there somewhere, but rather an action that must always be tied to some particular moralist or moralists who are moralizing/judging/valuing/whatever?)
no, that is not what i’m trying to say. i mentioned the judgement of one agent’s morality from another agent’s morality because it struck me as a possible hole in my formulation. i did not mean to dwell on that overmuch. rather, i am trying to present a framework of morality that is both absolute and relative (if that’s possible).
we could use the word mvemjsnup for all i care. i don’t know if i want to get into the specifics of how i mean the word “happiness”, as it would certainly have to include myriad things that i think are beyond the scope of this thread. i’m only interested in saying that there is some individual quantity that people seek to maximize when making any choice, and that it is quantifiable. i may continue to call it “happiness” in this thread, for the sake of convenience and consistency.
i’m not suggesting that we measure it. i’m suggesting instead that an individual making a choice has an idea of the degree to which he will be satisfied with the results of his choice.
if by this you mean that my formulation of an “absolute morality” (i must use the term loosely, now, i think) bases decisions on the degree to which an agent thinks he will be satisfied with them, you have again got my point.
if he chooses to disobey what he thinks his morality is for any reason, i think he could only do so because he thinks he would gain more satisfaction by doing that.
what i mean is in certain situations the “happiness” of one agent might require the “unhappiness” of another. suppose agent A is made happier by making agent B unhappy. then, the actions taken by A will make B unhappy, and as such, B will think A is acting in an immoral way. B’s morality is based on B’s “happiness”, after all. i’m still not sure what effect this might have on my formulation. perhaps Lilairen can shed some light on this for us (see my upcoming response to her).
each agent is equipped with different tools for evaluating his and others’ morality. as a result, each can judge the moral system itself as “good” while judging the actions of other agents as “bad”.
fair enough. i’ve seen more people deluded about the “happiness” they can gain from simply acquiring more money than i have seen people truly satisfied by monetary gains, so i chose to generalize in that fashion. if money truly satisfies a person, i concede that they have not erred in trying to acquire more of it.
why not? have you read my “propositions” that i think might lead toward an “absolute moral system”? i’m interested in your thoughts on them, especially because of this statement.
i will restate them, in what i hope to be a clearer manner, here:
proposition a: when making decisions, moral agents cannot (and therefore do not) choose an option which will, on the whole, lead to an outcome they consider less satisfactory than another option (of which they had knowledge). that is, they will always choose the option that they think is best for them.
proposition b: there is a single quantity that an agent seeks to maximize when making a decision, and that quantity can be estimated by the agent.
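to make these a little more concrete, here is a rough sketch in python (my own illustration, not part of the propositions themselves) of the decision rule the two propositions describe, assuming the single quantity from proposition b can be written as a number the agent estimates for each option he knows about. the option names and numbers are made up:

    # sketch of propositions a and b: the agent assigns an estimated
    # satisfaction value to each option he knows about and always picks
    # the option with the highest estimate. the estimates reflect his
    # (possibly imperfect) information, which under proposition a is the
    # only way a "worse" choice can happen.
    def choose(known_options, estimated_satisfaction):
        # return the known option whose estimated satisfaction is highest
        return max(known_options, key=estimated_satisfaction)

    # hypothetical estimates for the potato example from earlier in the thread
    estimates = {"keep the potato": 7, "trade half of it": 9, "give it away": 2}
    print(choose(estimates, estimates.get))  # -> "trade half of it"

none of this defines what the quantity is; it only illustrates the claim that the agent picks whatever he estimates will satisfy him most, given what he knows.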
to me, if person X makes choices that seem to always be in opposition to the choices person Y makes, it is not because of different moral axioms, but because of different tools for evaluating their decisions and because of different information each has regarding those decisions.
i’m still not convinced your formulation is the same as mine. what i mean is the degree to which one will be satisfied by choosing a certain option.
do you believe it is impossible for one to choose an option which gives him the most satisfaction but does not increase his choice-making capabilities? or, in your more recent turn, do you think it is impossible for a person to see the vast degree to which he has affected reality and be unsatisfied with those effects, even if he opted for the maximum at every chance?
also, i don’t think i’m clear on what you mean by increasing the ability to make more choices. i honestly have no idea why that would make a person more happy, but i expect you will elaborate on that.
Based on this post, I would say yes. It seems that most of the disagreements are primarily semantic, or a result of the fact that we’re still trying to work out the logic. It is not helped by the fact that you like to think in abstractions and I don’t. For me, I can’t really internalize them until I can start telling stories around them. This was true even when I was taking higher-level math classes. It is particularly troublesome when I try to understand things like relativity–I usually wind up with questions built on bizarre thought experiments involving flying carpets, digital clocks, and slingshots. It is my way of doing, “If this is so, then what if that?” And I am not good at positing “generic” implications of abstract notions. If you say, “What if a moral agent were to X?” I do this mental thing where I start imagining various things that might be moral agents actually doing, or attempting to do, X.
This, I don’t think I agree with. I would say that anything that can choose to go one way or another, as compared to a rock, is a moral agent. Even so, the moral agent cannot use tools that it does not have, and the kangaroo’s brain doesn’t do much more than apply instinctive behaviors, much like a very simple computer or a very complex machine. The result of this would be that it doesn’t have the tools to pursue the goal in any way other than to purely pursue survival. I guess that, in that way, it can’t do much more choosing than the rock can. But it does learn to a limited extent. Then again, I have spent quite a few fruitless hours trying to come up with some notion that makes the workings of my brain anything more than predictable (if unbelievably complex) chemical reactions. Does the fact that the chemical reactions taking place in neurons create in me the impression of free will mean that it is so? In other words, if the kangaroo is not a moral agent, I’m not convinced that either one of us is.
We’re still stumbling around the definition of life. To me, life in the sense that we are talking about would almost have to be the ability to take purposeful action, i.e. make choices that are meant to alter the world, or “reality”. A single-celled organism meets this definition, but it really only has the capacity to make one kind of moral decision: Pursue continued survival. It has no tools whatsoever to predict or contemplate, yet it takes actions intended to keep it in a “live” state that allows further purposeful action. I’m not sure what to think about the occasion when it splits into two organisms. Look at it one way, and it has just doubled its opportunities. Look at it another, and it has just created competition. It is almost as if the organism is so determined to change the world, it will create competition for itself. I don’t know, maybe there is a higher absolute? I want to affect the world, but will risk giving that up in my insistence that the world be affected by SOMETHING. Or, maybe I take credit for changes wrought by another moral agent that I have caused to exist. Not sure what the implications of that might be.
Right, but it seems to be a somewhat circular purpose. I want to change the world in ways that allow me to do even more changing of the world or to continue changing it.
I agree with this, and I think it may be the reason that the most successful moral agents would be those that learn to resolve disputes by sharing a “fairness” rule. Which would mean, I think, that there is some sort of built-in recognition that if you cause the exact same change that I would have chosen, it is as good (or almost as good) as if I had caused it myself.
Just remember the qualifier: Making choices that affect reality in such a way that even more opportunities to choose become available.
This tendency on my part is probably a result of my impression that the difference between us and kangaroos is a question of degree (we’ve got a far more flexible and effective brain than they do) more than a question of “here are two things that are completely different”.
Also, I think that this drive to change reality in a way that produces more opportunities is something that reality sort of imposes. If a potential moral agent were to happen along that did not strive in this way, it would not be a moral agent for long and, since it did not strive to change things, there would be little evidence of its passing.
I would say that this would be a case where they have already made choices that are working out in a way they prefer. If I have made choices that have now put me in a position to change TV channels indefinitely (and my brain doesn’t function in such a way to see this as limiting or lacking in ambition), then perhaps I have reached a place where I have enough opportunities to make choices (and no knowledge of threats to this situation continuing) and I can relax and choose away. If that makes any sense. It may need further thought.
I would suggest that these are examples of working with the brains that we have. We learn to recognize “compassionate” behavior as being in synch with our goals. Stockholm syndrome usually sets in after the victims come to think of the imprisonment as just being “reality” and since the kidnappers are feeding them and caring for them, these evidences of apparent concern start triggering the brain processes associated with identifying friends.
As for masochists, I think there is a certain amount of “broken brain” going on. Usually pain is a signal from the “organism” that something is threatening survival while pleasure is a signal that something is good for it. When pain signals are misinterpreted as pleasure, I can’t help but think that it is just an example of the wires being crossed and the moral agent operating on uniquely bad data.
Well, each experience of the various kinds of happiness is definitely transitory, but the pursuit of more of them is ended only by ending the pursuer.
I don’t think there’s any disagreement here. I am thinking of it as me reflecting on the ways in which you have altered reality that resemble the way in which I would prefer to have it be altered–those cases where it is as if I had done the choosing.
I’m not really sure what is an external effect and what is an internal effect in this context. Is the release of endorphins, which my brain is wired to interpret as “good for the organism”, an internal effect or an external one? If a change only affects me (or “my” organism), is that not a change to reality? Where exactly is “I”, and what things are internal to “me” and what things are an “external” reality? I sometimes wonder if we will ever find a set of neurons (or one neuron) that contains the notion of “I”, or if kangaroos have such a thing. And what would happen if it were removed. Along these same lines, while I am thinking of myself as a whole, I could also be thinking of myself as a collection of very rudimentary one-celled moral agents that have happened across an astonishing way to cooperate in their efforts to keep making purposeful changes to the world. Would that make cortical neurons some sort of localized dictators? I’m definitely getting a little dizzy with this.
The way you’ve asked this, I am wondering if we are thinking of “lasting” and “effects” in the same way. Is there any purpose I could pursue that is not some sort of “effect”? In the same way, when you say lasting I think you are mainly considering time, while I am thinking of it more in terms of the number of opportunities for choice by me or other moral agents that are impacted, caused, or eliminated by my choices, whether these dependent choices happen during my “life” or not. Does that make any sense?
In terms of the implications of all this, I definitely do not see this as leading to some sort of sociopathic society of humans. We are pretty complicated moral agents with quite an array of tools and approaches at our disposal. The way I work the math, trying to bend other moral agents to our will is probably self-destructive in the long-term. And based on the way that we can work together to “create” options that did not exist, it makes sense that a higher-order or more successful moral agent would be one that can recognize the benefits to itself of cooperation and compromise with ostensible competitors.
-VM
But these people are aimed at manifesting different goals.
Simplistically: Even if persons X and Y have the same tools of evaluation and the same information, and both are aimed towards what they find the most satisfying result and consider that the greatest good, if person X uses those tools to determine what will cause the most pain, and person Y uses those tools to determine what will prevent the most pain, they are using different systems. If they were using the same system, one would not be likely to consider the other’s system morally inferior.
In reality, the problem is more nuanced (as there are very few people who always choose the option that will cause the most pain). But people differ in where their morality falls on a spectrum from self-focus to global focus; they differ in the timescale they use to evaluate their success; they differ in the traits and outcomes they value; they differ in their awareness of consequences of their choices; they differ in their basic beliefs about how the world works; a whole host of differences make the question nuanced.
Even if there is something that every moral agent is trying to maximise (which I consider too simplistic, but will grant for the purpose of argument), that thing will differ from agent to agent. It’s at best a meta-morality trait; “the agent will choose whichever of the options is the most satisfying to them” is a prediction about the nature of the choice made, but it provides no information about the system used to evaluate that satisfaction – the actual mechanism of morality.
I’m definitely not saying I think moral agents are psychic. I would say more the degree to which one believes he will be satisfied by choosing a certain option, based on the information and tools for evaluation he has at his disposal.
At this moment, I am thinking that, if you examine how the moral agent is measuring “satisfaction”, that it will in some way equate (or seem to him to equate) with increasing his choice-making options (if not necessarily his capabilities or facilities with which to do so). The grossest simplification I can think of right this second would be, if I don’t kill myself today, tomorrow I can decide which football game to watch. If I do, tomorrow no more choices exist. I was thinking about this in terms of the sense of contentment that suicides often experience once they have made their choice. Usually, they feel that they are “hemmed in” by circumstance and that life is providing them no meaningful choices. Once they’ve decided to take “None of the above”, suddenly they feel somewhat empowered.
Absolutely, if he chose poorly. I thought the TV I bought would have 5,000 channels; turns out, it doesn’t work at all. In other words, I have changed reality but in doing so, created a result that limited rather than expanded my subsequent choice making “power”.
Well, if I have a dollar, I can shop at the little store; if I have $10,000, I can shop at damn near any store and buy lots of things. If I make myself attractive, I (theoretically) have a chance at sex with more people. If I have a lot of friends, I can influence more people to make choices I approve of.
Also, think about the kinds of things that you buy. When considering the purchase, are you not imagining the uses to which you can put whatever it is? If it is a food processor, do you not have more ways of mangling a vegetable? If I eat healthy, am I not imagining myself with a body that can do more and for more years?
Even further, if I raise a child, am I not “creating” a moral agent and training it to make choices that resemble mine?
When talking about humans anyway, all of the choices I can think of seem to be based on personal notions of “satisfaction” that can be traced back to “additional choosing options” or “additional impacts on reality”.
Or, that is about as deeply as I am able to think about it right now.
-VM
I’m not so sure. Given all the nuances you described, I find it pretty easy to imagine that a “base” goal subject to all the vagaries you discussed, plus the layering of “subsidiary” choosing systems below it, can easily expand into a wild assortment of variations. Also, there is the uniqueness caused by contention. In a bare-bones universe consisting of you, me, and a potato, we could each be using the exact same moral framework to achieve the exact same goal (I want to eat the potato), and our “uniqueness” causes us to be diametrically opposed and, in fact, on a quest for ways to kill each other.
-VM
I have never said that they do. You are the one arguing that there is no way to know absolutely whether one is practicing a more appropriate (for human life) morality than the other. I’m the one saying that there is a way to tell the difference without first adopting one of these moralities.
Good. That is all I was saying. Can we drop the odd notion that nothing can be proven because axioms cannot? Can we agree that we are using standard definitions for most words and standard uses of most phrases? Can we let those be our axioms?
What are you talking about? I was referring to the axiom lock nonsense you were spouting.
Well, yes, but we have accepted standard English definitions for words and phrases. Show me what you mean by self-evidently wrong. You have not, for instance, proven that any of the common goals I proposed in my last post to you are not in fact common goals amongst humans. Demonstrate to me that humans do not all have to breathe and I will grant you the point that no goal can be said to be present in all people.
The only axiom you and I seem to be arguing about is yours which says that no goal can be said to be present in all humans. I think I addressed it pretty clearly in my last post. Your arguments that some people enjoy killing puppies did not address it at all. If there is axiom lock, it is because you are refusing to address the subject.
Yes, but your position in this thread is that these two moral systems cannot be non arbitrarily differentiated. You may feel that these two moral systems are not equal, but this is an arbitrary decision on your part. Why should I agree with you? Moreover, if I have not chosen a moral system yet, how do I go about deciding which one to pick?
I am arguing no such thing. I have never said anything which is remotely like this. Please stop putting words in my mouth and then accusing me of making straw man arguments.
Yes. I agree entirely. However, all brains capable of this process have characteristics in common. They all have certain goals in common. The ability to carry out this process is one such common characteristic even if there are no others. All I am saying, again, is that some of these common characteristics may provide a framework to evaluate moral systems which is
- not a moral system
- unique, objective, and absolute
- and knowable.
Seriously. I am not proffering any absolute moral system. I am not even arguing that such an absolute moral system exists. I am only arguing that a framework exists which does not violate relativism qua relativism but which does violate moral relativism.
You just contradicted yourself. Why is it that the particular foods two babies prefer seem to you to disprove that the two babies have a common goal of eating? I really don’t understand this. The fact that specifics differ does not mean that the two babies are not operating under the same principle.
But all you show here is that people are very different. I agree entirely with this. I have never in my life said anything to contradict it. The only thing I am saying is that people also have a great many things in common. Truly that is the gist of our disagreement. Are you seriously arguing that no one has anything in common? Or are you even arguing that there is no need, goal, or preference which is common to all people? Really? Remember about breathing?
I agree with this as well.
Of course not. I have never said this either. The only thing I have said is that some of them are defective. Moral relativism says you cannot judge such a thing except from within a morality. You cannot call some of the choices defective except in an entirely arbitrary way. You are still defending this, yes?
But this is because you have not looked deep enough. Seriously. Please tell me again how no need or goal is common to all humans. I understand that people seek to satisfy their needs in different flavors. I understand that people choose to state their goals in different languages. I understand also that every human has a unique set of needs and goals taken as a whole. But I seriously do not understand the idea that no goal is common to all people. I cannot fathom how you can cling to the idea that no need is common to all people. I have offered many examples of such goals in my previous posts. I do not think you have addressed any of them. Would you care to, or do you prefer to simply state that I am wrong?
I for one have garnered a great profit from these pages. Even with your passive-aggressive attitude I have found a great deal of profit. I have at times strayed from my purpose in this thread. Your posts in particular have brought me back to it. If you want, however, I will grant you the last word between us on this subject. Claim it and it is yours.
But to reiterate, this is all we are looking for. Any such “meta-morality” common to all people disproves moral relativism.
Well, it does provide some information. But I agree that it is not satisfactory. I have been questioning the idea that “happiness maximization” is a good framework all along.
I don’t know if we have had a vote, but the characteristics of moral systems have been laid out a couple times. I have offered the definition “Morality is a complex of concepts and philosophical beliefs by which an individual determines whether his or her actions are right or wrong.” erislover has offered a more rigorous definition in post #110. If you are asking whether or not anyone in here thinks that an absolute morality has any existence outside of moral agents, then I think I can be pretty confident in saying that the consensus is no.
Ideas are nouns. Do they have no existence? Certainly no existence outside of the mind they reside in. Ideally a concept is always predicated on some perceived reality. But given that abstractions can be abstractions of perceptions or of other abstractions, it seems silly to ascribe the same sort of existence to high level abstractions as we do to apples. Apples would still have exactly the same form and characteristics (leaving aside breeding programs) if there were no humans ever. The concept of morality itself would not exist without people, on the other hand (again, leaving out aliens or futuristic robots).
Does that answer your question?