Yes, but you are forgetting the context. That person also has the goal of not being a person anymore. What we are looking for is a framework from which to judge moral systems as appropriate or not for living a life on earth. I submit that drowning oneself* is an example of something not compatible with that.
Finally, the person still needs to breathe until he stops doing so. He needs to breathe until he gets to the water. He needs to breathe while writing out his suicide note. I do not think that you have shown that a person who wants to drown himself is a useful example, nor that even he does not need to breathe.
Yes, but in order to do so you have to assume that all situations are equivalent. You have to assume that drowning oneself is just as appropriate a goal as not. You can do this if you say that all goals are equivalent, or more relativistically that no goal systems can be privileged above others. But you cannot do this if you simply restrict your set of moral agents to humans living on earth.
I agree. If you expand the question to trying to justify all possible moralities then it is not possible. I also think, however, that it is not necessary.
*Just for the record, I am not claiming here that suicide is always wrong. I can and have formed situations where it is morally possible (based on principles higher than mere physical survival) to desire suicide. I’m just saying that suicide as an ultimate goal in and of itself is not an appropriate moral system for people living on earth.
You won’t get any major objection from me. However, I think taking this route lowers the chances that you’re going to get anyone else on board with you.
Before any of the other stuff, I want to make a point using the labyrinth metaphor:
I think you’re having trouble seeing my logic because these various metaphors tend to drop a key part of the morality I was suggesting, the “effect” on reality. The agents aren’t merely looking for a multiplicity of choices; they are specifically looking for choices that let them change the world in some way. Your question about the labyrinth, I think, provides an opportunity to try to clarify this.
If a person is “trapped” in a labyrinth, there are a lot of things they can’t do until they find their way out (see their kids, talk to their friends, have a nice meal, sleep in their own bed, etc.). So, if it is a person and they are really thinking about the future, there really is a qualitative difference, as you implied, other than just the number of choices that one gets. So, in answer to your question, I would say that the person might indeed take the path that had fewer immediate choices because he thought that this direction was more likely to take him to the exit, and that the exit is the key to opening up the kind of reality changing choices that the person is interested in.
The key is that the agent is looking for ways to change or control the reality that they experience. And I don’t think we can overlook how much of the reality that an agent experiences is due to the workings of the agent’s body. And I think there are differences between agents in their ability to think clearly about the world outside of the signals from their bodies. I mentioned before that our bodies use messages of pleasure and pain to keep us apprised of how well we are taking care of them. With reference to the bondage person–hopefully, the last one–I think that this person is taking action to cause their own brain to fire off “pleasure” signals. If nothing else, they are affecting reality in a way that causes their bodies to indicate “things are just fine for me.” And as long as your body is doing fine, survival seems to be no immediate issue, so you are taking at least that step to keep the reality-changing options coming.
Also, I want to point out that each agent is limited by the “abilities” of their brains. I think it is the case that there are many agents whose brains are not capable of or have not been trained to “look” very far in the future or predict complicated changes resulting from their choices. To this extent, I think of the agent as someone playing chess and trying to think several moves ahead. Some are quite good at it, others not so much.
I’ll make a few comments about the other stuff, but I think the most important is to see if this helps clarify how I’m thinking about this and if it causes any lights to go off in your head.
I think the key issues here are the person’s “tastes”–what their brains, for whatever reason, find “interesting” to contemplate–and the person’s understanding of the impacts of changes. A person choosing a science discipline is likely to be looking to make “discoveries” which, depending on their significance, could impact the lives of every person on the planet.
Also, the person is working against the clock and against limited resources. If I spend all my life studying, there isn’t any time left to use the knowledge and “change the world”. Also, this is not the ONLY way that I will be changing the world. There is probably a desire to start a family or create some other sort of “lifestyle”. Money available to attend college is limited, and a lot of other lifestyle selections are not really possible until school is out of the way and a salary is being earned.
I think keeping a focus on the agent’s ideas of what choices impact reality and in what way is the key. Sort of goes back to the thing about wanting to be a kangaroo. Almost no one is liable to pursue a morality based on this because they can’t imagine a time when they would actually get to choose to change themselves into a kangaroo–in other words, they would see it as spending a lot of time devoted to opening up a path that is not openable.
But it also allows for less time to take advantage of the new set of choices that become available. Also, if you know a woman who you think represents an excellent “choice” for helping you get to the “next” set of goals, there is probably the concern that someone else will snatch her up, with the resulting difficulty of procuring another, equally good choice.
I’ll just add that filling the world with random kids kind of limits your impact to the effects that your DNA has. On the other hand, raising a kid and teaching that kid to have similar “values” can have an effect on reality that more closely matches your desires and is less likely to misfire in the outcome.
I agree with the logic here. The agent is going to choose based on what he knows about the available options.
And I think that they have settled on a particular strategy for affecting reality, and the cul de sac suits that strategy. They may be unimaginative and can’t really think about changes other than those that give their bodies pleasure and so may focus only on changing reality in such a way as to impact their bodies.
I’m not sure of that. The definition you proposed for “living a life on earth” was very broad. Consider a person who is drowning themselves because it is a necessary condition for saving the life of someone else. Earlier you said:
What I am trying to point out is that the appropriateness of the action/goal is very dependent on the circumstances and the person in question. And that even among “humans living a life on earth” the possible goal/person combinations are effectively infinite, in large part because the elements that you have identified as part of “living a life on earth” are so subjective.
Prior to the moral choice, he probably needs to breathe, yes. After the moral choice, he definitely does not need to breathe. In fact, he needs to stop breathing. I’m trying to point out that breathing is not necessarily a goal of a moral agent. It may support the goals of a moral agent, or it may hinder them, depending on what those goals are. Breathing is necessary for personal survival, for instance, but if a moral agent’s goals at the moment do not include personal survival, then breathing is a hindrance, not a goal.
Based on your previous definition of “living a life on earth”, it seems to me that you can. Your definition is strongly based in subjective measures such as the “whole life” of the agent. Much of the last page of posts has been about the satisfaction/happiness/contentedness of the moral agent, all of which are subjective judgements. In fact, the only objective element of “living a life on earth” that I can see in your definition is the length of biological existence, and that can be overridden, as your last paragraph of the definition shows. With that much subjectivity, it only requires that someone believe that a given goal is in support of “living a life on earth” to qualify.
This should be obvious, but as soon as you rely on a subjective judgement as part of your measurement, you’re not creating an absolute anything.
I actually have fewer issues with the use of pure survival/longevity as an absolute measure than with the “living a life on earth” standard that you’ve proposed. I still don’t believe that it can truly be the basis of an absolute morality ranking, because survival is not always the best criterion for such analysis, but at least it is an objective measure.
I’m not sure it is my definition that created the infiniteness of the possibilities, but I agree that the permutations are in fact pretty infinite.
Again, not at all. He makes the moral choice in his room while pondering his life. He needs to continue breathing in order to carry out his goal of drowning himself.
I agree, but only if you postulate very odd moral agents.
You see, this is where we disagree. Perhaps this difficulty is due to the fact that we are only discussing one need (breathing). But it seems to me that the goal of suicide can only be justified by a conclusion based on other more important values. Specifically I’m thinking of freedom or perhaps the survival of one’s children. And in order to have goals like this, breathing would also have to be included in the moral system. For such an agent, breathing has simply become less of a value than some other value. This does not mean that breathing has become a non-value or a negative value. It’s just one he is willing to give up in order to achieve something else.
What I’m saying is that while I see that suicide could be a goal of a moral agent, I do not believe that it could be higher than all others. And that unless you postulate a moral agent which holds that suicide is the highest of moral principles, that moral agent will share the need to breathe with all other such moral agents.
Again, I agree that we can postulate a moral agent which does not need to breathe at all. I just don’t think such an agent is very realistic. Or even possible.
I agree that this is confusing. What exactly makes up the “whole life” of a particular agent is somewhat subjective. I have said all along that I am not very interested in discussing whether or not people should wear hats. That seems to me to be pretty clearly a subjective judgement. However, there are some pretty axiomatic characteristics of a moral agent’s “whole life”. It is a life, for instance. It consists of more than disconnected or unrelated experiences, just as an example.
Quite. A point I have made also, though less well than you have here.
No, there is also the fact that the life is the life of the agent in question. It belongs to him, if you will allow the poor allusion to other moral principles. Have you ever seen the movie “Whose Life is it Anyway?” My point is that the biological length of a life is not the only measure of it. There is also the question of who gets to live it, and who gets to decide how it is lived. Perhaps, even, who gets to decide how it is ended. These are all also objective measures (though in some cases harder to quantify) of how a life was lived (or even how a life might be lived).
Again, my purpose here is not to debate whether or not goal number 3,486 should be this or that. My only purpose is to argue against the relativist claim that no system of evaluating moral systems may be privileged above any other. To that end, I am only concentrating on the highest few goals in any prospective moral system. I am not trying to say that any particular goal cannot be included in a moral system that would be ranked near the top by such an evaluation tool. I am only saying that the goal of suicide would need to depend on several other values being higher than the mere biological length of the life. The fact (IMHO) that such values would have to be placed higher means to me that we cannot construct a moral system based on the principle of suicide as the highest moral good. This to me means that we can evaluate moral systems based on whether or not they do this. If we can do so objectively, based on a unique and absolute framework, then the claims of moral relativists are broken.
Well, I’m not sure we can go this far. If an absolute system suggests that the individual has absolute choice over his actions as long as he does not interfere with others’ choices, then we have formed an absolute system which includes quite a bit of subjectivity.
I agree entirely if you mean that we cannot base an absolute system on subjectivity. I think erislover’s principle of non-ambiguity was intended to cover this. At least I think so. If you’re still reading, erislover, could you confirm this?
Can you point out to me the part of my definition of a “whole life” which is not objective or absolute? I would very much appreciate it. As I said, I am not entirely satisfied with the definition I constructed.
I think I concentrated too much on expressing the idea of the life as a whole (meaning more than a set of disconnected experiences) and not enough on the idea that the life “belongs” to the liver.
I would very much appreciate any other thoughts you had on that subject.
[Julia Child voice] Save the liver![/Julia Child voice]
Yes, I think you are correct, because this is exactly what I have been trying to express as an objection to the use of choice as the highest link in the conceptual chain: the fact that there are qualities of choices, or qualities of changes, which affect whether or not they are preferred over other choices.
Again, I agree that being able to choose is clearly an inherent trait of any moral agent. I agree that this fact can be used as an absolute framework from which to evaluate moral systems. The only point I am disagreeing on is the idea that freedom of choice has to be the highest of the highs. And the reason for my disagreement is that some choices are preferable to others based on other inherent characteristics of moral agents.
Unless I am mistaken, you have proposed an absolute morality based on (as the highest of highs) the principle that a moral agent will seek out the most changes to the world he can control. Is that a fair restatement?
<blink> What if he chooses to drown himself and then sticks his head in a bucket? Or immediately jumps into the lake with weights. No continued breathing required.
Odd by who’s definition? I don’t find suicidal people odd at all. Unfortunate, sometimes pitiable, sometimes enraging, but not odd.
And what I’m saying is that I do not see a moral agent who holds that suicide is the highest of moral purposes to be as strange as you do. Unlikely, perhaps, humans being what they are. But hardly inconceivable.
Part of the difficulty here between us may be that I see moral systems as constantly adapting and changing things. A moral agent adopts the moral system that best fits the agent’s goals at the time. If a moral agent, through contemplation or whatever means, decides that suicide is a moral good, their moral system has changed to reflect that. Prior to the decision, breathing may have been a part of their system, but afterwards it is not.
Also, I have been talking about moral goals, not necessarily the needs required to fulfill those goals. I do not see breathing as a moral goal in most cases. It could be adopted as such, I suppose.
While I disagree that “we cannot construct a moral system based on the principle of suicide as the highest moral good”, I agree completely that you could construct a system that evaluates moral systems based on whether they do that or not. The problem then becomes one of applicability.
This is where I think you are going wrong. Any framework you construct will either not be absolute, or the measurement will apply in some instances but not in others.
The problem of applicability is, I think, why relativism holds that you can’t favor one system of measurement over another.
Either that is not an absolute system, because the evaluation of “does not interfere with others’ choices” cannot be measured objectively, or there is no subjectivity to the system. You cannot have an absolute system without a final, objective measure that can be applied in all circumstances.
I agree that the principle of non-ambiguity covers this point.
Um, basically all of it, as far as I can tell. As I said, the only objective item I can see in your definition is the idea of survival/longevity. If you further define your term of biological life to include such items as who struck the killing blow, if any, or who made the decisions during a certain time period, I could perhaps grant them as objective. But even these are problematic. How do you judge a situation where Person 1 willingly lets Person 2 kill them with a weapon? Who actually killed Person 1? What about a situation where Person 1 is forced to obey the decisions of Person 2 50% of the time? The problem with objectivity is that it requires you to be very precise, and human decisions, which are what moral systems are all about, do not fit themselves to such precision very well at all.
Ideas such as “who gets to live the life” are subjective. How do you live someone else’s life? There is no ability to possess someone else’s body, and even if there were, would you be living their life, or your own in different circumstances? Someone might say that a controlling mother who doesn’t allow a child to make any decisions is living the child’s life, but I would say that the child is living its own life, just under very constrained circumstances. The very fact that we can come to different conclusions about such a situation demonstrates the subjectiveness.
Look at this part of your definition:
How is this evaluatable except subjectively? How can you judge whether someone is “living each moment of [their] life as if [their] whole life was an end in itself?”
If you bring contentedness or happiness or fulfillment into the definition, it only gets worse.
He still has to breathe in order to move to the bucket. Even if not, he has to breathe in order to have enough neural energy to make the moral decision to kill himself.
What do you mean “who’s definition?” Please let’s not begin some sort of relativist dictionary. My quick search turned up two definitions of “odd” which I think apply: “beyond or deviating from the usual or expected” or “not easily explained”. I think the first is the way I meant it, in that I think people who commit suicide are beyond or deviating from the usual or expected human behavior. I think more strongly that anyone attempting to live based on a morality completely derived from some sort of principle claiming suicide is the best thing a person can do is so “odd” as to be self contradictory.
Yes, this is the crux of our disagreement. Allow me to examine your definition of morality.
But if this is the case, then they are not the same moral system are they?
But how is a moral agent supposed to do this? Randomly? There is no reason (according to relativism) to believe that any framework is superior to any other when evaluating moral systems. How is a person to choose not only his first morality, but each and every new one you propose he adopts?
No, this is an essential disagreement we have over the definition of moral systems. Please show me any definition which backs up your opinion.
And this, even given your definition of morality, seems unnecessarily restrictive. Why is it impossible for a person to give up one thing for something he values more? In such an instance, he does not stop valuing the thing given up, he simply trades it for something with a higher value.
I think you are postulating that moral agents are choice making things which jump from morality to morality randomly. How would you tell the difference between such a moral agent and something which was not acting on any morality at all?
Correct. I have not been talking about the need to breathe as a moral goal either. I have only been talking about it as a common need that all humans have. And by extension all moral agents.
When I said “…we cannot construct a moral system based on the principle of suicide as the highest moral good…” I meant in the context of moral systems for humans on earth. I also said “I agree that we can postulate a moral agent which does not need to breathe at all.”
In terms of a moral system which places the suicide of the moral agent as the highest goal, and which derives all other goals from that, I agree. However, I think it is the relativists who must describe how such a system is possible under the definition of moralities and moral agents. You guys (moral relativists) are the ones saying that no system of evaluating moral systems is preferable.
This to me means that a moral system based on suicide cannot be preferred over one with suicide much farther down the moral hierarchy.
I have said, and I will say again, for moral agents which have no characteristics at all I can understand this. But as soon as we realize that people are the moral agents in question, that statement makes no sense.
No, I don’t think so. I appreciate your attempt to prove this though.
I’m not sure this is the justification for relativism. I doubt very much that moral relativism holds that moral systems cannot be judged (except from within other moral systems) because some agents exist that might need any particular moral system. I think rather that Lilairen was right that relativists are simply recognizing that no such absolute framework exists.
I’m sorry, I do not understand this assertion.
The subjectivity comes in at a lower level of the moral hierarchy.
Yes, but only when those measures are included in the scope of the absolute system. Recall erislover’s criteria in post #153. A moral system is allowed to declare some things as not within its purview. It is allowed to remain completely neutral.
I disagree with this. Objective means simply “*undistorted by emotion or personal bias; based on observable phenomena*”. It does not mean that I have to be aware of the exact position of each and every molecule of gas in order to speak objectively about the gas. In our context, objectivity does not demand that I know precisely how each and every human will behave in order to speak about common characteristics of humans.
[QUOTE]
Ideas such as “who gets to live the life” are subjective.
[/QUOTE]
The idea may be, but the phenomenon is not. Subjective means “*taking place within the mind and modified by individual bias; or of a mental act performed entirely within the mind*”. I agree that each person’s personal conception of an idea may exist in that person’s mind. That does not mean, however, that no evidence can be collected as to the applicability of that idea to a certain event.
By removing from them the opportunity to live it. You cannot live someone else’s life in the sense of demonic possession. But you can remove from them all choices and all possibility to live their life the way they want.
No, it does not. Let’s not go too far in making up definitions or substituting our own opinions for them. I thought we agreed a while ago to use standard English definitions of words.
The fact that we disagree as to a particular conclusion means only that we disagree. It is entirely possible that I am wrong.
You can look at the pattern of behavior. One can observe the actions of a particular moral agent and decide if he was making choices based only on transitory states or if he seemed to be considering longer term consequences and effects. I agree it is not easily quantifiable. But that is not the same thing as being completely subjective either.
Yes. Note that I have not done this. I am trying to be as simple as possible. I recognize that the concepts we are discussing are very difficult to grasp. I am barely holding on to my formulation, and I recognize that I may have made a mistake somewhere.
so i have realized my error, and this statement makes me think i have realized yours.
the system i suggested turns out not to be a “moral system” at all. it turned out to be my formulation for how moral agents come by their moral systems. that realization leads me to believe that morality itself is more or less an illusion, like free will. it certainly does not provide a way to privilege any system, except on a personal level (i.e. we can privilege systems for ourselves–which i, by the way, advocate).
we can not help but judge people from our own perspective, which can not help but be different from anyone else’s. these, to me, seem to be simple and self-evident facts about reality. given them, and given my previous “propositions” (which i hope to clarify some more, here), i believe what we judge as “good” is what we believe will be, on the whole, “good” for us, based on what we believe to be true about the world.
my “propositions”, revamped:
def’n - satisfaction: a multivariate utility function, taking into account, to varying degrees, such things as duration of happiness caused, depth and scope of happiness caused, non-emotional physical pleasure, temporally variable needs, etc. this quantity ought to be physical, such that if we could pin down a chemical or mechanical method by which to increase it, we could do it artificially.
i’m not sure we can define this term completely without an entire thread dedicated to it. suffice it to say i believe that each of our actions in life is either arbitrary or has the goal of maximizing the function i allude to with the word “satisfaction”.
def’n - prefer: given a set of options, one option is said to be “preferred” when an agent believes it will lead to a higher level of “satisfaction”.
proposition: when making decisions, moral agents can not (and therefore do not) choose an option that is preferred less than another available option.
i’ve eliminated proposition (b) in favor of providing a definition of “prefer” and “satisfaction”. i’m not satisfied with my definition of “satisfaction”, but i am convinced that there is some function we seek to maximize when making a choice, and that given the same circumstances (down to the physical configuration of our brains and the surrounding local universe at the time the decision was made), we could not help but make the same decision every time. i believe this function is describable and more or less non-chaotic. i would like to hear others’ views on this opinion.
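to make the shape of this concrete, here is a minimal sketch in python. everything in it (the weight values, the component scores, the option names) is a hypothetical placeholder of mine, not part of the propositions themselves:

[CODE]
# a minimal sketch of the "satisfaction"/"prefer" propositions above.
# the weights and component scores are hypothetical placeholders; the
# real function, if one exists, would be far messier.

def satisfaction(option, weights):
    # multivariate utility: a weighted sum over hypothetical components
    return sum(weights[k] * option[k] for k in weights)

def prefer(options, weights):
    # an agent "prefers" the option it believes maximizes satisfaction
    return max(options, key=lambda o: satisfaction(o, weights))

# hypothetical example: two options scored on duration and depth of
# happiness plus non-emotional physical pleasure (per the definition)
weights = {"duration": 0.5, "depth": 0.3, "pleasure": 0.2}
options = [
    {"name": "a", "duration": 0.9, "depth": 0.2, "pleasure": 0.1},
    {"name": "b", "duration": 0.3, "depth": 0.8, "pleasure": 0.6},
]
print(prefer(options, weights)["name"])  # prints "a"
[/CODE]

the proposition then amounts to this: given the agent's weights at that moment, it cannot choose an option this function ranks lower than another available one.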
so, every human is inherently selfish, and can’t help but be that way. thus, there is no “moral system”, no way of judging whether another’s actions are “good” or “bad” or “right” or “wrong”. all that we have is equipment to say whether what someone else prefers is what we prefer. we infer others’ preferences by their actions, so to each of us, another’s actions are “right” if they are what we would prefer the agent do, and “wrong” otherwise.
this does not make claims about what should happen, other than on a personal level. so it is descriptive, rather than judgemental.
the problem with finding an absolute, objective, and unambiguous method for using “a complex of concepts and philosophical beliefs” to determine whether our or another’s “actions are right or wrong” is that we all have our own definitions of “right” and “wrong” and, moreover, those definitions are not static even within an individual, but change over time.
Again, I point to your later conversation with Teine, and your choice of “survival” as an objective rubric, and your argument (apparent) that suicide is an axiomatically denigrated choice. Do you believe that you share those axioms with Teine, who has stated explicitly that he disagrees with them?
I’m not sure I agree with this one either, but I’ve been watching you guys argue about kangaroos. It’s not terribly relevant to anything I’m saying.
It’s this “no moral framework can be privileged above others” that’s the sticky point for me. It’s an absolute-moral-toned statement; I look at it and say, “In what context?”
I will happily declare my moral system better than yours for running my life. I expect that you will happily declare my moral system worse than yours for running your life. That’s context. Huge industrial whaling operations leading to the near-extinction of species are operating in a different context than Arctic fishing villages that kill a whale or two in a year and build their entire civilisation off the entire corpse, not discarding any of it. Context. Ham and cheese sandwiches are deprecated in certain contexts – some of the ones I can think of medical, some of them religious – and not others.
I consider myself obligated to consider the context; what frustration that I have had in this thread has largely been because you keep eliding this thing that I consider essential. Without the context, I do not have enough information to make a moral judgement, because I don’t have possession of some sort of absolute standard, something that does not depend on context. If such a thing exists, which I doubt due to lack of evidence, I do not believe it knowable.
So no system can be “privileged” in a vacuum – all of them need a context. My life is the only context I need for my moral system to function for me. If I were to try to build a system that included you, I would have to have all your context; if I were to build a system to encompass people as a whole, I would need to have all of everyone’s context, all of its conflicting complexity. I suspect I would need true omniscience, which I don’t believe exists.
I made assertions regarding morals in general, where they come from, while talking to you; I did not characterise them beyond general terms that I would also agree apply to myself and, for that matter, any person I happen to meet.
I have tried to be very careful to specify when I’m positing my understanding of what you are saying to state it as my understanding (with words like “apparently”) in an attempt to reduce the possibility that you would accuse me of trying to accuse you of things; you gave me the impression of being touchy on that front.
Please read what I’m saying in that light; I’m sorry that I have been insufficiently clear.
And I believe that you chose which of those concepts to adopt based on the system you evolved from your protomoral instincts, personality, experience, and culture. If the system you have at the moment judges some other system to be superior, is it not logical to adopt that superior system?
A universal morality would have to apply to all humans; a formulation of that morality would have to be practiced by a human or set thereof (or at least believed by humans who also believed they were failing to live up to it), or only exist in some sort of hypothetical ethical thought experiment that doesn’t actually directly touch reality. And there’s no way of knowing who (if anyone) is right without already knowing what the system is, but there’s no way of knowing what the system is without knowing which (if any) human formulation contains the rightness. (Travel to the land of thought experiments that have no manifestation in reality strikes me as currently impossible, though many people try to get there, often with the aid of mushrooms.)
As I said before, I’m agnostic-inclined-disbelieving on whether or not an absolute exists, but if it does, I can’t see a way of escaping that circularity.
My morality is based on the evidence of my life and its internal logic, the entire concatenation of my existence so far. You do not have access to that evidence or logic; you do not have my experiences, the core nontransmittable subjective knowledge, of my life. Thus, from your reference frame, it would be entirely reasonable to presume that my morality is based on my individual discretion, preference, or caprice. You might be able to have access to the logic, if we talked about it, or some of the experiences if I could express them adequately, but I bet we’d bog down both in my inability to express how experiences lead to conclusions within the system and occasional cases of axiom lock. (The one I can see most directly is that you certainly appear to value life qua life much more highly than I do.)
Someone whose morality is based on how closely actions adhere to a particular interpretation of the Bible strikes me as operating on caprice – why that holy book in particular, as opposed to, say, The Principia Discordia or the Bhagavad Gita? Why a holy book in the first place? Why not some other set of principles that I would find more pleasing? But at the same time I can recognise that I find ethical and moral formulations within my own religion that are useful to me, and can see that another person might find that the best way of applying the ethical and moral formulations they have found is through application of the Bible, which is privileged by being the text of the religion they prefer, for whatever reasons they prefer it.
But the metamoral question that I interpret you as arguing – as saying that some choice ranges have to exist – doesn’t strike me as a refutation of relativism. For it to be a refutation of relativism, it would have to not merely be a metamoral system but a moral one – that the choices about, say, addressing hunger must all exist and that there is a morally correct way of selecting which choice to make that applies in all cases.
If there is no posited way of selecting which choice to make, then the relativistic claim that people will choose according to their individual systems and make different choices remains.
Yes. Nor at all times.
Depends on the religion.
The moral system of my religion is referred to by the word ma’at, which is difficult to define (which is why it tends to be left untranslated). The basic obligations of ma’at are to pursue the health of the self, the health of the community, and the health of the cosmos, and that none of these can be sacrificed to uphold one of the others – the self-sacrificing wife who is miserable for the good of her family is not upholding ma’at, because she is neglecting her obligations to self. (The divine is also obligated to uphold ma’at, btw; part of the purpose of the religion is a system of mutual support.) Different people have different obligations to ma’at, because they are in different positions; the obligations of the teacher to support the community are different from the obligations of the soldier at the same level.
There are a number of writings on ma’at, both ancient and modern, and lists of things that aren’t in accord with the system (commonly formulated as ‘negative confessions’ – rather than the ‘thou shalt not’ of commandments, they begin ‘I have not’). There is no canonical list; there isn’t even agreement on what all the translations mean when speaking of specific examples.
That’s something, at least. I feel less like a lot of sound and fury, signifying nothing.
repeated post elided
I still disagree with at least one thing, and I’m not sure which it is. Either I disagree that the commonalities that can be conclusively said to exist among humans are commonalities of morals – like the earlier comment that hunger is not a moral choice, but responses to it may be, and those responses will vary – or I am unconvinced that there are commonalities that can be meaningfully said to exist at all, given the existence of people with a goal of not breathing.
Thank you, btw, for your brief post on the axiom lock subject; it helped me a lot last night before the power went out and I lost all ability to follow this thread for a while.
I’m not sure.
The thing I was trying to say is that the word choice is pointing to different moral systems, not the same one. Even if both parties formulate their system “I should have the potato”, that doesn’t mean they are using the same system; the different presumed beneficiary changes the system. They’re using different axioms of which entity should benefit.
If they’re defined as being in conflict and using identical moral systems, then the formulation can’t be depending on a specific entity who should benefit; it has to be that every actor should be attempting to ensure possession of the potato. That general case then applies to the specific case of X and the specific case of Y.
The former might be a moral conflict – there are different definitions of “good” in play: “X should get the potato” vs. “Y should get the potato”. The latter is a practical conflict, but not a moral one; the same definition of “good” is in play: “Getting the potato is good.”
Yes, actually. Surprised? Allow me to elaborate. I am not holding that the denigration of suicide is axiomatic. I am claiming that its denigration is a consequence of the definition of suicide, moral systems, and moral agents. Teine seems to be unclear as to some of those definitions, but I think he understands them “close enough” to my understanding that we can discuss whether or not I am right.
Just because we disagree does not mean that we are at an impasse.
But that is simply a restatement of the relativist position. That no framework can be privileged (note it does not say that no moral framework can be privileged, simply that any framework doing so is not itself privileged) is simply a more formal way of saying “In what context?”
Just for the record, I make no such assertion.
Yes, it is. Another is moral systems which apply to moral agents in reality vs moral systems which only apply to moral agents which cannot exist. That too is a context.
But I have been doing no such thing. The context in which moral systems can be evaluated is exactly what I am talking about. I have been disagreeing with you about your interpretation of what context I am using. I have been disagreeing as to which context applies to which moral system. But the context within which I have been trying to evaluate moral systems has been the purpose of my participation in this thread.
Agreed. I agree with the general principle of relativism. I simply disagree with its application to morality.
This is another way of saying that there are no characteristics which all people share. I am resolutely unconvinced of this. The fact that they are all people means that they have to have some characteristics in common.
Furthermore, it is not true that no moral system can be established without the full and complete context of a particular individual. I agree that we cannot establish each and every detail of a particular instance of a morality without such knowledge. But I must stress again that I have never tried to do so. I am only interested in the metamorality, as you put it earlier. A way to judge moralities. Even then, I am only interested in judging the first couple of principles of said moralities.
I will do so from now on. I am also sorry that I have been overly sensitive.
No, not really. I experienced a profound sense of awakening when I first discovered that the moral system I was operating under was, forgive the language, crap. If anything, it suggested that the new moral system I had discovered was itself evil.
But if the system you have judges some other system to be superior, why is it not already part of your system? A moral system (per erislover’s definition) is a system which allows the categorization of actions into a continuum from Most Moral to Most Immoral. If in that system is defined another system which is judged more moral than the Most Moral, what the heck does that mean?
I agree. However, just because a moral system is not practiced by any particular humans does not mean that it does not exist. Your first sentence would be better stated “A universal morality would have to be applicable to all humans.” It is entirely possible that any particular group of humans could simply be wrong in the moral system they chose. Even if that group were all humans.
No, I think you have phrased this clumsily. I’m not sure you really mean that there is no way to know “what” a moral system is unless we know which people practice it. If you put this together with your assertion earlier that you cannot know the context of another person, we get the astonishing assertion that we cannot know what a moral system is unless we ourselves practice it. Surely this is not where you were going.
There is no circularity. You have simply asserted some things which are incorrect. As always, if you find a contradiction, examine your premises.
But, why do you prefer that method over others for choosing a morality?
Even this goes too far for my taste. I have no way of knowing what your morality is right now. How can I judge it even from my framework or “context”?
Not to jump ahead or anything, but I am doubting this more and more.
Forgive me, but this is a misunderstanding of relativism. I really don’t think that relativism holds that moral systems can only be evaluated by other moral systems. I could, for instance, count the number of occurrences of the letter “A” and judge them that way. Relativism does not say that this is therefore a moral system itself. It says only that this system is no better than any other system for judging moral systems.
Forgive me again, but this is not a part of relativism either. It does not claim that people will make choices according to their individual systems. Except perhaps by inference. It merely states that no framework exists from which to judge moral systems which is privileged above all other such systems. It really is not a way to determine what moral systems people will adopt or even that they do adopt them. Again, except by inference.
Note that this sounds very close to my adherence to the idea of life qua life as a standard for moral systems. I’m interpreting a little, and possibly way off. But it sounds very close to me. The primary difference I see is that it is not a religious dogma. I am claiming that the necessity to pursue the health of oneself is inherent in being a living moral agent.
In the context of the moral system definition we have been using, these basic obligations would be one principle. Restated, perhaps as “It is good to pursue the health of the self, community, and cosmos.” You can modify the word “good” to be very good, almost the most good, or even most good if you think it is actually the highest good in your morality. The point being that phrased this way it can be placed atomically (so that none of the community, self, or cosmos can be sacrificed for any other). And so that all moral positions which come below it in your morality are derived from it.
Can you tell me how your religion suggests a person decide on where in his morality to place these basic obligations? If you don’t mind.
I really think we are closer on some things than you think. I’m willing to be wrong, but I think we are much closer on the essential nature of what makes up a useful moral system than you seem ready to recognize.
Good word, BTW. The way you first used it I assumed it was a typo for “eluded”. I had to look it up. Thanks for the word. I will begin using it.
Allow me to assuage you somewhat. I am not proposing that all humans have, adopt, or practice any particular moral principle at all. I am proposing that the nature of a human being provides a context for a primitive method of evaluating moral systems for use by such humans.
Yes, but I don’t think such people can exist. There may be people who have decided to stop breathing. But they had to breathe in order to make that decision. The fact that they were people, able to make a decision, and did, in fact, do so, proves to me anyway that they had to breathe.
Define “usual or expected.” Given that suicide happens on a regular basis amongst humans, it would seem to me that a certain amount of suicides is completely usual or expected. A statistical analysis can be done, and a certain point on the resultant bell curve can be selected as the point beyond which is considered “unexpected”, but the choice of that point is based on the preference of the person or persons doing the analysis. This is what I mean by “whose definition” (Hah. Spelled it right this time. Stupid me). What percentage do you consider to be “beyond the usual or expected”? Is it 1%? 5%? 0.0004%? Or, if you’re knowledgeable about statistics, 3 sigma, 6 sigma, 9 sigma?
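To make the arbitrariness concrete, here’s a quick sketch (assuming, purely for illustration, a normal distribution; nothing says human behavior actually distributes that way):

[CODE]
import math

# fraction of a standard normal population lying beyond k sigma,
# two-sided: P(|Z| > k) = erfc(k / sqrt(2))
for k in (1, 3, 6):
    tail = math.erfc(k / math.sqrt(2))
    print(f"beyond {k} sigma: {tail:.2e} of the population")

# nothing in the data picks one cutoff over another; what counts as
# "unexpected" is chosen by whoever runs the analysis.
[/CODE]

That prints roughly 3.2e-01, 2.7e-03, and 2.0e-09; which of those counts as “beyond the usual” is exactly the choice I’m asking you to justify.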
Sure.
They are not the same moral system, yes.
A moral agent adjusts/changes moral systems based on choice.
There is no reason to believe that any judgment framework is superior in an absolute sense, no. But relativism hardly precludes making a judgment that a particular framework fits a given moral agent’s needs at the moment better than another, in the opinion of the moral agent. The choice of judgment framework is arbitrary, but it is not random. It may be, in some objective sense, no better than randomness. But to fully know whether that is true requires a level of understanding that I do not think it is possible for a human being to reach. So we do the best we can, which is to rely on our experience, our ability to reason, our gut feelings, and our instincts.
I use definition 1b. A moral system describes a code of conduct put forward by a society or accepted by an individual for her own behavior.
That is perfectly acceptable, yes. I’m using somewhat extreme examples for simplicity. So postulate that prior to the decision, breathing was in the top 10 morally good items list. After the decision, recognizing that breathing is at odds with, or at least not useful for, the new goal, it is reduced in value until it is no longer on the top 10. If it is enough at odds with the new goal, it could even be reduced to a negative value.
I postulate that moral agents are choice making things which move from moral system to moral system as they deem necessary, yes. I don’t believe that this is a random process, but one based on conscious or unconscious choice. As for telling the difference between a moral agent and something not acting on a morality at all, that is a difficult question. First, I believe that moral agents are conscious, thinking beings. So, if the thing in question is not conscious or thinking, it would not be a moral agent. Second, I believe that any conscious or thinking being must have a morality, based on the definition I gave above. If there is any sort of code of conduct that can be described by the behavior of the individual, then that code of conduct is the morality for the individual. Even the pseudo-random “I will follow the dictates of the Magic 8 Ball” is a moral system. I suppose that it would be possible for there to be a being which is indistinguishable from a conscious, thinking moral agent but not actually conscious or thinking. That would be an incredibly good programming job, I guess.
If you would qualify your statement by saying “all humans who wish to remain alive”, then I would agree that it is a common need. I hold that there are humans who do not wish to remain alive, that they are still moral agents, and that they can still be said to be “living life on earth” by the definition you gave. If you’re going to redefine “living life on earth” to not include those people, then you can do so, but I will no longer consider the system you are developing an absolute system, as it will not apply to people I consider to be people.
I can postulate a moral system which holds suicide to be the highest moral good, but still requires breathing. I won’t, because we’re really getting far afield here, and I really don’t want to start making 20 page posts. This one is already long enough.
I will, however, point out the Samurai of Japan. In certain circumstances, suicide was the highest moral good for them.
No system of evaluating moral systems is preferable in all circumstances. There’s a reason I started putting that clause into my definitions. Perhaps I should say without some arbitrary choice, instead, or possibly in addition to. Nowhere does it say that the choice cannot be made, simply that such a choice is, indeed, arbitrary. And I agree that a moral system based on suicide cannot be preferred *in all circumstances* over one with suicide much farther down the hierarchy. That doesn’t mean it couldn’t be chosen by a moral agent for whom it was the best fit.
I put the “I think” clause in there to indicate that this was my personal understanding. I believe that no such absolute framework exists because the possibilities inherent in the combination of moral agents (people) and their needs are infinite. I’m not a philosopher, I’m an engineer.
What’s not to understand? That evaluation cannot be made objectively, in my opinion. It requires someone to make a subjective judgment, most probably the person asked “Did this interfere with your choices?”.
Yes, a moral system can be neutral on a particular question. But on any question for which it is not neutral, it must allow for complete, objective resolution of the question. And it can’t be neutral on all questions, or it’s hardly a system at all.
You’re correct. I used precise when I should have used clear, or unambiguous.
Subjective also means “characteristic of or belonging to reality as perceived rather than as independent of mind” or similarly “open to different interpretations based on prior experience or expertise.” You can collect all the evidence you want, but unless you have an objective measure, you’ll end up interpreting said evidence by subjective means. Generally, to be considered to be an objective measure, a measurement has to be repeatable by a number of people, all coming to the same result.
Whether they are living their life the way they want is immaterial to the fact that they are still living their life. They are just living their life under extreme constraints. Also, that does not sound to me like living someone’s life for them. Rather it sounds more like attempting to destroy their life.
And it is possible that I am wrong. At the moment, though, my experience is leading me to the conclusion that we are coming to different conclusions because we have different subjective viewpoints.
It is not completely subjective, no, in that you are gathering objective data on the noted behaviors. But as soon as you make a judgment as to what is causing those behaviors that is not objectively verifiable, you’ve entered the realm of the subjective.
I know that you didn’t include those. I merely brought them up to be as clear as I could.
Why should I believe your claim that you share those axioms over his statement that he does not agree with you? I presume that he knows his beliefs better than you do; your argument counter to that strikes me as being dismissive of his claims of difference, and not conducive to civil discourse.
(I would hold this belief if you were talking about a stranger; given that I have had dinner conversations with Teine discussing the differing axiom-sets in play in this conversation and thus have somewhat privileged information, I am quite positive that you are mistaken.)
He has not struck me as being unclear; on the other hand, you have. Further evidence of differing systems in play.
No, because it is not referring to context at all. It is explicitly phrased as an absolute; absolutes need no context.
Moral characteristics?
Because I haven’t thought of it yet? I’m not omniscient, after all. I have to get some of my good ideas from interactions with other people.
Why are you presuming that a moral system must define itself as the most moral that exists? Why not presume that it is the most moral known to its user? Thus, when the user gains additional knowledge that reveals the inadequacies in their system, they can choose to mend those inadequacies.
How do we make that judgement without a human being involved?
How do we know that that human is not the one who is simply wrong?
If nobody practices it, it does not exist in any functional way as a system.
One can make hypothetical systems, but unless someone adopts them, they have clearly not been deemed to be better.
I could give you an incomplete version of my moral system if I wanted to go to the effort; I could not give you a complete one for reasons of subjective interpretation and, for that matter, the limitations of language. (Also, I am often unaware of nuances of it until they are either explicitly discussed as a result of conversations such as this one or as a result of needing to make a moral choice in a realm previously unexplored.)
Because it’s the only one I have. I cannot step out of my head, divorce myself from my biochemistry, wipe away my experiences, or abolish my life. Given these limitations, I’m pretty much stuck with them.
How can you judge anything, then? You will always have incomplete knowledge.
And that would be an excellent way of evaluating what system has the most of the letter A. Irrelevant to moral relativism unless ‘has the most of the letter A’ is posited as a moral good. (At which point that claim will be evaluated by all of the extant evaluations of moral good that are involved in the argument and ranked by them.)
Because all of those systems are of necessity used by individuals.
And that is a component of the system I use. It is not a component of other systems. I’m not interested in making it a component of other people’s systems, especially if it’s not useful to them. (My particular most frequent personal failing is a tendency to fail one or another of those three for the sake of the others, most frequently to yo-yo wildly between self-care at all costs, and maladaptive self-sacrifice “for the greater good”. Someone whose most frequent personal failing is consistent selfishness might do better with a Christian system valuing forms of selflessness, and someone whose most frequent personal failing is obsessive worldliness might find that Buddhism’s ideals of detachment are preferable.)
What are that person’s current obligations? What obligations do they wish to take on? That’s a start. Obligations are defined in part by things that have been taken on consciously, and also by roles as a member of society both in general and in specific. (See previous comment about the different obligations of a teacher and a soldier.) If one has to decide whether one’s place is to be a teacher or a soldier, the answer is to listen to the heart, which is considered the seat of moral reason; their differing aptitudes and preferences and their hearts are all in accord with their shai. (Shai, loosely translated, is ‘that which one will accomplish unless prevented from doing so’; it’s something like a calling. Different people have different shai, obviously, because a community is not functional if everyone has a calling to be a traffic cop and nobody has a calling to grow potatoes.)
(I couldn’t figure out a way of involving kangaroos there.)
If I take as my goal to get fit (a goal which, obviously, I have not had success with in the past, because otherwise I would be in better physical condition), the side-effects that will come along until the point that I manage to do so will include fatigue, muscle pain, cramps, soreness, changes in eating habits, and so on. These are obnoxious things that happen along the way to my goal, and may in fact interfere with accomplishing my goal (if I do myself joint damage, for example, which I am prone to do). If I fail to accomplish that goal, I will continue to have muscle aches when I exert myself as an irritating proof of my failure.
If someone takes as their goal suicide (a goal which, obviously, they have not had success with in the past, because otherwise they would be dead), the side-effects of continuing biological life will persist along the way until they manage to render themselves into a state where the obnoxious things that may in fact interfere with accomplishing their goal stop. If they fail in accomplishing that goal, they will continue to breathe as an irritating proof of that failure.
Normal or expected. Given that suicide in the United States occurs in something like 19-20 out of 100,000 people, it would be silly when looking at the behavior of any particular human to think of suicide as “usual or expected”. Does that help?
Why? Why not based on randomness? Why not based on reading the bones? If not all moral agents choose their morals in the same way, how can you say that they choose them based on choice? I assume you mean free choice.
No, but it does hold that fitting “a given moral agent’s needs at the moment” even “in the opinion of the moral agent” is a non-privileged framework from which to evaluate the possible moral systems. If you agree that a moral agent must pick a morality that fits his needs, then I think you have moved one step closer to my position. Now, all we have to do is get you to understand that some needs may be common to all people.
All of which are frameworks (or could be) which are themselves not privileged above any other. Again, why rely on even these when there is no reason to suspect that they are better than flipping a coin, counting the number of occurrences of the letter “A”, or even writing the moral systems down and weighing them?
That’s fair enough. It seems to me that “code of conduct” is essentially the same thing as “complex of philosophical beliefs used to determine right and wrong”, although it is perhaps a little less pedantic.
But what we were trying to prove was your assertion that “And what I’m saying is that I do not see a moral agent who holds that suicide is the highest of moral purposes to be as strange as you do.” So, given a code of conduct which holds suicide as the most important conduct, you don’t find anything self-contradictory about a person trying to live his life by that code. Correct?
No, you are missing what I am saying. Imagine that suicide is in the top 10 but not in the top 5. Now it so happens that the only way to achieve one of the top 5 is by committing suicide. In such a case, the moral actor has neither changed his morals nor moved any of them. He has simply given up one value for a higher one.
But look at your definition of moral system again. Where does it indicate that the code of conduct must contain rules for altering (completely, in some cases) the accepted code of conduct? You are postulating something entirely new here.
And again, something new. There is a “code of conduct accepted by an individual for her own behavior”. Where in that is it implied that a new code will ever be accepted? Moreover, where is it implied that this new code will not be accepted by a random process?
I agree. I think this is implied in my definition by the “complex set of philosophical beliefs” part.
True enough. We have already agreed that the corpse is no longer a moral agent.
Well, I agree, but I’m not sure that you can derive that from the definition of morality. Your definition is that it is a “code of conduct accepted” by the moral agent. I’m not sure that this implies that a being which could be a moral agent must be one.
I can accept this conditionally. I reserve the right to question where you are going with it.
How about “wish to remain alive as one of their top few moral principles”?
No, I don’t think so. But let’s assume you are right. Assume that a moral system exists which holds as one of its top few tenets that dying is one of the greatest things to do. Even a moral agent who wishes to make the decision to choose that moral system must breathe long enough to do so. And must breathe long enough to carry out the edict of this new moral system. It still seems to me that the need to breathe follows every living human until his death.
People, yes, but dead people as living a life on earth? I do not understand the sense in which you mean that.
Quite. As I said, my doubt is that you can postulate a moral system which does not include, in some small way, a dictum to satisfy the need to breathe. You could postulate a system wherein this dictum is one of the lowest. But I don’t know what it means to be a human without the need to breathe. Seriously, I don’t.
No, suicide was the only way they could maintain or assuage their honor. Honor was the highest good. Suicide was only a means to that end.
Correct. But not all circumstances are rational.
But relativism holds that “best fit” is itself a non-privileged framework.
If I may, this implies an undefined nature for the moral agent. If we assume that the moral agent randomly chooses a morality for no reason whatsoever, then you may be right. But as soon as the moral agent has a reason to look for a moral system, that reason suggests a framework from which moral systems can be judged.
That may be true in some odd circumstances, but if you simply look at external actions, it could be quite easy to see that the threat of force, for instance, or the destruction of property, did indeed reduce the choices of another agent. There may be some borderline cases involving fraud or deception or some other form of verbal persuasion. But that does not mean that the formulation “without interfering with the choice of another” is always subjective. As I said, it may be unclear, but this is not the same thing as subjective.
Quite. Further, for our purposes, we are not trying to define an absolute moral system down to the brass tacks, so to speak. We are only interested in whether or not one or two principles could be said to be absolutely required near the top of all “proper” moral systems, and to say that from an objective, unique, absolute, and knowable framework.
Exactly. A life lived without choice is a life destroyed.
Forgive me. You are not reading my post carefully enough. The reason you should believe my claim is that I am not claiming that my position on the issue you mentioned is an axiom at all. How can we have axiom lock on an issue when my position on it is not axiomatic?
But you aren’t saying that I am discoursing uncivilly, are you? Of course he knows his beliefs better than I do. I presume you allow that I know mine better than you, yes?
Again, just to be clear: I am not claiming that I have more knowledge of your, Teine’s, or anyone else’s beliefs. I am only claiming (in this paragraph) that we do not have axiom lock over the issue of suicide’s position in a moral system, because it is not an axiom of my belief set. Are you claiming that yours or Teine’s includes, axiomatically, that moral systems do or don’t hold a certain value for suicide? I tend to doubt that.
Even if this were true (note that he only offered a definition in his last post, after I wrote that he seemed unclear), it is not evidence of differing axioms. It is only evidence of unclarity on my part, his part, or yours. There is no need to postulate any sort of deadlock when none is necessary to explain the evidence.
I don’t understand how you are using the word absolute then. Perhaps you can provide a definition which demonstrates that they are contextless. My understanding was that they applied to all contexts. Just as the word “context” does, for instance. I myself pointed out the critique of moral relativism in the Wikipedia article linked to earlier. Namely that it seems to be itself an absolute framework, and so not preferable. I am not particularly satisfied with that critique. It seems that you are rephrasing it.
No, I am not claiming that. But there are characteristics which may provide a framework from which to evaluate moral systems in some way.
But if you have never encountered it, have no conception of it, how can your moral system already contain a judgement of it?
Because the relativist position seems to me to be that there are no such “inadequacies” which are not part of the moral system. Or rather, no way to judge them except from within the moral system. My question is: if a moral system knows enough about an inadequacy to pass judgement on potential solutions to it, why are those solutions not already part of the moral system?
I think you have suggested an entirely new critique of moral relativism. I have not thought long enough about it, but it seems that I might be able to propose a formulation under which it may be impossible for a moral agent to accept any morality other than some absolute one under the restrictions imposed by moral relativism.
I never suggested that we make a judgement without humans. I simply said that a moral system might exist even if no particular human had adopted it. It might be written down, for instance. You are asking another form of the same questions I have asked above. How do moral systems get built. If they cannot exist outside already practiced moral systems, then they can never contain new principles. No?
Maybe, but why do we add “in any functional way” here? That has not been part of our definition of moral systems before. I have been arguing that they must be capable of being functional, and you guys have been disagreeing, but we have never before required that any particular moral system actually be utilized before now.
Yes, this I agree with. But just because a system has not been “deemed” to be better than some other system does not mean that it is not actually better. It certainly does not mean that it does not exist or that we cannot discuss, evaluate, or think about it.
Right. But before you said
This indicates to me that if I were to understand your moral system, I would need to understand your “context” (which in this use I take to mean every subjective detail about your thoughts, beliefs and experiences).
Taken together, these two propositions seem to me to indicate that it is impossible for me to evaluate your moral system. Or that of any other person for that matter. Given that, it seems impossible, once again, for me to be able to add or modify my own moral system based on moral systems encountered outside myself.
This is an aside, but this is precisely why I find this thread so valuable. I am delving into the implications and nuances of my own beliefs like I have never done before (with the possible exception of the first time I encountered them, which I alluded to earlier). I am increasing my understanding of myself with every paragraph. This is also why my posts are getting longer and longer. I find so much value in each paragraph that I cannot bear to cut them.
But this is not true. Ok, you can’t erase your brain, but you can learn another method of choosing moralities besides using your present one to judge them. I suggested a couple. Given that, there is no reason to privilege your method over any others. Right?
Well, you are the one postulating that one person cannot know another’s “context”, and that this means he cannot know another’s morality. I am not requiring this to be true at all. I believe that people are more similar than they are dissimilar. That we do, in fact, have a common context through which all of us can communicate, evaluate, and share. I only meant that I do not feel comfortable making any judgements about your morality right now because of a lack of knowledge.
No, not at all. This framework is not a moral argument nor indeed a moral framework. It is, however, a framework by which moral systems could be ranked. The question becomes: why could it not be privileged above all the moral systems in question? I’m not saying it should or shouldn’t. But relativism says that it cannot. Specifically, you cannot claim that any framework which could otherwise be used to rank moral systems is privileged above the “counting of ‘A’s” system. That is the second tenet of moral relativism.
Except, of course, that evaluating a person’s most frequent “failing” requires the adoption of some moral system. If the first system adopted does not register that “failing” as a failing, then they have no need of any other system. Right?
You see, again, I really find this very close to the idea of the “life of the individual” I was talking about earlier. If I may use your terminology (and please forgive and correct me if I misuse it): it seems to me that one could postulate a moral code which places the fulfillment of the shai at the bottom of the list. It seems to me that, under the metaphysical proposition you have laid out, such a code could be absolutely evaluated as less applicable to people with shai (what’s the plural?) than a system which did not place it at the bottom. In other words, such an evaluation would apply to all people with shai. I am assuming that you hold that everyone has such a thing, yes? If so, then such an evaluation would be absolute for all people.
Again please forgive me if this in some way insults you. I am struggling to understand.
LOL. Perhaps they hopped out of context?
Let me stop you right there. There seems to be a drive on your part to adopt one goal at a time. Why is that? Why do you “take as your goal” when you could “take as one of your goals”? Conversely, it is not true that you have not had being fit as a goal; it is simply true that other goals have come first. It is up to you to decide whether this hierarchy of goals is appropriate for you and to change it if need be. But it does not (in this case) require adopting new goals out of the blue nor dropping others completely.
But you see, here again, you are oversimplifying the case to one in which the moral agent chooses suicide and nothing else to value. I understand what this would look like conceptually. I simply deny that it is possible in a human being. He would still have to breathe and move, among a great many other things, while he positioned himself such that he could achieve this goal. He would, in short, have to achieve many other minor goals along the way to achieving suicide. All these other goals could be shared with every other human on earth.
Even this freakishly impossible person (who has chosen suicide as the only end to which he is the means) has many things in common with the rest of us. That in and of itself should refute your postulation that there are no common characteristics which might lend themselves to a framework for evaluating moral systems.
Ok, so it is your formulation of the framework that all people use to evaluate which morality to adopt.
The problem then becomes that you have to prove this is the case. Namely that all people use this framework. I have taken on the much simpler task of suggesting that there is a framework which everyone could use and which is objective, unique, absolute, and knowable. I have not made any assertions as to whether or not anyone or any group does, in fact, use this system.
Ok, but I think this is the end of a much longer chain of conceptualization which we have not done yet. I think it may depend on a lot of assumptions which may not be shared by the rest of us. Feel free to elaborate, though.
If we judge them, yes.
Well, your perspective and mine have to differ, if for no other reason than that you do not occupy the exact same space as I do. However, depending on the scope of the judgement being made, we may not differ at all. Consider the possibility that you and I agree on a definition of “human”. Consider that we both then look at a particular human. Even with our different perspectives, we might both judge that yes, in fact, that thing is indeed a human. Just because we have different perspectives does not mean that they are so different we cannot agree on anything.
I’m not so sure about that. Not without some additional clarifications, anyway.
I’m afraid this seems a little circular. We judge as good that which we find good?
This is a much better formula than the “happiness” you were using before. I’m not convinced that such a thing exists, but this is better. Be careful not to confuse this quantity with the standard definition of “satisfaction”.
Right. But now you are not only postulating that all people’s actions are performed on a particular basis, but that this basis is quantifiable. You now have to prove that such a quality exists and can be measured, AND that all people use it. If even one person does not use this formula, your proposal, it seems, fails.
This is the proposition that you need to find a way to falsify. What would a moral agent who did not do this look like? Find a description of that, show that such a moral agent must look that way, and then show that a moral agent looking that way is impossible.
You see, I think this example is good evidence that we do not behave as you suggest. I think there is virtually no evidence that this is true.
Will that do?
Well, this is certainly one problem with getting everyone to agree to such an absolute, objective, and unambiguous method. It is far from the only problem associated with constructing such a method, however. This is one reason why I have not attempted to do so.
Fine. We have now established your threshold for “odd”. Personally, I have a different threshold. Without personal knowledge of the people around me, I would not find it unexpected or odd for one of them to commit suicide.
Why not rely on them? Most people have experience using these tools, and find them sufficient. In fact, I do believe that there have been people in existence who did, and still do, use bone reading, tarot cards, and in fact various random methods to help them make their choice. They are still making a choice.
I disagree that a moral agent must pick a morality that fits his needs. I agree that a moral agent will most likely pick a morality that fits his needs, but it is hardly a certainty. Surely you know how contrary humans can be.
You’ll have to forgive me if I find this a bit condescending. The smiley didn’t help.
Exactly, and I never said any different. So they are not privileged. Who cares? They are useful. So are others. Once one accepts that no system is privileged, one can just pick whichever framework one is most comfortable with.
Correct. The life will most likely be short after the decision is made.
True, but that is different from what I originally proposed. Either is possible, depending on the person.
No, there is nothing in my definition of moral system that requires rules for altering the accepted code of conduct. It is entirely possible for a given moral agent to stick with the first moral system they ever developed for the entirety of their life. However, through experience I have found that most people do modify their moral systems over time. Whether and how they do so is their choice.
It’s not there, so you can stop looking for it. As I said just above, a moral agent need not ever change their moral system. And if they do change their moral system, they can choose to use any method they want as a tool/justification. I don’t care. You’re the only one who seems to.
I don’t know how much more simply I can put it. A human who wishes to die doesn’t need to breathe. In fact, ceasing to breathe would be a perfectly valid way to meet the goal of dying. They are still human, at least until they die. Perhaps even after.
Not dead people. People who wish to die. There is a difference.
I would say that the circumstances (which included having honor of a particular form as a high moral good) elevated suicide to a high moral good. In some cases, properly committing suicide could raise the prestige of a samurai after death.
So? I never said it had to be. In fact, I think I pretty thoroughly implied, if not said outright, that such a choice was arbitrary and based on personal preference.
Except that you’re missing that there is no single reason that moral agents pick moral systems. In fact, the reasons moral systems are chosen are about as varied as the moral agents that pick them.
Simply looking at external actions isn’t sufficient. It may appear from externalities that choices were interfered with, but the true measure of whether choices were interfered with is whether the moral agent in question believes that their choices have been interfered with. A choice has not been interfered with if the choice that was removed or altered was not one that the moral agent considered a choice. Also, externalities will miss instances where a moral agent adjusts the choices that they would pick based on objectively unrelated incidents.
Which is not the same as living their life for them, which is where we started with this particular item.
Ok, I think. Are you sure, though? You would not find it “unusual”? Really? I suspect I am being whooshed, or you are being deliberately obtuse.
That is exactly the relativist claim. That there is no way to rely or not rely (in any way other than faithful execution of some system) on any of the possibilities for evaluating moral systems.
I agree. Why not propose that moralities which include using these tools for this purpose among the moral things which can be done are therefore superior to moralities which do not?
I don’t think I said this. On preview, I did. I said it in the form of a badly worded question. Let me try again. Ok, you do not agree that a moral agent must choose a morality based on his needs. I do not either. I meant should or some other word. More precisely, I meant that moralities are picked or used in order to satisfy some need. Can you agree to that?
You are forgiven, and I apologize. I did not mean it that way. It was intended as a friendly jibe.
Ok, but why is “comfortable with” the framework we should use? What about that helps me decide which morality to choose?
Well, no, most suicides fail, if I recall my statistics correctly. Assuming that such an agent succeeds, however, you’re right: the life would be short. But to be clear, you see no contradiction in the idea of a life lived for no other purpose but to end itself. You see no way to absolutely evaluate that moral system against another which places other goals above suicide. Correct?
Quite, except that your formulation required, or at least placed, suicide in the top ten without breathing among them. I’m trying to make the distinction that actions can be taken without moving one’s morals all around.
But then where do they get one in the first place and why is that method used over some other? Is it really possible for a moral agent to live and die entirely without morals? That seems both an implied possibility and a contradiction.
FTR, I don’t care about anyone else’s morality. I am truly only interested in mine.
Yes, they do. In order to wish to die, a human needs to breathe.
Quite. It would not, however, allow the person to choose to stop breathing.
Yes, there is. And people who wish to die, especially people who wish to die for no reason other than that they wish it, actively seek out that difference. A person who wishes to die for no other reason than that he wishes it seems to me not a person. Or at least not as much of a person as someone who wishes to live, or even as someone who wishes to die for some other reason. I understand your reluctance to accept such a formula. Perhaps we have reached an axiom lock after all. Can you at least try to formulate what such a thing would look like? A person who, for no greater reason than his own choice, decides to die? I can’t comprehend it.
Yes, but you would be wrong. Suicide was always considered a pretty high moral good. Just not as high as honor. At the same time, neither before nor after did the relationship between suicide and honor change in the samurai’s moral code. It just so happened that under that code certain circumstances arose for which suicide was the only “good” action to take in order to maintain or achieve honor. There is no need to postulate that suicide, honor, or any other of the samurai’s moral positions moved in any way.
Ok, but then it begs the question: why use that standard?
This is certainly true. It is also irrelevant to the point I am trying to make. The fact that there is no single reason for choosing any morality over any other does not mean that there may not be some absolute reason to do so.
No, not for all possible circumstances, but for a great many it will be.
No, it is not. The agent in question could have been tricked. Or killed. Or for some other reason they may not know that their choice was interfered with. All of which is irrelevant to the point that certainly some objective “interfering with choices” can be detected.
I agree. Again, I did not say that all cases of choice interference were objectively detectable. I merely said that not all such cases were entirely subjective.
No, it was not where we started this subthread. We started it with me suggesting that other characteristics of a “whole life” besides biological length may be objective and you denying it. We have ended it with you insisting that the lack of evidence for a demonic possession of some sort is evidence that there are no such objective characteristics.
This is indeed what I have understood to be your position. I think that this is because you haven’t quite grasped it, or “internalized” it. However, I don’t think that this is a failure of yours so much as it is a failure of mine to communicate it well enough–or a failure of language to allow me to.
It is a CLOSE restatement, but I’m not sure that it fully communicates the essence. I’m not having much luck with these either. I keep coming up with refinements that edge closer to it, without quite getting there. Perhaps a few more tries?
Moral agents want to “shape” the world. (Too vague)
A moral agent sees that an opportunity to choose exists: The moral agent wants to make the choice. (Too strong)
A moral agent sees that an opportunity to choose exists: The moral agent wants the choice made to be the one he would have made. (Not strong enough)
Another approach. Try to “read between the lines” on this one. I am thinking about watching my children develop. At first, they waved their hands in front of their faces and looked surprised that a hand went by. Later, they would hold their hands in front of their faces and watch in fascination as they controlled the opening and closing of their fists. Later still, they would wave their hands at objects and respond with glee when they hit the objects and caused them to move. Much later, they would pick up an object and do something to another object with it. Along the way, they have encountered instances where they set out to do something (say, eat a bug) and were obstructed by me. Another moral agent intervened and thwarted them. Since then, they have been practicing various techniques aimed at ensuring that I don’t thwart their goals.
What I witnessed was a progression of efforts to try to control reality as they saw it. The appearance of another person (or even a recalcitrant toy) results at first in pure frustration and later in various attempts at problem solving. I think that this desire to successfully “do”, to successfully change the surrounding reality, is the base absolute good for moral agents. It is selfish, but in an innocent way. I think that the pursuit of this kind of success results in 1) traditional ideas of problem solving, such as experimentation and hypothesis testing, and 2) when dealing with other agents (which are not as predictable as toys), primitive-to-complicated “moral” systems designed to prevent other moral agents from thwarting their goals.
I think that, for many, empathy is a result of this. There is something innocently amusing about a child who hasn’t quite figured out yet that other people’s wants are different (centered on a different agent). My 2-year-old would tell me she wanted something as if that should naturally mean that I must want her to have it. It dawns on them in stages that my wants aren’t necessarily exactly in tune with theirs. I think that as humans advance, they start trying to imagine another’s wants so that they can negotiate better. For many, this leads to an ability to imagine another’s pleasure and pain and, combined with evolving notions of “fairness”, to wish to avoid inflicting pain. At some level of functioning, we start to actually place value on something based on someone else’s system (or our understanding of it). I think that this leap is what leads to the “higher level” morals of the type that get discussed in this thread. Nonetheless, I don’t think that the base desire to “do” is ever displaced or demoted in its importance. Because I love my daughters, I want them to have things that make them happy. Because I have the same “base” priority of “doing”, I get a singular pleasure when things that make them happy are provided by me.
In the midst of all of this, we are never outside of our own bodies, senses, feelings of pleasure and pain, etc. On top of that, we are continually observing reality, seeking cause-effect patterns, noticing correlations, and creating an internal “map” of how the world works and how other people work. And each one of us is examining a different perceived reality using a different toolset and, therefore, creating a different “map”. Often, the map is not accurate. Many times we think we’ve found a cause-effect when, in fact, we have only observed a correlation (superstition). Nonetheless, our brains are similar, and we are often able to understand the choices of others based on a subset of their knowledge and experience. It is based on this (and a background in psychology) that I feel comfortable positing a path from every behavior we have discussed so far back to the inherent desire to “do”.
Suicide keeps coming up again and again in this thread. I’m not sure if this is your intention, but a number of your posts give the appearance that you are trying to start from a moral stance (suicide is bad) and work your way backwards to an objective justification for this stance. It struck me that in one case you suggested that no valid moral system could contain suicide as a “highest” goal, particularly since I have reason to believe that a number of Muslims view a particular kind of suicide as being, in fact, the most moral act they can commit. To me, understanding this is just a matter of understanding why they think that this act is “effective” in pursuing their goals.
My father committed suicide when I was 7, so it is a subject that I have given quite some thought to over the years, particularly suicide of the type committed by those who are depressed, which he clearly was. You seem to be coming from the angle that a depressed person who commits suicide is either committing an absolutely immoral act or that their moral systems have failed. To me, they are acting in a reasonable way wrt unreasonable circumstances.
For many people who are depressed, there is something not working correctly in their brains. Events that would make most people happy don’t produce the expected feelings of pleasure. And feelings of displeasure are almost continuous. What this creates is a situation that trains them to think of themselves as ineffective. They try numerous strategies to “make the brain happy” and nothing seems to work. It would be like continually kicking a rock and having it not move. At the same time, they are having bad results in their efforts to “control” other people. She doesn’t love me like I want her to. No one understands how I feel. Everybody thinks I’m stupid. At a certain level of depression, suicide starts to seem an efficacious act: It will make the pain go away. It will make everyone understand how unhappy I’ve been. It will make X feel sorry for treating me so badly. I interpret this as a last-resort way of taking some control, of making a choice that has the desired effects.
In the same way, someone who is in pain from a terminal condition can get to a place where pain prevents any meaningful activity and the only meaningful option is to find something to make the brain stop screaming. Of course, our society currently denies many people the dignity of this last meaningful chance to control their own lives (but that’s a different debate).
In sum, even when I can’t always explain it, there has yet to be an example mentioned here that I can’t pretty easily see a way to trace back to this inherent desire for choosers to choose, in much the same way that the amoeba chooses to envelop this food item, copy itself, and continue, if only as a way of moving to the next food item. I notice that a lot of people seem to recoil from this notion of a “base” morality that is so self-absorbed. But I think it is a natural consequence of being a separate, individual being, and it is the source of the amazing diversity of thoughts, beliefs, opinions, and “whole lives” that surround us. And the fact that self-absorption can evolve into enlightened self-interest and even apparent selflessness is NOT some sort of justification of selfishness. It is a demonstration of magnificence.
Think nothing of it.
This is the version that describes what I was trying to explain. They use the same system because it is inherent to them. They focus it on different agents because they are different agents (not that this is any better than the way you put it).
Y’all are driving me nuts with this suicide vs. breathing thing. Can we go back to talking about bondage?
I don’t know if there is axiom lock here, but there are definitely competing definitions. I suggest taking a couple of steps back and trying to find where the paths diverged and trying to resolve THAT difference, whatever it is. Barring that, we can’t be more than a post or two away from a Clintonian “that depends on what your definition of ‘is’ is”.