** Few societies have come up with truly innovative strategies. We can usually look at societies that have attempted to deal with similar problems in similar ways, and see how things worked out for them.
In other cases, we can attempt to apply logic to known variables and deduce outcomes. This is a limited method, prone to error due to insufficient data and user bias, but it’s all we’ve got. (It wouldn’t have taken a rocket scientist to tell the original natives of Easter Island that they shouldn’t’ve cut down all their trees, for example.)
There are a few general principles that can be gleaned from observing developing systems that are sometimes useful. For example, sacrificing long-term goals in favor of immediate payoffs is rarely advisable.
** You mean, if the society acts in a way that will eventually cause it to fail, the society will inevitably fail? Well, yes. Obviously.
Um, no. Is a species evolutionarily successful because it has at least one member living at one particular time?
** No, the rightness of a particular ethical system can only be determined relative to the workings of the universe.
Do I have to keep explaining the same answers over and over again?
** Why do you keep asking the same question over and over again when it’s been answered pages ago?
As mentioned before, sacrificing the future to benefit the present is a pretty bad idea. The aspects of the environment that appear most harmful in a balanced system are the most vital to its survival; the beneficial aspects can be the most dangerous. All behaviors can be reduced to selfishness, but the definition of ‘self’ is not always obvious. Certain sustainable growth is better than uncertain nonsustainable growth. To produce the most adaptive ideas, place as few restrictions as possible on their generation, but make the selection criteria stringent. Don’t put all your eggs in one basket.
There are more, but most of them are difficult to describe in only a few lines, and many apply to specific scenarios.
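One of the principles above, that certain sustainable growth beats uncertain nonsustainable growth, can be illustrated numerically. This is only a toy sketch with hypothetical growth rates: the point is that long-run outcomes are governed by the geometric mean of the per-period growth factors, so a gamble that looks good on average can still shrink to nothing over time.

```python
# Hypothetical numbers, chosen only for illustration.
steady = 1.02                      # certain 2% growth each period
risky = (1.5, 0.6)                 # 50/50 chance of +50% or -40% each period

# The risky option looks better on average per period...
arithmetic_mean = sum(risky) / 2   # 1.05 > 1.02

# ...but repeated over many periods, what matters is the geometric mean,
# and sqrt(1.5 * 0.6) is about 0.949: the risky population shrinks.
geometric_mean = (risky[0] * risky[1]) ** 0.5

print(arithmetic_mean, geometric_mean, steady)
```

The same comparison explains why populations that repeatedly bet their future on short-term payoffs tend to disappear even when each individual bet looks favorable.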
** Yes, actually. And it isn’t always chronological time that’s important. Evolution is best measured by generational time.
** Well, the best way is to try variations on the society’s theme, but lack of time and resources usually prevents this. We’re often forced to resort to logic and minor experimentation.
I try to respond to all posts, but they pile up so quickly… and it doesn’t help when posts are repeated. [taps toe rebukingly]
Organisms that we find in the world are usually part of evolved ecosystems; as such, they don’t have those goals, or they wouldn’t have stuck around long enough for us to become aware of them.
Interesting things happen when we start mixing ecosystems; new configurations often lead to catastrophic failures.
It may be possible for an individual to have such a goal, but not normally a species. A species might be artificially created with such a goal, but it wouldn’t be around for long.
You mean the laws that govern the workings of the universe? Yes, I agree – the effective strategies are indeed dependent on the laws of the universe.
Not at all. We don’t make those judgments, any more than our opinions as to the worthiness of a design has any effect as to whether it works. A flawed design is objectively flawed, even if we don’t perceive it. Likewise, a working design will work even if we’re convinced it’s flawed.
I have already answered it. Not only that, I consider the answer virtually self-evident; I’m having a great deal of trouble understanding why you all don’t understand it.
[sigh]
It’s trivially obvious that people who have certain beliefs find them appealing, satisfying, and correct (even when they’re not). Therefore, the worth of the moral system can’t be dependent on whether some person likes it; we can easily see that it’s always possible for there to exist some configuration of an organism that values some arbitrary things, much like it’s always possible to make a computational device that reaches some arbitrary conclusion.
The question then becomes: what is the point of moral systems? What is their function? Surely when we know what they’re for, we’ll be able to tell which ones do it best.
The classic example of the wolves in famine times demonstrates this perfectly well. The wolves don’t have enough to feed both themselves and their pups. Some wolves feed themselves, allowing their pups to starve. Hopefully, they’ll try to have pups again when conditions improve. Other wolves “altruistically” feed their pups. They starve, but their offspring are a year ahead.
Which strategy is correct? It depends – if the advantage to the altruistic descendants is greater than the advantage to the selfish parents, the population of wolves will slowly become more altruistic. If it’s the other way around, they’ll become more selfish. If neither, they’ll remain partly selfish and partly altruistic.
If the “correct” strategy is whatever the wolves say it is, then the correctness is determined by the ability of the strategies to persist through time.
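The wolf example above amounts to simple replicator dynamics. Here is a minimal sketch; the fitness numbers are hypothetical, and the only point is that whichever strategy persists better comes to dominate the population over generational time.

```python
def step(p_altruistic, fitness_alt, fitness_selfish):
    """One generation of selection on the fraction of altruistic wolves."""
    mean_fitness = p_altruistic * fitness_alt + (1 - p_altruistic) * fitness_selfish
    return p_altruistic * fitness_alt / mean_fitness

p = 0.5  # start with half the population altruistic
for _ in range(100):
    # Hypothetical payoffs: altruists slightly ahead this time.
    p = step(p, fitness_alt=1.05, fitness_selfish=1.0)

print(round(p, 3))  # fraction of altruists after 100 generations
```

Swap the two fitness values and the population drifts toward selfishness instead; set them equal and the mix stays where it started, matching the three cases described above.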
** Because that’s the result I got when I analyzed the nature of ethics. It’s the same reason that I concluded “A=C” when presented with the statements “A=B” and “B=C”.
Asked and answered.
Ask someone which is more important: that a bridge stays up when they try to cross it, or that the bridge has some other arbitrary property. Guess what they’ll probably tell you. Go on, guess.
Not extraordinarily well, I’m afraid.
** So? This is the same problem with all attempts to understand the world: limited data produces uncertain results, and unlimited data is not theoretically available.
We could try finding an answer through logical assessment of the situation’s known properties, but that’s a second-rate solution at best. It’s like trying to determine the results of an experiment by sitting in an armchair and thinking about it.
It’s conceptually inevitable. Either our concepts bear no relationship to how the world works, or they do. If they do, my conclusion follows; if they don’t, my conclusion doesn’t follow, but this entire debate is pointless.
** Biological evolution is a special case of the more general meaning. Of course, ‘evolution’ means “change over time”. Combine the assumption of change with the assumption that some states are more difficult to change than others, and ALAKAZAM! we’re forced to a conclusion.
** Part of the problem is that there is no scientific consensus on the nature of life; consequently, it’s not surprising that scientists can’t agree on whether there are living ideas or not.
Meanwhile, culture continues, science marches onward, ideas are transmitted and spread throughout the noosphere.
To me, it feels much like arguing over whether it’s theoretically possible for a bumblebee’s anatomy to enable it to fly when there’s a small swarm of them hovering around the mint outside my window. Either the answer is ‘yes’, or the theory is wrong.
With respect, sir, neither do you or I. Nevertheless, the conversation continues.
** It’s a good thing I wasn’t using it in a biological sense, then. [whew]
** Bingo!
It’s as expected as the evaporative cooling of water, which is why it’s so remarkable. People find it obvious that the distribution of the water molecules’ kinetic energy will change in predictable ways over time, yet they don’t perceive that the distribution of belief in moralityspace will alter in predictable ways over time.
“Intelligent” behavior arises from utterly nonintelligent behavior. This is simultaneously a wondrous thing, and utterly obvious.
Ethical meaning arises from ethically meaningless principles. Wondrous, and obvious.
What’s so nifty is that the companies did not seem to be explicitly aware that the bears that looked most like human infants were preferred; indeed, they didn’t even seem to be implicitly aware of it. “Cute” bears didn’t dominate the market overnight because the toy makers conducted a study and found that they sold better. The nature of the bears simply shifted over time, as the manufacturers unconsciously adapted to the parents’ unconscious preferences.
** Again, I don’t understand why you don’t understand.
** Let me rephrase that: what changes is the conception of a “desirable” teddy bear. I can almost guarantee that if I show someone a picture of a Model-T and ask them if that’s a car they’d like to own, they’d say ‘no’.
All language is metaphor. I could attempt a Zen explanation, which would manifest the concept directly instead of referencing it obliquely with words, if you’d like.
As you appeal to scientific consensus. Gee, no implied value judgments there. [sigh]
TVAA: Posts are only repeated because you continue to dance around the issues without actually answering the questions. I thought you had learned your lesson briefly, but your last post indicates that I was (sadly) mistaken.
Let’s try this again, shall we?
No, that’s not what I mean, and I can’t for the life of me understand how you could completely miss the point of my question and come up with that interpretation instead. Well, I actually can understand, but I’m trying to be charitable here and not accuse you of deliberate obfuscation.
The question is whether your theory states that a moral system must be perfect in order to survive, or whether a society with a flawed moral system can still survive. If a society permits a single “bad” act to be considered “good,” is that something that will “eventually cause it to fail”? Or can a society survive even though it incorrectly permits a single “bad” act to be considered “good”?
Let me rephrase…
You say that whether a particular society’s ethical system can be considered valid or not is determined by whether that system has allowed the society in question to survive. Now, forgetting all the other side issues such as how long a society must survive in order to qualify as being successful, my question is how this theory allows one to decide whether a particular part of a society’s ethical system is valid or not. The ethical system as a whole may be valid in the sense that it allows the society to survive, but does this necessarily mean that every part of the system is valid? Or is it possible for an ethical system to permit survival while still being flawed? And, if so, how does one tell which are the flawed bits and which are the correct bits?
Say you have a society whose ethical system says that murder and thievery are bad, but slavery and adultery are good. This society survives long enough for you to consider it an evolutionary success. Does your theory therefore state that, as practiced by this particular society, slavery and adultery are “good” acts because the society has survived? Or is it possible that the society has survived in spite of the fact that its ethical system condones slavery and adultery? If the latter, how can you judge which particular acts are good or bad? And if you can’t judge the moral value of particular acts with any degree of certainty or reliability, what is the utility of your theory?
There you go with the whole answering a question with a question bit again. You yourself said that the evolutionary-based “unchanging and absolute standard” that determines the validity of moral systems is “existence.” Now, maybe you meant that it’s the existence of the “moral system” that determines its validity, but earlier you talked about the existence of the society that holds a particular moral system, and I didn’t think you were contradicting yourself here.
Regardless, what does your question have to do with anything? A society can no more exist with a single member than a species can. If you meant “prospers and continues” instead of “exists,” you shouldn’t have said “exists” in the first place. Regardless (once again), your argument leads to the conclusion that any society that currently exists (and feel free to redefine “exists” so that it includes the concepts of “prospers” and “continues” and “looks very much like it will continue existing for the foreseeable future” if it makes you happy) must be considered evolutionarily successful. Or, at the very least, that society’s moral system must be so considered. Which once again leads me to the question of whether all parts of that moral system must necessarily be valid, or can some parts be invalid, and how does one tell which parts are which?
So it doesn’t matter the particular circumstances of the society that holds a particular ethical system? Funny, I could swear that’s not what you said a few pages back…
Really? Care to tell us what those societies are? Just idle curiosity, mind you…
Ah, an answer of sorts (finally)! So, the “best” way is impractical due to lack of time and resources (I’m not sure what “resources” have to do with it, but it seems from what you’ve said that you would need infinite time to tell for sure whether a society is “successful” or not). Instead, we’re forced to resort to “logic and minor experimentation.” And how accurate would you say this method is? More than 50% accurate? Less? Accurate enough to be of any practical use whatsoever? [Most “absolute moralists,” BTW, tend to believe that their theory of morality can practically be used 100% of the time with 100% accuracy. Weird, I know, but then, that’s why I don’t subscribe to moral absolutism.]
And remember – we wouldn’t have to keep asking the same questions over and over again if you would just answer them in the first place without constantly redefining terms, answering questions with questions, or making complete non sequiturs…
** How in the world should I know? How can anyone know, without running the experiment and finding out what happens?
As to why it’s bad: creatures, societies, and memetic patterns that do that have a nasty tendency to cease existing.
Um… that we need to use logic and experimentation to choose between them, while remaining aware that our judgments do not necessarily match those made by the universe?
Your theory tells us absolutely nothing about the plurality of moral systems and selecting between them! Nada. Zip. Bupkis. Not an electronic sausage. Which is, of course, why it’s a completely impractical theory and why, on the whole, it’s a bloody useless waste of time to even be discussing it in the first place.
So, what do I win?
Good night, folks…
Barry
P.S. Next up is the question as to whether man is, in fact, controlled by evolution or whether the fact that he has intelligence sets him apart from all the other “organisms” out there that TVAA likes to constantly make reference to. What about the fact that man is able to change his environment to suit his needs rather than being forced to adapt to his environment like other “organisms”?
** Fascinating. I’ve been thinking rather the same thing, only with you instead of me and my answers instead of your questions.
** Perfect in order to survive for how long?
No organism is adapted to every possible contingency, only to those circumstances that exerted selection pressures on the population and for which the available genetic diversity could produce a solution. In other words, no creature is perfect. As a result, creatures die constantly. Does evolutionary theory imply that no imperfect creature can survive for any length of time? ‘No’, you say? Then why do you ask me if my theory states that a moral system must be perfect in order to survive for any length of time?!
It’s the same theory, folks. It’s just the general case instead of the specific.
** Respectively: Yes. Yes, for limited time, until the society faces a situation its code produces the wrong response for and is destroyed. By seeing if they work in other systems; if all systems including principle X fail horribly, principle X probably doesn’t lend itself to the survival of moral systems.
** I’d take a look at other successful societies, and see if any of them held similar beliefs. I’d look at what happened to societies that thought murder, thievery, slavery, and adultery are bad. I’d try to analyze how the society survived, and attempt to understand how its values affected its survival.
Basically, I’d undergo the same process everyone uses to learn things about the world: the scientific method, or a primitive version of it.
** No, existence for infinite time. Everything else is just an approximation.
** But what standards determine “prosperity”? Having lots of things that guarantee survival and continued existence?
** I’m not aware of any societies that have achieved any kind of evolutionary equilibrium. Ancient human societies (the famed hunter-gatherers) certainly did; it would be useful to look at their societies in order to learn what lifestyles humans evolved to be best in.
Besides that, we have to resort to logic.
** No, the circumstances determine what the correct behavior is, not the ethical system.
** Ancient Egypt is always a fascinating study. It lasted so long, yet it really changed so little…
** Accurate enough for use; indeed, it’s the most accurate method to be produced so far. (But how do I determine this? By seeing what works…) Otherwise, how accurate is the scientific method? More than 50% accurate? Less?
But my answers are usually contained in them, like the story told by Lao-Tzu to the anxious official.
Given that:
[list=a][li]The purpose of this thread was to provide possible justifications for moral relativism,[/li]
[li]TVAA steadfastly refuses to admit that his theory of evolution-based absolute morality is, in fact, a form of moral relativism,[/li]
[li]TVAA apparently cannot explain how his theory has any practical real-world use whatsoever when it comes to actually making moral judgments or selecting between multiple moral systems, and[/li]
[li]Nobody else apparently agrees with TVAA’s theory or, for that matter, thinks that it even states what he keeps claiming it states[/li][/list]I propose that we therefore table any further discussion of this theory and get back to discussing whether moral relativism can be justified or not.
Because you were the one who made the value judgment ‘bad’ in the statement, “As mentioned before, sacrificing the future to benefit the present is a pretty bad idea.”
LOL godzillatemple, but he has explained, you see, it is just like sand! And this poem. It’s all quite clear, really, so clear that it cannot even be stated in the logical form TVAA insists the entire universe and all moral structures follow.
Whether or not I agree with evolutionary ethics, and I don’t, I do think it is clear at this point that it hasn’t offered us an escape from relativism. Even if we assume by hypothesis that a privileged system exists, as TVAA seems to indicate he does, the problem of plural moral systems remains and we cannot, in fact, gain access to this privileged system, so the point is rather moot.
I certainly can’t explain how my theory is workable when, as far as I can determine, none of you even understand what it is yet.
The available evidence suggests that the adoption of such a strategy has a net negative payoff. Additionally, logic suggests that such a strategy isn’t such a great idea. Other than that, how should I know? Perhaps it actually works really well, and my data is just too limited or full of errors for me to tell.
For the sake of argument, let’s accept that morality is determined by a cosmic opinion poll.
We quickly note that there are principles that affect what things are considered to be moral: specifically, basic evolutionary principles.
We note that preferred systems of morality are “evaluated” by their ability to persist.
The principles that determine which moral strategies are “best” are inherent in the nature of the universe.
Ergo, there are objective and unchanging standards by which systems of morality can be said to be judged; in other words, an ultimate “morality” not dependent on opinion.
Let me state my theory as simply as reasonably possible.
My theory is not that ‘good’ means “conducive to survival”.
My theory is that, as time progresses to infinity, opinions about the nature of goodness will converge on that which is conducive to survival.
Therefore, that which is conducive to survival is true goodness, just as the ratio of heads to tails in an infinite number of coin tosses is the true bias of a coin.
Thus, I am claiming that morality is ultimately objective, universal, and unchanging; it is by definition a morally absolute claim.
A morally relative claim would be that morality itself can change, not just claims about morality. That is the exact opposite of what I’m suggesting here. Ergo, I am necessarily claiming that moral relativism is wrong.
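The coin analogy above is just the law of large numbers, and it can be sketched in a few lines. The bias value here is arbitrary, chosen only for the demonstration: the observed ratio of heads converges on the coin’s true bias as the number of tosses grows.

```python
import random

random.seed(0)
true_bias = 0.7  # arbitrary "true" probability of heads

def observed_ratio(n_tosses):
    """Fraction of heads observed in n_tosses flips of the biased coin."""
    heads = sum(random.random() < true_bias for _ in range(n_tosses))
    return heads / n_tosses

# The estimate wanders for small samples and settles near 0.7 for large ones.
for n in (10, 1000, 100000):
    print(n, observed_ratio(n))
```

The claimed parallel is that any finite sample of opinions about goodness is a noisy estimate, while the limit of the process, on this theory, is the objective quantity.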
I have never denied that “there are objective standards by which systems of morality can be said to be judged”; what I deny is that it is “best” to equate ‘good’ with ‘survives’ (let’s not beg the question or anything) and that, if we do equate ‘good’ with ‘survives’, we are guaranteed a single, privileged system.
There is a significant difference between the two claims.
The first is an unfounded assertion – a proposal for a moral axiom. It’s an opinion, nothing more.
The second is a claim about an objective reality – whether it’s true or false, it’s inarguably a statement of fact.
Which is the whole point of this mess: I am specifically claiming that morality is objective.
Furthermore, I don’t understand the claims about what moral relativism is. If it’s not an aspect of moral subjectivism, then isn’t it necessarily objective? What middle ground is there between subjective and objective?
Mayhap, but you seem to be the only one whose faculties of reasoning are subtle enough to perceive such a distinction.
Yes, but it’s an “objective” morality that cannot actually be used to make moral judgments without first gathering empirical data over infinitely long periods of time. Your theory, even if true, is wholly useless when it comes to making moral judgments on a day-to-day basis in real world situations.
BTW, I forgot to ask this earlier, but with all your analogies to various “organisms” and what wolves do when faced with starvation, do you believe that the concept of “morality” applies to non-sentient beings in the first place? Are wolves making a “moral decision” when they decide to feed their offspring or not? If so, then I would suggest that you have defined the term “morality” in such a way that it contradicts common usage, and that may explain why nobody else here can grasp the “obviousness” of your theory.
And, failing that, feel free to address the questions I raised last night in my penultimate post. To wit:
** Possibly. I prefer to think that I’m just more sensitive to certain linguistic concerns than the average person.
Pffh. The same problem exists with all scientific theories about the nature of the world.
First of all, if it’s universal, doesn’t it necessarily apply to everything?
Secondly, I dispute your implication that most human beings make their “moral decisions” through careful, reasoned thought and analysis. As far as I can determine, people tend to make their decisions based on instinct, custom, and “what everyone else is doing”. Rare is the person who approaches such problems logically.
Thirdly, if we accept (as you seem to imply) that morality holds only for “sentient” creatures, what criteria define sentience?
** Lots of creatures can change their environment to suit their needs. Admittedly, humans are “better” at this than most creatures.
Humans are intelligent? By what standard? Can you actually take into consideration all of the different ways and methods of responding usefully to the environment and make a case that humans are the most intelligent of all creatures, which I’m presuming you intend to imply? Even so, what level of intelligence is required to be bound to morality? Why are you suggesting that morality isn’t universal? What about human beings who don’t have the prerequisite level of intelligence?