Taking a stand on moral relativism

Ah, so I see that you aren’t that familiar with these theories after all. I’m surprised, but at least it does explain a lot.

Barry

Information theory places limits on the degree to which we can gain accurate knowledge about any process that takes place in the universe.

Quantum mechanics places similar limits on the accuracy of our observations. It doesn’t introduce uncertainty into its results, though – only into the application of those results to what we see. (There are interpretations of QM that take non-determinism for granted, but they’re equivalent to the interpretations that rule it out – if the true nature of things is unknown, it doesn’t matter whether something is the result of chance or causality. How could we tell the difference?)

And chaos theory simply recognizes that uncertainty exists in our observations. It doesn’t claim that the universe isn’t causal.

You can. AFAIK evolutionary fitness is a non-starter. There is no way to appeal to survival to say that cockroaches, which exist, survive better than humans, which exist. Nor can it tell us whether it is better to be a murderer, which exists, or not to be a murderer, which also exists. However, this latter question, you might notice, is exactly the sort of matter humans concern themselves with in the study of morality.

That you feel like you’re telling me something I haven’t been stating for eight pages is disturbing.

Without a standard for validity, the description is meaningless. They are not equally valid, or equally invalid, because there is nothing to describe such a quality. That we may create one is obvious. If we choose to, then proceed to question its validity, we are back where we started. Relativism thus addresses this issue of infinite justification by saying, “It cannot be done. At some point, whatever that point or points may be, we stop justifying things. Where we stop and why we stop is a function of a particular methodology, theory, conception, system, function of something else, etc.” Relativism doesn’t tell us when or why we stop. It only recognizes that we do, and that we do so in different places.

I am sorry you feel this way.

It is a definition, TVAA, not an entire moral system. Systems will normally be composed of several such definitions, plus tools for deduction, perhaps induction, and other logics of their own choosing. I did not present it as a system. I noted it was a definition, and I noted that all definitions can be stated tautologically.

We’ve indicated the use of a term. “Good” means “what people should do”. That we have not fully elaborated on an entire moral system doesn’t mean that defining necessary terms isn’t getting anywhere. I frankly grant you more sense than to really think so. It takes more than one definition to construct a geometry. It would be a little silly to argue that after defining “point” or “line” that we should have a complete system, or even a system.

epolo, welcome.

Well, there can be simpler cases. This construction already assumes that responses are always contextual. While I think that is a reasonable assumption, it is not undeniable.

AFAICT, nothing. That is not really surprising, though. The model you presented is quite analogous to computational processes. Its acceptance has implications like, “There exists a rule for determining moral behavior”, which may be the case, but, depending on the character of the operations performed, might never yield a result in a finite amount of time. While TVAA seems quite content with this result, most people turn to morality and discussions thereof to reach a decision, largely because, for various reasons, we find a decision necessary and cannot wait an infinite amount of time, or perhaps a finite amount of time that is still longer than we have. So computational models might not appeal to all parties.
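
To make the worry concrete, here is a toy sketch in Python (the function name and the acceptability test are invented for illustration, not anyone’s actual moral theory): a perfectly well-defined rule whose evaluation may simply never return.

```python
# A rule that searches the space of possible responses for one that passes
# some acceptability test. The rule is well defined, but if no candidate
# ever passes, the search never halts, which is no help to an agent who has
# to act before the heat death of the universe.

def decide(situation, is_acceptable):
    """Enumerate candidate responses until one satisfies `is_acceptable`."""
    candidate = 0
    while True:                 # unbounded search over candidate responses
        if is_acceptable(situation, candidate):
            return candidate
        candidate += 1          # may loop forever for some rules
```

So “there exists a rule for determining moral behavior” and “we can compute an answer in time to act on it” are two different claims.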

I’m not sure I understand your fourth question, but its answer would depend on many factors, not the least of which is how we would proceed to evaluate that circumstance. The same really goes for your third question. These are questions that specific moral systems answer, not relativism, which is a perspective on justifications for moral systems.

In a very strict sense, yes. Gravity forces them down, the force that keeps water formed into droplets affects them, heat causes movement, etc.

However, the atoms in the cells of the brain are much more than just atoms acting independently. They work together to produce things like thinking, or autonomic responses like breathing and heartbeat, which in turn power muscles, giving brain-cell atoms a distinct advantage over water-puddle atoms.

Which really has nothing to do with morality at all. Morality is a human concept, water puddles aren’t human, and I doubt they have an ethical code they follow.

(I can’t believe I am even having to post this)

Welcome to the club, Epimetheus. :slight_smile: Be prepared for the “But they are all subject to the same laws of physics, so they’re really all the same!” argument. It holds as much weight as the “all pictures on the same wall are the same” argument, but hey, I don’t think that will stop him.

Wow. You really believe that everything is deterministic and that choice and free will are merely illusions, eh? How unusual.

We could go back and forth on this all day long, but as I said before I am not qualified to critique your assertions regarding quantum mechanics. I can only say that yours is a belief that I have rarely heard expressed elsewhere and is in no way “obvious”.

Regardless, even if your assertion is true, it still doesn’t solve all the other problems with your argument that have been mentioned over the last 5 or so pages. It doesn’t justify applying one subset of natural laws to situations that are not governed by that particular subset (i.e., not everything is about “survival,” regardless of how strenuously you assert that it is). You’ve said that the “laws of evolution” govern all things, including puddles and people, and that the sole purpose of morality is therefore to provide for the survival of the species. I say, however, that this is a ridiculous statement not supported by your underlying premises. You have made a leap of logic that everyone else here but you can see is unwarranted, counterintuitive, and just plain wrong.

Also, while your theory may provide an explanation of why we have the moral systems that we do, it cannot provide a practical method for evaluating different moral systems, regardless of how strenuously you assert that it does. And it doesn’t provide a practical method for making moral choices, since you have said that choice is illusory in the first place.

If you like, feel free to address these flaws once again using the same arguments that you have used in the past. And, once again, we will point out the flaws in your attempted explanations the same way we have done so in the past. It’s really not getting you anywhere, however.

Barry

The properties of the sum of the parts are not equal to the sum of the properties of the parts.

Wherever you go, there you are.

More specifically: choice is either illusory in a deterministic universe, or it’s utterly arbitrary and therefore meaningless.

Let’s put it this way: no one follows the Shaker teachings any more. There’s a reason for that.

Which is not to say there’s a reason; rather, you’ve created an explanation that you feel holds because of assumptions you’ve made. I’ll let you guess which one of the two resonates more strongly with me.

Huh?

Dude, you give me an epsilon greater than zero, and I will supply you with a computational model that describes the motion of three bodies under Newtonian physics to a degree of accuracy within epsilon.
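
To sketch what I mean (a minimal toy in Python with NumPy/SciPy; the masses, units, and function names are placeholders, not anything we’ve agreed on), the requested epsilon just gets fed into the integrator’s error tolerances:

```python
# Three Newtonian point masses in the plane, integrated with an adaptive-step
# solver whose error tolerances are set by epsilon. Tightening epsilon buys
# accuracy at the cost of more (and smaller) steps.
import numpy as np
from scipy.integrate import solve_ivp

G = 1.0                                   # gravitational constant, natural units
masses = np.array([1.0, 1.0, 1.0])        # placeholder masses

def deriv(t, state):
    """state = [x1, y1, x2, y2, x3, y3, vx1, vy1, ..., vy3]."""
    pos = state[:6].reshape(3, 2)
    vel = state[6:].reshape(3, 2)
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return np.concatenate([vel.ravel(), acc.ravel()])

def simulate(initial_state, t_final, epsilon):
    # Smaller epsilon means tighter per-step error control.
    return solve_ivp(deriv, (0.0, t_final), initial_state,
                     rtol=epsilon, atol=epsilon, dense_output=True)
```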

You can’t do that with Quantum physics.

The comparison is completely invalid.

(I think I grok what you were trying to say, but, as with raindrops, you just fumbled into a really bad example.)

Trinopus

More of a welcome back, really, but thanks just the same.

Damn, I was hoping that it didn’t assume that.
And I’m not sure that it really does. What about the simplistic case where M returns the same ordered c’ for all s?
Come to think of it, since c could really be the set of all possible states for the universe, it need not be an argument to M. I’m still a little unsatisfied by this because it has no requirement for a moral actor. But perhaps that’s best considering TVAA’s concept of morality doesn’t seem to require one. Which is what my question (1) was about.
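
To make that degenerate case concrete, here’s a toy sketch (Python; the choice lists and the fixed ordering are made up for the example):

```python
# A "moral system" that returns the same ordered c' no matter what situation
# s it is handed: it ranks the available choices in a fixed order and ignores
# s entirely.

def M_constant(c, s):
    """Rank the choices c in a fixed (alphabetical) order, ignoring s."""
    return sorted(c)

print(M_constant(["lie", "tell the truth"], s="found a lost wallet"))
print(M_constant(["lie", "tell the truth"], s="asked to hide a fugitive"))
# Both calls print ['lie', 'tell the truth']: the same c' for every s.
```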

Right, I’m not surprised by your response either. I think that if a moral system is defined this way it becomes quite clear that a moral system can say nothing about another moral system in the abstract because M is not a situation that can be responded to. However, the decision to choose a moral system falls within the range of M and can only be morally evaluated from the context of some M. Hence moral relativity.
I don’t think that TVAA is saying that M itself is necessarily non-terminating, but that trying to evaluate the range of possible moral systems is a well-defined but non-terminating problem.

Well, the fourth question was a (lame, apparently) joke. It’s a line from the Bob Marley song “One Love”. The third question was really directed more at TVAA.

Oh, and I forgot one more question (again, really for TVAA):
5) Current theories about the expansion of the universe suggest that eventually the universe may reach a point where it is so stretched out that no interactions of any kind are possible. If such a state is inevitable, does that mean that all possible systems of morality are eventually doomed to failure and therefore “wrongness”? Would that make us all sinners?

Seems a little nitpicky, but there is in fact still an active Shaker community.

Now, now – no need to let facts ruin a good theory…

However, even if TVAA were right, and (a) the society that held Shaker beliefs has ceased to exist, and (b) its demise can be traced directly to its belief system (or, more precisely, the fact that the society no longer exists somehow “proves” that its belief system was bad)…

This only provides evidence for the validity of TVAA’s theory if one were to assume that the purpose of morality is to provide for the survival of a society in the first place. If you were to assume, on the other hand, that the purpose of morality is, say, to provide for the greatest amount of happiness, then one could conclude that the Shakers represented a successful moral society in spite of the fact that they are no longer with us.

Sound familiar? It should, since we’ve covered this ground at least 10 times so far. And round and round we go…

Barry

Well, I was sort of naively assuming that all variables were useful in M(c, s). And in fact, that was what I had in mind when I said, “There can be simpler cases”, i.e., ones that always yield one result. Such as my wacky “all moral propositions that end in vowels are bad.”
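
Just to show how wacky, here it is as a literal rule (a toy sketch, obviously not a serious proposal):

```python
# The deliberately silly system: judge a moral proposition purely by whether
# its last letter is a vowel. It always yields an answer, and it ignores
# everything that could possibly matter.

def vowel_rule(proposition: str) -> str:
    last_letter = proposition.rstrip(" .!?").lower()[-1]
    return "bad" if last_letter in "aeiou" else "good"

print(vowel_rule("Thou shalt not kill."))   # -> good
print(vowel_rule("Share your banana!"))     # -> bad
```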

Nope, it doesn’t need to be at all in the ideal case. The underlying routines could even sort the list in a manner that removed impossible responses from being spit out. But whatever, this becomes more of a computational issue and a matter of implementation than work towards understanding this perspective.

Well, yes and no. Who needs to know would likely be a part of situation s, and what choices are possible might be able to be determined from that. But even here we start running into more halting problems. The input s would need to contain as much information as is necessary, and if we’re dealing with the entire state of the universe at all times then we’re going to require a computational device that is larger than the existing universe. This is, of course, why TVAA feels that the universe determines morality, and not us, because he feels that if we were to try and reason it out completely we’d reach fundamental limitations, and besides, the universe is already figuring it out for us. Which is teleological, but don’t tell him that. :wink:

Exactly. The rules of the computational system are not guaranteed to be the same. Different rules operating on {c, s} will yield different results. In fact, this illustrates why moral relativism does not say that moral system M cannot judge whether another system is good or bad, since, as you note, if the situation s is “which moral system should I choose?” then it is (ideally) able to solve this problem. But it is also obvious that, in asking this system the question and relying on its answer, we are already assuming a privileged system, namely the one we are consulting. Nothing forbids us from instead asking M[sub]2[/sub] the same question regarding our original M.
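
In the same toy terms (the two ranking rules below are placeholders, not positions anyone here holds), both systems happily answer “which moral system should I choose?”, and they disagree:

```python
# Two toy moral systems. Each can rank choices for any situation, including
# the situation "which moral system should I choose?", but they rank
# differently, so in this contrived example each one recommends itself.

def M1(c, s):
    return sorted(c, key=len, reverse=True)   # prefers more elaborate choices

def M2(c, s):
    return sorted(c, key=len)                 # prefers terser choices

s = "which moral system should I choose?"
c = ["adopt M2", "adopt the much more elaborate M1"]

print(M1(c, s)[0])   # -> "adopt the much more elaborate M1"
print(M2(c, s)[0])   # -> "adopt M2"
# Consulting either system to settle the question already privileges that
# system; nothing forbids consulting the other one instead.
```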

I agree that this is so, once we define morality in specific ways. That is the point of constructing moral systems, so things like goodness can be well-defined, or as well-defined as the system needs.

And if it is welcome back, then welcome back. These long debates are hell on the noggin! :smiley:

The three-body problem has no general closed-form solution: in practice it can only be solved by numerical methods. Quantum physics actually has similar problems; the interactions of a few basic wavefunctions form (relatively) easy-to-solve equations, but not for multiple wavefunctions. Anything more complicated than a hydrogen atom is a nightmare, and we can solve the hydrogen atom mostly because we make a lot of convenient assumptions.

Anyway, simulations of the three-body problem rarely come out the same twice, because of errors introduced by the numerical methods, and under the right circumstances they can behave chaotically. Admittedly, some orbits are much more stable than others, but for certain unstable configurations our models are essentially useless. The errors from our methods, our observations, and from external influences on the systems themselves make it impossible to determine whether an asteroid will remain in the solar system or be ejected, for example.
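
Here’s one way to see that in miniature (a self-contained Python/NumPy sketch; the initial positions, velocities, and the size of the nudge are arbitrary placeholders): run the same three bodies twice with one coordinate perturbed by a part in a billion, and watch the runs separate.

```python
# Sensitive dependence in a nutshell: integrate the same three-body setup
# twice, with one position nudged by 1e-9, and measure how far apart the two
# runs drift.
import numpy as np
from scipy.integrate import solve_ivp

G, masses = 1.0, np.array([1.0, 1.0, 1.0])

def deriv(t, y):
    pos, vel = y[:6].reshape(3, 2), y[6:].reshape(3, 2)
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return np.concatenate([vel.ravel(), acc.ravel()])

# Arbitrary bound configuration (placeholder values, not a known stable orbit).
y0 = np.array([1.0, 0.0, -0.5, 0.6, -0.5, -0.6,
               0.0, 0.4, 0.3, -0.2, -0.3, -0.2])
y1 = y0.copy()
y1[0] += 1e-9                                  # a one-part-in-a-billion nudge

a = solve_ivp(deriv, (0.0, 20.0), y0, rtol=1e-10, atol=1e-10, dense_output=True)
b = solve_ivp(deriv, (0.0, 20.0), y1, rtol=1e-10, atol=1e-10, dense_output=True)

for t in (0.0, 5.0, 10.0, 20.0):
    gap = np.linalg.norm(a.sol(t)[:6] - b.sol(t)[:6])
    print(f"t={t:5.1f}  separation={gap:.1e}")
# For unstable configurations the separation typically grows by many orders
# of magnitude; shrinking the tolerances delays this, but does not prevent it.
```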

Still, I’ve been wrong before. Feel free to correct me when necessary.

There’s a fascinating discussion of the problem – with a working applet that demonstrates some aspects of orbital instability – here.

Ooh. Take a look at this: The Shaking Quakers. There seems to be one last group: ten survivors. It’ll be interesting to see whether it manages to revive or not.

Several sites state that no new recruits are being accepted, so I rather doubt it. [sigh]

From your perspective? You’d probably see it as wrong – you want to go on living, after all. Meteor – doesn’t really have any desires, as far as I can determine. Disinterested third party? Is there really such a thing? Absolute morality? Well, obviously you were killable, so I suppose the universe approves.

Um. It’s not bad, but I think information theory has done this before you.

Assuming it would actually work? Presumably. At least, as entropy slowly consumes the stars, the people who took up the offer would be the only things left to have much of an opinion on the matter.

This, of course, assumes that the universe actually permits such a solution. It might be that rejecting the offer is the correct strategy – who can tell? Sometimes it’s the best strategy to accept short-term deprivation in favor of long-term benefit.

Let’s say that the scenario would actually work. Then the moral system of the individual who thought it was a good deal would spread and persist. Of course it’s a good thing!

Consider the computer simulation I spoke of pages ago. What do the creatures think is the right way to do things at the beginning? What do they think at the end? Who has the correct solution? (And what standard defines the correctness?)

It has been noted that evolution doesn’t necessarily produce happy things; it produces things that are good at remaining present in the universe.

I believe that position would be “religious leader”.

Regarding erislover’s last post: that’s pretty close to my position, yes.

It seems we differ on two issues now: whether the behavior of the universe should actually be considered teleological (I reassert that teleology is used to indicate goals and purposes built into situations, while evolutionary morality holds that goals arise from the purposeless change of the cosmos) and whether the universe can be considered a privileged system (well, it is the ultimate system; how can anything be as privileged, or more so?!).