Resolved for Debate: Some things exist which do not exhibit empirical signs

From here.

I said:

I am not such a person.

I have seen that some (not all) of the atheists on this board believe Christians and Christianity get a free ‘bye’ on this board to make theistic assertions that strike them (the atheists in question) as the epitome of the ignorance we’re supposed to be fighting. They believe that they then get remonstrated, officially by a mod or informally by pile-on, for attacking Christians who haven’t claimed to have made provable, falsifiable assertions; the Christians are just saying this is what they believe, not that it can be shown (proven) that they are right and the atheists wrong.

And it is my observation that the focus of the acrimonious disagreement is often the larger meta-question of whether anything can be said to exist if it doesn’t exhibit any qualities that can be used to formulate falsifiable statements. The atheists in question (or some of them, at any rate) take it for granted that of course it is meaningless to discuss something as real if no falsifiable statements can be generated that are contingent upon some aspect of that thing’s existence! From their standpoint, any theistic person is exhibiting ignorance in need of stamping out whenever they speak of harboring such a belief, just as much (if not more so) than a true believer who claims to have PROOF that such-and-such a theistic belief is true because “it says so in the Bible” or “God said so to Noah” or “It was determined to be so at the Council of Trent” or the like.

Meanwhile, the Christians and also the nonchristian defenders thereof (including many of the board’s other atheists) do not see it that way; an assertion of proof (or an empirically substantiated claim, if you will) is to them a different thing from an assertion of belief, and they do not consider it an act or trait of ignorance to hold that something truly exists even though it exhibits no empirical qualities.
Let’s get started, shall we?
First off, like the prescientific villagers of centuries past who thought it self-evident that the sun goes around the earth, the radical empiricists have the relationship backwards from how it actually works:

Empirical evidence is centrally the evidence of the senses; the aspects of a thing that you can see, smell, touch, taste, and hear. Add to that the spectrum of similar data we can obtain from the measuring and testing apparatuses we’ve invented over time to extend our reach (which translate other inputs into human-accessible readouts that we can ascertain with our original five senses) and there you have it: empirical data.

But it isn’t “just there”. It has to be perceived. And recognition has to occur. To make sense of it, it has to be categorized.

One can say “Maple leaves are red”. One can hold a leaf in one’s hand and make a visual determination that the leaf is red — we know that, we’ve done it — but how? There is no homunculus running around in little offices inside the brain doing meta-empiricist tests to determine that the color of the leaf is indeed of the category of color-appearances that the mind maintains for itself called “red”. It is not, in fact, a rational process at all; it’s not a process that we’re able to point to as a series of steps. Instead, it is an emotional process. Some folks would say you “just know”, others call it “intuition” or “instinct”, but what it is is an emotional process. And without that central reliance upon what feels right — that intuitive emotional little “aha” of pattern-matching — thinking could not occur. {Note: these aren’t cites, they are prior posts of mine elaborating on this perspective}

The empirical and the rational, in other words, are a specialized subset of the emotion-driven process of leaping to conclusions because they sit well with the rest of what we think and believe. NOT AN ALTERNATIVE, NOT A NICE CLEAN LABORATORY ENVIRONMENT SCOURED CLEAN OF SUCH THINGS, but a subset which, in the close-up and small aspects of its functioning, continues to rely upon them and could not function without them.
So that’s for starters: there is no knowledge that is not faith-based. It’s just that some knowledge is meticulously subjected to a formal process of testing as many of those leaps as we are able to test and, wherever possible, built only on a solid foundation of information that has already been subjected to those processes. Come to think of it, the word “just” understates things badly — the meticulous process that little “just” waves away is what allowed us to put human footprints on the moon and to know with a surprising degree of precision what happened on this planet in the millions and billions of years before there were humans. And let’s not neglect to mention that progress in that direction took place only at the expense of superstitious beliefs, where “superstitious beliefs” were beliefs built on the questionable foundation of things “known to be true” but never tested, and which were themselves never tested for verification.

But for purposes of this debate, “just” is appropriate. The things we know, we still know, ultimately, as a consequence of a mental process that is akin to puzzle-piece matching — gradually building up a large pattern of smaller patterns, and looking at each piece of new sensory data and intuitively fitting it into a place where it fits well because it feels right to put it there.

Now, having put that on the board, let’s consider that great big pattern, the “map of reality” we carry about in our heads, the larger pattern that we’re constantly fitting the new experiences into and recognizing things as part of. You ever worked a puzzle? If you have, you are entirely familiar with the phenomenon of the hypothetical piece. It goes right there. Haven’t seen it yet, but you know what it’s shaped like, you know what colors it’s going to have on it. Like any analogy, this one may fall short at some point, but it will do for starters: we do posit the existence of things for which we have no evidence per se because of how the overall pattern of reality makes sense if there is something of that nature.
Many such things, once thought of in this fashion, become named abstractions, and without us ever having empirical evidence of them, their existence becomes a part of our model of reality.

Character traits — “stubbornness”, “idealism”, “compassion”; principles — “justice”, “freedom”, “fairness”; emotional experiences — “despondency”, “anticipation”, “nostalgia”… some of these things may at this point be associated, albeit loosely and in a fashion dependent on term-definitions, with measurable objective phenomena, but for the most part little to none of it was when these things were first given names.

We believe in abstractions; we utilize them in our mental language of the world. We don’t expect them to leave empirical “footprints” because they do not name concrete objects, but they are nevertheless real to us.
The scientific method is great for those areas of life amenable to what it can usefully investigate.

Once there was a patrolman walking his beat and he came upon a businessman on hands and knees next to the streetlight and, thinking the man to be a drunk, came closer only to be told by the businessman “I’m looking for my keys, I dropped my damn keys”. Police officer brings out the maglite and switches the beam on and helps look for a moment, sees nothing, and says, “Are you sure this is where you dropped them?” Businessman says, “No, I dropped them back yonder in that dark alley, but it’s too dark to look for anything back there”.

In the science departments of our universities, the curiosities of graduate students are diverted from what they are interested in to what can be usefully approached for empirical study with the tools we’ve got. Many things of great interest to us as a species are not studied by our sciences because they are in the dark alleys; the matters of inquiry are things that, at least as we’re able to formulate our thinking about them at this time, do not manifest with any empirical qualities.

That last little disclaimer is an important one, I think — in time, things that are abstractions to us now may be understood in ways that tie them more directly to specific matter-and-energy phenomena, empirically discernible things, quantifiables even.

That disclaimer is not one I’m ready or willing to posit as a gateway threshold requirement for the real, though: some of the things that are abstractions to us now may never, ever, be anything but.

And in either case, they are real for us now, and a radical empiricist approach to metaphysics and epistemology is not useful for understanding our world or dispelling its ignorance.

And if you push it, everything you can hold in its grasp crumbles away until you’ve got nothing to hold onto to call real and you’re left with ‘there ain’t no there there’.

Too many notes.

Agreed.

Also agreed.

Light of a certain wavelength is reflected off the leaf and stimulates our optical nerves so that a certain series of reactions takes place in our brains. Through verbal and nonverbal communication we have learned that a stimulus that provokes this series of reactions has a quality that we call “red”.

But we don’t “just know”. We’ve learned it. We can test it empirically, too. All we need to do is define “red”, and from that point on it’s easy.
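The “define ‘red’ and from then on it’s easy” claim can be made concrete with a toy operational definition. This is only a sketch: the 620–750 nm cutoffs are one common approximation for the red band of visible light, and the function name is invented for illustration.

```python
# Toy operational definition of "red" by wavelength, in the spirit of
# "all we need to do is define 'red'". The 620-750 nm band is a common
# approximation for red visible light; the exact cutoffs are a choice.
def is_red(wavelength_nm: float) -> bool:
    return 620.0 <= wavelength_nm <= 750.0

print(is_red(680.0))  # a wavelength in the red band -> True
print(is_red(530.0))  # green region -> False
```

Of course, the debate upthread is precisely about whether such a definition captures “red” or merely stipulates a measurable proxy for it.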

I will agree with you inasmuch as there is no way that I can be sure that Descartes’s demon isn’t fooling me and no way to know that I’m not a brain in a jar and no way to know that I’m not trapped in the Matrix, or whatever. So sure, on that level, all knowledge is ultimately faith-based.

However, if we do (as I do) accept that there is no way to know differently and simply assume that the world is as it seems until there is positive reason to change that assumption, then the distinction between knowledge and belief becomes not only meaningful, but crucial. If we do not make that assumption, then it seems to me that our existence itself becomes meaningless, as it is literally impossible to know anything, and we might as well curl up in a ball and die.

But I have experienced all those things. I have been stubborn and dealt with stubborn people. I have been compassionate and met compassionate people. I have suffered from nostalgia (and I anticipate doing so again). They’re not made up.

I have one question, the answer to which (if I can understand it) would, I suspect, render the rest of the debate moot, at least as far as I am concerned.

I assume that we agree (please correct me if I am wrong, but if I am I totally misread a large portion of your post) that there are some things that do exhibit empirical signs, for which an empirical approach is appropriate, that should be tested through scientific means, and that you would accept or reject based on the outcome of scientific examinations.

You say that there are things outside that group. What is the crucial distinction between those two groups? What is it about the things that are not inside the first group, that makes them unsuitable for inclusion there?

Thanks for starting this thread, by the way.

AHunter3, how can you argue that “some things exist” at all when you have previously released the thermonuclear debate-ender that nothing exists? That everything is an illusion.

And what kind of debate can we have here without any ground rules? I mean, we can’t just assume rationality as a starting point, right? How should we decide which positions are valid and which are fresh out of some (imaginary) ass? After all, if we examine any position sufficiently it’ll crumble down to assumptions (which are imaginary too). Then what will we have accomplished?

Just a note that this ‘radical empiricism’ of which you speak has a name: Scientism.

It’s another ideological system which has its own weaknesses. Which can’t be proven empirically :smiley:

Priceguy:

“Learning” is a process of engaging in pattern recognition. It’s how we acquire language and everything subsequent. We compare current emotional and sensory input to patterns we’re familiar with and we get that “aha” of recognition when we recognize it. From that we build.

Not only is recognizing the leaf as “red” dependent on that little emotional-intuitive leap, but so is the final step HERE:

a) All red leaves are maple leaves
b) This leaf is red
c) Step b is indeed sufficient reason to conclude that this leaf is a maple leaf, based on Step a.

In other words, despite the fact that A and B lead, in plain old logic, to the conclusion “therefore, this is a maple leaf”, there’s an additional (generally unnoticed) moment of necessary pattern-recognition that unpacks something like this:

There is a pattern of conclusion-making, consisting of the simultaneous occurrence of both step A and step B, and a pattern of prior conclusions reached, consisting of times that Step A and Step B have been applicably present and where I chose Step C as the outcome and things happened subsequently as an outcome of all that.

Does this “fit” the pattern whereby Step A and Step B are both applicable, is this “one of those”?

And is there anything in the overall pattern of other times it’s seemed that way compared to what’s going on now that would make me think otherwise?

More pattern recognition. We make sense of things enough to recognize “red” when we see it, and to recognize language and its terminologies when we hear it. We may be hard-wired for language (as opposed to patterns of flashing ultraviolet light as a communications modality, let’s say) in a generic sense, but that’s not relevant here.
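For what it’s worth, the bare a/b/c syllogism above is trivial to write down mechanically; the names here are purely illustrative. What such code cannot capture is exactly the step the post is pointing at: the unprogrammed recognition that this case is “one of those” to which the rule applies.

```python
# The bare syllogism from the post, with premises held as data.
# Premise (a): all red leaves are maple leaves.
# Premise (b): this leaf is red.
# Conclusion (c): therefore, this leaf is a maple leaf.
premises = {
    "all_red_leaves_are_maple": True,  # premise (a)
    "this_leaf_is_red": True,          # premise (b)
}

def conclude_maple(p: dict) -> bool:
    # Mechanical application of (a) to (b); the pattern-matching
    # "does this rule apply here?" step happens before this is called.
    return p["all_red_leaves_are_maple"] and p["this_leaf_is_red"]

print(conclude_maple(premises))  # True
```

The inference itself is one line; deciding that the premises are applicably present is the part the post attributes to intuitive pattern recognition.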

I’ll give that question more thought. The easy answer (which may be sufficient) is that most aspects of reality that we have any cause to concern ourselves with are things which manifest themselves in ways we can see, hear, etc etc; and the overwhelming majority of the rest of them do so in ways we can sense with devices that can see where we can’t see, etc, extending our reach.

Almost all of what is left — comprised entirely of stuff that does affect us and yet doesn’t impinge on our senses or any of the technological extensions thereof — affects us conceptually, manifesting as aspects of how we think about things and organize our own thoughts.

You can’t operationalize “cruelty”. You can operationalize a set of highly specific behaviors and code for whether they are exhibited, but people will say those are likely manifestations of cruelty but are not cruelty itself, that the manifestations can occur in the absence of cruelty and vice versa, and furthermore that if you had to use machines rather than human observers to code whether or not the designated “cruel behavior” had occurred, you’d see how sloppy an operationalized variable it really is (i.e., you’re depending on your human observers to know “cruelty” already). Or you could operationalize certain neurochemical states and/or certain firing patterns in the brain, but again people will say that those are likely correlates of cruelty but are not cruelty itself, that what you’ve operationalized may be one but not the sole neurological precursor of cruel behaviors, or that cruelty often causes but does not always cause the signals you’re observing, and that if you had to use machines alone to code for “cruelty” you’d find it insufficiently distinguishable from aggression and other states of emotional attitude.

We operationalize cruelty through a combination of observed behavior in others, observed behavior in ourselves, and awareness of our own intentions and feelings when we have exhibited those behaviors, and by comparison with others’ experiences of their own feelings and experiences and whatnot, a sense of sufficient universality to get a sense of “Aha, I’m going to posit an abstraction here”.

This is not to say the Milgram experiment and the Rosenhan experiment could not and did not tell us more about cruelty, but they don’t directly study it. The term isn’t operationalized. A person who, for some reason, did not find “cruelty” to be a useful concept or term could still follow along and see tangible evidence being studied. I don’t know if they’d find the studies interesting in the same ways for the same reasons, but they’d find them viable studies, I presume. They might have a concept similar to the concept of “cruelty” and yet not recognize that such a similarity exists. Or (since we’re being hypothetical anyhow) the concept could be absolutely foreign to them and their experience of humans and human motivations. Either way, you could imagine them getting into a protracted argument with others about whether or not “cruelty” is a nonreal phenomenon. And they could say “show me some evidence, some claim that depends on the existence of ‘cruelty’, a falsifiable claim, otherwise don’t speak to me of ‘cruelty’, you’re being irrational”. And every behavior suggested (which the folks who believe in cruelty do not themselves consider to be either necessary or sufficient for the presence of cruelty, remember?) could be rejected by saying “But that’s not your alleged ‘cruelty’, that’s dominance-aggression plain and simple!”, or whatever… see?
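The operationalization problem described above can be made concrete with a deliberately crude sketch. The behavior labels here are invented for illustration; the point is that a coded checklist is a proxy, and the objection in the posts is that it misfires in both directions.

```python
# A deliberately crude operationalization of "cruelty": code an
# observation as cruel iff it contains any behavior from a fixed
# checklist. The behavior labels are invented for illustration.
CODED_CRUEL_BEHAVIORS = {"mocking", "taunting", "withholding_aid"}

def coded_as_cruel(observed: set) -> bool:
    # True iff any checklist behavior was observed.
    return bool(CODED_CRUEL_BEHAVIORS & observed)

# The objection in the thread: neither necessary nor sufficient.
print(coded_as_cruel({"taunting"}))           # True, yet it may be friendly teasing
print(coded_as_cruel({"cold_indifference"}))  # False, yet it may be genuinely cruel
```

Machines (or this function) can only apply the checklist; deciding whether the checklist actually tracks “cruelty” still falls back on human observers who already know the concept.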

Yes. And this is entirely an empirical process. We see “red”, we learn to recognise “red” and distinguish it from “green”, “blue” and all the other colours, we learn that it is called “red”. It’s all empirical.

I have read this part several times, and I’m afraid I just do not understand what it is you’re asking. The pattern of conclusion-making that you mentioned is based on previous experience - empirical evidence.

Why do you consider pattern recognition to be non-empirical?

I claim that there is nothing that affects us, yet does in no way impinge on our senses or any of the technological extensions thereof. To use your example of cruelty, the cruelty does not exist. Cruelty is simply a shorthand, a word invented to refer to a subset of actions, thoughts and words that have certain qualities and certain consequences, and arise out of certain circumstances, that we have seen fit to group under the header “cruelty”. Cruelty doesn’t affect us, but those actions, thoughts and words do.

Some people say opponents must prove things but they themselves needn’t. Seems self-serving at best, contradictory on the average, and shifty at worst.

He calls it “Quality”, I refer to it as an emotional experiencing, but either way recognition is a verb. You must first categorize what’s happening in your visual cortex as sufficiently reminiscent of prior experiences you call “tree” — sufficient “treeness”, if you will — before you can know it’s a tree. And long before you get to that point, you have to do a similar process of recognizing before you can even recognize that the thing is GREEN (rather than red, in this case). Pirsig is right: we do it so fast we tend to think of it as automatic, intrinsic. I know it’s red because it’s got the right wavelength and that wavelength is red (or green or whatever). But that’s not true: first you’ve got to feel it. Experience it. Visual sensory sensuality. Then you feel, in the emotional rather than the sensational sense, the intuition that says “I recognize this experience! I’m SEEING something! And I know that color! It’s GREEN!”

It doesn’t just happen.

I’m with you right up until the moment you use words like “emotional” and “intuition”. There is no emotion in the act of seeing green. There is no intuition required to see green. Green is there, and you see it. It does just happen.

What Pirsig is trying to say with the time lag, I’m not grasping.

What’s the difference between something which exists, but does not “exhibit empirical signs”, and something which does not exist? If we could tell a difference, then that would be an “empirical sign”.

Emotion is at the core of all experience. The sensation is the “it, in itself”, the emotion is the “you, in relationship to it”. Thought is a more formal & later cognitive development, dependent on language and the complex concepts it embodies to allow us to “load” much more detailed material into our minds and hold it and elaborate on it. But creatures lacking language have cognition, they recognize things. They do it by emotionally-informed processes. So do we. What we do afterwards may involve intellectual content — things you could put into words, “thought”, etc —but it’s preceded by, informed by, and driven by an essentially emotional process.

No feelee, no thinkee.

The central factor keeping my PowerBook from making its own observations about its surroundings, generalizing about those observations, creating categories, and eventually reaching its own conclusions independent of anything that any software developer told it to do is its inability to have feelings about things.

If you had a similar lack, you’d be similarly unable to reach any inductive conclusions. To formulate a single hypothesis. To reach a conclusion based on the data. To recognize anything as anything except insofar as you were already specifically programmed to do so in a certain fashion.

Are you asking what the empirical difference is, or just the difference?

::whistles innocently::

Sounds like an issue of tolerance rather than reason - I’ve no problem with a Christian discussing aspects of his faith (though the atheists in your example seem to) within the closed circle that defines his faith. I only take issue when claims are made using the faith that step outside the faith, i.e. AIDS/Katrina/September 11 are the acts of a vengeful God, and not the result of viral evolution, weather and bad architecture, or Islamic terrorism. At best such a claim is useless and has no place in a serious rational discussion of these subjects. Similarly, using issues of faith to define secular law.

I only object to faith when faith wanders where it doesn’t belong. I don’t go into faith’s house and start shitting on the furniture, because I am a civilized human being.

Even if I understand what you’re saying, and I’m not sure, I fail to see the relevance of this. No matter how many emotions are involved in the empirical process, it remains an empirical process.

No, it’s merely a question of computing power. Are you saying that we will never have a computer that can do the above?

As interesting as this is, I’m not seeing its relevance in this debate. Do you have an answer to my earlier question about what things end up in the “science” group and what things end up in the other group?

I’m not sure what the difference between those is. If something exists, but there is no empirical evidence that it does, and if there can never be any, then it might as well not exist. But if you want to claim that it has some reaction with the physical (empirical) world, then it would have some “empirical sign” and would therefore violate the condition you put on it.

This is the problem I have with the so-called spiritual world. If it can act upon the physical world, then it’s part of the physical world. If it can’t, then there would be no difference if it just didn’t exist at all. It would be of no consequence to us. We couldn’t detect it and it couldn’t affect us in any way.

I’m with John Mace. Hundreds of thousands of trillions of things exist which have absolutely no reaction with the world that I’m in. So what?

Probability. It is a fact, and experiments can be performed to prove that. (You can learn a lot of math drawing to an inside straight…) But it has no separate existence; it is a noun without an identity. It has no thingness.

(That would probably sound better in German, but some smart-ass would surely bust me for first-degree pretentious…)
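The inside-straight aside is easy to check empirically, which is rather the point being made about probability. Assuming standard one-card-draw poker arithmetic (4 outs among the 47 cards you haven’t seen), the exact chance is 4/47 ≈ 8.5%, and a small simulation agrees:

```python
import random

# Drawing to an inside straight with one card to come: 4 outs among
# the 47 unseen cards (52 minus the 5 in your hand). Exact value,
# then a Monte Carlo check.
EXACT = 4 / 47  # about 0.0851

def estimate(trials: int = 100_000, seed: int = 42) -> float:
    rng = random.Random(seed)
    unseen = ["out"] * 4 + ["blank"] * 43
    hits = sum(rng.choice(unseen) == "out" for _ in range(trials))
    return hits / trials

print(round(EXACT, 4))              # 0.0851
print(abs(estimate() - EXACT) < 0.01)  # the experiment matches the arithmetic
```

Which is the odd status of probability the posters are circling: the number is experimentally verifiable, yet there is no object anywhere that “is” the 4/47.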

Probability does exhibit empirical signs, as it (as you say) can be shown through experiments. It has as much thingness as velocity or temperature.

Velocity and temperature are empirical facts based on thingness: an object has a velocity, it has a temperature, it has no probability.