Wow, thanks for the term eliminativism! I hadn’t known others had taken these ideas seriously. Fantastic! Although I disagree with the words of his that you quoted, I’ll check out Chalmers. The color red, for example (as someone brought up earlier), is indeed another thing I reject. I think that no matter how complicated a machine you produce to interpret a waveform of frequency X, any subjective “color” that has no quantitative meaning is outside the domain of naturalism. I believe that either we are deceived, or something supernatural must be brought in to explain the phenomenon.
I am too lazy to look back at my post, but if I did not put quotes around it, I should have. I suggest erring on the side of giving me the benefit of the doubt when it comes to recognizing the ironies inherent in any attempt to articulate my position. Many things I have written in this thread I’ve wanted to put quotes around, but at a point it gets ridiculous.
I’m a little confused about your discussion of color, because it seems like you are referring to the underlying cause being the wavelength of electromagnetic radiation, which indeed would not be understandable by a pre-industrial society. But the interesting thing is that what corresponds to 700 nm for me could be green or sparkle-pink, while it is red for you (even though we both call it red). The subjective experience of color is similar to the discussion of consciousness, in that both are outside the domain of science (in the case of consciousness it depends on the definition).
I don’t think that this works as a counterargument against the OP. Both the OP and Chalmers start from the following premise:
Creatures physically identical to us could lack qualia.
The difference is that Chalmers proceeds by modus ponens, while the OP proceeds by modus tollens. Chalmers assumes, as self-evident, the additional premise that
We have qualia.
He then concludes that
Qualia are non-physical properties.
The OP, on the other hand, assumes
All properties are physical properties.
The OP then concludes that
We don’t have qualia.
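Schematically (my own labels, not anything from the thread: let $Q$ stand for “we have qualia” and $N$ for “qualia are non-physical properties”, with the shared zombie premise taken to support the conditional $Q \rightarrow N$), the two arguments run:

$$\text{Chalmers (modus ponens):}\quad Q \rightarrow N,\ \ Q\ \ \therefore\ \ N$$

$$\text{OP (modus tollens):}\quad Q \rightarrow N,\ \ \neg N\ \ \therefore\ \ \neg Q$$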
All that matters for my point is that you are aware of the color in the sense that you can distinguish the red ball from a green ball. How that awareness feels to you (relative to how it feels to me) is irrelevant for the analogy.
The analogy is only designed to establish that we can talk about things even when we aren’t yet able to talk about them scientifically. We can talk about things even when our understanding of them is so poor that we can’t even identify them outside of highly restricted domains. We could talk about color before we knew about how different chemicals absorb different wavelengths. We can talk about consciousness now, even though we don’t yet know how to identify whether a given instance of information-processing is conscious. We can only identify consciousness in very familiar contexts, namely humans.
Now that we understand color scientifically, we can predict a thing’s color just from its chemical composition and environment, even if the thing is like nothing we’ve seen before. We can hope to be able to do the same thing with consciousness.
Nope. In fact, it is the fact that I know I am conscious that allows me to be religious, and to reject any paradigm that says everything can be explained by science.
Anyways, we’ve evolved to the point where we have a concept of consciousness, and the concept is designed to apply to ourselves. We are inherently conscious; if we weren’t, we would never have come up with the idea.
To define yourself as not conscious, you first have to define consciousness. But since the definition includes “a quality that humans have that other animals don’t,” the concept is unfalsifiable. To declare yourself not conscious, you have to redefine the term. But then, at that point, no one has to accept your terminology.
Like it or not, we are not logical beings who happen to be emotional, but emotional beings who try to use logic to help deal with those emotions. Like it or not, if someone says they feel conscious, then you can’t prove them wrong. Just like you can’t prove that I am not happy, not sad, not angry, etc.
Then there’s the simple fact that, what you call the illusion of consciousness is itself consciousness. By acting as if you are conscious, you are. If your only test for the existence of a mixel produces an illusion that a mixel exists, that is no different than saying a mixel exists. The rest is semantics.
There is something different about the brain-processes to which I have verbal access. Verbal access is just one of the features of these processes, not the defining feature. There seems to be a whole suite of features possessed by the processes, though our understanding of these features is still vague at this point. But something causes these features to appear together, and that something must of course be “natural”.
Suppose again that we’re in a pre-scientific society. You have a pile of things. Some are red and some are green. You sort them into a red pile and a green pile. Then you mix the things back together and ask me to sort the red from the green. I reliably form the same two piles.
Since we are pre-scientific, we could not explain how we did this in physical terms. Suppose someone asked you, “What is the difference between these two objects that made you put them in different piles?” All you could say is “The only difference I know of is the difference in how they looked to me — the difference between the experiences I had while looking at them. I don’t know what it was about them that resulted in their giving me different experiences. I can give no non-experiential property of the objects that correlates with how we sort them.”
But now that we have a scientific understanding of color, you can point to a physical, non-experiential difference between the two things. You can say, “This object has one of the following kinds of chemical compositions <…>. The other object has one of these other kinds of compositions <…>. The list that includes the composition of a given object is the physical property that correlates with how I sort the object.”
But even prior to our achieving the scientific understanding, even when we can find no physical property that explains our sorting, there is no need to suppose that we must invoke color as a supernatural phenomenon. Likewise with consciousness.
Oh OK, I understand your color analogy now. Consciousness, however, does not allow us to sort blocks (or anything analogous). Your color analogy shows that color is a scientifically testable phenomenon, with repeatable block-classification experiments possible long before a deeper understanding of the color phenomenon. This is similar to the general progression of scientific understanding about anything. For example, first we can sort matter into “fire,” “earth,” “water,” and “air” classifications, then eventually find that it doesn’t work so well, then figure out the periodic table, then later understand quantum mechanics and the hydrogen atom, then later relativistic quantum field theory, and so on, each time being able to test our understanding scientifically, and continually finding improved theories that better match the data surrounding matter. With your block analogy, the sorting process would work 99% of the time, until you encounter someone who is color blind. Then you find your theory is deficient, until you realize there is something strange and testable about the eyes of the color-blind folks, so you modify your theory. Later you can compare the light from the blocks to locations within the spectrum produced by passing white light through a prism to make a rainbow. You can study how warm the blocks get when exposed to various light, and so on. Then even later you begin to understand human physiology and electromagnetism and so on. None of the above applies to consciousness, for which there appears to be no scientific applicability.
I do get the spirit of your point – that perhaps later we will find that consciousness could be brought under a scientific umbrella. I am doubtful, but that would be very interesting and surprising to me! I don’t see how it would be possible to test, even in theory.
Different, true, but different is weak support for consciousness. It is not surprising that brains that have developed language and attained theory of mind will be different.
This shows how tricky the semantics are here. My definition of “the illusion of consciousness” would be something like:
The tendency of certain complex automatons to ascribe to their existence a dubious term called ‘consciousness’ without the proviso of a cognizance of the tautological irony inherent in such a term.
The fact that the automaton ascribes a term to itself does not imply consciousness as you define it.
So, I still don’t understand what you mean when you say this kind of thing.
Consider the statement schema
Different, true, but different is weak support for X.
Some values for X make this sentence true (in the context you used), while other values make it false (agreed?). Let A be the class of values for X that make the sentence true, and let B be the class of values that make the sentence false. To make your claim, you must be assigning to the word “consciousness” a value in A, and not in B. So you are giving “consciousness” some meaning, even if it is of a very vague and diffuse sort. “Consciousness” can mean some things, but not others, as you use it. The meaning is precise enough to keep it out of B. So what is the meaning that you assign to “consciousness”?
OK, let’s go back a sec. You said:
I interpret you to be implying that some unique aspect of the human brain will conceivably be found in the future to support the ability of consciousness* to exist as a naturalistic phenomenon.
*consciousness (which I reject), defined as others seem to accept it: something like “the subjective experience of reflective awareness”
My response is: the fact that there is something special about the human brain compared to less cognitively complex organisms doesn’t lend any support whatsoever to the above definition of consciousness. Our differences with “unconscious” organisms can be explained in many ways that don’t necessitate an invocation of consciousness. But perhaps I misunderstood you, or you me, because you seem too sharp to make that sort of logical mistake.
Keep reading. The discussion of quantum mechanics starts a few posts down.
Okay, now what is this ability? Without using mere synonyms (like “subjective experience”), just what is it that you’re rejecting when you reject consciousness?
Just in case, I’ll back up a bit and recapitulate my position on what we’re talking about when we talk about consciousness. This position could be called the Materialist Consciousness Hypothesis.
We can make out, in a vague sort of way, that some of the processing in our brain has a distinctive quality, though it is very difficult to articulate what that quality is. We can point to it as “the processing that is particularly accessible to verbal reporting”, though that’s not a definition.
Nonetheless, we can sort our brain processes into those of which we are conscious, and those of which we are not. Some processes of cognition happen in such a way that we can report on them as they happen, like when we carefully search for the right word while forming a sentence, trying and dismissing several possibilities. Other processes happen “out of sight”, like when a sentence spills forth from our lips even though we could not report what we were going to say in advance.
In both kinds of cognition, a sufficiently-advanced brain-scanning machine could watch the sentences being formed in the brain. And such a machine could also presumably see some difference in the structure of the brain’s operation that distinguishes one kind of cognition from another. You could ask me, after my utterance, whether I formed my sentences “consciously” or not. Sometimes I might answer wrongly. I might misremember how accessible-to-reporting my cognition was at the time when I was forming the sentence. But the machine and I would agree often enough to indicate that there is some underlying property distinguishing some of my cognitive processes from others.
And this agreement would carry through a wide variety of processes besides sentence-formation, such as awareness of objects around me, awareness of memories, and so forth. Whenever I reported awareness of these processes, the machine would identify some characteristic feature of how those processes sat within all of the processing going on in my brain, and vice versa (within some reasonable margin of error).
That is the Materialist Consciousness Hypothesis. The reasonable belief that the above scenario is possible is what justifies talk about “consciousness” on materialist grounds.
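To make the “agree often enough” part concrete, here is a minimal sketch (entirely hypothetical data and labels, not a real experiment or anyone’s actual method) of the kind of comparison the hypothesis envisions: self-reports of whether an episode of processing was conscious, versus a scanner’s classification, with agreement checked against chance.

```python
# Hypothetical illustration only: compare self-reports against a scanner's
# classification of the same processing episodes, and compute how often they
# agree. Agreement well above chance is what the Materialist Consciousness
# Hypothesis predicts.

self_reports   = ["conscious", "conscious", "not", "conscious", "not", "not", "conscious", "not"]
scanner_labels = ["conscious", "conscious", "not", "not",       "not", "not", "conscious", "not"]

agreement = sum(a == b for a, b in zip(self_reports, scanner_labels)) / len(self_reports)
print(f"agreement rate: {agreement:.2f}")  # 0.88 here, versus roughly 0.5 expected by chance
```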

Our differences with “unconscious” organisms can be explained in many ways that don’t necessitate an invocation of consciousness. But perhaps I misunderstood you, or you me, because you seem too sharp to make that sort of logical mistake.
I agree that we should be able to explain everything, including our own actions, without invoking consciousness. But we should, in principle, be able to explain everything without invoking anything above fundamental particles. That is the reductionist thesis.
My claim is that, when we talk about consciousness, we are talking about some extremely complicated and messy property C, possessed by some computational processes, that ultimately could be defined entirely in computational terms. Conscious processes are those that have this, that, or the other algorithmic feature.
Any explanation about what the computational process did could be given at the level of “the process shuffled 1’s and 0’s around like so”, without any mention of consciousness. But algorithms can still have higher-level properties, even if these properties are entirely reducible to lower-level ones.
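A toy illustration of that last point (my own example, nothing from the thread): idempotence is a higher-level property of an algorithm that a step-by-step, bit-shuffling description never mentions, yet it is entirely fixed by those low-level steps.

```python
# Toy example of a "higher-level property" that is fully reducible to low-level steps.
# Nothing in a step-by-step trace of normalize() mentions "idempotent", but
# idempotence (f(f(x)) == f(x)) is still a real property of the algorithm.

def normalize(xs):
    """Toy process: deduplicate and sort a list of numbers."""
    return sorted(set(xs))

def is_idempotent(f, samples):
    """Check the higher-level property f(f(x)) == f(x) on some sample inputs."""
    return all(f(f(x)) == f(x) for x in samples)

samples = [[3, 1, 2], [5, 5, 5], [], [2, 7, 1, 7]]
print(is_idempotent(normalize, samples))  # True
```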

It should therefore come as no surprise that someone might conclude that “consciousness” is the ability of an organism to say it is conscious, and nothing more.
Just to be clear, is that your position?
Some might object: “but I know I am conscious – I am experiencing it right now!” Are you? Or are you and your statements (including the ones you make to yourself) nothing more than the deterministic result of the particles moving in your body according to the laws of physics, which happen to have organized in a way so that you say “I am experiencing consciousness”?
Why not both?
You don’t come out and say it, but a lot of your post seems to come near to relying on a notion that if there is consciousness, determinism is false. Is that something you believe?
Personally, I find that if you look deep and close and hard, there is not much to support an “experience” of consciousness – it is nothing more than a tautology amounting to a computer programmed to say it is conscious, and to have a process by which it says it is conscious to itself in a complicated internal exchange of information.
To be conscious is to be aware of things. You can objectively measure creatures’ awareness of things. Hence, consciousness is objectively measurable.
Of course that’s not a universally accepted definition of consciousness. But it’s the one I’ll go with for now.
Of some relevance is the fact that studies have shown that our decisions are made (at least the ones tested) before we are consciously aware of them.
Try rewording that so it doesn’t assume the very thing you’re trying to disprove! You’re trying to say there is no consciousness, but in your formulation of this bit of evidence for your position, you actually mention the existence of times people are “consciously aware” of something!

Okay, now what is this ability? Without using mere synonyms (like “subjective experience”), just what is it that you’re rejecting when you reject consciousness?
I reject that the term has any meaning. I can’t be faulted for defining consciousness as “subjective experience”, because that is a common definition of this incoherent concept. My issue with the term is:
- It is not well-defined, and if it is, it is defined tautologically
- I reject the term because I find it meaningless. I don’t see how it describes anything real beyond what is already described by “an automaton able to communicate that it possesses something it calls consciousness”
- Since I reject the term, I reject it as a valid description of myself. I can therefore say that I am “not conscious”.
- My personal “experience” is the ability to write what I am writing now, and if I want to I can describe my current cogitation as “consciousness”. But it is not clear to me that I am “experiencing” anything at all; I merely have the ability to write that I do, without even grasping what such an “experience” really is.
Feel free to propose a coherent definition that I can try to grapple with.

Just in case, I’ll back up a bit and recapitulate my position on what we’re talking about when we talk about consciousness. This position could be called the Materialist Consciousness Hypothesis.
We can make out, in a vague sort of way, that some of the processing in our brain has a distinctive quality, though it is very difficult to articulate what that quality is. We can point to it as “the processing that is particularly accessible to verbal reporting”, though that’s not a definition.
Nonetheless, we can sort our brain processes into those of which we are conscious, and those of which we are not. Some processes of cognition happen in such a way that we can report on them as they happen, like when we carefully search for the right word while forming a sentence, trying and dismissing several possibilities. Other processes happen “out of sight”, like when a sentence spills forth from our lips even though we could not report what we were going to say in advance.
In both kinds of cognition, a sufficiently-advanced brain-scanning machine could watch the sentences being formed in the brain. And such a machine could also presumably see some difference in the structure of the brain’s operation that distinguishes one kind of cognition from another. You could ask me, after my utterance, whether I formed my sentences “consciously” or not. Sometimes I might answer wrongly. I might misremember how accessible-to-reporting my cognition was at the time when I was forming the sentence. But the machine and I would agree often enough to indicate that there is some underlying property distinguishing some of my cognitive processes from others.
And this agreement would carry through a wide variety of processes besides sentence-formation, such as awareness of objects around me, awareness of memories, and so forth. Whenever I reported awareness of these processes, the machine would identify some characteristic feature of how those processes sat within all of the processing going on in my brain, and vice versa (within some reasonable margin of error).
That is the Materialist Consciousness Hypothesis. The reasonable belief that the above scenario is possible is what justifies talk about “consciousness” on materialist grounds.
I agree with you that with sufficient technology we could correlate certain brain activity with patients’ “reports” of “consciousness”. That would be very interesting, but I don’t see how it would validate any “subjective experience” as a real phenomenon. It would only validate the reporting of subjective experience as being correlated with real physical brain activity.

I agree that we should be able to explain everything, including our own actions, without invoking consciousness. But we should, in principle, be able to explain everything without invoking anything above fundamental particles. That is the reductionist thesis.
My claim is that, when we talk about consciousness, we are talking about some extremely complicated and messy property C, possessed by some computational processes, that ultimately could be defined entirely in computational terms. Conscious processes are those that have this, that, or the other algorithmic feature.
Any explanation about what the computational process did could be given at the level of “the process shuffled 1’s and 0’s around like so”, without any mention of consciousness. But algorithms can still have higher-level properties, even if these properties are entirely reducible to lower-level ones.
Well, hey, I agree with everything you’ve presented in this post, probably because you are avoiding any discussion of the “subjective” “experience” that is generally used to describe consciousness.

Just to be clear, is that your position?
Yep.

Why not both?
‘both’ includes that which I reject, the point of my OP…
BTW, anyone know how to deal with quoting quoted posts? I wish I could reply to your questions with the quotes they were referring to included. Can’t figure it out.

You don’t come out and say it, but a lot of your post seems to come near to relying on a notion that if there is consciousness, determinism is false. Is that something you believe?
Well, I don’t need to rely on the notion, but it’s true that I believe that consciousness, as I understand it to be generally understood, is in contradiction with determinism. But that is not because I am conflating the two concepts (see other posts).

To be conscious is to be aware of things. You can objectively measure creatures’ awareness of things. Hence, consciousness is objectively measurable.
Of course that’s not a universally accepted definition of consciousness. But it’s the one I’ll go with for now.
Bad definition IMHO. An ant is aware of things. Is it ‘conscious’?

Try rewording that so it doesn’t assume the very thing you’re trying to disprove! You’re trying to say there is no consciousness, but in your formulation of this bit of evidence for your position, you actually mention the existence of times people are “consciously aware” of something!
Bad logic. First of all, it is FALSE that one cannot use a term one is trying to disprove. The basic logical form is called proof by contradiction. That is not to say I am specifically employing a proof by contradiction, just that it is not logically unsound to do what I have done. In any case, I have to use the term I am rejecting from time to time if I am to discuss the subject, and the meaning needs to be gathered intelligently from context. In this case, the context is a kind of experiment in which decisions by participants are found to be made before they are able to communicate them to the examiner in some fashion. That is what was meant by “consciously aware”.
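For reference, the form being named (not necessarily the one employed here) is reductio ad absurdum: to establish $\neg P$, one assumes $P$ and derives a contradiction,

$$\text{if } P \vdash \bot \text{, then } \vdash \neg P,$$

so using the contested term within the argument is like assuming $P$ for the sake of the derivation.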

Are you a fan of Daniel Dennett, by any chance? I enjoy what I’ve read and seen of him, but I can’t agree with him on this point.
The OP is postulating the exact opposite of what Dennett says - some variant of the “Hard Problem of Consciousness”, it looks to me. Dennett would say that saying “I am conscious” and believing it, is the same as being conscious - he denies the HPoC is coherent. I agree with him.

The OP is postulating the exact opposite of what Dennett says - some variant of the “Hard Problem of Consciousness”, it looks to me. Dennett would say that saying “I am conscious” and believing it, is the same as being conscious - he denies the HPoC is coherent. I agree with him.
Like I said later, I haven’t read his book on the subject, nor Chalmers’ for that matter, both of which I really should.