Mary and Qualia

Thanks for the replies. There may be other interesting points to respond to (lots of detail in those posts and I only have a little time right now), but for now I will just start here:

Because of the way it appears (to me) that we store information, this seems a reasonable statement from the perspective of information stored within the human brain. I don’t think you can talk about “information” in an abstract sense and assume that the abstract notion of information maps perfectly onto a human brain.

Humans don’t experience information the exact same way they experience sensory input (IMO).

So, even if you had all of the information stored in your head regarding microwaves - how they would interact with a theoretical sensor, and how they would trigger various reactions within the brain - the specific neurons that actually get stimulated when you hook up the sensor, and the neurons they connect to in other areas of the brain (emotions, etc.), will ultimately create an experience (and store information) that is different from the theoretical one (IMO).

So, maybe this is an attempt to say what MrDibble was indicating Dennett said: that it’s a failure of imagination. If so, I believe the failure is that humans are not wired to use imagination to translate facts into the specific neural activity that would represent those facts to the degree required. I know the brain is powerful and does this somewhat, but I also believe that it is drawing on information stored previously through experience, which again (IMO) can be different from information stored through indirect means.

Ah, I think I see what you’re saying now.

If we imagine that I have “buttons” on my brain, there is no abstract information or introspection that can press the “agonizing pain button”, or do the same thing to me that pressing that button does.

This is no accident of course. If we could simply imagine every sensation of which we’re capable, in perfect vividness, our species would have died out long ago because we would not have engaged so much with the real world.

I think the Mary argument is simply trying to simplify the situation.


Consider the following:

I hand you a machine; let’s say “Newton’s Cradle” (the executive toy with the balls on strings). We understand how this device works very well. And, indeed, there is no reason to suppose there is any part of how this device works that is closed to science (on a macro level at least, obviously there’s uncertainty at the quantum level).
Put 10,000 Newton’s Cradles together and we still have no reason to suppose it is suddenly beyond science.

So, naively, we might expect exactly the same thing with brains. We understand a great deal about neurons, neurotransmitters and the like. Not a complete understanding, but there’s no reason to suppose any of it should be off-limits to science, right?

Well, that’s what the Mary argument is countering. Somehow, brains produce subjective experiences. And it is not possible to gather this subjective data objectively (as you’ve alluded to). It’s a critical difference between brains and abstract machines, and at the moment, it remains unsolved.

(It’s important to realize that it is not just a scaling issue. It’s not like we can see that neurons have basic “experiences” and we don’t know how they combine into the complex experiences that humans have. It’s that there is no theory at this time for how anything can experience anything at all).

Denial of qualia isn’t a denial of the existence of subjective experience, though. It’s a denial that subjective experience is encapsulated in discrete entities incapable of investigation. The idea, AFAICT, is that if you can’t tell whether a current subjective experience is a quale or not (as in Dennett’s intuition pump of changing the *memory* of red), how then can you say qualia exist independently of process? You can’t, and pure physicalism is unsullied.

Why not? We have many tools at our disposal for scientific investigation of subjective experience. What, exactly, is immune to this approach?

Yes, that is what I was saying.

It may not be exactly a scaling issue, but it seems to me that it is possibly a structure issue. Our brains require a pretty specific structure to operate correctly. Differences that seem relatively small can cause things like schizophrenia.

So, if we talk about lots of Newton’s Cradles, or a gigantic procedural computer program, or a Chinese room, it’s hard to imagine how anything that is aware could pop out of that.

But when I picture a machine that has the types of functions a human brain has - something that is compressing, storing, and retrieving information the way we do, as well as predicting and building a model of the environment and its own interactions within it, including doing the same model building and prediction for purely abstract thoughts - then I am able to convince myself that this abstract machine could have awareness like I do.
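To make that picture a little more concrete, here is a toy sketch in Python (every name and detail here is my own invention, not anyone’s actual proposal): a machine that stores observations in compressed form, predicts from the resulting model, and turns the same machinery on its own current state.

```python
# A minimal sketch (all names hypothetical) of the kind of machine described
# above: it compresses and stores observations, predicts what comes next from
# a model of its environment, and applies the same modeling to its own state.
from collections import Counter, defaultdict

class ModelBuildingAgent:
    def __init__(self):
        # "Compressed" memory: counts of observed transitions rather than
        # a raw log of every observation.
        self.transitions = defaultdict(Counter)
        self.last_observation = None

    def observe(self, observation):
        """Store the observation by updating the transition model."""
        if self.last_observation is not None:
            self.transitions[self.last_observation][observation] += 1
        self.last_observation = observation

    def predict(self, state):
        """Predict the most likely successor of a state, if any was seen."""
        successors = self.transitions[state]
        return successors.most_common(1)[0][0] if successors else None

    def introspect(self):
        """Apply the same predictive machinery to the agent's own state."""
        return self.predict(self.last_observation)

agent = ModelBuildingAgent()
for obs in ["dark", "light", "dark", "light", "dark"]:
    agent.observe(obs)

print(agent.predict("dark"))  # -> "light"
print(agent.introspect())     # the model applied to its own current state
```

Obviously this is nowhere near awareness; the point is only that “model the world” and “model yourself” can be the same operation.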

This seems correct to me.

The subjective aspect. You and I can talk about colour all day long; we can check whether we see the same optical illusions for example. But ultimately, there are aspects of sight that cannot be conveyed or experienced, except directly.

And for me, that’s not even the issue. The main issue is not “How can we describe these phenomena” it’s “How can they exist at all?”. “How can there be subjective phenomena?”

I have a postgrad degree in neuroscience. I have no problem imagining an abstract machine behaving and appearing identical to me (though it remains astonishing what our minds manage to accomplish).

When it comes to qualia though, not only can I not imagine an abstract machine having qualia, I can’t imagine my mind having qualia. I have no hypothesis for how matter can be made to suffer. It makes no sense.

But I experience qualia, so it’s the “elephant in the room”.

Well, I will have to admit that when I convince myself that a machine can be “aware” if constructed properly, there is a leap of faith I am taking that the “realness/aliveness” sense that I feel is just a byproduct of the structure of my brain. It feels like there is something magic there, but the logical side of me concludes there is no secret ingredient that distinguishes my body from that of a worm.

Buddhist monks have been investigating them for centuries.

Pretty much. Gather all the physical facts you like, you won’t dent the essence of subjective experience. Which means it doesn’t exist and your whole internal life is a lie. Disturbing.

I disagree.

I think the key is that the brain doesn’t actually have to generate subjective experiences – it just has to make you believe it does. (This seems circular, but bear with me.)

Most posters in this thread seem to agree that a machine could be constructed that, to all outside probes, appears indistinguishable from a human being, i.e. from a being possessing conscious, inner states. But if that’s the case, then constructing a machine that is to itself indistinguishable from something possessing conscious states is just as easy – all it could do to find out about its own consciousness or lack thereof is in fact the same as what any outsider could do: ask questions of itself; so if it manages to appear conscious to an outsider, why wouldn’t it appear thus to itself?

It’s similar to how I argued visualization works over in the other thread. Whenever we visualize something, it seems to us as if there is an image of the visualized object present in the mind. But just for it to seem that way, there doesn’t really have to be an actual image – it merely has to, well, seem as if there were. In fact, if your mind has the capacity to generate a visualization, it would be a waste of computation to actually do so, since all the information it could gather from creating such a visualization, it would already need to have access to in order to create said visualization! It’s much more practical to react to any introspective probes – questions you ask of yourself – ‘as if’ there were a visualized object actually present. And in your mind, what seems to be and what is are indistinguishable – there is no fact of the matter separating the two.
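To illustrate (a toy sketch only, with names I made up; it’s not meant as a serious model of a mind): a system can answer probes about a “visualized” object straight from stored facts, without ever rendering an image anywhere.

```python
# A toy sketch of the "react as if" idea (hypothetical names throughout):
# queries about a "visualized" object are answered from stored facts, and
# no actual image is ever rendered anywhere in the system.

class Visualizer:
    def __init__(self):
        # Facts the system already has access to; on this view, rendering
        # an image from them would add no new information.
        self.facts = {"shape": "sphere", "colour": "red", "size": "small"}

    def probe(self, question):
        """Answer an introspective probe as if an image were present."""
        return self.facts.get(question, "I can't make that out")

mind = Visualizer()
print(mind.probe("colour"))   # "red" - just as if an image were inspected
print(mind.probe("texture"))  # no stored fact, so the probe comes up empty
```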

This argument can be extended to representing other things to yourself – speech acts, sounds, smells, feelings, and even, it seems to me, the self itself. After all, what’s the self but a representation of the self?

So colour can be described entirely objectively? So Mary would not be surprised by actually seeing red? Is that what you’re saying?


Earlier I mentioned how the logic “It makes no sense to me, therefore it must be an illusion” is a common fallacy in philosophy and metaphysics.
The important thing, IMO, is the predictive power of any theory. So if we were to say that X is epiphenomenal, or even an illusion, that’s fine if we can use it to make accurate predictions.
If we can’t, then there’s a genuine unknown here and we’re merely being evasive.

Let’s say I make a machine that can detect damage to its body. I program it to resist / avoid such damage.
Does this automatically mean it has a “bad feeling” associated with such damage? If not, what is the difference? And how will we tell when we’ve genuinely made a suffering robot?
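For concreteness, here is roughly what I have in mind, as a minimal Python sketch (the names and the threshold are purely illustrative):

```python
# A minimal sketch of the machine described above (all names illustrative):
# it detects damage to its "body" and acts to avoid it. Nothing in the code
# settles whether any "bad feeling" accompanies the avoidance.

class Robot:
    DAMAGE_THRESHOLD = 0.5  # arbitrary cutoff for this sketch

    def __init__(self):
        self.position = 0

    def sense_damage(self, sensor_reading):
        """Return True if the reading indicates damage to the body."""
        return sensor_reading > self.DAMAGE_THRESHOLD

    def step(self, sensor_reading):
        """Move away from any source of damage; otherwise carry on."""
        if self.sense_damage(sensor_reading):
            self.position -= 1  # retreat from the damaging stimulus
        else:
            self.position += 1  # continue forward

robot = Robot()
robot.step(0.9)  # damaging input: the robot retreats
robot.step(0.1)  # harmless input: the robot advances
print(robot.position)
```

Everything here is specified, and yet the question of whether the machine has a “bad feeling” is left completely untouched - that’s the point.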

Here I think you (and to my recollection, Dennett) are begging the question. On the opposing view, there could be a machine indistinguishable from human from an objective point of view, but which has no subjective point of view. Such a machine, then, can’t “find out about its own consciousness” and it can’t “appear to itself” to be conscious.

Someone holding the opposing view, then, hasn’t been given a reason to think it’s “just as easy” to make a machine that’s indistinguishable from a human “to itself” since the opponent hasn’t yet been given a reason to think there’s necessarily any such thing as how the machine “seems to itself” much less how anything “seems” to it.

While that may be true, the pro-qualia side also begs the question, in assuming that humans do experience a subjective point of view.

Yes. That’s what I’m saying.

No. Feelings only come on self-reflection. If you don’t program that into your robot, then it doesn’t have any feelings. But neither do I have a “bad feeling” when I reflexively jerk my arm away from a flame. The bad feeling develops later.

Time, experience and multiplicities of view.

When it tells us such, and we have no good reason to doubt it.

Yes, that’s the circularity I mentioned. The reason I don’t think it’s vicious is essentially the argument in the other part of my post.

Perhaps think of two such zombie-machines, one (A) tasked with finding out whether the other (B) is conscious. Since A can perform any cognitive task as well as a typical human, it will do just as well at this one; since B can deceive every human into thinking it is conscious, A must come to the conclusion that B is conscious.

Now task A with determining whether it is conscious itself. Well, obviously, it must come to the conclusion that indeed, it is! And when it comes down to it, that’s all you can say of yourself, as well – to all of your probes, you appear conscious. This appearance is exactly what consciousness is, after all.
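In toy form (a sketch under my own simplifying assumption, which is just the premise of the scenario, that every probe comes back human-typical):

```python
# A toy sketch of the A/B scenario (names hypothetical): the same battery of
# probes is applied to another machine and to oneself, and by construction
# both pass, since each answers every probe exactly as a human would.

def seems_human(subject, probe):
    # Stand-in for the premise that B (and A) answer every probe exactly
    # as a human would; a real test would inspect actual responses.
    return True

def concludes_conscious(subject, probes):
    """A's procedure: judge 'conscious' if every probe gets a human-typical answer."""
    return all(seems_human(subject, probe) for probe in probes)

probes = ["report a feeling", "describe seeing red", "reflect on yourself"]

print(concludes_conscious("B", probes))  # A, probing B: True
print(concludes_conscious("A", probes))  # A, probing itself: also True
```

The structural point is that A has only one test procedure, and it returns the same verdict whether pointed outward or inward.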

Holding a position is not begging the question. You have to cite the argument which you think assumes the thing it sets out to prove.

Still seems question begging in the same way, which I think you probably knew – as you said, you’re trying to illustrate the way in which the circularity isn’t vicious. I’m not sure what non-vicious circularity looks like so I’m not sure how to evaluate the claim. (I know people make the distinction between vicious and non-vicious circularity, I just don’t know what the paradigmatic examples of the latter are supposed to be.)

Dennett and the dualists (and the ones who skate on the edge of dualism) all seem to agree on one thing – that qualia aren’t accessible to objective investigation, and hence if science is concerned only with the objective, then they’re not accessible to science. From this, the dualist concludes that the world that can be studied by science isn’t the whole world, while Dennett concludes that since the world that can be studied by science is the whole world, qualia must not be part of reality.

I have no idea how to adjudicate between these two views. One man’s modus ponens is another’s modus tollens as they say… How do you decide between “there are realities inaccessible to scientific investigation” and “apparent realities inaccessible to scientific investigation are illusory”?

One thing I can say is that Dennett’s doing something which I think a lot of science-minded philosophers do and which I think is a mistake – taking a methodological ontology and mistaking it for the ontology of the world. Millikan thinks, correctly, that no biologist could attribute biofunctions to Swampman, then concludes – incorrectly! – that Swampman therefore has no biofunctions. Similarly, Dennett argues, perhaps correctly, that qualia can’t be studied scientifically, then concludes, invalidly, that there are no qualia.

Yes it is.

– Assume that humans have subjective experiences
– Attempt to prove that these subjective experiences are unique to humans
– Therefore, we have proven something philosophically deep about subjective experiences. Let us call this “qualia”.

That’s not Dennett’s argument. Dennett’s argument is that qualia make no sense as a philosophical concept. That’s not the same thing at all. He’s saying all the supposed features of qualia, that make them a distinct class of entities, are incoherent when examined thoroughly. Not that they can’t be studied, but that they don’t exist. Also, that our sense experiences *are* able to be studied and compared scientifically and objectively.