Mary and Qualia

Do we know that such a machine is possible?

When you say indistinguishable - do you mean how it’s constructed or do you just mean from an external perspective of someone interacting with it?

If you just mean from an external interaction perspective then that seems possible, if you are including the internal operation then it doesn’t seem possible.

Okay, Ludovic thinks holding a position is begging the question. Ima remember that.

I’ll admit you might be right. I told myself to re-read “quining qualia” (I haven’t read it since before I began graduate studies many long years ago) before participating seriously in this thread. I should have taken my own advice.

I meant from an external perspective–but in fact there are plenty of people who think it is metaphysically possible, and for a few even physically possible, to have a physical object indistinguishable from a human being both in form and operation which nevertheless has no qualitative experiences.

What does “metaphysically possible” mean? That’s half the debate it seems to me ATM. (I’m just in the middle of beginning to jump into the subject though so take what I say with a grain…)

I have real difficulty swallowing this in the case of Mary, who is limited by her own brain’s hardware. I don’t have such a problem in the case of an artificial intelligence constructed with different limitations, but there are things that are either beyond us or have a built-in limitation, for good reasons.

The ability to override pain, for example, becomes detrimental to survival if taken too far, and the ability to activate all the fibres in a muscle at once would give you super-strength but can lead to tendon damage or bone fracture, as victims of certain types of electric shock can testify. There are things we simply cannot do. While I accept that, in theory, an entity could synthesize the experience of red from information about “red” received in an entirely different form, I don’t think human consciousness has access to the necessary levels of brain function to actually do it.

I sometimes like to tickle my brain’s hardware with the Checker Shadow Illusion, pasting the image into Paint and then gradually building a grey bridge between A and B until my brain decides they are the same shade after all. I seriously doubt that any level of neurological knowledge would let me do this at will however.

Why do qualia have to be “distinct” to exist? Why can’t qualia just be a different form of representation at the brain hardware level? The experience of qualia, e.g. redness, is a particular configuration of the matter inside Mary’s skull. Her “complete neurological knowledge of redness” is a different configuration of the matter inside her skull. I don’t see why the second configuration prevents the first configuration from being novel, the first time Mary’s retinas send some colour information to her brain.

ISTM that the question shouldn’t turn on what actual human brains are capable of. We’ve already almost stipulated Mary’s humanity out of the picture when we say she knows everything there is to know, neurologically, when it comes to human reactions to red stimuli. Will any actual human being ever really have this kind of knowledge? Doubtful–it would involve knowing the states of billions (trillions?) of neurons, for one thing.

You might allow that she is able to use computers to run simulations etc, but now we start getting into hazy (and to me interesting) questions about the extent to which this counts as “knowing” the stuff in question. Do I know how to add because I can use a calculator?

It’d be best if we can avoid going that route, if only for practical purposes involving keeping the conversation moving forward.

The idea isn’t to ask what actual humans could physiologically do. Rather, the idea is to ask whether the physical information–i.e. all the facts about neurology–buys you the qualia for free.

People on the thread have complained about arguments from intuition–arguments that proceed from a judgment that something is “just obvious.” The Chinese Room experiment is often mentioned in this connection, but I disagree that that argument is intended to proceed from intuition in this way. Sure, it’s obvious to many that the man in the room doesn’t understand Chinese, but the argument doesn’t require it to be obvious. There is plenty of good evidence he doesn’t understand it. Ask him what he just said, for example. He won’t be able to answer. Ask him about the opinions of the embodied Chinese personality concerning hamburgers. He won’t be able to answer. Best explanation? He doesn’t understand Chinese. If anything does, it’s not the guy in the room.

With the Mary experiment, I also don’t think this is supposed to proceed from an “it’s just obvious” assumption. Nor is it supposed to proceed from an “I can’t imagine how this could be” assumption. (I’d be very surprised if anyone could find anything published in a philosophy journal since, say, 1950, which tried to argue in this way. You’ll find plenty of people accusing others of arguing this way–but I don’t think I’ve ever seen a case where the accusation can actually be made to stick.) I’d grant that if that’s how the argument’s supposed to go, then it’s a bad one. But in any case, it appears to me that the Mary scenario is simply supposed to act as an illustration–basically an illustration of an argument like the one I gave above: “This is what it’s like to see red” is not a fundamental physical fact, nor is it logically derivable from any fundamental physical facts together with any stipulative definitions; therefore, “this is what it’s like to see red” is not a physical fact; since it is a fact, it follows that there are non-physical facts.

I’d think the point is not to ask engineering questions about what a real live Mary could actually do with herself in such a room–Mary is a fantastical figure in the scenario as described, so it’s no use asking what she could ‘really’ do. Rather, I’d think the point is to answer an argument like the one I just summarized, and perhaps illustrate the answer using imagery from the same scenario. But the scenario is not an argument in itself–it’s an illustration.

Are illustrations valuable? They can be, though the value of this one might be debatable.

RaftPeople has answered the summarized argument by saying physical facts aren’t just fundamental physical facts together with those logically derivable from the fundamental ones. He said at one point that there’s another class of physical facts–ones like “this is what it’s like to see red.” (You might say, also, that he’s saying these should be included in the class of fundamental physical facts.) Elsewhere, he’s said that there is physical information that we can’t learn about from knowing about particle positions etc, and rather, can only learn about through perceptual processes. The problem with this is, unless it can be shown that this additional information (or this expanded class of physical facts) makes some kind of causal difference, then this view isn’t really physicalism any more. I mean, call it what one likes, but it doesn’t answer to the rational needs of those who want to maintain that every truth is a physical truth. If causally non-efficacious phenomena (epiphenomena, I guess?) get to be called “physical” then the term “physical” has become much less useful than it was before.

What about denying that “this is what it’s like to see red” is a fact at all? Does Dennett deny this by denying that qualia exist?


Okay. I don’t think it hurts to make it explicit though.

A whole 'nother thread, and I agree: interesting and not all that clear-cut!

Okay. Nice! I’m not sure either way that “this is what it’s like to see red” is NOT a physical fact. I think we run into barriers of language and concept really fast. If I accept that super-Mary can figure out what it would be like for her to see red, do I have to accept that she can also figure out what it’s like for super-Jane? And if she can, to what degree does she have to “emulate” super-Jane’s internal configuration? Because if she has to “fully” emulate super-Jane to figure out super-Jane’s red experience, I’m not sure super-Mary really knows what super-Jane experiences at all.

Not *a* position, but holding *that* position (“subjective experiences exist”) most definitely is, in this particular debate.

Because then you’ve changed the definition of qualia, which is all very well, but that’s not how debating their existence goes when there is an existing definition. Qualia are supposed to be distinct units - “redness” as distinct from “saltiness” etc.

Because it is not proven that the experience of redness is a particular configuration. You’re making a circular argument.

I don’t know about Dennett, but I do. There is no “this is what it is like to see red” absent the pure physiological.

I think I’ve failed to communicate properly. I was using “configuration” as a shorthand for the “pure physiological” arrangement that arises when Mary sees red, e.g. stimulated cones in her retinas, signals travelling down her optic nerves, the resulting patterns of firing neurons etc. The “configuration” is the sum total of all the stuff super-Mary has to know about in order to know what red looks like without ever having seen red, or the things robo-Mary has to emulate.

Wiki’s page on qualia gives a few, fairly similar definitions, all of which include an element about subjective experiences but also seem to have an element of “ineffability” tacked on. (Dennett stipulates ineffability; Jackson, “which no amount of purely physical information includes”.) I didn’t realise ineffability was such a vital part of the definition of qualia, although on more careful reading you were nicely specific: “Denial of qualia isn’t a denial of the existence of subjective experience, though. It’s a denial that subjective experience is encapsulated in discrete entities incapable of investigation.” In that case, I agree qualia don’t exist in principle.

First of all, a reflex is maybe not the best example to use, as in reflexes much or all of the response occurs without being processed by the brain.

For the other point, note that “feelings only come on self-reflection” is a hypothesis. This is a weird topic, in that so many people are happy to be dismissive of a phenomenon, because there exists a hypothesis that makes intuitive sense to them.

I’ve had the same thought (I didn’t realize it wasn’t an original thought). However, it’s still only a binary “yes / no I have subjective experience”, which requires sentience (Does subjective experience require sentience? I don’t know. Nor do I know how we could know).
And our robot cannot describe its subjective states. Getting our robot to describe how gamma rays appear to it will not help me visualize gamma rays.

Actually, what am I saying?
It’s not a hypothesis of anything, it’s a disputed insight.

Well, a non-vicious circle (or virtuous circle) when it comes to argumentation, to me, is something that breaks a regress by appealing to a gradation underlying an apparently sharp divide. For instance, living and dead might be taken as being two wholly disjoint sets. This leads to the well-known fallacious argument that, since a living thing can only come from a living thing, there must always have been living things since the beginning of the cosmos, or else for all eternity. We tend to think in such sharp categories, which is the reason for many, often long-held, misconceptions. What breaks the regress here is the realization that a gradation between living and dead exists, upon which a positive feedback can act to produce ever-more-alive things from dead things.

In the present case, I think that the sharp divide we see between conscious and non-conscious states may not be quite that sharp. The circularity in the argument enters because of my use of words like ‘appears’, ‘seems to’, and ‘conclude’ when referring to machines, which, lacking a first-person point of view, aren’t generally thought to be the kinds of things that might conclude something, or that something might appear to in a certain way. However, I did this purely for terminological convenience, essentially speaking metaphorically. So let me denote with scare quotes whenever something “appears” to something with an empty mill for a brain (i.e. when some process within its algorithms has as an end product the lighting up of a small lamp that has ‘appear’ written underneath it), using the undecorated word whenever it appears to be a conscious being that something appears to. The allegation of question-begging then basically amounts to saying that I said appear when I should have said “appear”, and that there’s a difference between the two – in that case, the circle is vicious. If, however, there is no, or only a gradual, difference between the two, there would still be a circularity – as I start from and end at appearances (or “appearances”) – however, it would not be vicious. Does that seem about right?

So, let’s go back to our zombots, A and B (after all, what use would a zombot have for a proper name?). A interrogates B, and, according to the zombot hypothesis, “concludes” that B is conscious. It “appears” to A that B is conscious. Then again, A interrogates itself. And again, A “concludes” that it must be conscious – the little lamp labelled ‘test subject is conscious’ lights up. To A, it “appears” that it is conscious.

But now put yourself in A’s shoes. You try to find out whether you are conscious, and sure enough, it “seems” to you that you are. It “seems” to you that you possess all the characteristics a conscious being should possess (even though you really don’t). Thus, to you – and to A – “appearances” and appearances appear (or “appear”?) indistinguishable. How could you then tell whether you are Frybot, profoundly deceived into “believing” itself conscious, or Frylock, actually experiencing your own conscious existence?

Well, if you are Frybot, you couldn’t. But maybe, if you are Frylock, you’d be able to – while the poor zombots can’t tell “appearances” from appearances (since otherwise, they could “conclude” themselves non-conscious after all, if they note that things merely “appear” to them), maybe proper conscious beings can. That’s the tempting way out: “Well, maybe the zombot can’t tell, and acts and talks and perhaps even “thinks” as if he was conscious, but the fact of the matter is he doesn’t possess any inner ‘stream of consciousness’, as I do – there is thus a difference in our mental content, which I can point to, even while he can’t.”

And sure enough, it may be possible that an ‘actually conscious’ being just knows that it is actually conscious. But, here the circle closes in on itself, and you are faced with the same quandary once iterated: Are you Frylock, knowing that you are actually conscious; or are you Frybot, deceived into “believing” that you know you are actually conscious?

There’s no way to tell, and any attempt to find one just loops once more; but then, you may well actually be Frybot – and if you wish to continue calling whatever goes on in your head consciousness, then you must conclude that Frybot is conscious, and that there is no difference between “appearances” and appearances – because, in the end, there’s nobody there to tell the difference.

Another way to look at this is to realize that it’s an unexamined item of our perception of the world to distinguish between the real-seemings and the real-beings: real-seemings are how, to us, something appears to be, which we can be mistaken about; real-beings are how things actually are, which is taken to be an absolute, objective fact. A simple example: in a person you met on the street, you recognized your old friend Jack; unbeknownst to you, however, the person is actually Jim, who looks a bit like Jack when the light is right. The real-seeming is ‘the person on the other side of the street is Jack’; the real-being is ‘the person on the other side of the street is Jim’.

This works well for most circumstances. However, in introspection, i.e. in the act of our mind perceiving itself, all that there is are our perceptions – our mind is our perception of our mind, nothing else. There’s no difference between the real-seemings and the real-beings in this case; what seems to go on in our minds is what actually does go on in our minds. We can’t think we think about something, while we actually thought about something else; we can’t just believe we have tasted something, having ‘actually’ tasted something else – whatever we believe we tasted, is what we tasted! Yet, going with a strategy that works well with almost everything else as we are wont to, we tend to act as if there were some objective fact of the matter – some real-being – to our mental content, that might be distinct from the real-seeming of it. Thus, we tend to believe that there is some difference between having subjective states and merely being deceived into acting as if one had subjective states – but there isn’t; the illusion of having subjective states and actually having subjective states are, in fact, identical.

So there is no difference between your being Frylock and your being Frybot – in one case, you would have subjective states. In the other, you would have the illusion of having subjective states. But, those are the same thing.

Or, the concise version:

“Since there’s no such thing as the illusion of a subjective state, subjective states are an illusion”.

I don’t think it’s an argument, but if it was, I would suggest that “illusion” is being used in two different ways in the concise version, and more than two ways in the verbose argument.


I suggest that a lot of the desire to slay the qualia dragon comes from a suspicion that it’s trying to shoehorn the soul back into the picture. That is certainly not my intention.

And I’m not pointing out the limits of what we can apparently study because I want to declare certain things to be off-limits.
Quite the opposite, in fact: I find it frustrating. I don’t just want to study how the brain makes the sensation of colour: I want to be able to study the colours themselves. At the moment, both are unknowns, but the latter appears unknowable.

Actually, let me change that: both of the above appear unknowable.

:confused: I’m not sure where you get that from. If you want it sloganized, it’s more like: “Illusory subjective states are indistinguishable from actual subjective states. Thus, anything possessing the illusion of having subjective states, possesses subjective states (or at least, can’t tell that it doesn’t).” Well, that’s still not much of a slogan. Perhaps “conscious is as conscious does”?

If you think that’s bull, then tell me, how would you tell that you aren’t ‘really’ conscious if you were just a zombot?

And I can’t speak for others, but at least in my case, the desire to get rid of qualia stems from my belief that they aren’t consistent as a concept.

What’s the inconsistency in the concept?

If that was all you were saying, then I agree completely.
In fact, it’s usually guys on the “pro-qualia” side that are most keen to stress that the illusion of a subjective state would itself be a subjective state.

What I dispute is the sleight of hand that means we can apparently conclude that we’re all zombies, even while it goes against the premises of the argument.

A zombot would probably struggle to understand the concept of “qualia”. That’s not surprising: we struggle to understand qualia too. The difference is that we view the world through a lens of subjective states, and cannot doubt their existence.
Provided the question was put correctly, I propose that a zombot may indeed conclude that it is not conscious. This is why, as others have suggested upthread, it’s meaningful if we ask a sentient being whether it is conscious and it replies “yes”.

Probably it is poorly-defined. But that’s just a result of the fact that these concepts are inherently difficult to articulate. Indeed, that’s a key part of the argument here: that I can’t describe a subjective state to you.
Note that other subjective concepts, like “illusion”, also tend to be used in quite a loose way.