I believe I was taking too literal a reading of the p-zombie definition, most of what I posted was on the understanding that “indistinguishable” meant down to the smallest measurable bit of matter/energy.
My point was that it may be possible and it may be impossible - we don’t know. I, for one, can see why it could be impossible (if consciousness and qualia are determined by physical structure, which the evidence points to, including your points about damaged brains and autism) - not that impossibility is guaranteed, but I can see why it could be, which renders the statement not very helpful.
But again, I see the value in using it to discuss things, we just need to be very careful in our conclusions when a p-zombie is involved.
To begin with, please forgive me for anything I say that is too blatantly obvious… I’d never heard the p-zombie idea before, and I’m finding it fascinating!
It reminds me a little of the “brain in a bottle” variant on Cartesian doubt. Nowadays, people often use a “holodeck” metaphor. Poor Descartes only had dreams, madness, and demonic deceit to work with. Technology has enriched our (imaginary!) contexts!
I think, in any kind of “real practice,” a zombie world would require intentional deceit on someone’s part. The zombies would have to be animated – manipulated like marionettes – by some distant mind – or sufficiently sophisticated program – with the purpose of making us think they were people like us. I think, in “real practice,” zombies would exhibit divergent behaviors – just as Autistic persons do – that would enable us to discern their nature.
So, in that sense, it sort of devolves to a “holodeck” deceit after all.
But a limited variant of the zombie idea could work.
Suppose, just for the sake of sayin’… Suppose you (or I) were a mutant, and had the faculty of a completely new emotion. A brand new, distinct, separate emotion that no one else on earth has.
Is it possible? I think so… (An even milder variant is that we might simply be able to see farther into the UV range than anyone else. We “see things” that no one else sees.) If that is plausible, why not a “new emotion”? It “feels” real to us, but we can’t explain it.
(This was the theme of a very haunting cartoon on the old “PLIF” web comic. I’d put a link, but I’d have to do a lot of searching…)
So, with respect to that emotion, everyone else on earth is a kind of zombie: they can’t feel what we feel.
To be honest…I’m not sure where this leads…if anywhere at all…
(I’ve never really been able to grok “qualia” discussions!)
Grin! For some really stupid reason, I thought I was disagreeing with you, but you’ve summed up my opinions perfectly, so…never mind!
I confess that I wasn’t able to comprehend much of the middle parts of Roger Penrose’s book “The Emperor’s New Mind.” But as far as I could tell, he really said all he had to say in his quip, when he was (hypothetically) asking an Artificial Intelligence, “What does it feel like?”
To Penrose, this was the big “gotcha.” The trump of all trumps. Because, of course, a computer, or robot, or android, or synthezoid, or “Chinese Room” can’t “feel” – one senses the need to put it in radioactive quivering capital letters – “FEEL!” – the same way we can.
Is it really necessary to make the “AI” out of carbon compounds, built up into neurons, with acetylcholine reactions and saline ion exchanges, in order for it to “FEEL” anything? Penrose apparently believed so…and I believe otherwise. To Penrose, the best AI can ever be is a “zombie,” but I think that (once the obvious technological hurdles are surmounted) the AI could be every bit as much a “feeling” person as I am.
ETA, and to bring it back to the OP: once we do build an AI mind, we could simply build it with twice as many neurons (or equivalent) as a human brain, and, voila, the human brain would no longer be the most complex thing on earth!
See, to me it seems trivially obvious that one could mimic the human’s responses perfectly and not have qualia. Do you believe robots that can distinguish between wavelengths have qualia?
I think he explained fairly well that the fact that it can’t be communicated implies that it contains no information, which implies that it has no physical basis. From what you’ve said before, I take it you mean “it can’t be communicated in practice” rather than in theory.
OK, let’s try something else. Think of an ant colony. Ants (assume this for the sake of argument) can be organized, or may self-organize, to carry out complex computations. Ants can then be organized to compute, say, the actions of an individual neuron. But then, given sufficiently many such simulated neurons, one can simulate an entire brain on a (sufficiently big) anthill. Would the operations of these ants give rise to a conscious entity? One having subjective states, one which it feels like something to be? If you say yes, then you’ll have to tell me how that works; but if you say no, then you’re a believer in the non-physical reality of qualia.
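Just to make the thought experiment concrete, here’s a minimal Python sketch (the weights and network are toy numbers of my own invention, not anything from neuroscience) of the kind of update rule each anthill-neuron would have to compute - simple counting and comparing that any substrate, ants included, could in principle carry out:

```python
# Minimal sketch: the kind of update rule each "anthill neuron" would compute.
# Nothing here depends on the substrate; ants, transistors, or pencil-and-paper
# could all carry out the same counting-and-comparing steps.

def neuron_step(inputs, weights, threshold):
    """Fire (1) iff the weighted sum of inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# A toy two-neuron "brain": each entry is (weights over inputs, threshold).
# These numbers are arbitrary, chosen only for illustration.
network = [
    ([1.0, 1.0], 2.0),   # fires only if both inputs fire (AND)
    ([1.0, 1.0], 1.0),   # fires if either input fires (OR)
]

stimulus = [1, 0]
outputs = [neuron_step(stimulus, w, t) for w, t in network]
print(outputs)  # [0, 1]
```

Chain enough of these together and you have the anthill-brain of the thought experiment; the question is whether running them adds up to anything it feels like something to be.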
But it becomes very problematic to explain the unity of our experience – I mean, it’s of course true that different regions of our brain, and to a certain extent maybe different regions of our mind, are committed to distinct functionality. But ultimately, it all has to cohere; the supposition that there are truly disjunct parts does not seem easy to reconcile with this coherence.
But that’s what information is all about - all kinds of information can ultimately be reduced to a string of symbols; there are no ephemeral qualities that distinguish one string of symbols from another. In fact, one could say that such qualities are exactly what qualia are. So if you think it makes a fundamental difference to the color quale generating module whether it gets information from another part of the brain or from the eyes - i.e. if you believe there is a qualitative difference in the kind of information it receives from these distinct sources - then this is just the qualia idea in different clothing.
That’s exactly what they’re commonly thought to be, so I’m not sure I see your point…
But it’s perfectly possible, in an a priori sense, for our world to be flat! As you say, only empirical data convinces us that it isn’t: the possibility isn’t actual, but it’s nevertheless a possibility. The p-zombie idea has to be seen in the same light: it’s possible that there is a physically identical world, whose inhabitants lack the fundamentally subjective states of experience we take ourselves to possess. But then, the physical description of the world is evidently not complete!
I’ve given an argument for the realizability of such a world, through gradual fine-graining of the simulation of this world: Start with crudely modelled p-zombies, which process inputs into outputs by means of an enormous lookup table. I.e. for each stimulus, the zombie just finds it in its table, and produces an appropriate reaction. This table would have to be beyond huge, but for any finite stretch of time, nevertheless finite, so it could in principle be made to match any kind of observation one could perform. It seems clear, at least to me, that in this world, there exist no subjective mental states.
But one can enhance the simulation’s performance greatly, by modeling not just the outward, but also the inward behaviour of the zombies. At each stage, one can just replace the huge lookup table by a set of smaller ones – that, say, code for the behaviour of certain brain regions, or other organs, etc. – linked with some appropriate algorithm. Now, the lookup tables, as per the previous arguments, don’t appear to create subjective states; and neither, it would seem to me, does any algorithmic linking of these tables (indeed, those algorithms can just be viewed as lookup tables themselves).
But you can take this continued fine-graining down to any level you desire; all of the physics we know can be represented in such a manner. So you can model the individual cells, the neurons, the atoms making them up, the elementary particle interactions, etc. All just lookup tables and algorithms. At no point does it seem like we are faced with the necessary emergence of anything like subjective states – we can always view the zombie just as an enormously complicated machine that generates outputs based on inputs, and whose outputs are determined through mere algorithmic manipulation from the inputs alone. At least to me, it is not at all obvious that such beings would have mental, subjective states, i.e. qualia – indeed, the argument seems quite strongly to suggest the contrary! (Nevertheless, as I have mentioned, I’m not a qualophile – but the arguments, in my opinion, deserve careful consideration anyway.)
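To make the two stages of this argument concrete, here’s a minimal Python sketch with an invented, toy-sized stimulus/response alphabet: first the monolithic lookup table, then the same behaviour decomposed into smaller per-“region” tables glued together by a linking step that is itself nothing but lookup:

```python
# Stage 1: a "crude" zombie -- one enormous lookup table from the entire
# stimulus history to a response. (Toy-sized here; in the argument it would
# be astronomically large, but still finite over any finite stretch of time.)
monolithic_table = {
    ("hello",):          "hi there",
    ("hello", "bye"):    "goodbye",
    ("how are you?",):   "fine, thanks",
}

def crude_zombie(history):
    return monolithic_table.get(tuple(history), "pardon?")

# Stage 2: fine-graining -- replace the single table with smaller tables
# ("brain regions"), linked by an algorithm that is itself just lookup.
parse_region = {"hello": "GREETING", "bye": "FAREWELL", "how are you?": "QUERY"}
reply_region = {"GREETING": "hi there", "FAREWELL": "goodbye", "QUERY": "fine, thanks"}

def fine_grained_zombie(history):
    category = parse_region.get(history[-1], "UNKNOWN")   # region 1: classify
    return reply_region.get(category, "pardon?")          # region 2: respond

print(crude_zombie(["hello"]))         # hi there
print(fine_grained_zombie(["hello"]))  # hi there -- same outward behaviour
```

At every level of refinement the machinery is the same: tables and trivial glue, with nothing that obviously demands subjective states.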
Our internal subjective states that can’t be communicated are also an integral part of our processing and resulting actions.
When buying a car, I choose the color based on how it feels when I look at it - how would you get your robot to behave exactly like me without that key ingredient?
Why would it choose a blue car and how would you get it to choose the exact shade of blue that I chose and not the other shade of blue?
It’s possible that what you’re describing is possible, and it’s possible that it’s impossible - one thing I can say with absolute certainty is that it’s not trivially obvious that it can be accomplished.
I also want to make clear that I’m an eliminative materialist and not really a qualophile(*). Yet I strongly doubt RaftPeople’s ungracious dismissiveness of qualia could possibly be the result of careful enough familiarity with the arguments.
(*)As I said early on, “though ultimately I may be programmed to think I experience qualia when in fact I do not, it is quite a mysterious and compelling illusion.” I strive to be honest in my self-perception (to the degree, as stated before, that I do not believe I am conscious or have free will), and the experience of qualia is the only thing that gives me pause, because, besides the subjective experience of qualia as distinct from information, as an epiphenomenon it seems unmotivated (there is no qualia associated with the building blocks, as it were, and no apparent transition to emergence of qualia as complex patterns develop in our minds), unnecessary (unless you believe a look-up table has qualia, in which case you have other problems accounting for its diversity), contrived (surely qualia is not dense in the set of all complicated structures isomorphic to a lookup-table), arbitrary (why not a qualia for the association between “2+2” and “4” when there is one for 660nm and “red”?)…
I’m not sure what I think about the ability to communicate this information. It seems like it is dependent on the receiver speaking the same language (which isn’t unusual for communication). If the receiver had the same (as in exact) internal structures and we communicated the information properly, like direct stimulation of neurons - then yes, it seems communicable.
But a lossy compression of the information into words will of course not communicate it properly as any lossy compression discards information.
I’m no expert in the various philosophical positions and arguments either way - so you are correct, my position is certainly not due to careful familiarity with the formal arguments. But that doesn’t necessarily mean I’m just plain wrong either.
I’m not sure I’m dismissing qualia, and maybe I’m missing a key point here - but I don’t have a problem with internal information that can’t be communicated; none of our communications are perfect, they are all gross approximations.
Replace “red” with “X nm” and “blue” with “Y nm” and so on. Everything else is exactly the same. Qualia contains no information. When you say “I like this shade of blue,” you are saying “I like this wavelength because of all the complex associations I have developed over my lifetime with this wavelength.” The existence of the qualia has nothing to do with it.
So it’s trivially obvious to you that it’s not trivially obvious? In general you’ve been a little too certain of your pronouncements in this thread… I was merely, and intentionally, echoing your own use of “trivial” earlier in the thread, though now that I look back I may sometimes be conflating you with TriPolar, who said, for instance, “Qualia seems so trivial to me.”
But that case is equivalent to shining the color red into someone else’s retina and claiming that you have conveyed the qualia of the color red to the other person. By stimulating those neurons directly you may conjure up the qualia in the other person’s mind, but there is no way of validating through communication that it was the same qualia! Compare this to an example of the actual exchange of information, whereby you might tell someone that the apple is red (660nm) and the other person can go look at the photons reflected from the apple and verify that those photons are red (660nm). In stimulating the neurons as you suggest, you would be merely reinforcing a connection, for example, between an apple and red (660nm), but you would not be able to establish a connection between 660nm and red (qualia) as opposed to blue (qualia).
Simply replacing “blue” with Ynm doesn’t change anything - we are already working with wavelengths that trigger a chain of electro-chemical responses - don’t mistake my use of the word “blue” for a lack of modelling the problem.
The issue is that not only must the entire state be duplicated, but the calculations for the transition to the next state (assuming we operate discretely enough to have “states” rather than a continuum) must also be duplicated, because even minor deviations could cause future deviations in behavior.
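A toy numerical illustration of that point (the logistic map here is just my stand-in for any sensitive dynamics, not a claim about brains): two copies whose transition rules differ by roughly one part in four million start out behaving identically and eventually disagree outright:

```python
# Two simulated "state machines" with almost-identical transition rules.
# The logistic map is chaotic for r near 3.9, so tiny rule differences compound.
def step(x, r):
    return r * x * (1.0 - x)

x_a, x_b = 0.4, 0.4          # identical initial states
r_a, r_b = 3.9, 3.900001     # transition rules differ minutely

for t in range(1, 51):
    x_a, x_b = step(x_a, r_a), step(x_b, r_b)
    if t % 10 == 0:
        print(f"t={t:2d}  |difference| = {abs(x_a - x_b):.6f}")

# Typical output: the difference starts near 1e-6 and grows until it is of
# order 1, i.e. the two copies' "behaviour" eventually diverges completely.
```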
Mimicking our brain that closely may result in a similar internal experience, no matter how hard you try to keep that experience devoid of subjective states.
There are times when I make pronouncements in that manner, typically because I think people are simplifying something that is far more complex and unknowable than they’ve allowed.
But that doesn’t mean I’m not considering each point as it’s made - I enjoy the process and I’m fully aware I could be making a rookie mistake.
Maybe we need to clarify what it means to “communicate” something.
I think a key item is that both parties must be “speaking” the same language. And every single one of us has a unique language (internal brain structure/states) - similar but not identical. And a second key item is that a pathway for the communication must exist - our pathway for processing words doesn’t allow this communication to happen.
If we skipped the rods and cones and went further into the brain and stimulated specific neurons in a receiver that spoke the same language, then I think communication is possible.
I do not think so. I think you are defining “speak the same language” in a way that is tautological. Suppose A wants to communicate to B that he sees the qualia red. A does this by stimulating region X of the brain of B, a region which A believes should induce in B the qualia red. But suppose that B sees blue rather than red(*). A is satisfied that he “communicated” red to B, when in fact he has not. Now, you want to say that A and B are not speaking the same language unless A knows a priori which region of the brain of B will induce in B the qualia red. This is equivalent to B having already affirmed that said stimulation produces qualia red. In other words you are arguing that qualia cannot be communicated unless the desired exchange of information has already taken place. That is a tautology.
(*) Keep in mind that reference to a known wavelength of light will not solve this problem. A can shine red light into the retina of B and use fMRI to locate the region X of the brain of B that lights up in response. A can therefore map the regions of B’s brain that correspond to which wavelengths of light. But A has no way of knowing whether the qualia in B corresponding to a given wavelength of light corresponds to the qualia in A corresponding to a given wavelength of light.
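A small sketch of why the fMRI mapping can’t settle this, with invented labels: give B’s internal quale assignment a swap relative to A’s, and every externally observable response - the only thing A can measure - comes out identical:

```python
# Externally observable mapping: wavelength -> brain region -> verbal report.
# This is all that A's fMRI-plus-conversation experiment can ever check.
wavelength_to_region = {660: "X1", 470: "X2"}
region_to_report     = {"X1": "red", "X2": "blue"}

# Hypothetical *internal* quale assignments -- B's is A's with labels swapped.
quale_in_A = {"X1": "QUALE_RED", "X2": "QUALE_BLUE"}
quale_in_B = {"X1": "QUALE_BLUE", "X2": "QUALE_RED"}   # inverted spectrum

for nm in (660, 470):
    region = wavelength_to_region[nm]
    # Identical observable behaviour for both subjects...
    print(nm, "nm ->", region, "->", region_to_report[region])
    # ...while the internal assignments differ and never enter the record:
    # quale_in_A[region] != quale_in_B[region]
```

No experiment on the observable mapping distinguishes the two assignments, which is exactly the gap the footnote describes.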
Clearly I can’t really say yes or no because there are too many unknowns.
Maybe consciousness requires a special ingredient which is a chemical soup bathed in an electromagnetic field. Or maybe we just need the ants.
Because it’s such an unknown, I think it’s a stretch to draw too many conclusions.
It seems like, based on what we know so far, it’s an intermingled mess of simultaneously semi-disjunct and semi-connected parts.
A string of symbols is a representation of something in some language relative to the receiver, but it is not actually the thing it represents.
Quale module:
I assume you use that term very loosely for discussion purposes only - I don’t think there is a module.
But having said that - I think our brain can generate a color-red experience (dreams), but I don’t think our brain has the capability for words to trigger the proper neurons to get the same response. Again, this doesn’t seem like a big deal.
But this seems to first make the term “possible” almost devoid of any meaning, and then draw a conclusion.
It’s possible that if I stand in my coffee cup I will be transported to the furthest habitable planet in the universe - therefore there are habitable planets in the universe. That doesn’t seem very helpful.
My primary initial objection to the p-zombie idea was due to my very literal reading of “just like a human” but without qualia. I read that as meaning the brain structure was identical but for some reason qualia suddenly didn’t exist - whatever our brain is doing today, you can’t leave it physically the same and wish away some of its properties.
So, now that I understand it merely must achieve the same external behavioral responses, then my position changes to the following:
It’s possible that a completely perfect mimic of my behavior requires this attribute of internal subjective state; if it’s missing, the simulation will deviate. I don’t think we know enough to say for sure one way or the other.
If we can exactly mimic my behavior with lookup tables, that still doesn’t speak to our particular setup - it’s possible (and likely) that whenever you create an internal structure like ours, you end up with these subjective internal states.
Having said all of that, I’m not sure if I’m agreeing with, disagreeing with or merely exploring various points. Here’s a summary of my position:
We are made up of matter/energy
Our specific brain structure results in consciousness and subjective internal states that can’t be communicated with words
With the proper receiver (identical internal structure) and proper form of communication (direct stimulation of proper neurons) we can transmit precisely the internal information we have - but speaking the exact same language is key.
Maybe, maybe not. I’m using it in a “transmit” sense - and to be completely exact, we really do need to be exact in every sense of the word about transforming the target machine into the proper state. Once that is completed, the verification of communication is the verification of the state of the other machine.
When we communicate any idea with words it’s an approximation that really isn’t completely communicating what we think it is.
If there are definitions of communication that indicate I am wrong here, let me know.
Verifying the “state” of the other machine is not the same as verifying the transmission of the intended qualia. It doesn’t even, on its own, verify that the qualia exists.