The p-zombie thought experiment is quite literally that: ‘just making noises’ without meaningful context, since the concept it describes is entirely unrealistic. The zoombie argument demonstrates this by treating both the physical and non-physical facts as black boxes and switching them. We can’t consider the physical facts of consciousness as anything other than a black box at the moment, since we are nowhere near understanding them; in due course that may change, and I suspect it will.
Appealing to another set of facts, the non-physical facts, is just multiplying entities needlessly.
If the zombie argument is incoherent, then, since they have the same logical form, the zoombie argument is incoherent as well.
We do understand the physical facts going on in the brain at least down to the level of molecules; perhaps not in every detail of their interplay, but well enough for qualitative judgments. Our understanding is on the same footing as with, e.g., the question of how pictures appear on a computer screen: I certainly can’t imagine the whole story in full detail, but I can imagine enough of it to take myself to know how a certain pixel lights up—there is no mystery left. Not so in the case of consciousness: imagining the story at a similar level of detail contains no hint of experience; indeed, the notion of experience seems diametrically opposed to what goes on physically.
Besides, we have other arguments for the conceivability of zombies: for instance, I sometimes used to sleepwalk. So, there is certainly a being that is physically identical to me, but nevertheless unconscious. Of course, that’s a triviality: the same thing is true when I am, for instance, in deep dreamless sleep. But the sleepwalking me is capable of exhibiting at least some of my behavioural traits, and there does not seem to be any in-principle reason that such a being should not be able to exhibit all of them.
A further possibility is to imagine a simple mechanical computer. You can easily construct something like a NAND gate from sticks and rivets, and any computation can be performed by a network of such gates. But then, these sticks-and-rivets machines ought to be able to give rise to conscious experience; however, here, we know exactly how the physical basis works, and can imagine it, and nowhere are we forced to think about experience.
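To make the ‘any computation from NAND gates’ claim concrete, here is a minimal sketch (Python standing in for sticks and rivets; the function names are mine, purely for illustration). NAND is functionally complete: every Boolean function, and hence any digital computation, can be built from NAND gates alone, whatever they are physically made of.

[code]
def nand(a, b):
    return not (a and b)

# The other basic gates, expressed purely in terms of NAND:
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor_(a, b):
    return and_(or_(a, b), nand(a, b))

# A one-bit half adder, the first step toward arbitrary arithmetic:
def half_adder(a, b):
    return xor_(a, b), and_(a, b)  # (sum, carry)

for a in (False, True):
    for b in (False, True):
        print(a, b, half_adder(a, b))
[/code]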
In any case, whether by imagination of the relevant physical processes, by plausibility arguments from cases of sleepwalking, or by constructive arguments, there are a lot of ways to arrive at a conception of a zombie that’s not a black box, but where, it seems to me, everything that is physically relevant is readily imaginable. Thus, I am indeed convinced that I can conceive of a zombie.
Anybody who wishes to resist that conclusion, as I have pointed out, should then be able to show that I am not, in fact, able to conceive of a zombie—that is, as in the case of flying pigs, or in the case of water, show where I go wrong in thinking I can conceive of something that is in fact inconceivable.
But, and that’s the catch, merely pointing out that something unknown might be doing we-don’t-know-what doesn’t accomplish anything; in fact, it is very likely that something unknown is doing something we don’t yet understand, but finding out what’s doing what is exactly the point of the whole exercise. Without such an answer, I am entirely justified in believing that I can conceive of a zombie.
This is an intuition pump, and I think it is quite wrong. If we were able to create a human brain from sticks and rivets, identical in function to the physical operation of the human brain, then it would be conscious. In fact, made of sticks and rivets at a small enough scale, it might be faster and smaller than a human brain, and fit inside a matchbox (depending on how many analog processes need to be modelled).
It is the sophistication of the program that we can’t grasp at the moment; a program that can perceive itself from the inside. I don’t think that appealing to nonphysical phenomena helps the situation at all - the human brain is already the most complex system we know of in the universe - how and why would extending that system into an arbitrary unknown substrate help matters?
FWIW, I’m bowing out of this discussion. Too much metaphysics, not enough evidence. One last word: HMHW, your argument runs into the same problem as all arguments that deal with a metaphysical, undetectable something: how do we know we’re still talking about anything in the real world? It’s like TAG - interesting, hard to find flaws in, but ultimately we have no way of determining whether it’s actually valid, or whether what we’re referring to has anything to do with reality.
Well, then point out where. Because my problem is that for the life of me, I can’t see the flaw. In other, similar arguments, I can do so quite simply—in the case of water vs. H[sub]2[/sub]O, where a philosopher might, as RaftPeople proposed, wish to argue that water isn’t H[sub]2[/sub]O, he can be easily shown—by just educating him about the properties of water and H[sub]2[/sub]O—that he wasn’t actually conceiving of the right stuff, so to speak. He wasn’t really talking about H[sub]2[/sub]O, because he didn’t know what H[sub]2[/sub]O is. But this strikes me as being wildly implausible in the case of physical and phenomenal properties: I know both what c-fibers firing is, and what pain is, and they are nothing alike; indeed, their properties seem mutually contradictory.
You want to try to hide the issue behind some veil of complexity (it’s all complicated, so maybe something happens somewhere that we’ve missed so far, I mean who knows, right), but in truth, the relevant physical processes associated with at least simple percepts are themselves quite simple: there’s pain whenever c-fibers fire, and whenever there’s pain, c-fibers fire. So we should say that pain is c-fibers firing, in the same sense that water is H[sub]2[/sub]O. But even though we completely know what goes on when c-fibers fire, we don’t thereby know anything about pain. This is radically at odds with the case of water: complete knowledge of H[sub]2[/sub]O gives us complete knowledge of water.
Only under the assumption of physicalism (or rather, functionalism, but we need not make too fine a distinction for our purposes). But this assumption is exactly what the argument questions. Again, in such a case, we completely know what happens on the physical side; we have a complete and gapless explanation for all behaviours of our machine in terms of mechanical parts acting on other mechanical parts, sticks pushing on sticks, and the like. We can imagine the complete chain of events that occurs to make the machine behave in a certain way, and at no point on this chain do we have to even think about consciousness; whereas, again, in the case of H[sub]2[/sub]O and water, as long as we know what both are, whenever we think about one, we also think about the other—it’s unavoidable. That’s what necessary identity means.
I don’t even know what that would mean.
Because we could, for example, add properties that are in themselves experiential—this is, in principle, no different from what happens when we discover a new force: we discover that (some) elementary particles have properties pertaining to that force. So, when we only knew electromagnetism and gravity, we could imagine describing particles only in terms of their (electrical) charge and mass; but when we found out that there were additional forces, we didn’t try to reduce them to the properties we already ascribed to particles, but endowed them with new ones, such as colour charge for the strong force, or spin in order to explain certain facts about particle statistics, and so forth.
The step to property dualism is really not any more radical than that, other than that the properties we add are experiential—that is, they’re in some sense building blocks of experience, rather than building blocks of matter. This step, or something like it, is forced on us if it is indeed the case that experience can’t be accounted for using the properties we already ascribe to matter—but of course, from this viewpoint, there is really no reason to expect that they should. We so far simply haven’t taken into account all the properties present in nature, no big deal; this is not any more shocking than the discovery that electromagnetism can’t account for the stability of atomic nuclei.
Well, it doesn’t. The something that I propose is, in a sense, the only thing that we ever directly have any evidence of, the only thing we immediately know, without having to perform any inferences—our own experience. There are two options. The first is that experience is explicable in terms of physical properties; if so, fine. But there are great difficulties in doing so, and many arguments to the effect that it can’t be done at all (while the arguments purporting to show that it can be done usually just boil down to ‘it’s worked in the past’ or ‘I can’t imagine an alternative’ or other appeals to emotion or consequences). So the logical thing to do is to investigate other options, at least as long as either these arguments have not been thoroughly debunked, or somebody exhibits at least a rough sketch of an idea of how it could possibly be that we get subjective experience from physical processes. As long as nobody does so, arguments demonstrating the opposite should at least be taken seriously (after all, they might be true!); no problem is solved by just sticking your fingers in your ears and hoping that something will somehow come around and make it go away.
[QUOTE]
If we were able to create a human brain from sticks and rivets, identical in function to the physical operation of the human brain, then it would be conscious.
[/quote]
[QUOTE=Half Man Half Wit]
Only under the assumption of physicalism (or rather, functionalism, but we need not make too fine a distinction for our purposes). But this assumption is exactly what the argument questions.
[/quote]
Here’s what a functionalist intuition pump can tell us about qualia and consciousness. It seems reasonable to imagine that these tiny sticks and rivets (Eric Drexler’s ‘rod logic’ automata, perhaps) would be small enough to replace individual neurons in a human brain at some point in the future. (If scientists don’t do experiments like this in the next few hundred years, it will probably be because they have been made illegal by some ethics committee, but let’s assume they try it at some point.)
As the neurons are replaced one by one, eventually the brain will be entirely replaced by mechanical or electronic substitutes. Would such an entity feel qualia? How about one which has been half replaced? One quarter replaced? When do the qualia fade away, and does the conscious being inside the head ever actually notice the difference?
Given what we know about neurons these days, you would need to go much lower than the level of whole neurons. The neuron is a network all by itself: the dendrites perform multiple types of preprocessing on the incoming signal (utilizing various rates of spiking) as it moves down the dendritic tree toward the main cell body. Then there are the spikes that travel backwards up the dendrites (which may or may not play a key part in learning). There is also epigenetic modification of the DNA that is used to maintain the strength of signalling at the synapses.
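For contrast, here is a minimal sketch (in Python, with purely illustrative parameters, not measured physiology) of the point-neuron idealization that all of this dendritic machinery goes beyond: a leaky integrate-and-fire unit that treats the entire cell as a single membrane voltage, with no dendritic preprocessing, no backpropagating spikes, and no epigenetic synaptic maintenance.

[code]
def simulate_lif(input_current, dt=0.001, tau=0.02, v_rest=-0.070,
                 v_thresh=-0.050, v_reset=-0.070, r_m=1e7):
    """Integrate an input current (amps, one value per time step) into a
    membrane voltage; emit a spike and reset on crossing threshold."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Leak toward the resting voltage, plus the driven change:
        v += (-(v - v_rest) + r_m * i_in) * (dt / tau)
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# One simulated second of constant 3 nA drive gives a regular spike train:
print(simulate_lif([3e-9] * 1000))
[/code]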
I agree 100% that such an entity would possess the same qualia as we do. However, this is not a conclusion that (exclusively) supports physicalism: if the mechanical replacements have the necessary non-physical attributes, then the replacement will be conscious under a property dualist conception as well—this would only be a way to build a zombie if we could somehow make sure that whatever we replace the neurons with is functionally identical to them, yet lacks the non-physical properties giving rise to consciousness. But of course, since we already know that when certain functional structures are present, they are accompanied by conscious experience, it only makes sense to expect that all functional isomorphs are also isomorphic regarding the non-physical properties.
You may know that David Chalmers has written a fascinating paper on this subject—‘Absent Qualia, Fading Qualia, Dancing Qualia’, arguing, like you do, that the procedure of replacing neurons is unlikely to either suddenly turn off experience, or have it gradually fade. Of course, Chalmers is a property dualist—in his view, information has both a physical and a mental aspect, and thus, whenever information is processed in the same way, the same phenomenal experience accompanies it. So the idea that a functional isomorph is conscious is quite compatible with property dualism, since there, all information processing is identical.
If you mean ‘souls’ in the sense of some nonphysical, supernatural substance, then no. If you mean ‘souls’ in the sense of, they are subjects of conscious experience, then yes, why wouldn’t they—after all, we’re just conscious robots ourselves.