In their strongest version, yes: the possibility of p-zombies entails that physicalism is wrong, because a p-zombie is an exact physical duplicate of a conscious person that nonetheless lacks any conscious experience. If such a being is possible, the physical facts themselves do not determine whether an entity is conscious; there must be ‘further facts’ that do. Hence, the physical facts don’t exhaust all facts about the world, and physicalism is false.
This is usually made plausible by telling a story such as the following: we can imagine a humongous lookup table that replicates the behaviour of any conscious being (for any finite, but unbounded, length of time) by simply pairing the outputs appropriate to a conscious being with the inputs it receives. Here, the input is always the entire history of the interaction with the lookup-table entity up to the present point; otherwise, you could defeat the entity simply by asking ‘what was the last thing I said?’, or something similar. Most people would agree that such a being isn’t conscious, even though it produces all the behaviour that we would expect from a conscious being.
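To make the lookup-table entity concrete, here is a minimal Python sketch (the two-entry table, and the names `TABLE` and `LookupEntity`, are made-up illustrations, not part of the original thought experiment). It shows why keying on the entire interaction history, rather than only the latest input, is what handles questions like ‘what was the last thing I said?’:

```python
# A toy 'lookup-table entity': its response is determined by the ENTIRE
# interaction history so far, not just the latest input. A real version
# would need an entry for every possible finite history, which is why
# such a table could never physically exist; this fragment only shows
# the principle.
TABLE = {
    ("Hello!",): "Hi there!",
    ("Hello!", "What was the last thing I said?"): "You said: 'Hello!'",
}

class LookupEntity:
    def __init__(self, table):
        self.table = table
        self.history = []  # full record of every input received so far

    def respond(self, utterance):
        self.history.append(utterance)
        # The output is fixed in advance by the whole input history:
        # no processing, no 'understanding', just retrieval.
        return self.table[tuple(self.history)]

e = LookupEntity(TABLE)
print(e.respond("Hello!"))                           # Hi there!
print(e.respond("What was the last thing I said?"))  # You said: 'Hello!'
```

Because each entry pairs a complete history with a canned output, the entity never computes anything about its inputs, which is what makes it so hard to credit it with experience.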
But fine, you say, that’s just one way to set up such a being. All we’ve shown is that you can produce consciousness-appropriate behaviour in this way, without the attendant conscious experience; there may be other ways to produce this behaviour that are accompanied by subjective experience. After all, we’re certainly not lookup-table entities. (Note, however, that this already throws up some interesting questions: if consciousness is not instrumental in generating behaviour, then why do we have it? Evolutionary selection only acts on behaviour, so it could not select for consciousness. One possible answer is that while consciousness is not necessary to produce the right behaviours, it is sufficient, and the processes giving rise to consciousness (and thus to consciousness-appropriate behaviour) may be simpler than those producing the behaviour without the conscious experience. Certainly, if the lookup table were the only way to produce the right reactions without consciousness, that alternative couldn’t be selected for, because it’s simply impossible for such a lookup table to exist in our universe.)
However, what we have established is that how an entity—how some region of space-time—reacts to our prodding it, i.e. what behaviours it produces if we, say, ask it questions (or point a gun at it), does not tell us anything about whether or not that entity is conscious, or whether that region of space-time contains a conscious entity. That is, mapping the causal behaviour of something does not establish whether that something is conscious—there always exists a way to give rise to the same causal reactions without any attendant experience. But, all we know about the physical is essentially its causal structure—any material, physical object we know only by prodding it and observing how it reacts to this prodding. Physical facts, or at least, our knowledge of them, are thus reduced to causal facts—say, we do experiments, observe their outcomes (prod some region of spacetime), and create theories to explain the data.
Thus, if we replaced any region of space-time, any physical object, with something that reacts identically to causal probing, we could not tell the difference. Anything we do to that replacement would yield the exact same results as before. So we can take our lookup-table pseudoconscious entity, which we have treated as a ‘black box’ so far, and carve it up into smaller black boxes, each of which has the same causal dispositions as some small part of a human being, and each of which has them without any attendant conscious experience. So we know that the large black box has no conscious experience, and neither does any of the smaller ones.
And we can continue playing this game, making the small black boxes ever smaller—first, they may be regions of the brain—a frontal-lobe black box, a hypothalamus black box, and so on—each reacting to any causal probe in just the same way as the analogous region in a human brain does, but doing so without conscious experience—say, by consulting a lookup table. Then just carry on, to more fine-grained brain structures, to cortical columns, to individual neurons, hell, if you insist, all the way down to molecules and atoms. At any point, you can replace the structure by one that is identical in its causal dispositions, and hence physically identical; but if the lookup table at the beginning wasn’t conscious, then neither will any of the smaller lookup tables be.
Now, the advantage is that we slowly get down to lookup tables of a much more manageable size—an individual neuron has a quite limited causal structure, compared to a whole human being. Ultimately, we then have a being that appears to all probes physically identical to a human—say, a specific human being, such as you yourself—but that seems to lack any and all conscious experience, being composed merely of lookup tables (at whatever resolution you think suffices).
We can then, to drive home the point, imagine going back up: a single neuron is, in this picture, nothing but a list of conditions under which it fires. We can join two neurons to get a more complicated list; join those with a bunch of others, increasing the complexity yet more; and ultimately recreate a great master list sufficient to emulate all your behaviour. But crucially, at no point did something ‘extra’ appear: we started out with a lot of small lookup tables, and showed that their combination is equivalent to one big lookup table. So whatever produces consciousness is not in any of those small lookup tables—but neither is it somehow in their interplay, at least not necessarily, since that interplay can itself be replaced by a big lookup table. So it seems at least logically possible that a being physically identical to you, but with ‘lookup tables’ dictating the behaviour of its neurons (molecules, atoms…)—i.e. with whatever scale you think is sufficient replaced by little black boxes causally identical to the stuff in your head—could exist without possessing any conscious experience.
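The joining step can be sketched in the same toy idiom (the two neuron tables and the `join` helper are hypothetical illustrations, under the assumption that a neuron is nothing but a table from input patterns to firing): merging two such tables yields nothing but another, bigger table.

```python
from itertools import product

# Each 'neuron' is nothing but a lookup table: input pattern -> fires?
# These toy neurons each take two boolean inputs.
neuron_a = {(0, 0): False, (0, 1): True, (1, 0): True, (1, 1): True}    # OR-like
neuron_b = {(0, 0): False, (0, 1): False, (1, 0): False, (1, 1): True}  # AND-like

def join(table1, table2):
    """Merge two neuron tables into one table over their joint inputs.

    The result is just another lookup table: no new ingredient appears
    in the composition, which is the point of the 'going back up' step."""
    return {
        (in1, in2): (table1[in1], table2[in2])
        for in1, in2 in product(table1, table2)
    }

joint = join(neuron_a, neuron_b)
# The joint table answers for both neurons at once:
print(joint[((0, 1), (1, 1))])  # (True, True)
print(len(joint))               # 16 entries: every joint input pattern
```

Iterating `join` over ever more neurons only ever produces bigger tables of the same kind, which is why nothing ‘extra’ can emerge along the way.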
But if that’s the case, then physicalism is false: physics only accounts for the facts regarding the causal behaviour of an object—a lifeform, a material object, or, most generally, a region of space-time—but consciousness is not captured by these causal facts, since there exist entities with the same causal dispositions but no conscious experience. Thus, the physical facts underdetermine whether or not there is conscious experience.