If you have immersed yourself in computational cognitive science for a while, and come to believe that the mind might be a computer program running on the brain as hardware (or even if you are older, and were raised on behaviorist psychology, whereby everything people do is really just a bunch of conditioned reflexes), then the concept of a “philosophical zombie” soon comes to seem very plausible, or even inevitable. It becomes difficult to understand how we are not all philosophical zombies. Some people (in order to hang on to all that ‘science’) convince themselves that we all, in fact, are zombies, and that consciousness is an illusion (despite the paradox that, by their lights, there is no-one there to be deceived by the illusion). Those who (like Dave Chalmers) are nevertheless convinced that, at least in their own case (and, out of politeness, other people’s cases too), there really is conscious experience, either indulge themselves in contemplation of the perfect intractability of the “hard problem”, or do their best to put the matter out of their minds, and get on with trying to do cognitive science and neuroscience without reference to consciousness (and the more they seem to succeed, the deeper they dig themselves into the hole).
The intellectual wrong turning happened almost exactly 100 years ago,* when J.B. Watson first started persuading people that psychology could and should be done without any reference to consciousness (and that it would be a lot easier to do methodologically sound psychological experiments if it were done that way). This error was only compounded in the mid-20th century, when a majority of psychologists persuaded themselves that they had overcome Watson’s philistinism by conceiving of the mind as a computer rather than a bundle of reflexes. Either way, though, there is no room for consciousness in psychological theory, and if you convince yourself that you understand and believe in computational psychological theories, it is easy to imagine, indeed, almost impossible to avoid imagining, that the mind, and the brain, and all the human behavior they control, could quite readily carry on just the same if there were no consciousness there at all.
I think you are right, **grude**, that zombies are not a real possibility, and that anyone with a mind not corrupted by too much cognitive theory can readily see that, but to admit that is to admit that most of our currently accepted cognitive and neuroscientific theories must go to the wall with the zombies. (Some of us are attempting to construct a different sort of cognitive neuroscience that will not imply the possibility or actuality of zombies, but, as yet, we are a small and often derided, though growing, minority in the scientific community.)
————————————————————
*Well, except that, in a way, it happened way back in the 17th century, when Descartes persuaded himself, and others, that most aspects of mentality (imagination, perception, emotion, memory, muscular control, etc.), apart from consciousness and a few other things like language understanding, ratiocination, and free will, could be explained in terms of the brain operating as a sort of fluidic, hydraulic computer (since he did not know about electronics). In order to explain consciousness, and those few other things, he postulated an immaterial soul beyond the reach of scientific understanding, which occasionally perturbed the workings of the brain a bit, but very few people ever much liked that idea, and even those who did could see that most behavioral functions would carry on the same without it. Computational cognitive science is just the Cartesian research program pushed toward its limits, attempting (via more sophisticated ideas about computation) to explain everything it possibly can (except pure consciousness, which it attempts to ignore) in terms of neural computation. It can’t possibly succeed, because of the way Descartes set up the game, the metatheory, to require an immaterial soul in there controlling the machine, but for those who persuade themselves it can succeed, it becomes obvious that if you just took the minimal, residual soul (i.e., consciousness) out of the machine, its behavior would not be discernibly different, and you would have a philosophical zombie.