Is there actually anything else that goes into awareness other than ‘monitoring functions’? Let’s look at a zombie’s phenomenology, such as it is: I, the interviewer, ask it a question – “What’s your favourite ice cream flavour?”; it could then answer, for instance, “I like vanilla best.”
Now, what prompted it to answer in this way? Obviously, it must somewhere have formed the intention to speak – unconsciously, since it’s a zombie. That’s perfectly simple: it could have rolled some internal dice, and they came up vanilla, for instance. Nothing fancy.
However, I could then proceed to ask: “Why did you just say that you like vanilla best?” – and then things get a little hairy for an unconscious creature. It could say, “My internal die roll came up vanilla,” but then it would hardly be the convincing simulacrum it’s meant to be. No, in order to be convincing, it would have to be able to give an account of its internal processes just the same way a human can – it would, as you say, need some sort of internal monitoring system.
Now, what would this monitoring system have to do? Well, for one, it would have to record the intention to speak in some way. It would also have to record why this intention was formed – what input was being reacted to. There would have to be a whole host of annotations to every given speech-act intention in order to react convincingly to questions referring back to previous speech acts.
So, through this monitoring system, there would have to exist within the zombie knowledge of the performed speech act, knowledge of the intention to perform it, knowledge of the reasons for forming that intention, and so on – all of which amounts to the zombie representing the speech act to itself. If there’s anything more to being aware of the thought “I like vanilla best,” I don’t know what it would be.
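To make the picture concrete, here is a minimal sketch in Python – purely my own illustration, with invented names, not a claim about how any actual system is built – of a monitoring system that annotates each speech act with the intention behind it and the reason that intention was formed, so that a later “why did you just say that?” can be answered from the record rather than from the die roll:

```python
from dataclasses import dataclass, field

@dataclass
class SpeechActRecord:
    """One annotated entry in the zombie's monitoring log (names hypothetical)."""
    utterance: str   # what was actually said
    intention: str   # the (unconsciously formed) intention behind saying it
    reason: str      # why that intention was formed: the input being reacted to

@dataclass
class Monitor:
    """Records speech acts so that later questions can refer back to them."""
    log: list = field(default_factory=list)

    def record(self, utterance: str, intention: str, reason: str) -> None:
        self.log.append(SpeechActRecord(utterance, intention, reason))

    def why_did_you_say(self, fragment: str) -> str:
        # Answer a "why did you just say ...?" question from the stored annotations,
        # giving the recorded, human-sounding reason rather than the raw die roll.
        for entry in reversed(self.log):
            if fragment in entry.utterance:
                return f"Because {entry.reason}."
        return "I don't remember saying that."

monitor = Monitor()
monitor.record(
    utterance="I like vanilla best.",
    intention="answer the interviewer's question about ice cream",
    reason="I have always enjoyed its taste",
)
print(monitor.why_did_you_say("vanilla"))  # Because I have always enjoyed its taste.
```

The point of the sketch is only that nothing in this record-keeping requires awareness; it is just annotations attached to speech acts.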
The same goes for its ‘sense of self’: “How do you feel?” – “I feel well.” – “Why did you say you felt well?” forces the zombie to become self-reflective with regard to its own self; it would have to represent itself to itself if it were to be absolutely convincing. How else could it answer the question, if it didn’t know that the ‘I’ in ‘I feel well’ refers to itself, and that ‘feeling well’ refers to the state this I is in (note that neither of these pieces of knowledge requires consciousness on its own; they could easily be nothing more than bits stored in a computer’s memory)? And what does a truly conscious creature have to work with, other than this self-referentiality and the reference to the state this self is in?
And even if you remain unconvinced, the zombie surely could convince itself – it would have an internal experience of its thoughts, and an experience of this experience (and so on); it would have an internal experience of itself, and an experience of this experience (experience here being nothing but a thing’s representation to the zombie, and this representation being represented to it in turn – like seeing something, and knowing that you’re seeing it). It would claim to be conscious with the same justification as you or I do – because it seems to it that it is.
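Again as a sketch only – the class and attribute names below are my own illustration, not anything from the argument itself – the two ingredients just described (the ‘I’ referring back to the very thing doing the referring, and representations being represented in turn) can be written down as plain data and a couple of methods, with no consciousness anywhere in sight:

```python
class SelfModel:
    """A toy self-model (purely illustrative): it stores the state the 'I' is in,
    and can stack representations of its own representations to arbitrary depth."""

    def __init__(self):
        self.referent = self          # what the token 'I' refers to: this very thing
        self.state = "feeling well"   # the state this I is in
        self.representations = []     # representations of its own representations

    def report(self) -> str:
        # First-order report: the state, represented to the system itself.
        return f"I am {self.state}."

    def reflect(self) -> str:
        # Higher-order report: a representation of the most recent representation.
        last = self.representations[-1] if self.representations else self.report()
        reflection = f"It seems to me that: {last!r}"
        self.representations.append(reflection)
        return reflection

zombie = SelfModel()
print(zombie.report())   # "I am feeling well."
print(zombie.reflect())  # a representation of that report
print(zombie.reflect())  # ...and a representation of the representation, and so on
```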
It is a bit like with money – if I drew a couple of numbers on pieces of paper, this would surely be fake money. However, if the requirement is that this fake money be indistinguishable from real money (and I don’t mean merely in appearance), then this means that I could exchange it for goods and services, would receive change, could take it to the bank – in short, I could do everything with it that I can do with real money. In what sense, then, would the money still be fake?
We’re also not able to engage these systems in speech acts as if they were aware, so I’m not really seeing the problem.
As I said, those processes lack the self-referential capabilities of consciousness; their reflectivity is limited to that of a control loop.