Is materialism incompatible with "me"?

That seems pointless; it presumes its own answer. By definition, such a thing as an unconscious creature absolutely identical to me materially, down to the atom, could only exist if there were something other than the material. It’s vitalism under another name. And it is intellectually barren, since it doesn’t say a thing about what this nonmaterial something is, or provide any method of finding out, or of proving such a claim to be true or false. “A wizard did it” isn’t a useful philosophy, or a very interesting one.

A realistic P-zombie would look like me externally but have differences in the brain.

Personally, I find it easy to conceive of. I do think that it would probably be harder to make than a normal person; it would have to be running some sort of complex simulation that can fake consciousness without being conscious. It would probably be more complex than a normal person, too, since an actually conscious being doesn’t have all the overhead of faking its experiences. It might need a more efficiently designed brain to avoid giving itself away by looking like a Giant Head Alien.

That’s where I see the problem: doesn’t faking consciousness amount to the same thing as being conscious, in the end? Consciousness is distinguished, for instance, by the awareness of one’s own mental processes; if you want to give a convincing account of this, you’d have to be able to answer questions regarding these mental processes the same way a conscious being would. But how do you do this without whatever process you utilise essentially amounting to just this awareness?

The same goes for the sense of self – how do you give a convincing account of what it is like to be yourself without actually ‘knowing’ (in the simple sense a computer, for instance, knows its data) what it is like to be yourself? And if you have access to the information about what it is like to be yourself, how is that any different from having a sense of self – from being in a conscious state?

How is, in short, simulating the self-referentiality at the heart of conscious experience any different from actually exhibiting this self-referentiality?

By creating a system that can perform the same monitoring functions without being aware. We are after all only aware of a minority of what goes on in the brain; our brain is full of systems monitoring other systems that aren’t, as far as we know, aware.

It does occur to me that if your viewpoint is true that brings up the disturbing scenario that the brain must be full of other consciousnesses, which just don’t have the ability to communicate.

I believe in the committee model of consciousness, so that doesn’t sound so off-base to me.

Well, speak for yourself, Bucko!: I am conscious and have taken complete control of parts of my brain typically left unmonitored. For one, I know exactly what my postganglionic parasympathetic fibers are up to at all times, and I simply refuse to let them secrete acetylcholine when I’m in the presence of scantily clad ladies in locations inappropriate to show wood. :cool:

Is there actually anything else that goes into awareness other than ‘monitoring functions’? Let’s look at a zombie’s phenomenology, such as it is: I, the interviewer, ask it a question – “What’s your favourite ice cream flavour?”; it could then answer, for instance, “I like vanilla best.”

Now, what prompted it to answer in this way? Obviously, it must somewhere have formed the intention to speak – unconsciously, since it’s a zombie. That’s perfectly simple: it could have rolled some internal dice, and they came up vanilla, for instance. Nothing fancy.

However, then, I could proceed to ask: “Why did you just say that you like vanilla best?” – And then, things get a little hairy for an unconscious creature. It could say, “My internal die roll came up vanilla,” but then it would hardly be the convincing simulacrum it’s meant to be. No, in order to be convincing, it would have to be able to give an account of its internal processes just the same way a human does – it would, as you say, need some sort of internal monitoring system.

Now, what would this monitoring system have to do? Well, for one, it would have to record the intention to speak in some way. It would also have to record why this intention was formed – what input was being reacted to. There would have to be a whole host of annotations to every given speech-act intention in order to react convincingly to questions referring back to previous speech acts.

So, through this monitoring system, within the zombie, there would have to exist knowledge of the performed speech act, knowledge of the intention to perform the speech act, knowledge of the reasons for forming this intention, and so on – all these things amount to the zombie representing the speech act to itself. If there’s anything more to being aware of the thought “I like vanilla best,” I don’t know what.
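
Just to make that bookkeeping concrete, here is a minimal toy sketch of the kind of monitoring system I have in mind – the names, the dice-rolling, and the log format are entirely my own invention, purely for illustration:

```python
import random

class ZombieSpeaker:
    """Toy sketch of the monitoring described above: every speech act is
    logged along with the intention behind it and the input that prompted it,
    so that follow-up questions about its own processes can be answered."""

    def __init__(self):
        self.speech_log = []  # the zombie's record of its own speech acts

    def answer_flavour_question(self, question):
        flavour = random.choice(["vanilla", "chocolate", "strawberry"])  # the "internal dice"
        reply = f"I like {flavour} best."
        # The monitoring system: record the act, the intention, and what prompted it.
        self.speech_log.append({
            "act": reply,
            "intention": "state a flavour preference",
            "prompted_by": question,
        })
        return reply

    def explain_last_act(self):
        # Answering "Why did you just say that?" means consulting its own record
        # of the prior act and intention – i.e. representing the act to itself.
        last = self.speech_log[-1]
        return (f"I said '{last['act']}' because I meant to {last['intention']}, "
                f"prompted by '{last['prompted_by']}'.")
```

The point being: the log the zombie keeps of its own acts and intentions already amounts to the zombie representing those acts to itself.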

The same goes for its ‘sense of self’: “How do you feel?” – “I feel well.” – “Why did you say you felt well?” forces the zombie to become self-reflective with regards to its own self; it would have to represent itself to itself, if it were to be absolutely convincing. How else could it answer the question, if it didn’t know that the ‘I’ in ‘I feel well’ referred to itself, and that ‘feeling well’ refers to the state this I is in (note that neither of these pieces of knowledge requires consciousness on its own; they could easily be nothing more than bits stored in a computer’s memory)? And what else other than this self-referentiality and the reference to the state this self is in does a truly conscious creature have to work with?

And even if you remain unconvinced, the zombie surely could convince itself – it would have an internal experience of its thoughts, and an experience of this experience (and so on); it would have an internal experience of itself, and an experience of this experience (experience being nothing else but a thing’s representation to the zombie, and this representation being represented to it in turn; like seeing something, and having the knowledge that you’re seeing something). It would claim to be conscious with the same justification as you or I do – because it seems to it like it is.

It is a bit like with money – if I drew a couple of numbers on pieces of paper, this would surely be fake money. However, if the requirement is that this fake money be indistinguishable from real money (and I don’t mean in appearance), then this means that I could exchange it for goods and services, would receive change, could take it to the bank, in short, I could do everything with it I can do with real money. In what sense, then, would the money still be fake?

We’re also not able to perform speech acts in such a way as if these systems were aware, so I’m not really seeing the problem.

As I said, those processes lack the self-referential capabilities of consciousness; their manner of reflectivity limits itself to that of a control cycle.

Or, a good fake. It doesn’t need to actually say that it rolled dice, or express any actual preference it might have. It just needs to come up with an answer and keep to it consistently. It doesn’t need to express its actual internal processes; in fact it can’t, or it would immediately come across as what it is. It could have a whole long list of fake opinions and desires. I think it’s important to remember that such a zombie is ***by definition*** dishonest, or it would be immediately obvious what it is.

This, to me, by the way, answers the questions of “Why aren’t we p-zombies? Why should we assume that other people aren’t?” Answer: because evolution had no reason to select for such an elaborate, careful system of fakery when actual self-awareness and the ability to express it are so much more straightforward.

If the goal is just to fool some of the people some of the time, that might be possible. But it can’t just ‘come up’ with an answer all the time and remain perfectly consistent – that in itself would require the same self-referentiality that’s necessary for consciousness. There’s no way to pre-determine all possible instances of self-referentiality and store the required answers; and the task of independently coming up with satisfying answers requires self-referentiality.

Either way, this has become something of a distraction from the topic; if you’re interested in where I draw my argument from, here’s Daniel Dennett’s take on the matter (PDF).

Well, does it need to be perfectly consistent? We non-zombies aren’t, after all. It just needs to fake it well enough to pass. And self-referentiality may be important to consciousness, but it doesn’t automatically result in it.

Perfectly consistent with the phenomenal account a truly conscious being would give; and yes, it has to be that, because if it weren’t, its deception would be in principle detectable.

Well, what would you say are sufficient criteria to consider a mental state conscious?

Okay this wasn’t addressed to me, so you may have no interest in hearing my answer, but too damn bad!

Intelligence does not require consciousness.

Intelligence, whether it be human, animal, or machine, is the creation of solutions to novel salient problems. Intelligence can be within narrow domains, domains of no salience to humans, or applicable across a wide variety of domains. An ant colony as a super-organism has the ability to solve novel salient problems that would perhaps stump humans, yet, to the best of our knowledge, has no consciousness. Does one need to assume that an octopus has consciousness because it is in many ways a very intelligent creature? And some computers are superior to most humans (or even all) in some very narrow domains. Human intelligence is applicable across a wide variety of domains and is adaptable to a wide variety of situations, including changing social ones; human intelligence makes use of consciousness, a sense of “self”, a sense of agency or “free will”, as it accomplishes that task, but intelligence itself does not require it.

This has direct bearing on the p-zombie detection issue as it is being discussed. An entity could theoretically be intelligent enough to identify humor and to respond with the correct action pattern. It could be intelligent enough to solve a variety of social “problems” with the exact same consistency as a human with a conscious self does, and be programmed well enough to respond in a way that would fool any human observer, without any need to invoke self-awareness. Self-awareness just happens to be part of how we* solve many of those problems. It occurs, I believe, as a consequence of the way in which the information within the dynamic systems that are our minds is handled, in ways that Hofstadter referred to as strange loops: tangled hierarchies in which the system includes its ever-changing self as a member of the set of objects that it must keep constantly updated. Others have expressed similar computational correlates of consciousness, my favorite being Stephen Grossberg’s Adaptive Resonance Theory, in which resonant feedback loops can become conscious states.
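
To put that in concrete, if crude, terms, here is a toy sketch – my own illustration, not Hofstadter’s or Grossberg’s actual formalism – of a system that includes a model of its own ever-changing state among the objects it must keep updated:

```python
class SelfMonitoringSystem:
    """Toy illustration of a tangled hierarchy: the set of objects the system
    tracks includes a model of the system itself, so every update to the world
    model forces an update of the self-model, which is itself a change to the
    system – a crude strange loop."""

    def __init__(self, objects):
        self.model = dict(objects)  # external objects being tracked...
        self.model["self"] = {}     # ...plus a model of the tracker itself
        self.update_count = 0

    def update(self, name, state):
        self.model[name] = state
        self.update_count += 1
        # The self-model must reflect the system as it now is, including the
        # fact that it has just changed.
        self.model["self"] = {
            "tracked": sorted(self.model.keys()),
            "updates_so_far": self.update_count,
            "last_updated": name,
        }

# e.g. a system tracking two food sources, and also tracking itself:
colony = SelfMonitoringSystem({"food_east": "unknown", "food_west": "unknown"})
colony.update("food_east", "rich")
print(colony.model["self"])
```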

Consciousness, that is, a sense of self, both requires and allows a sense of agency and of free will. (Which makes free will just as real and just as delusional as is “the self”.)

Insofar as “I” exist (and here Descartes was right: I think I am, therefore I am), it is never in conscious existence the exact same “I” from moment to moment, even though it perceives itself to be so.

If an exact copy of any “I” were created from some “I at t=1”, they would become different “I”s as soon as the first microsecond passed. The two “I at t=2”s would be different individuals from each other, but both would rightly and equally claim continuity and identity with “I at t=1”. Of course, each “I at t=2 and beyond” would be reluctant to die in order that the other “I at t=2 and beyond” should live, as they had already diverged into different selves: different selves with a shared history up to a point and a great deal in common, but still already different.

How that fits into your choices, TibbyToes, I do not know, but I refuse to allow you to disallow it.

*Actually, it may not be as much a part of how we do it as we think. There are many studies documenting that we often add our conscious “reasons” after the fact, on top of what our processing units have accomplished under the surface for other reasons unknown to our “selves”.

BTW, a description of how Grossberg’s resonant states may underlie consciousness can be found here (Section 5.4, on page 23 of the PDF, or page 21 according to the article’s own numbering). The article contains an excellent summary of ART and, for the interested, an application of ART to the issue of autism that I for one am quite fond of.

It is also of note in this regard that 300ms delayed global reverberations seem to be a neural correlate of consciousness.

And some cites regarding my asterisk about how we may often be deluding ourselves that our decisions are always the result of previous conscious agency … in particular see the one near the bottom: Daniel M. Wegner (2003). The mind’s best trick: How we experience conscious will.

I would say “I” live on as both copies, whose consciousnesses each become unique from the moment of divergence. I have a vested interest in beings with my memories-to-date continuing to live. I don’t see any logical inconsistency in this view. In fact, options #2 & #3 seem pretty equivalent, any differences being trivially semantic IMO.

HMHW, that would depend on what you mean by ‘intelligence’, and precisely what the candidate is or is not conscious of. Deep Fritz seems pretty intelligent to me, but all he would be conscious of is symbolic chess moves, not a torrent of information from the real world with a crucial feedback element. I would guess he’s intelligent, but barely more ‘conscious’ than an automatic door or an early Cambrian trilobite.

Well, I certainly didn’t mean to exclude anybody from answering, so thanks for taking the time.

This has been, just to state that upfront, my own opinion for the longest time; however, lately, I’ve been wondering (mostly thanks to the strength of the ‘zombies aren’t unconscious’ argument).

But can one really draw an equivalence here? Ultimately, ant colony intelligence, octopus intelligence, and computer intelligence are very limited in their scope – their intelligence is not nearly as adaptive as human intelligence is, but specialized to a very narrow domain. Like being able to extract the cube root of a nine-digit number really fast – surely an impressive feat of mental gymnastics, but is it indicative of intelligence? (Perhaps it depends somewhat on your definition; however, I don’t think I bias the discussion too much by requiring general intelligence to be at least capable of the same feats as human intelligence is.) And are the powers of ant colonies, octopuses, and computers not ultimately of that sort?

That’s what I’m wondering – whether any intelligence, once it becomes powerful enough, might not give rise to the capacity to represent itself to itself, and to represent its own inner processes to itself. Sort of like Gödel incompleteness is a property of any formal system once it is sufficiently strong. Would any intelligence as adaptable and powerful as human intelligence not also be able to give rise to strange loops or whatever else the computational correlates of consciousness might be, by virtue of being able to direct its powers onto itself? (Thanks for the links you posted, by the way; those should be good for a few hours’ worth of procrastination!)

I think the classic research in that area, just in the unlikely case you’re not aware of it, is Benjamin Libet’s; his results very strongly imply that if there even is any choice in deciding upon our actions, it is made long before we become conscious of it.


Anyway, to get back on the subject of p-zombies. I think, in a way, the task of faking consciousness is analogous to the task of faking memory. Surely, a zombie could have the correct answer to the question “What was your answer to the last question?” pre-stored somewhere, if we’re not bothered by practical requirements for a moment. But how could it know that this answer is in fact the correct one to select? In order to perform this selection with some reliability, it would need to store information about prior events in the conversation – but ‘faking’ memory in such a way is indistinguishable from and equivalent to actually having memory! Sure, every once in a while, a zombie might hit on a satisfying answer by pure chance, but that’s certainly not something you could ascribe any great reliability to.
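
A toy sketch of the contrast I mean (the names and the canned answers are mine, purely for illustration): the only way to reliably select the “pre-stored” answer to “What was your answer to the last question?” is to keep a record of the conversation – at which point the fake memory just is memory.

```python
class ZombieInterviewee:
    """Contrast between pre-stored answers and genuine memory: picking the right
    canned reply to a question about the conversation's past requires consulting
    a record of that past, i.e. actually having a memory of it."""

    def __init__(self, canned_answers):
        self.canned_answers = canned_answers  # context-free, pre-stored replies
        self.history = []                     # record of the exchange so far

    def answer(self, question):
        if question == "What was your answer to the last question?":
            # No canned reply can be selected reliably without this record.
            reply = self.history[-1][1] if self.history else "Nothing yet."
        else:
            reply = self.canned_answers.get(question, "I like vanilla best.")
        self.history.append((question, reply))
        return reply

zombie = ZombieInterviewee({"How do you feel?": "I feel well."})
print(zombie.answer("How do you feel?"))                            # I feel well.
print(zombie.answer("What was your answer to the last question?"))  # I feel well.
```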

In a similar vein, since the zombie would not reliably know what to talk about when I probe his second-order thoughts and beliefs, he would need to actually ‘simulate to himself’ the experience of having these second-order mental states – but to him, this would be indistinguishable from and equivalent to actually having them.

Again we run the risk of defining our question in a way that presupposes the answer, just as I believe the whole p-zombie hypothetical does: if we define the only meaningful intelligence as being the human type, then indeed it will very likely be the human type, with consciousness and all that. But just as it is for p-zombies, the question becomes trite. It also reveals a convenient self-centeredness that will fail to recognize any other sentience even if we do see it, if it evolved to meet the demands of a different set of salient problems than ours did. I instead argue that human intelligence is just one possible sort and that we need to get over ourselves some.

In any case there is no reason to presume that an intelligent system (i.e. a novel-problem-solving system) cannot consist of multiple domains of intelligence to solve a wide range of problems, including an executive program (“one to rule them all”) which is tasked with the problem of deciding which domain a particular problem belongs in. There is not even any real need to include itself as an important member of the set under analysis unless social problems are one of the salient issues. Thus I am not completely convinced by claims of cephalopod consciousness. And btw, novel problem solving means more than having a memorized list of answers. New problems with novel solutions.

It was indeed that incompleteness bit that got Hofstadter going, as his first articulation of that strange loop concept was his well-known GEB.

I do believe that consciousness and “selfness” form a continuum. The more levels of information processing that are involved in those “tangled hierarchies”, and the more complex the strange-looping self-referentiality, the more conscious a system is. The question is both the complexity and the organization of the information processing. For all we know, ant colonies as super-organisms experience a form of consciousness as a quale, and for all we know so do some artificial systems. I do not think we have the ability to recognize it if we see it, other than by defining it as behaving like we do.

BTW, thank you for that Libet link. I simply could not recall his name and was indeed looking for those studies to link to, with no success.

I am aware of the difficulties of defining intelligence, and the dangers of biasing the discussion in such a way as to limit the options excessively; but, let’s look at things from a different vantage point: it’s widely believed that anything that can be computed, can be computed by a universal Turing machine. If now our brains have just that computational capacity, it’d mean that, in principle, universal Turing machines can be conscious (since in any case a universal Turing machine can always simulate any other Turing machine, provided it doesn’t run out of time and memory[sup]1[/sup] – this is more or less what I mean when I talk about the ‘adaptability’ of human-level intelligence); whether or not they are, is then basically just an issue of software.
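
To make the ‘simulation’ point a bit more concrete: a universal machine is, in essence, one fixed program that takes another machine’s description as data and reproduces its behaviour. A toy sketch, entirely my own and purely illustrative:

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    """One fixed simulator that, given any machine's transition rules plus an
    input tape, reproduces that machine's behaviour – the 'universal
    simulation' idea in miniature.
    rules: {(state, symbol): (new_state, new_symbol, move)} with move in {-1, 0, +1}.
    """
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = rules[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine-as-data: flip every bit of a binary string, then halt.
flipper = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_turing_machine(flipper, "10110"))  # -> 01001
```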

Thus, human-level intelligence should have at least the capacity for consciousness. If there then exists some means of communication with such an intelligence (and in principle, it seems that there would have to be one, their Turing machines being, however clumsily and slowly, generally able to simulate ours, and the codes they use for information interchange, and vice-versa), one could use the same tactics I’ve (following Dennett) outlined in regards to a philosophical zombie to force it into self-referentiality – to cause it to have second order beliefs, and to represent things to itself.

The question is, then, can something computationally less than Turing-complete be considered intelligent in any way? Frankly, I don’t really know what the limitations would be to such an intelligence (it probably wouldn’t be able to compute the values of Ackermann’s function, for instance), but I do have a feeling one would run into some strange inabilities; also, it would seem to be rather easy to turn anything that could be reasonably called ‘intelligent’ into a universal Turing machine by giving it a pencil, a stack of paper, and some symbols to manipulate.

[sup]1[/sup]This I could see as a sticking point – we may simply not have enough storage capacity or time to engage in any meaningful exchanges; however, I don’t think this detracts from the ‘in principle’ validity of my argument.
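
Regarding the Ackermann function mentioned above, here is a minimal sketch of why it is the stock example: it is total and computable, yet it grows faster than anything capturable by primitive recursion (roughly, programs restricted to loops whose bounds are fixed in advance), which is one natural example of a less-than-Turing-complete model of computation.

```python
import sys
sys.setrecursionlimit(100_000)  # the nested recursion gets deep quickly

def ackermann(m, n):
    """Ackermann's function: computable by any universal machine, but growing
    too fast for any fixed tower of bounded loops to keep up with."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m - 1, n - 1))

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
# ackermann(4, 2) already has 19,729 digits.
```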

Since we’re pretty invested in this particular thread already, I don’t see any reason to start a new one, as I mentioned doing. I would, however, like to restate the thought experiment, mainly to invite participation from new people, but also to have us all on the same page, leaving as little ambiguity as possible as to the details and parameters involved, since I expect the devil here is in the micro-details.

I prefer not to switch to the transporter, as DSeid suggests, because I believe we want the flexibility to be able to include more than one “survivor”, if needed, and because it’s easier to set more restrictive parameters this way.

I also think it would be of interest to all if, for those of you responding, you could tell us what worldview you subscribe to that influences your answer(s) (e.g. cognitive science, physicalism, theism, Buddhism, new age-ism…prismism…anti-socialism…whatever!). :smiley:

I’ll follow with analysis, charts, graphs, overhead projector slide presentations, testimonials and celebrity endorsements—eventually, but for now, let me just present the experiment and solicit your response.

**You or your Copy, You Decide**

Primary Purpose: To ascertain whether there is any qualitative difference, from your perspective, between you in the future and an exact copy of you.

Players:

  1. You, T-0: The original you at time zero (noon).
  2. You, T-10: The original you 10 minutes into the future (12:10pm).
  3. Copy: An exact duplicate of you made 10 minutes into the future (12:10pm).

Method of Duplication: From an incredibly detailed blueprint, mapping the exact composition and arrangement of your fundamental particles in sum total, your duplicate will be fabricated by some type of third party assembler, at exactly 10 minutes into the future (12:10pm), using its own box of tools and sub-atomic ingredient deck. You are mapped and duplicated instantaneously. At no time during the duplication process will physical contact be made between you and your copy. You are confident that the process is safe and that your copy will be conscious and retain all memories accrued by you up to the point of duplication.

Set-up: You’re told (and you believe) that your copy will pop into existence at 12:10pm and that he will be given $1-million; you will receive nothing. Unfortunately, only one of you is allowed to live past the point of duplication and you must choose, now, which one is to live, or opt out due to having no vested interest.

Assumptions:
• You like money and have no problem having someone give it to you.
• You’re not suicidal.
• Your feeling of self-preservation trumps any benevolent feelings of sacrifice for the survival of others.
• You appeal to logic over emotion.

So, You, T-0, what is your choice?

  1. You, T-10
  2. Copy
  3. It doesn’t matter to me one way or the other.

The reasoning behind your choice and your belief system is also appreciated.

Well, as stated, how could I allow you to disallow it!?! It may not be the correct model, but I sense no illogicality in it. We may have to get under the hood of that “as soon as the first microsecond passed” clause, however, and maybe even play around with the arrow of time, to really get to the crux of the matter. But, we can get into that later.

If I were, as the hypothetical states, thoroughly convinced that the cloning process is perfect, then I’d take the million (i.e. kill off original-me) in a heartbeat.

However, that might be a hard sell. There are several quantum consciousness models in which conscious states depend on underlying quantum states (presently, I don’t put too much stock in them). In such a scenario, perfect copying would simply not be possible, not just because of Heisenberg’s uncertainty (we’ve got the Heisenberg compensators for that, after all!), but because of the so-called ‘no-cloning theorem’, which is a somewhat stronger result prohibiting the creation of copies of arbitrary quantum states.

Under such a theory, it would be perfectly possible to subscribe to physicalism, and yet deny that the creation of a perfect copy is possible.

So, if there weren’t very strong arguments ruling out quantum mind models, my decision would be made considerably more difficult.
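
Just for completeness, the linearity argument behind the no-cloning theorem is short enough to sketch; this is the standard textbook argument, nothing specific to quantum mind models:

```latex
Suppose a single unitary $U$ could clone every state,
$U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle$.
For $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$, linearity gives
\[
  U\big((\alpha|0\rangle + \beta|1\rangle)\otimes|0\rangle\big)
    = \alpha\,|0\rangle|0\rangle + \beta\,|1\rangle|1\rangle ,
\]
whereas cloning the superposition itself would require
\[
  (\alpha|0\rangle + \beta|1\rangle)\otimes(\alpha|0\rangle + \beta|1\rangle)
    = \alpha^{2}\,|00\rangle + \alpha\beta\,|01\rangle + \alpha\beta\,|10\rangle + \beta^{2}\,|11\rangle .
\]
The two agree only if $\alpha\beta = 0$, so no single $U$ can copy arbitrary unknown states.
```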

No debate here. But that in no way implies that it must. It would likely develop a consciousness if either having a consciousness had utility in solving salient problems or, no matter what its components consisted of (neurons, circuits, ants, people …), it was organized (self-organized?) to process information in a manner that caused that quale to exist - which again I attribute to those tangled hierarchies of strange loops.

Given the definition of intelligence that I believe is the only meaningful one, the one I have articulated, then I would say the answer is clearly yes. Even if that way is very alien to human intelligence.

Let me put it like this: I believe that human society has an intelligence of its own, greater than the sum of its parts even. Does it follow that it must have a metaconsciousness of its own? (Leave aside the discussion of whether or not it might.)

Tibbytoes,

You see, right there you bias the game. Number 1 is “you” for that microsecond. Numbers 2 and 3 are both neither “you” and both “you” to the same degree. I doubt that either the one that has physical continuity or the one who had been physically discontinuous would gladly lay their life down for the other.

Let’s put it a slightly different way to clarify why the choice is not “you” or “the copy”. An egg is fertilized, and at some point it divides, creating the original developing zygote and an exact genetic “copy”. You are separated at birth and 30 years go by with no relationship between you. Now you are given the option to have your “copy” given a million dollars if you agree to allow yourself to be killed. Otherwise your twin, your exact genetic match, will be murdered. What do you do?

Is your answer different than in your copy circumstance? Than in the transporter one? And if so, why?