is materialism incompatible with "me"?

Water being wet is not what I’d call emergent. The wetness of an aggregate of H₂O molecules is readily predictable from the electro-chemical properties of the molecules themselves. OK, let’s be clear: I mean “strong emergence” in the “greater than the sum of its parts” sense. Same for pressure and temperature - they are directly related to the properties of the gas atoms themselves.

I believe my philosophy of the mind is in line with that of physicalism, in general, and if I had to choose a flavor, I would align with the worldview of property dualism, or non-reductive physicalism. I do hold one belief that seems to stick in the craw of certain physicalists, however, and that is with regard to self awareness.

My Take:
Zombie: Consciousness − “a unique collection of memories captured from a unique memory-capturing biorobot” = 0
Me: Consciousness − “a unique collection of memories captured from a unique memory-capturing biorobot” = Self Awareness

I break the consciousness of your average, non-zombie person into two separate entities: the acquisition and processing of memories + self awareness. Why split these two things apart, rather than simply linking them together as one holistic process of the mind? Because, in my opinion, they have at least one crucial difference between them: the memory part of consciousness may be replicated with no resultant difference between original and copy (all zombie copies are exactly alike), while replication of “self awareness” does result in a difference between original and any or all copies (“I” vs. “Them”).

I think we can all agree that we perceive continuity of consciousness from one moment to the next. It doesn’t feel like we die when we go to sleep only to wake up as a new person, does it? So, we have two models to choose from:

  1. Self awareness is, each instant, dying and being reborn: it’s a long and continuous chain of newborn consciousnesses reaching into the future, integrating and processing the accumulated memories of a growing list of dead “you”s. There is nothing unique about “you”, and “you” may be recreated at any time or place, willy-nilly, with the proper arrangement and array of particles.
  2. Self awareness is exactly how you perceive it: one continuous process that sparks on sometime in your third trimester and extinguishes at brain death (unless transferred to a new matrix). There may be any number of duplicate “you”s, each a valid person with their own spark of self awareness, but there is only one “you”.

Neither choice breaks any law of nature, as far as I know, but only one is correct. Which one? The complicated one that involves supervenience of a species-wide mass delusion of continuity (i.e. #1), or the simple one that feels right (i.e. #2)? I hang my hat on #2, the same one I believe William of Ockham would choose. #1 is messy and inelegant; #2 is nice and orderly. Yes, our minds do create elaborate delusions at times, but aren’t these exceptions to the rule, not to be the go-to explanation at every fork in the theory-of-mind road? I’m not saying supervenience (itself a delusion of sorts) is not involved in #2, but it would at least be an illusion based on reality as it is actually perceived to be (i.e. continuity of consciousness).

Well, if you believe #2, as I do, that self awareness sparks on for the first time when the necessary circuitry of the cerebral cortex is in place by the third trimester and global neuronal integration begins (signaled by EEG rhythm) and continues uninterrupted until brain death (or, maybe, beyond), then I don’t believe you can accept the premise that consciousness is infinitely replicable with no qualitative difference. There is no difference between the original and any of the duplicates to any outside observer, but from the point of view of each of them, they are uniquely an “I” and the others are “them”.

Well, if you discount atoms lost through the metabolic waste of the brain (I refuse to tie my consciousness to brain poop), then I think a fairly significant percentage of core atoms does remain in the cerebral cortex throughout life, but I can’t find any reliable cites for this, so I can’t claim validity. In any case, I don’t believe it’s necessary to invoke continuity at this level anyway (yes, we’ve grappled, in lighthearted fashion, over this before, ~4 years ago). Venturing a guess, I believe self awareness, as a process linked to matter, becomes imprinted on the matrix of the brain, at the cellular level, at the moment of EEG activation, and from that moment on, it is unique, non-divisible and non-replicable. Even if you could replicate the circuitry and chemo-electric activity exactly, down to the sub-atomic level, only one “current” was turned on and imprinted on those particular cells of that particular brain at that particular time. And, as long as that current isn’t turned off (sleep and general anesthesia don’t count), it will continue on as a unique consciousness that can only exist in one place in time. Now, I do believe that this unique consciousness is transferable to another physical matrix, in theory, be it biological or artificial, but it must be transferred like a glass of water, with direct physical contact and such that only one consciousness remains (pour “you” into a new brain as your old brain empties of “you”). I also believe “you” may remain intact if each of your neurons were replaced one by one over time with artificial neurons - as long as the “current” is unbroken during the process.

I think a prime question to ask yourself is: why exactly are the neurons of the cerebral cortex some of the special minority of cells in higher life forms that endure from birth to death without renewal? It makes sense for some cells with purely mechanical functions to have evolved this way (e.g. tooth enamel), to save biological resources and all, but why CNS cells? You would think these would be among the first cells your body would want renewed on a fairly regular basis. Why select for a mind predisposed to any degree of dulling before the child-bearing/rearing years have passed? Parents with the sharpest minds should better gather food and evade predators (I’d like mom to have a quick fight-or-flight response if a lion approaches while she’s suckling me), thus siring at least as many offspring as mind-aging dullards.

If we can agree that many, if not most, other types of functional cells evolved before CNS neurons, and that the vast majority of those cells regenerate fairly regularly, then we must conclude that evolution appears to have gone out of its way to make CNS neurons different: non-regenerating and long-living. Why? Surely, regenerating neurons could make and maintain long-term memories as efficiently as non-regenerating ones. What function of neurons would not benefit from, or at least not be harmed by, regeneration or replacement over time?

I propose that there is permanence in cerebral cortex cells in order to provide physical continuity for the birth, development and life-long physical matter/chemo-electric process hybrid that we call “sense of self”. I believe each person’s sense of self is real and entirely unique in the universe. And, while other parts of your consciousness may be reproducible (i.e. long term memories), your sense of self is not. I’m not certain why this “unique self awareness” is something evolution would select for, perhaps it’s only a tangential association, but that’s a question for another day.

Mmm, on review, maybe I shouldn’t be using the term “determinism” so freely. Strictly, what I mean is “predictability”.

Ah, thanks for the clarification. I’m not actually sure, however, that consciousness is a strongly emergent feature, nor am I sure that such features exist at all; but that’s really a bit of a diversion from the discussion at hand. (I do readily accept the existence of features not predictable from a microscopic description, though – for instance, any chaotic system’s evolution can’t be perfectly computed, yet I would argue that it is still completely determined by the laws describing its components.)

Well, I consider zombies to be as logically impossible as a multicoloured blank cinema screen, but you should also note that “self awareness” is also linked to sensory memory strongly enough to make separating the two impossible IMO.

Precisely! Each one’s claim to be “the real me” is as valid as another’s, since it will feel just as continuous to each of them - they will simply wake up somewhere different to where they remember going to sleep (and I’ve done that plenty of times myself!). You can arbitrarily label the one with the oldest neurons “the one and only you” if you like, but it’s really an arbitrary label, and old Bill Ockham knew all about the perils of attaching significance to arbitrary labels.

Incidentally, I think you’re right about the atoms of the occipital-cortex neurons not turning over - if they were, the carbon-14 atoms would have been replaced along with everything else. I’m happy to go with Frisén’s suggestion that this exception evolved because the configuration must remain more stable than the configuration of other cells. But in our hypothetical Star Trek transporter, the configuration is recreated exactly anyway. I still don’t see why there would be a qualitative difference in the experience of the duplicate wakers compared to that of the original, nor how such a position is less parsimonious. Remember, Ockham’s Razor is about explanatory entities, not physical entities. I’m proposing a single explanation (“configuration”) which applies to both duplicates and original. You are introducing an extra explanatory entity (“atoms” + “configuration”) to distinguish the former from the latter.

Oh, and Calculon, it’s good to see an erudite non-physicalist around here. Welcome.

I feel my physicalist position is now free from inconsistencies, although I admit it required biting some pretty big bullets to become so! I’d be honoured to explain to you why I consider my position reasonable, and would ask only that you entertain the possibility that I might convince you of this. That is not to say I want to convince you that physicalism is true (i.e. sound), only that it is logically consistent (i.e. valid).

I also promise not to inquire as to your own position or shift the burden of proof in a weaselling manner. (I might, however, suggest that one or other criticism of physicalism applies just as easily to other philosophies.) Deal?

Not Night-of-the-Living-Dead zombies, my fine Cardiffian friend, philosophical zombies! They may have no more basis in reality, but perhaps they are a bit more logical—and I hear they are more socially acceptable at dinner parties.

And, as for arbitrary labeling, I must heartily disagree. On the surface, I admit, my bone of contention does seem to be nothing more than semantic shenanigans between the generic you and the specific “you”, but I assure you that it’s not. I simply believe that the diametrically opposed answers we gave to the question I posed 4 years ago represent two philosophy-of-the-mind sub-models that, though seemingly minor, in actuality have quite divergent and profound implications. At least, I think so.

If I may paraphrase and condense the thought experiment question that was posed, and your answer to it, it was essentially:
*You’re told that an exact duplicate of yourself will be made in 10 minutes and this duplicate will be given $1 million. Only one of you may survive post-duplication and you (the original you being asked the question) must decide which one lives. Assumptions: you like money; you’re not suicidal; you appeal to logic, not emotion.*
(The original question involved black socks on the original and, I believe, 3 duplicates, #1 receiving the $1 million. SentientMeat chose the duplicate with the money.)

(SM, let me know if you believe your answer does not reflect your opinion on the question as rewritten above; I believe they are essentially the same.)

I chose the original “me” to live.

The debate that followed involved appeals to Occam’s razor; unique and continuous consciousness vs. stop-and-go consciousness dragging along memories; questions about the permanency of CNS cells…and much else. Things got a bit off track, but it was an interesting 280-post thread, in my opinion.

At least we now both agree that there is some permanence at the atomic level of the CNS (though apparently we don’t agree on its significance with regard to self awareness…yet).

I believe my premise is more logical; you believe yours is more logical. I think both models comply with a materialistic worldview, and either (or neither) may be correct. It’s just an odds thing, really. While proof either way is not at hand, and both, I believe, qualify as non-falsifiable, all we can really expect to do, through debate, is shift each other’s % conviction of correctness one way or the other. I admit that I had a significant shift of conviction in the correctness of my model (a hearty 75% in the beginning) as a result of our prior debate. However, through sheer will and logic, you and Mangetout managed to shrink my conviction of correctness down to a paltry 74%…so, there you go, I’m open to new ideas and willing to renegotiate my belief system as needed. :wink:

Anyway, despite the long debate, eventual exasperation on both sides and, ultimately, no conversion of allegiance at hand…I still believe there is something lurking beneath the surface of this thought experiment that has not yet been revealed, is not arbitrary, and is worthwhile to explore. (A 500-lb gorilla sitting in your skull.) I feel we came close to debating that which needs exploration, but as a quark is to a proton, we just didn’t get down to the right level. I’ll put the blame on myself for being unable to fully express the problem as I perceive it (and I think you may perceive it too, at times, when self-contemplation takes hold of you). It’s a very fleeting perception that I find quite hard to hold on to, let alone adequately express. It’s a position that needs to be backed into, one formulated more by exclusion and paradox avoidance.

So, after a rethinking of approach, I now believe we just need a slightly re-worked thought experiment, one with a third choice, followed by analysis and debate! So as not to hijack this thread any longer, I’ll start a new one…pretty soon, when I get something typed up.

SentientMeat, your participation in the new thread will be, as always, welcome. However, I suspect your time may be as constrained as mine these days, so whatever you can contribute will be appreciated. In fact, I really just plan to post the new thought experiment, followed by a little analysis, then read what choices and arguments others make.

Yes, P-zombies are indeed what I was talking about. I simply cannot see how they see things without experiencing them if all the subsequent indicators (including telling you what they experienced) are identical to mine.

My answer stands, but note that from a practical point of view I would not choose it in the real world since I could not trust that the machine duplicates exactly. For sake of argument, let’s assume I could be convinced of this.

In fact, this does have a bearing on the significance of the longevity of deep CNS neurons. The study’s author himself suggested that this tiny minority of neurons might not be turned over like other cells because their exact configuration might thereby be compromised. (After all, one atom is literally identical to another, but growing a new neuron which crucially retains the precise connections to some 7000 others, at exactly the same ‘strength’, is clearly far more intractable than haphazardly repairing neuron damage in your finger, say.)

And yes, I agree that both positions are consistent with physicalism. I would only suggest that “configuration” alone is more parsimonious than “configuration + original atoms”, and that all consciousness is “stop and go, dragging memories with it” which just seems continuous. Indeed, I’m not sure there’s much more I could say which I didn’t say in the original thread, but I’m happy to try again.

Again, my position stated as succinctly as possible is this: Every night I die, and someone who thinks they’re me wakes up. The new person’s access to that unique string of memories, as coded in the precise configuration of neural circuitry (which could hypothetically be copied, but evolution has settled on the more straightforward trick of just keeping the original atoms), makes consciousness seem continuous.

Weave away with your new thread, my friend - I’ll input when I can.

My two cents on p-zombies: the argument always seemed stupid and trite to me.

It boils down to this - if we assume that we had something that was exactly materially the same as you and materially functioned the same as you, except that it didn’t have consciousness, well, that would disprove that the material make-up caused consciousness, wouldn’t it? Yes indeedy. Brilliant, that. Except for that assumption. If a “zombie” was materially me in every way, then it would, for that moment at least, be me in every way, including my consciousness.

Ah, DS, but that would be to assume that the material-you-in-every-way includes your consciousness. P-zombies ask us to consider the logical possibility that consciousness is not included in this way.

Some people say they can conceive of such beings. I cannot. I can only leave such ‘top-down’ philosophers, whose powers of conception bestride mine like colossi, to enjoy the conclusions they derive therefrom. Their conceivable worlds swim with P-zombies reporting their experiences without having them, souls looking down on those left behind, and Beings vying for Supremacy like Somali warlords. Yet their conceptual abilities often suddenly fail at the tasks I find easy, such as imagining biorobots who think they have free will, and we are left looking at one another like a married couple recently separated on grounds of irreconcilable differences.

If I didn’t know better, I’d think I was a P-zombie, and I’m merely telling you what I experience despite nobody really “being home”.

Hang on, what if … :eek:

So, assuming there must be a ghost in the machine, then there is a ghost in the machine. Logically consistent, as are all identity statements, but stupid. The argument is a trite tautology and always has been, and I’ve never gotten why anyone takes it seriously.
As to the identical copies with one getting money and the other being destroyed … modify it some. You live in the emerging Star Trek universe. Transporters in truth destroy an object that exists, convert it into energy, and reconstruct an exact copy a split moment later. If you agree to be the first human to participate in that process, the exact copy that is made of you (the euphemistically “transported” you) gets whatever passes for a million dollars in that universe. Do you agree to do it?

Essentially the same argument logically.

I’d do it.

Well, again, I think old Davey Chalmers would suggest that he is not assuming a ghost in the machine, just considering whether such ghosts are logically impossible. He would say “well, I can conceive such beings, so they must at least be possible”. I would reply “Kudos to you, O great conceiver. My mental conceiving modules return an ERROR: DOES NOT COMPUTE. I stand in awe of you and your amazing hair.”

And yes, the Star Trek transporter is essentially what TibbyToes and I are talking about. I wouldn’t agree to be the first such human guinea pig in case this (or Evil Kirk, T. Riker or Tuvok-Neelix) popped out the other end, but I would take the $1M if I was convinced that the replica would be exact.

Indeed, if the original wasn’t destroyed, there would be two people who thought they were me, who would be just as adamant that they were the ones who entered the machine (because they remember doing so as plain as day). And they’d both be right.

Whoa Nellie!
If you’re hanging this obvious p-zombie illogicality on me, you’re attributing an equivalency precept to me that I have not made nor believe.
As far as I’m concerned, I have no problem staying in compliance with materialism, as such:
Me = My Copy
p-zombie = p-zombie Copy
Me (as a Non-Zombie) (does not =) Me (as a Zombie) Copy. (BTW, how do you code the slash = in HTML?)
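(Answering my own parenthetical, in case anyone else wonders: I believe the usual trick is the HTML entity `&ne;`, so that `Me &ne; Zombie-Me Copy` should render as “Me ≠ Zombie-Me Copy” - though I’m going from memory there, so double-check before relying on it.)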

So, in my not-quite-yet-posted new thread on the subject, we need not invoke zombies of any type. We will, however, explore a little more deeply, the question of equivalency of Me = My Copy and its ramifications.

Well, I can sort of see both sides of the issue: for a zombie to give a phenomenological account identical to my own, he must especially be able to give an account of his own self-referentiality (because I sure seem to be able to do that). However, in order to do this, he must have some means of simulating, to himself, the experience of such a self-referentiality – how else would he get something to give a phenomenological account of? But then, what’s the difference between truly being able to examine one’s own cognitive processes, gaining second order beliefs about oneself and the like, and simulating this examination and the second order beliefs? Nobody could really tell one from the other, especially not the zombie – to him, it would look like our own conscious experience does to us. But then, he’d be just as conscious as we are, and therefore, no zombie!

On the other hand, each intelligent behaviour is, in and of itself, nothing but the generation of an output from some input, presumably in an ordinary algorithmic way if we’re not some freaky super-Turing machines or something like that. So, there wouldn’t seem to be a behaviour that, in order to be exhibited, necessitates the consciousness of whatever exhibits it! The same thing goes for any phenomenological account my hypothetical zombie might give of his supposed ‘inner workings’. One could easily imagine a being with ‘hard-wired’ answers to certain questions an interviewer might ask, answers which may well be indistinguishable from answers I myself might give. Even if that list were rather small, it might, by pure chance, happen that the interviewer only asked those questions the zombie has a ready-made answer for, though such a coincidence would be rather staggering for any remotely reasonable length of conversation. It might seem that asking questions that in some way refer to previous points in the conversation (and thereby forcing a sort of self-referentiality) would necessarily trip such a being up, but even there, since the number of sentences of a given length in English is large, but finite, the zombie might get it right by pure chance (once every few billion lifetimes of the universe, perhaps, but it might), and if it can do that, it could probably greatly improve its chances using some manner of clever heuristic algorithms. Whether or not that could be done well enough to fool anybody, though, I do not know (however, I’ll note that last year’s Turing test star, Elbot, did fool three of his twelve conversation partners into thinking it was human – no small feat, in my eyes!).
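To make that ‘hard-wired plus heuristics’ idea concrete, here is a minimal sketch in Python - purely my own toy illustration, with every name, canned answer and heuristic invented for the example, not anyone’s actual proposal - of a responder that produces passable replies from a lookup table plus a cheap deflection trick, with no understanding anywhere in sight:

```python
# Toy "zombie" responder: canned answers plus a deflection heuristic.
# Nothing here models understanding; it only maps input strings to outputs.

CANNED_ANSWERS = {
    "how are you?": "Fine, thanks - a bit tired, honestly.",
    "do you have experiences?": "Of course! The smell of coffee, the colour red...",
    "what did you just say?": "I was telling you how I feel.",
}

def zombie_reply(question: str) -> str:
    key = question.strip().lower()
    if key in CANNED_ANSWERS:
        return CANNED_ANSWERS[key]  # hard-wired path
    # Heuristic fallback: deflect by turning the question back on the asker,
    # the classic chatbot trick for masking a lack of comprehension.
    return f'Interesting - why do you ask "{question.strip()}"?'

if __name__ == "__main__":
    for q in ["Do you have experiences?", "What's it like to see red?"]:
        print(q, "->", zombie_reply(q))
```

A real attempt would of course need something vastly cleverer, but the point stands: input goes in, plausible output comes out, and at no step does anything need to be ‘home’.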

The accounts both the simulated self-referential and the hard-wired, heuristic zombie gave might conceivably be indistinguishable to any cursory inspection; however, there would be a subtle difference between them that’s sorta like the difference between random and pseudo-random numbers: for any given (finite) string of numbers, it is impossible to tell whether or not they are genuinely random; in fact, any given finite string can be produced algorithmically. However, with a genuinely random sequence, you will never be able to predict the next digit; any pseudo-random sequence eventually reveals its algorithmic nature.
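That random-versus-pseudo-random difference can be made concrete with another toy Python sketch (the generator constants below are just the classic textbook LCG parameters; everything else is invented for illustration): any finite prefix of the pseudo-random stream can look patternless, yet anyone who knows the algorithm and seed can reproduce every ‘next digit’ exactly - precisely the shortcut a genuinely random source never offers.

```python
import random

def lcg_digits(seed: int, a: int = 1103515245, c: int = 12345, m: int = 2**31):
    """Linear congruential generator, emitting one pseudo-random digit at a time."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state % 10

gen = lcg_digits(seed=42)
digits = [next(gen) for _ in range(20)]
print(digits)  # looks as patternless as any random string of digits...

# ...but knowing the rule and the seed makes every next digit predictable:
gen2 = lcg_digits(seed=42)
assert [next(gen2) for _ in range(20)] == digits

# A genuinely random source (OS entropy here, as a stand-in) offers no such rule:
print([random.SystemRandom().randrange(10) for _ in range(20)])
```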

In a similar vein, it would seem impossible to hard-wire a zombie-like being to account for all conceivable situations in a conversation; however, it seems at least remotely possible that no finite amount of investigation ever uncovers the difference.

My main question in all of this is whether or not intelligence implies consciousness, or if an intelligent, yet unconscious being is imaginable. The argument in the first paragraph almost manages to convince me that it’s not, that, in order to exhibit true intelligence rather than just aping it, a being must be capable of at least some sort of self-referential cognition; however, I really badly want my robot servant, without having to worry whether or not it develops a sense of self and stages an uprising against my oppression, so I guess I kind of have to hold out hope on that front. :stuck_out_tongue:

Indeed, the transporter may substitute for an omnipotent fundamental particle re-assembler, but I refuse to change the thought experiment, on the grounds that it’s not as fun. :smiley:

However, in reference to the transporter:

It may take a few steps and a bit of analysis, but one of the things I’d like to explore in the new thread is the validity of this declaration:

“I’d do it” is not a logical answer, and, as such, should even be stricken as a choice. There are only two logical choices: “No” and “It doesn’t matter to me one way or the other”.

Why not? I repeat, if “I” is a configuration (pardon the grammar!), it’s just shorthand for “this configuration would do it”, this being logically equivalent to (and as logically valid as) “It [transportation] doesn’t matter to me one way or the other”, agreed?

HMHW, yes, realistic statistical arguments are the key to spotting an attempt at a P-zombie me, but they are by definition not good enough to qualify as a true P-zombie me. The true P-zombie me wouldn’t communicate by email or message board, but by conversing with you face to face, with all the memories and face-muscle nuances that you rely on when you’re assessing whether a real person is lying.

Personally, I would ask P-zombie me which Perry Bible Fellowship strips were funniest, and insert ‘fake’ non-funny strips to see if he could tell the difference from those actually written by Gurewitch. He wouldn’t simply guess the same ones as me (which isn’t statistically difficult). He would instantly laugh out loud with a completely natural-looking laugh at the correct ones. I simply cannot conceive of a being which could do this but which couldn’t even really see the strips in the first place, laughing hard at only the funniest (to me) strips despite there being “nobody home”.

I curse you for stealing the last 10 minutes of my life.

Those 3 judges must have been blethering idiots.

No.

I believe there are only two models that don’t violate logic, not three:

  1. The Non-branching Model: Only one “I” may exist. Copies, if possible, must each be a unique “I”. Original and copy are identical and interchangeable in every respect, except with respect to each other, from each one’s point of view.

  2. The Branching, Intermittent, Dying Chain of Memories Model: You are a “unique collection of memories captured from a unique memory-capturing biorobot”, the memories, and the supervenience thereof, being strung one after another into the future. “You” essentially die every moment, having no real vested interest in the next incarnation of you.

  3. The Branching, Intermittent, Living Chain of Memories Model: You are a “unique collection of memories captured from a unique memory-capturing biorobot”, the memories, and the supervenience thereof, being strung one after another into the future. “You” persist from one moment to the next and have a vested interest in the next incarnation of you. This may, of course, be the exact same mechanism of action (MOA) involved with the first model, except for the fact that #1 may not branch, while this one can.

Apologies for the tortuous names I gave to these models…off the top of my head, I couldn’t think of any more concise way to accurately describe them…

I believe #1 and #2 are the only models that don’t violate logic. #1 needs a little perseverance to shoehorn into compliance with materialism, but, in the end, I think it fits nicely. I believe #3 allows for a paradox that can’t be resolved (yes, I recall not being able to convince you of this before, but I’ve thought of better ways to express it…and should be able to convince you within a few dozen posts into the new thread :eek:). I believe #1 is the correct model (perhaps with the MOA of #3); and I believe you are really in the #2 camp (the “when I wake up, I’m a new me” camp), not #3, as you may temporarily think you are. :wink:

…Ipso facto, if I’m correct and you are a #2 believer, and #3 isn’t allowable, then the logical answer for you would be, “It doesn’t matter to me one way or the other”, because “you” have no vested interest in the “you” who transports or the one who doesn’t - “you”, for all intents and purposes, at any point in the future, are dead either way.

Well, the problem is telling the two cases apart, isn’t it? If the one ‘guessing’ at the correct jokes (perhaps in a similar way to Amazon guessing at what kind of music I might like – though, hopefully, a few generations more advanced than that!) is just as convincing as the one who honestly finds them funny (a mere matter of engineering, it would seem), then you wouldn’t really gain any information out of the whole process.

Incidentally, if you don’t mind, I’d very much like to hear your thoughts on whether or not intelligence is possible without consciousness; I seem to be unable to settle the matter in any sort of satisfying way on my own.

Yeah, he’s not exactly convincing, is he? However, in a situation where one didn’t know that he’s artificial, things might look different – one might attribute to eccentricity and idiosyncrasy what otherwise is a telltale sign of a lack of true understanding. I’ve actually wondered how psychological factors influence the appraisal of the testers in a Turing test situation – I could easily see people being reluctant to brand their opponent a machine, for fear of insulting another human being; the other way around, there’s no such bias, since we typically don’t worry about insulting machines.