is materialism incompatible with "me"?

Sometimes it helps to see things in graphic form. Here are the three relevant (and two silly) models as I envision them, with a little analysis included. I believe #1 is correct, but I don’t believe I can make a valid case for it unless we can exclude #2 or #3 first. So, for those of you who don’t believe #1 is correct, please choose between #2 and #3 (I won’t accept #4 or #5). Or, if you don’t believe the model you accept is listed below, and another is needed, please describe it, and illustrate it if you can.

Key for Below:
VI = Vested Interest
Series of Red Letters = Original progressing forward in time from left to right.
Black Letters = The Duplicate progressing forward in time from the point of bifurcation

#1: Non-Branching/Single Pathway of VI Model: You have a VI only in the original + time. Although no measurable difference exists between the original and the copy to an outside observer (thereby in compliance with materialism, I believe), from each of their points of view a difference always exists. Assume the original and duplicate face each other at the instant of duplication: at no time will either perceive the other as himself. The counter-argument to this model seems to be that it is less parsimonious than #2 (or #3), and that for a difference to exist between the original + time and the copy, some type of fundamental particle permanence must exist in the brain for this to even be possible and still comply with physicalism. I think this needs further exploration. Those believing this model should answer “1. You, T-10” to the thought experiment.

a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
……………………………b
………………………………b
…………………………………b b b b b b b b b b b b b b b b
#2: Branching/No Pathway of VI Model: You have no actual VI in the original or any copy as time progresses. The illusion is a delusion; existence, ultimately, is futile, at least going forward in time. The original has no VI in the original + time, or in the copy; but, interestingly, turn the arrow of time around and note that the original + time and the copy both have a VI in the original. Those believing this model should answer “3. It doesn’t matter to me one way or the other” to the thought experiment.

a b c d e f g h i j k l m n o p q r s t u v w x y z…
.………………………a
.…………………………b
.…………………………… c d e f g h i j k l m n o p …

#3: Branching/Infinite Pathways of VI Model: You have a VI in the original + time and the copy, and they, conversely, have a VI in the original. Those believing this model should answer “2. Copy” to the thought experiment.

a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
……………………………a
………………………………a
…………………………………a a a a a a a a a a a a a a a a a

#4: Branching/Single Pathway of VI Model: You have a VI in your copy, but not in the original. This is obviously absurd and we can dismiss it out of hand. Those believing this model should take off their tin foil hats and rest awhile.

a a a a a a a a a a b b b b b b b b b b b b b b b b b b
……………………………a
………………………………a
…………………………………a a a a a a a a a a a a a a a a a

#5: Solipsistic: Just for the sake of completeness! :smiley: Those believing this model don’t exist.

b c d e f g h i j k l m n o p q r s t u v w x y z
……………………………a………………………………
b c d e f g h i j k l m n o p q r s t u v w x y z

…Not sure if I can reply for a while, so carry on…

I’m straining to grasp the mechanics of this question, let alone its relevancy. This debate is about consciousness, something zygotes don’t have…and won’t until they have obviously split into two distinct individuals, each with their own memories. Who gets the money? Who am I killing…and for what reason? Who’s going to murder my twin, me? Under no circumstances would I choose to kill myself, if that’s what you’re asking: not in the experiment I described, not in yours…and I won’t go through the transporter even if $1 million was on the other side.
Why? Because I want to live. Call me selfish.

Now I’ve really got to go. My twin is visiting, and he owes me a lot of money.

The premises are in conflict - to the degree I am logical, I do not value actual continuity of existence over perceived continuity of existence. It’s only my primitive animal brain, which can’t comprehend that T-10 and Copy will be equally me (since this didn’t happen much to my mammalian ancestors), which clings to the idea that physical continuity matters.

So, which am I? Logical? Or devoted to a kind of self-preservation which requires physical continuity?
My logical mind is a materialist - the copy is me. My animal brain, on the other hand, gets confused easily, and both rejects and accepts the idea of discontinuous existence without being able to articulate why or defend its opinions.

OK, let’s try this approach, then. You say that you doubt “You, T-10” or “Copy” would lay their lives down for each other? Why not? Because they know that they are not the same person, right? They are not the same person because every, what? second? nano-second? Planck time? they have diverged from one another, and, as such, have no vested interest in each other. If they did have a vested interest in each other, then they would lay down their lives for each other. Or, at least the poor one would lay down his life for the rich one, if one of them had to die. Right?

So, why is their relationship so different from the one they have with “You, T-0”? Haven’t they all really diverged from each other equally? This should become easier to envision if you take away the time factor and just look at the graphic relationship they have with one another. It should look something like this:


You,T-10                         You,T-0
        .                              .
           .                        .
             .                   .
                .             .
                  .        .
                      .
                      .
                      .
                      .
                      .
                      .
                  Copy

If this is correct, then “You, T-10’s” relationship with “Copy” should be the same as his relationship with “You, T-0”, which should be the same as “Copy’s” relationship with “You, T-0”. Either they all have a vested interest in each other, or none of them do.
Let’s call this the “What’s good for the goose is good for the gander”…or, “you can’t have your cake and eat it, too” model.

Now, I’ve really got to go, I feel a “splitting” headache coming on. :slight_smile:

I’ll throw out 1 right off. Selves are not static entities going aaaaa… into a future. Every “I at t=n” is a different self. Yes, those selves have continuity and shared history, and therefore identify as the same thing, but they are not. There is vested interest in providing for a future “I at t=n” that “I at t=0” believe I can identify as “me” - which generally requires that shared history and continuity. So 2 is tossed too. So since “I at t=0” have both those things with both versions of me at t=n, I have a vested interest in both. I do not want either future version of me to die. You create a Sophie’s choice. Still, Sophie had to choose, didn’t she? I am vested in both. One gets money but one has physical continuity, and emotionally, if not logically, that counts for something.

So the question is how much physical continuity is worth to that identification process. Logically, little, so “I at t=0” should choose the rich me to survive (I continue to object to calling it “a copy” as needlessly biased; it is a new me, not a copy of me). But it is hard not to attach some value emotionally to physical continuity. As begbert puts it: I cannot help but respond some with that “animal brain” honed by eons of evolution.

I would likely attempt to refuse to play, because once two different "I"s exist, one of me is going to have to die. One will be aware that his life is about to end. Forcing one of me to live through that anticipation of the end would not be worth a million to my other self.

Yet I would travel via transporter.

The difference is emotional and is created by the fact that both versions exist independently for some defined period of time.

The zygote example is meant to illustrate that different exact copies at t=0 are different individuals after that. Sure, the twins separated at birth have more time to have gone down their different branches and to individuate, but that process begins within the very first microsecond. It is just a matter of degree.

Which brings us to your final post, and here I would point out that vested interest is not all or none. Last week’s Science (discussing altruism) quoted J. B. S. Haldane as putting it like this: “Would I lay down my life to save my brother? No, but I would to save two brothers or eight cousins.” So these future "I"s have a vested interest in each other, but the very microsecond they begin to diverge that vested interest becomes less than 100%. How much less? A million dollars less? That is more of an emotional question than one of logic.
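
For concreteness, here is the arithmetic behind Haldane’s quip, sketched in Python (the relatedness coefficients are the standard textbook values; the function and its break-even threshold are just my own illustration):

```python
# Kin-selection arithmetic: sacrificing yourself 'pays' genetically when
# the summed relatedness of those saved is at least 1 (your own life).
# Standard coefficients: identical copy = 1.0, brother = 0.5, cousin = 0.125.
RELATEDNESS = {"identical_copy": 1.0, "brother": 0.5, "cousin": 0.125}

def worth_dying_for(relatives):
    """True if saving these relatives 'repays' the loss of one's own life."""
    return sum(RELATEDNESS[r] for r in relatives) >= 1.0

print(worth_dying_for(["brother"]))        # False -- 0.5 < 1
print(worth_dying_for(["brother"] * 2))    # True  -- 2 x 0.5 = 1
print(worth_dying_for(["cousin"] * 8))     # True  -- 8 x 0.125 = 1
```

On this accounting, a just-bifurcated copy starts at relatedness 1.0 and slides below it from the first microsecond of divergence.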

Model #1 is like a comfortable suit that may not be the first you try on at the haberdashery, but one you keep gazing at, longingly, out of the corner of your eye. You’re tempted by the cool sharkskin suit, till you put it on and realize the cloth has holes in it. You try your luck with the jazzy zoot suit, till you realize it has too much fabric and is complicated to the point of being absurd. Finally, you put on that comfortable leisure suit you originally felt too hip to consider and realize its simple lines are appealing, it drapes well on your frame…and it doesn’t make you look fat. Model #1 is not one you jump right into; you buy into it by default, after the other models lose their luster of credibility.

More to follow…

I should probably note that I’m not a great fan of qualia.

But it would seem to me that any less-than-Turing-complete intelligence would violate that definition (of being able to solve novel problems – was that your intended definition?), being essentially equivalent to some sort of fixed-program computer. Anything that can truly solve novel problems can solve the problem of emulating a Turing machine on paper (perhaps in some form of von Neumann architecture), it seems to me, and since this only amounts to a ‘memory upgrade’, I have trouble seeing how this should truly enhance its computational capacities.
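
To make the ‘emulating a Turing machine on paper’ point concrete, here is a minimal emulator sketch in Python (the rule encoding and the toy bit-flipping machine are my own conventions, nothing canonical); anyone who can follow this bookkeeping with pencil and paper is, memory limits aside, computationally universal:

```python
# A Turing machine as a lookup table: (state, symbol) -> (write, move, next state).
def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """Run a Turing machine; the tape is a string, blanks read as '_'."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, cells.get(head, "_"))]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Toy machine: flip every bit, halting at the first blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flipper, "10110"))  # -> 01001_
```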

I’m not really sure I could grant you the premise here (if I did, though, I’d probably agree that there’s no necessary consciousness emerging). You might say that human society has some capacity for solving novel problems, but I’m not sure how much meaning this has – for instance, consider the water in a puddle. The shape of its volume perfectly fits the shape of the puddle. If I now pump that water into a bucket, it solves the problem of shaping itself into a bucket-fitting shape instantly. I think it’s possible that the problems society solves (and also, the problems ant colonies solve) are rather more of this kind than a ‘truly novel’ kind. And yes, it is easy here to say that the only problem the volume of water solves is shaping itself according to some shape to fill, but that’s just because we have an easily attainable high-level description of water, which we typically lack for human society; if we had to approach water at a molecular level, things might appear far less clear-cut.

I am confused by your response HMHW. Solving novel problems does not mean the ability to solve any problem. My definition explicitly allows for very narrow domains of intelligence and very foreign ones. I doubt human intelligence qualifies as a universal Turing machine either.

As to my premise about the intelligence of human society …

You accept, I presume, that an individual human has some form of intelligence, yes? And that several humans of different talents and knowledge bases working together in a coordinated way can solve problems, novel problems, that none of them working alone could solve? It therefore follows that even at the level of the small contemporaneous group, the intelligence of the group is greater than that of its parts.

Do you think that any human in isolation, without the benefit of having been part of a society solving problems for many thousands of years, and without the benefit of other humans to communicate with, would be capable of creating that which we take for granted as the products of the human mind? Einstein could not have done what he did outside of the context of all that came before and of the community of scientists and others.

Human intelligence, even the most “gifted”, is fairly paltry. It even seems likely to me that other species have individual minds of greater problem-solving capacity than ours (albeit in different domains). But paltry individual human minds are units of a much more powerful intelligence unit - interacting human societies and cultures that spread across the years and across billions of minds, an intelligence that is much greater than the mere sum of its parts. It functions on a different time scale, and its great intelligence does not mandate that it have its own consciousness or free will. But if it is the “puddle” then each of us is a molecule of water.

I’m not sure if this will add anything for either of you, Half Man Half Wit and DSeid, but Conway’s Game of Life has been proven Turing complete.
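
In fact, the complete rule set of Life fits in a few lines; here is a minimal sketch (the set-of-live-cells representation is just one common convention, but the rules themselves are Conway’s):

```python
# The whole rule set: a live cell survives with 2 or 3 live neighbours;
# a dead cell becomes live with exactly 3. Nothing else.
from itertools import product

def neighbours(cell):
    x, y = cell
    return {(x + dx, y + dy) for dx, dy in product((-1, 0, 1), repeat=2)
            if (dx, dy) != (0, 0)}

def step(live):
    """One generation; `live` is the set of (x, y) coordinates of live cells."""
    candidates = live | {n for c in live for n in neighbours(c)}
    return {c for c in candidates
            if len(neighbours(c) & live) == 3
            or (c in live and len(neighbours(c) & live) == 2)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
print(step(glider))  # the glider, one generation on
```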

Clearly, one cannot get much less complex than the individual pixels involved in the game; therefore, it should not be surprising that a sufficiently complex aggregate of individuals (e.g., ants) may also be Turing complete.

Great googly-moogly, though, the idea of discovering (or even expressing) the rule set involved is mind-boggling.

Ok, I’ve just got time for a few words right now, I’ll try to elaborate more next time…

The main reason for my drawing attention to and asking you (generic you) to choose between models #2 and #3 (as described in post #81) is that I believe those two are too easily conflated, and have been at some level during our discussions—sort of like being wave-like some times and particle-like other times. Pitting model #1 against an elusive #3/#2 hybrid is an act of futility…or at least difficult to do. Eliminating one of those puts constraints on the other, making for an easier comparison to #1.

Apparently, the consensus of our short list of responders so far is for model #3 (i.e. choosing “Copy” to live in the thought experiment in post #78) and rejecting model #2 (i.e. “it doesn’t matter to me one way or the other”). Therefore, we may conclude that model #3 adherents believe a real vested interest exists between “You, T-0” and “You, T-10”, and between “You, T-0” and “Copy”.

Model #1 vs. Model #3

The first step is to determine whether or not both models are at least theoretically possible and in compliance with materialism (which we will assume is the correct philosophy). If one is not possible, the other wins by default. If neither is possible, we need to enlist an alternate model. If both are possible, only then must we proceed to the next step, Occam’s razor. I believe #1 wins on both counts, being possible and more parsimonious, therefore more probable.
I believe #3 is not possible, because there’s a built-in paradox in the system (ultimately the result of conflating #2 and #3). If I fail to persuade you of that, then we may need to get back in the ring and whip out our razors.

A Case for Model #1:
Let’s describe #1 in a little more detail, then figure out whether it’s theoretically possible. If it isn’t, #3 wins by default. This model says that there is a difference between “You, T-10” and “Copy”, and it is with regard to their respective vested-interest relationships with “You, T-0”. As far as this relationship is concerned (and only this relationship…we realize all three diverge from each other over time with regard to accumulated memories): “You, T-0” = “You, T-10”, but “You, T-0” ≠ “Copy”, and therefore “You, T-10” ≠ “Copy”.

There’s only one allowable VI pathway in the original you in this model (from the original you’s perspective) and that resides in your original brain forward and backward in time. If a copy of you is possible at all, then it must have its own unique VI in its own brain from the time of its creation on.

Here is my graphic interpretation of successive instances of accumulating memories in your brain with underlying supervenience (an overlying aura would look better, but I don’t know how to over-score characters):


 a b c d e f g h i j k l m n o p q r s t u v w x y z
----------------------------------------------------

The single, unbroken line indicates that, even though the supervening “aura” of awareness created by your brain is merely an illusion, the perceived feeling of continuity is real (i.e. there is a real VI continuum into the future). However, the trade-off is that it can’t split. If the real VI line exists but can’t split, where does it go if it has only one place to go: into a potential copy of itself, or does it remain in the original brain? (Hint: it rhymes with drain.)

Simple enough, but is this possible? How can there be any difference between any two minds, if their fundamental particle configuration and arrangement is exactly the same, particularly at the point before divergence commences? This seems to violate a major tenet of materialism.

Our physicalist* friends say this is only possible if there is some type of physical permanence that exists in the brain, for the duration of conscious life, which can somehow bind self-awareness to this unique physical continuum (do I have this correct, physicalists?)…converting, I suppose, a non-local event into a local event.

*(I don’t wish to separate myself from physicalists; I’m probably one myself, I just haven’t studied the manual and taken the membership exam yet.)

Is there any type of physical continuum present in the brain that may serve as a candidate for this unique VI pathway? Yes indeed: we’ve linked up-thread to cites noting that CNS neurons don’t regenerate or get replaced (a cellular-level physical continuum) and that the atoms of particular occipital-cortex neurons don’t turn over (an atomic-level continuum). So, continuity exists in the very place we need it to exist (the brain), for as long as we need it to exist (the lifespan of the active mind), to make consciousness a local event. And, to top it off, with regard to biology, this type of life-long continuity is rare. Coincidence?

(SentientMeat: “I’m happy to go with Frisen’s suggestion that this exception evolved because the configuration must remain more stable than the configuration of other cells.” Yes, but for what reason do they need to remain more stable?)

Why would evolution select for non-regenerating neurons if the only result is consciousness? Maybe self-awareness makes you horny? Perhaps it’s a chicken-and-egg kind of thing wherein the neurons evolved that way for some unrelated reason, but consciousness didn’t develop until they got to that point.
Personally, I think consciousness as a local event is most likely due to either the aforementioned cellular or atomic continuum, but I think there is a third possibility: since the original spark of conscious “current” begins (in the third trimester) from neural circuitry containing specific sub-atomic particles in space-time, this alone may serve as an “id branding” template, making the resultant current a unique entity. And, as long as the current (the process of self-awareness) remains intact and viable from that point on, there’s no reason it can’t remain a local event over time, even if physical continuity breaks down (this has interesting implications).

A Case Against #3
In this model you can imagine each successive instant as a freeze-frame particle configuration and arrangement referencing an accumulation of memories mapped from the previous instant. Adding its own memory configuration in real time, this is then referenced and changed in the next instant, on through time (is this a valid materialistic interpretation?). But, the way I see it, in the reality of this model, nothing is really being passed on as a continuum, memory-wise (like a wave, for instance), from one instant to the next, forward or backward in time. Each instant is more like a self-contained, discrete particle, or capsule of memories, represented by a particular configuration and arrangement of fundamental particles. One instant does not really need the one before or the one after in order to exist as a memory-bank capsule, does it? Each freeze-frame instant of particle configuration and arrangement corresponding to a particular bank of memories may exist as a self-sufficient entity, with no dependency on, or interest in, any other particle configuration and arrangement. This is the main reason, in the #2, #3…and maybe even #1 model world view, that memories may, theoretically at least, be able to split off, or be duplicated into multiple Copies. If this were a real hard-wired continuum, it couldn’t really branch, could it?

So far so good. With respect to memories, all three models may (or may not) follow the same mechanics. If this were all there was to it, we could have any number of copies made…of course, they would all be zombies! (Alright, I said I wasn’t going to bring zombies up again, but you needed a little scare to keep you from nodding off.)

But, there is something else going on, another layer either acting with, or as one with, these memory frames. This is the supervenience trick of your brain making it appear to come “alive”, to be self-aware. This is really the nuts and bolts of your consciousness; it’s not the memories per se, but the illusion your brain creates referencing these memories. This is what we really need to analyze in more detail. I don’t believe anyone can rightfully claim to know exactly how the process works; does it work as one with memories, or is it simply very closely associated with them? If it works as one with the memories, then it should follow the same rules. Here is my interpretation and representation of the memory/supervenience interface, over time, for Model #3:


  a b c d e f g h i j k l m n o p q r s t u v w x y z
------------------ \i j k l m n o p q r s t u v w x y z

So we have a series of instances of particle configurations and arrangements referencing memories, each being self-contained and independent. Reassemble this exact configuration and arrangement of memory-mapped particles anywhere in the Universe, and they will be the same. It’s a non-local event. If this alone were consciousness, we must conclude that one memory instant has no vested interest in any other one, correct?

But, the supervening aura attached to these discrete packets of memories feels continuous. I feel like I have a real vested interest in the future me, and that the past instances of me had a vested interest in me. Can this be real, or is it merely a delusion created by your benevolent mind in order to keep you sane?

In this model, memories and the supervening “aura” of self-awareness may be thought of as the same thing, correct? So, for the sake of simplicity, let’s do away with the time factor and also say there are only 10 particles configured and arranged in a particular way corresponding to the sum total of my consciousness at a particular point in time. This little freeze-frame of consciousness should be able to be duplicated anywhere. It need not reference any other set of particles to exist. It’s independent and has no real VI. If you could experience the supervening “aura” of self-awareness at this instant, what would it feel like? It would feel like you had a future, and it would feel like you had a past. That is the delusion your brain worked into the machine, right? But does this instant of self-awareness have a real future or a real past? No, there was nothing there before or after. Nothing at all. Memories = Awareness = No continuity = No real VI.

So, who lives, you or your rich copy? The logical answer is, “it doesn’t matter to me one way or the other”…But, that’s a model #2 answer, the one you threw out.

And, here is where I think the conflating #2 and #3 problem comes in. The real graphic representation of model #3 looks like this:


 a b c d e f g h i j k l m n o p q r s t u v w x y z
------------------\i j k l m n o p q r s t u v w x y z

But, this isn’t logical, because it implies a branching, or splitting, of a hard wired local event. This would involve literally being in two places at one time, in my opinion, and therefore, is a paradox.

I must confess, the real reason for my posting this is that I’m concerned that some of you model #2 or #3 believers may make a very regrettable choice one day, and I’d like to talk you out of it. When you reach the logical conclusion that extrapolating your model further means that your Copy is essentially no different than, well, anybody…or anything else that is not you, you may get confused and take a gamble that you shouldn’t (if someone offers to give, say, a duck $1 million if you shoot yourself in the head, please think twice before planning an expensive vacation in some exotic pond). :smiley:

(Heed this advice: Even if it walks like you and quacks like you, it’s a duck).

Summary:
Model #1: Assumes only one vested-interest pathway is allowed per individual, and it is a real pathway, not imaginary. It may exist as a local event expressed over time because it has physical continuity of some sort. If copies of the individual are possible at all, then they must have a unique VI pathway that begins at the time of their creation and progresses forward in time. VI = Real Local Event

Model #3: Assumes multiple vested-interest pathways are allowed per individual, but they cannot be real pathways; only the illusion of continuity exists. As such, this is non-local and no physical continuity is needed. Each instant a “new” you is created and the “old” you no longer exists. You and any copies of you have no real future. In the next instant, you and your copies believe you had a future in them, and that they have a future in the next generation, but they don’t. VI = Illusory Non-Local Event

Model #2: Assumes multiple VI pathways are allowed per individual, and they are real pathways, not imaginary. As a non-local event, no physical continuity is needed. Action at a distance is allowed and may be expressed as a neural network of transcendent consciousness–some bastardized version of the One Mind Theory, perhaps.

Bonus: Something else to consider for those of you who believe you have a future in your copy. What happens if your copy is made from a map of your mind the way it was 10 minutes ago? Can you still have a future (a real vested interest) in something that was only briefly you in the past?

Conclusion:
You may choose the universe where you can look forward to vacationing in St. Croix next month, firm in the belief that it will really be you drinking mai tais on the beach; or the universe where it’s logical to shoot yourself in the head if you see a rich duck waddle by; or the universe that allows, say, an infinite number of SentientMeat-heads to array themselves into a neural network of shared consciousnesses, with the ability to perceive your wife behind closed doors as she undresses. :slight_smile:

Now, which model did you choose, again?

Sorry Tibbytoes, but you really seem to want to insist that the rules are as you set them, and you set them by some pretty invalid assumptions. It’s your thought experiment, so go right ahead, but with those as the guidelines, and with your apparent lack of interest in understanding what others are trying to express, I will need to opt out of further participation in this part of the thread. Take care.

Dig, thanks for the links, but honestly I think the Turing-complete issue is a bit of a distraction. Perhaps, though, you’d be interested in commenting in the thread I have created regarding definitions and how we seem to be talking across each other in some of these discussions because we use these words to mean different things.

It seemed like a good way to regress from said sidetrack. I originally wrote it just to Half Man Half Wit, but didn’t want to presume that you were familiar with it.

It’s a bit difficult to see how we could have the concept of Turing completeness and know how to implement it if our minds themselves weren’t Turing complete – as I said, I could certainly just grab a pencil and paper and emulate any Turing machine, if perhaps rather slowly and clumsily. Of course, we’d only be finite automata, and thus eventually run into memory problems (at the latest, once we’ve exhausted the computational capacity of the observable universe), but that goes for pretty much every real implementation of computation (Omega Point hypotheses notwithstanding). I’m pretty sure, in other words, that at least in principle we can compute everything computable; and if you don’t buy into some exotic hypercomputing notions, that’s the limit for intelligence, at least as far as power goes. Other implementations may be faster, and may have more memory, but there’s nothing that they could compute that – in principle! – we couldn’t.

Yes, the individual units from which to build a Turing machine surely don’t need to be very complicated things – neither neurons nor transistors are, either, at least from a functional viewpoint. But, with either of these, or the cells of Life or Rule 110, you need to get them into the right patterns to get something useful out of them, which are, as you say (and as your link shows), usually quite complex; it would take some convincing to make me believe that ant colonies generally exhibit such structures.
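
Just to underline how simple those units are: the complete Rule 110 update is an eight-entry lookup table. A sketch (the zero-padding at the edges is my own convention; a true Rule 110 tape is unbounded):

```python
# Rule 110: each cell's next state depends only on (left, self, right).
RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    """One synchronous update of a row of cells (edges padded with 0)."""
    padded = [0] + cells + [0]
    return [RULE_110[tuple(padded[i - 1:i + 2])]
            for i in range(1, len(padded) - 1)]

row = [0] * 30 + [1]
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

The hard part, as with ant colonies, is not the update rule but engineering initial patterns that compute anything interesting.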

And DSeid, I’m sorry if I don’t address your arguments re the intelligence of human society in greater detail; suffice it to say that I don’t consider mere knowledge creation (and a greater ability for it) to be all that indicative of intelligence – if Einstein had lived 200 years, he probably would have created a lot more knowledge than he did during his actual lifespan; would 200-year Einstein therefore be more intelligent than regular Einstein? And even the ideas of a group are still ideas of individual minds, if perhaps prompted by stimulation the individual minds would not otherwise have received. But such stimulation can also come from books, and I wouldn’t say that I + a book on quantum mechanics am truly more intelligent than I otherwise am.

Well, it is kind of hard for us to proceed without even being able to agree on what intelligence means.

It seems clear to me that you without a set of knowledge to refer to and to learn from, a tool kit of sorts, are less intelligent than you with a knowledge set, a tool kit. So to me, yes, you plus a book on quantum mechanics (written in a way that you were able to learn and apply the material) are more intelligent in the domain of solving problems regarding quantum mechanics than you without the book. How you can argue that such is not the case is beyond me.

Whatever.

I choose “2. Copy” to live and receive the $1M.

This is because I believe that my memories are what “I” am, and that the actual individual atoms which encode those memories do not also need to be “saved” in order for “me” to step out of the machine. Indeed, I am a Copy of “myself” 10 minutes ago, who thinks I’m still me because I have access to (largely) the same memories.

Part of the “personality” of this system called “me” is self-preservation. For some reason (which I can’t/won’t justify here), I desire these copies to continue to be made for as long as possible, for as soon as the copying process stops and the last copy begins its thermodynamically inevitable descent into disorder and noise, “I” die. One can say that “I” have a vested interest in beings which share my memories (memes) continuing to respire. (This is actually similar to deliberately having children who share your genes.)

Of course, having more than one such being running around might be socially inconvenient, so it’s good that the game allows only one to live. In which case, “I” choose the “me” with the extra cash, thanks. (Not that money makes much difference to me one way or the other, but there we are.)

As for the ‘branching’ graphical models:
- I subscribe to the #3: infinite branching model, in which “I” have a vested interest in all my future meme-children.
- I consider #1 and #4, in which a vested interest is only held in one arbitrary branch, to be logically equivalent.
- I’m not sure I really understand #2, but it seems to mean not even having a vested interest in myself, right now, which might violate the other rules regarding suicidal tendencies.

(nice coding by the way, TibbyToes!)

I reject this. “Vested Interest” is just as much a characteristic of the system as memories, so “I” have a vested interest in any identical system. The individual atoms don’t matter. The Vested Interest can split, just as “I” can split into two future “me”s.

You see, what we’re doing here is copying a whole person, along with all their personality traits and inclinations, including not wanting to die. That’s why it’s crucial that the copying procedure is presented to “me” as a straightforward procedure which a single “I” emerges from unscathed. If we wait until after the procedure to terminate one person, that person will justifiably go berserk and scream blue murder. Because that is what I would do!

No, you don’t. Physicalism is the philosophy that only physical configurations and processes are necessary to explain the universe and everything in it. It doesn’t matter precisely which individual atoms are involved in those processes, since atoms have identical physical properties. That evolution does favour holding onto individual atoms in deep CNS neurons (perhaps because precise configuration is so critical compared to other neurons and cells) is merely an interesting fact.

Because the precise strengths of their connections to, on average, 7000 other neurons are how memories are encoded. It might well be that human memory just cannot work if neurons regenerate, since the precise configuration is not copied in the new neuron. The machine copies precisely, so this mundane engineering difficulty is bypassed.
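
To make “memories live in the connection strengths” concrete, here is a toy Hopfield-style network (purely illustrative – real synapses are vastly messier – with an arbitrary eight-“neuron” pattern of my own choosing). The memory resides entirely in the weight matrix, so an exact copy of the weights recalls exactly:

```python
# Toy Hopfield network: the stored pattern lives only in the weights.
import numpy as np

def store(pattern):
    """Hebbian storage: weights are the outer product, zero self-connections."""
    w = np.outer(pattern, pattern).astype(float)
    np.fill_diagonal(w, 0)
    return w

def recall(w, state, steps=5):
    """Pull a (possibly corrupted) state back toward the stored pattern."""
    for _ in range(steps):
        state = np.sign(w @ state)
    return state.astype(int)

memory = np.array([1, -1, 1, 1, -1, -1, 1, -1])
weights = store(memory)
noisy = memory.copy()
noisy[:2] *= -1                          # corrupt two 'neurons'
print(recall(weights, noisy))            # recovers the stored memory
print(recall(weights.copy(), noisy))     # an exact copy of the weights does too
```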

Well, you might have stumbled upon a reason why the machine is impossible to build, and I’d agree with you on that. However, let’s assume that in order to work the machine exploits some (ahem) Aspect of non-locality. (Note that non-local action is not as impossible as you think, and no, this isn’t woo-woo telepathy I’m talking about but well-known quantum mechanics experiments.) I don’t see why one person with a vested interest becoming two people with a vested interest, one of whom is killed, is logically impossible.

Yes: me with a 10-minute memory lapse. Again, I’ve had my share of them already.

My summary: physical continuity is irrelevant – merely a practical shortcut for evolution to keep the configuration stable. Our machine replicates this exactly anyway. Only the illusion of continuity exists in all the entities at 12:10. I’m not sure where the “shared consciousness” strawman came from.

Well, I’m sorry I can’t just up and agree with you, but it’s just honestly not how I see things (and I’d be surprised if what you propose is a very widespread view, not that that implies anything; it’s just not something I’ve come across before). The ability to apply what’s written in the book, to learn the concepts, integrate them into my thinking, understand and employ the necessary abstractions, that’s where intelligence comes in. In the extreme case, a book with answers to an intelligence test would surely make me perform much better at an intelligence test; but I wouldn’t think that it actually made me any smarter.

Actually, HMHW and DSeid, the “you + book” scenario does shed a little light on my take on consciousness/intelligence.

I said earlier that:

The trouble with old Fritz is that he is very limited in what he ‘senses’. He has truly vast memory banks and processing power, but his ‘moment’ is frozen whenever his opponent stops the time clock. Fritz ‘lives’ only in the most recent move - one could almost say that the opponent stops Fritz’s clock as well, freezing him into a specific moment from which Fritz laboriously calculates his next.

This is why Fritz’s consciousness might be gravely diminished compared to even Cambrian trilobites, who must ‘live in the moment’ to a far greater degree. The illusion of time passing may be a crucial illusion for beings who have to anticipate where the next threat is coming from, or which way the threatened food will jump. “You” have access to an even greater timespan, accessing memories from years ago and calculating threats years into the future. But it is this unified ‘flow’ (illusory or not) which I would say defines my feeling of consciousness, and all the databases or knowledge in all the books in the world will in no way ‘elevate’ my consciousness one iota unless it is somehow brought into that millisecond-by-millisecond ‘flow’, in which connections to memories I have already made are formed.

Ultimately, yes, it’s all just information and signal processing. But maybe “living in the real world” engenders very specific coding which simply cannot be replicated by a database. I would be unsurprised (though slightly disappointed) to find out that squishy biological neurons are somehow necessary for consciousness in some purely mundane engineering sense.

As for libraries, ant colonies, the population of China or other networks being conscious, a key aspect of consciousness to me is various levels of memory, with distinct cognitive architecture for each, which I believe contribute greatly to this “unified flow of time” I mentioned above. Just a load of sensory input and a load of memory isn’t enough, IMO. That input must be sorted into different levels of memory, and later accessed in a way which ‘cross-references’ with other memories (like how neurons do it) in order for a ‘personal’ consciousness to arise.