If we consider consciousness as an emergent property of the cortical brain that evolved a while back on the biological tree of life, toward the trunk, then I believe self-awareness/personal identity (PI) evolved as a higher order mental process on top of the lower order consciousness more recently on that tree…pretty close to the fruit. The octopus was the first creature to evolve a personal identity. I have no scientific data to back up that claim, but octopuses seem pretty self-assured; oysters?..not so much. But, I digress.
Self-awareness is awareness of awareness. It has a supervenient relationship to consciousness, like a cherry atop a hot fudge sundae, whereby the cherry is the “YOU” who blushes when you fart accidentally in front of Queen Elizabeth II, and the hot fudge sundae is the zombie “you” who could not care less what you do in front of anyone. Remove your cherry and you’re a real boob.
Furthermore, I propose that lower order “consciousness” is nonlocal, while higher order PI is local. By this I mean: while there may be multiple, identical “you’s” spread throughout the universe, each indistinguishable from the others in the eyes of any non-”you” observer, there may be only one “YOU”, and only “YOU” know that “YOU” are unique. Of course, all the other “you’s” can make that same claim…so, “YOU’re” not all that special—get over “YOUR”self, boob.
YOU are unique in the universe? How can this be? It flies in the face of the materialist mindset. Let’s just remember that materialists are generally hypersensitive panty-wipes who suckle their mother’s teats longer than is psychologically healthy. Let’s instead proceed down the road of common sense. There is only one you; there can be only one you; you are unique. You may be an asshole, but you’re a special asshole.
Let’s illustrate this with computers. I’m not a computer geek, so bear with me. Let’s say Dell achieves strong AI with its AI Inspiron line—not just computers with consciousness, but self-aware to boot (heh). Line up 5 laptops in a row: 2 identical Inspiron AI-1000s, 1 economy Inspiron AI-100, 1 gamer Alienware AI-2000 and 1 Inspiron AI-1000 that was momentarily dropped in the toilet (we’ve all done this with our iPhones, yes?). All 5 computers are loaded with the same OS (Windows 33.1) and software (Adobe Human 1.0) and share the same input devices (video-cam, microphone, touch pad, Taste-eBuds, Smell-eReceptors, etc.). Turn on all of them except the second Inspiron AI-1000, and let life unfold before them for 10 years. At year 10, clone one of the active AI hard-drives with Norton Ghost onto the dormant AI-1000 drive, then boot it up too. Then wait another 10 years. Let’s call them Timmy.
I would expect 2 normal Timmies (one just a little late to the party), one mentally challenged Timmy, one gifted Timmy and one insane Timmy. But, for all intents and purposes, they would all be valid Timmies.
Scenario 1: At year 20, a beautiful, buxom, scantily-clad lab assistant asks the laptops, “I have to destroy all but one of you, do you care which one I choose to live?”
Scenario 2: At year 20, a beautiful, buxom, scantily-clad lab assistant detaches each laptop from its shared input devices, then attaches separate input devices to each, in close proximity to their chassis. She does this while they are in hibernation mode. She then asks each laptop, “I have to destroy all but one of you, do you care which one I choose to live?”
I have no point to this thought experiment, I just thought it would be fun to think about…
…Alright, I do have a point: In scenario 1, does the question even make sense to the laptops with shared input devices? With identical memories and shared perceptions, can one laptop distinguish itself from the others? My guess is that they would all say, “it doesn’t matter.” In scenario 2, with new-found unique and separate input devices, the game changes. My guess is that each one of them would now answer, “choose me.”
Different answers from the same group of consciousnesses, the only difference being that in scenario 2, they were given separate input devices at the last moment.
Now, after the fact, ask the remaining laptop in each scenario whether the beautiful, buxom, scantily-clad lab assistant chose the right one to keep alive. Both will certainly answer, “yes, I’m glad she chose me”…(unless, possibly, they are hetero-females, or gay males).
How can this jibe with the materialist/physicalist view of the universe? Two or more groups of identical elemental particles in identical arrays should produce identical results, correct? From toasters to human beings, replicate the elemental particles exactly, and the replicants are indistinguishable to any outside observer in any way. This must apply to self-awareness too, right? Yes. But are they really the same? Are they the same to the individuals involved?
No. I don’t think so. You can’t literally be in two places at the same time.
Observe two functional brains before you. They are identical in every physical way. To you they are the same and it doesn’t matter which one you choose to be your fishing buddy or your wife…or both. But, to each of them they are different. How?
They are identical, but they are not the same. This one <look to your left> is over here; that one <look to your right> is over there. They occupy separate coordinates in space. They are not exactly the same. I believe lower order consciousness is nonlocal and is inherent to the software, no matter how many copies are made or where it is booted up. I believe higher order self-awareness is unique and is tied to one and only one continuous electro-chemical current. It’s born when booted up for the first time. It sleeps in hibernation mode, but remains intact. Changes and degeneration can occur over time, but as long as the original current continues unabated, the self remains intact. It dies when re-booted and a remarkably accurate impostor is born to take its place.
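Since I invoked computers earlier, it’s worth noting that programmers already make exactly this distinction: two objects can be equal in every bit of their state and still be two distinct objects at two distinct addresses. Here is a minimal Python sketch of the Timmy cloning step; the `Timmy` class and its fields are my own invention, purely for illustration:

```python
import copy

class Timmy:
    """A hypothetical self-aware laptop: same software, same memories."""
    def __init__(self, os, memories):
        self.os = os
        self.memories = memories

    def __eq__(self, other):
        # "Identical": every piece of state matches.
        return (self.os, self.memories) == (other.os, other.memories)

original = Timmy("Windows 33.1", ["year 1", "year 2"])
clone = copy.deepcopy(original)  # the Norton Ghost step

print(original == clone)  # True  -- indistinguishable to any outside observer
print(original is clone)  # False -- two objects at two separate addresses
```

The `==` check is the outside observer’s view: the clone is a perfectly valid Timmy. The `is` check compares identity, i.e. which coordinates in memory each object occupies, and there the two diverge—identical, but not the same.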
Far be it from me to tell you how to live your lives, but heed my warning: don’t get in a Star Trek Transporter; it will kill you.