Ok, I browsed through the thread now. Didn’t read quite all of it, but I think I have a decent overview of the discussion.
First of all, Tibby, don’t feel bad about people acting the way they did in the physics forum. I don’t frequent it because it’s infested with trolls and morons.
Yes, my reasons are completely different. But before we go there, it looks like some words about self-consciousness are in order.
It looks like your intuition drove you to explain consciousness as a very low-level physical process, somehow tied to our brain processes, which feed sensory data to this tangible “self-being”. Some people hold a similar view and try to explain consciousness through quantum mechanics.
However, there is no indication apart from our intuition that this is the case. Furthermore, as we peel down to the lower abstraction levels of physical phenomena, the “components of nature” get increasingly simpler. The actual method by which this low-level “matter” gives rise to mind always goes completely unexplained. In fact, “consciousness is a property of matter!” is the closest thing to an “explanation” I’ve ever heard from a proponent of this idea.
But then there are some very good reasons to believe just the opposite is true: that it is the very highest level of our learning processes that supports our self-consciousness. Or “POV”, or feeling of existence, or “qualia” as some people like to call it (usually people who try to use it to prove our “self” is a tangible thing, not realizing qualia is just a fancy word for any conscious experience).
So, what is mind?
One way to put it is that this is a problem of “predicting the motion of the mind from the underlying substrate”. If we assume that the laws of nature are indeed explicit, there is nothing there that would explain how we are in control of ourselves and our thoughts. Except that there is!
It is this ostensible conflict between the explicit laws of nature and our subjective experience of self-control that is the key point in understanding just what we are. After all the bloodshed over semantic issues has subsided, it is here where the views finally separate. This is why I find it imperative to explain how an explicit process becomes self-conscious.
A short answer:
Because of evolution, the animal branch of survival machines adopted the survival tactic of using a nervous system to react rapidly to dangers in a dynamic environment. As such nervous systems grow more sophisticated, animals can observe their surroundings, draw simple assumptions about how the world around them works, and thus predict the unfolding of a potentially lethal situation (by interpreting the environment “correctly”) and dodge dangers before it is too late. Brains are basically learning machines, because learning(/adaptation) within an organism’s lifetime is much more efficient than the evolutionary “learning” that happens through DNA.
In humans, this has gone so far that after we live and learn for a few years, we finally make the semantic assumption “I exist”, and thus come to interpret the information from the world around us (through our senses) as something that is “happening to me”.
Mind you, all learning like this is completely semantic. For a baby, all the information coming in through the sensory systems is completely alien at first. It bears no meaning, as there is no prior information to associate it with. We are forced to build a web of associations - a worldview - which is not sitting on any solid base at all. Meanings, associations, and self-supporting circles of beliefs are all it is.
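To make that “self-supporting circle” a bit more concrete, here is a toy sketch of a worldview as nothing but a web of associations. This is purely my own illustration; names like Worldview and associate are made up for the example:

```python
# Toy model of a worldview as a pure web of associations: a token's
# "meaning" is nothing but its links to other tokens, so the whole
# structure is self-supporting rather than grounded in anything.
from collections import defaultdict

class Worldview:
    def __init__(self):
        self.links = defaultdict(set)  # token -> associated tokens

    def associate(self, a, b):
        # Learning = adding an association between two tokens.
        self.links[a].add(b)
        self.links[b].add(a)

    def meaning(self, token):
        # A token "means" only the other tokens it is linked to.
        return self.links[token]

w = Worldview()
w.associate("falling rock", "danger")
w.associate("danger", "pain")
w.associate("pain", "falling rock")   # the circle closes on itself
print(w.meaning("falling rock"))      # {'danger', 'pain'}
```

The point of the toy: ask what any token “means” and all you ever get back is more tokens. There is no ground floor.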
A better (longer) answer I wrote some time ago is here:
http://tinyurl.com/9mjrc
I would write some things a little bit differently now, but I hope you can get past that.
(One thing people usually find false is the example of a rock rolling down a hill and an animal avoiding it. They think it is instinctive to avoid falling rocks. But the fact of the matter is that we first need to actually learn something about physics and the world before we know falling rocks are dangerous to us. In any case, any example where you know you are consciously avoiding danger would work there.)
After you’ve read it, note that such semantic learning is likely to happen at a much lower level than we intuitively think. When we try to tackle any new task, we use our conscious abstraction level at first, but eventually the task falls below our conscious level (and we perform it much more effectively). This is true for walking, talking, seeing…
Babies most likely see the world upside down at first - if it is fair to say that their brain deciphers the visual information from the eyes as “visual information” at all. There is nothing to base this strange stream of information on, as no worldview has been built yet, and the brain is forced to build whatever “circle of beliefs” seems to make sense as a base for further learning. One such assumption that “just seems to make sense” is that we see the world upright even though the image is upside down on our retina. It simply falls better in line with the information from the other senses that way. If you wear glasses with mirrors so that you see everything upside down, then after a couple of weeks you will see upright again. You have become so used to the task of inverting the image in your mind that you don’t even perform it consciously anymore. (And when you remove the glasses, say hello to the upside-down world again.)
So, we are learning machines, and our brain has simply drawn the logical assumption that “I” exist. We have come to possess the semantic concepts of “existence” and “self”. The concept of “self” is just a token in our worldview, and our worldview is what we use as the base to interpret our sensory information.
The learning process itself IS EXPLICIT, but the worldview that gets built is COMPLETELY subject to our experiences. We are different people because we have had different experiences of life. We feel in control of ourselves and our thoughts because we have come to interpret the world as something that happens to us, but strictly speaking, we are slaves of our knowledge base(/worldview) as we make our decisions. Even the decision to postpone a decision is something we make according to what we have learned and what we assume will work.
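If it helps, here is a minimal sketch of that claim, again with made-up names: the update rule below is identical and fully explicit for both learners, yet their worldviews (and the “decisions” read off them) differ purely because their experiences did:

```python
# The update rule is identical and fully explicit for both learners;
# they end up "different people" purely because the experiences
# fed in were different.
def learn(worldview, situation, lesson):
    updated = dict(worldview)
    updated[situation] = lesson      # explicit, deterministic update
    return updated

def decide(worldview, situation, default="approach"):
    # "Decisions" just read back what the worldview has stored.
    return worldview.get(situation, default)

alice = learn({}, "dog", "avoid")    # Alice was bitten once
bob = learn({}, "dog", "approach")   # Bob grew up with a friendly dog

print(decide(alice, "dog"))  # avoid
print(decide(bob, "dog"))    # approach
```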
What we are is basically our memories, or our worldview, as was mentioned in this thread. If our worldview were reset and built again from scratch, we would not be the same person anymore in any real sense of the word. If our worldview were replaced by someone else’s, our body would basically come to host a copy of that other person. Our own self would just disappear, or die if you will.
I am struggling to keep this short, but the fact of the matter is that this idea is hugely interconnected with almost everything I see around me. It explains all the phenomena of the mind I can think of (split personality disorders, autistic savants…), and if you ask me, the “hard problem” is sufficiently solved.
I think that should get you started in seeing what we are as something other than an unexplained sub-atomic layer.
What about all that duplication and killing stuff then?
Well, it is true that we are just our memories, but this does not mean I’d be happy to get duplicated and have “the original” killed.
If I agreed to that, I should also agree to a scenario where I get duplicated first and then killed the next day. “Hi, we duplicated you yesterday. Here’s another you. We will give this other guy a million dollars if you let us kill you.”
It doesn’t change the scenario if the killing and the duplication occur at the same time, or even if the killing happens first. If I get hit by a bus and my backup copy is then awakened back at home, I, as the original learning machine, do not gain the experience of shifting into a brand new body. Even if my brain state from the moment of death were restored, the original learning machine would not be affected.
I wouldn’t actually mind being copied in this manner, but I would not choose to get hit by a bus just to get home faster either. The physical brain that is smeared all over the pavement is not gonna have the experience of getting home.
In other words, as a learning machine/process, I find it important to actually have that experience of “shifting” onto someone else’s “hardware” before I’d agree to the killing stuff. The original lacks that experience. The original is a completely different learning machine from the copy. The copy is not eligible to decide the fate of the original.
Furthermore, the mitosis example reveals something more about just WHAT we are. Suppose evolution had given us the capability of splitting in two just like that; then at the moment of splitting you would simply become two different learning machines. Both of them are you, a new you. No information would pass between them anymore, and killing either one would be just as sad as killing any other person on this planet.
Neither one would see the splitting event as if someone had been killed or had disappeared. They would find it perfectly natural to split a worldview into two copies, go their own ways, and learn new things in life. If they were to merge back into one later, it would simply involve merging their worldviews back into one (resolving any conflicting ideas). You’d find it natural to explain to your friends, “I was in the Caribbean as the Left one, and in Antarctica as the Right one. Turns out we both ate lobster at the same time, what are the odds?!”
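For what it’s worth, here is how I picture the split/merge mechanically, as a toy sketch with made-up names and an arbitrary conflict policy (keep both ideas):

```python
# Splitting is just duplicating the worldview; merging folds the two
# diverged copies back together, resolving conflicts somehow (here:
# keep both ideas, which is one arbitrary policy for this sketch).
import copy

def split(worldview):
    return copy.deepcopy(worldview), copy.deepcopy(worldview)

def merge(left, right):
    merged = dict(left)
    for key, value in right.items():
        if key in merged and merged[key] != value:
            merged[key] = (merged[key], value)  # conflicting ideas
        else:
            merged[key] = value
    return merged

me = {"home": "somewhere"}
left, right = split(me)
left["trip"] = "Caribbean"     # the Left one
right["trip"] = "Antarctica"   # the Right one
print(merge(left, right))
# {'home': 'somewhere', 'trip': ('Caribbean', 'Antarctica')}
```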
(I’m not sure whether the above is as plain to see for you as it is for me, after I’ve dealt with my idea of consciousness for as long as I have… In any case, the only indication of my idea being “the truth” is, as always, its lack of paradoxical problems.)
Although I must add that the closest thing to a paradoxical problem arises when you consider whether it is possible to simulate a mind by simulating the physical matter of a brain in discrete timesteps. In a completely virtual environment the rate of the timesteps doesn’t make any difference either; we could update the system every 10 minutes if we wanted to.
The simulation doesn’t flow forward like physical matter, but if the simulation is accurate enough, a process/behaviour should arise that basically insists it is experiencing qualia, just as we insist on it. Its behaviour would be the same as that of its real physical counterpart, and it would be capable of arguing on the internet to absolutely no end about whether it experiences qualia or not. And if it knew it was functioning through discrete timesteps, it might become convinced on those grounds that its qualia must, in fact, be an illusion. You might too.
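A toy illustration of why the timestep rate can’t matter from the inside (all names mine, purely illustrative): the simulated process only ever sees its own step count, never the wall-clock gap between updates:

```python
# The simulated process only ever sees its own step count; the
# wall-clock gap between updates is invisible from the inside, so
# stepping once every 10 minutes changes nothing for it.
import time

def step(state):
    # Stand-in for "advance the simulated physics by one tick".
    return {"tick": state["tick"] + 1}

def run(steps, delay_seconds=0.0):
    state = {"tick": 0}
    for _ in range(steps):
        time.sleep(delay_seconds)  # outside time: invisible inside
        state = step(state)
    return state

# Same final state whether we step as fast as we can or once
# every 10 minutes (delay_seconds=600):
print(run(5))  # {'tick': 5}
```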
Well, I have a lot to say about this, but I’ve been going on for too long already… 