Downloading Your Consciousness Just Before Death

Having taken midazolam, which causes anterograde amnesia (you can’t form memories for a period after taking it), I can tell you it’s like you weren’t actually conscious at all. One minute you’re in one place, talking about getting the drug, and <blink!> you’re somewhere else. From your perspective, you skipped forward in time. From everyone else’s, a period of time passed during which you interacted with various things normally. A roommate got it for an esophageal endoscopy once, and I dropped him off and picked him up after the procedure. From my perspective, he walked right out, complained that his throat was a little sore, and suggested we go grab lunch. We did, then headed home, where he took a nap. Later that afternoon, he emerged and asked me where we’d had lunch: he had no recollection of anything after they gave him the midazolam, and had deduced we’d gotten lunch because he was full and had gone into the procedure hungry.

I suspect that, from the perspective of one of the backed-up-and-restored people, it would be similar: they’d remember everything up to the backup, and <blink> they’d be somewhere else.
Or if it were real-time, they’d recall having died, then <blink> they’d be somewhere else. In that case it would (IMO) effectively be the same person, unlike the backup example, where for a period of time there were potentially two diverging individuals sharing the same set of memories up to a certain point.

I really have no idea what you mean by “computational”, and it pretty much certainly has no bearing on how consciousness works, particularly in a materialist system. In a materialist system, consciousness is an emergent consequence of the physical behavior of the brain. The brain is physical and follows physical rules. It can therefore be simulated, by simulating those physical rules. Doing that in an accurate simulation will necessarily give you the same emergent behaviors, because causality. Which means you’ll get a mind simulated in the computer. Pretty straightforward, give or take the unbelievably massive amounts of storage and processing power it would take to simulate the behavior of that much physical matter in detail. It certainly is theoretically possible, though.
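To make the “simulate the physical rules and the behavior comes along” point concrete, here’s a deliberately crude sketch of my own, nothing like a real brain model, with every constant made up: fifty leaky integrate-and-fire neurons whose firing patterns emerge from the update rule alone rather than being programmed in.

```python
# Toy "physics only" simulation: leaky integrate-and-fire neurons.
# Nothing below encodes behavior directly; the spiking pattern
# emerges from the update rule. All constants are arbitrary stand-ins.
import random

N = 50              # number of neurons
THRESHOLD = 1.0     # membrane potential needed to spike
LEAK = 0.9          # per-step decay of membrane potential
WEIGHT = 0.1        # charge delivered downstream by each spike
DRIVE = 0.12        # constant external input per step

random.seed(0)
potentials = [random.random() for _ in range(N)]
# Sparse random wiring: each neuron excites five random targets.
targets = [random.sample(range(N), 5) for _ in range(N)]

for step in range(100):
    spikes = [i for i, v in enumerate(potentials) if v >= THRESHOLD]
    for i in spikes:
        potentials[i] = 0.0               # reset after spiking
        for j in targets[i]:
            potentials[j] += WEIGHT       # excite downstream neurons
    potentials = [v * LEAK + DRIVE for v in potentials]
    if spikes:
        print(f"step {step:3d}: {len(spikes)} neurons fired")
```

Scale that idea up from fifty toy units to a detailed physical model of roughly 86 billion neurons and you have the storage and processing problem mentioned above.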

No. I find your point incoherent.

I agree. The confusion here is the assumption that a consciousness could be loaded into a general-purpose computer as-is, whereas if you could perfectly emulate the brain, with all its peculiarities and inputs, the consciousness would come along with it more or less automatically.

The standard science-fictional treatment gets this wrong also.

As someone who has written a lot of simulations, I find the biggest benefit is that you can monitor the internals without interfering with the run. That’s not something you can do in real life. Running a test on a real IC is fast, but seeing inside is damn difficult. Not true in a simulated version.
The big win for a brain simulation based on simulated neurons would be looking to see what happens in different psychological states.
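Here’s the kind of thing I mean, as a minimal hypothetical sketch: a probe hook that records any internal variable at every step of a stepped simulation, with nothing analogous to measurement disturbance.

```python
# The "free observability" point: in a simulation you can record any
# internal variable at every step without disturbing the run. The
# simulated system here is trivial (a Fibonacci stepper); the probe
# hook is the point.

def simulate(steps, probe=None):
    state = {"x": 0, "y": 1}
    for t in range(steps):
        state["x"], state["y"] = state["y"], state["x"] + state["y"]
        if probe is not None:
            probe(t, dict(state))  # hand out a copy; observing can't perturb

trace = []
simulate(10, probe=lambda t, s: trace.append((t, s["x"])))
print(trace)  # full internal history, which no oscilloscope gives you for free
```

A neuron-level brain simulation would admit exactly this kind of probe, which is what would make the psychological-state experiments possible.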

I very much doubt that what we think of as ourselves is made up of our conscious minds alone. It seems to me that a great deal of thinking, remembering, etc. is done by other portions of our minds entirely, and then interpreted by the conscious mind, which may (or may not) claim the conclusion was reached by its own processes alone.

And the mind as a whole is also influenced by the rest of the body – hormonal shifts, exhaustion, hunger, satiation, pain, exercise, physical joy, etc. all affect it.

So I think that what would be downloaded would only be a part of me; and would probably rapidly diverge even from that part of me as it is now, because it wouldn’t have the inputs coming from the rest of me.

And the me that’s the body (including the brain) would die when the body died. So no, sticking a bit of me in a computer wouldn’t give me immortality. Even if you could get all of me copied somehow, that still wouldn’t be immortality: this me would still die. (Would you be willing to have yourself copied, while in decent health, if someone were waiting to shoot you the minute the copy was finished?) Whether there’d be some other sense in doing one or both of those things I don’t know. I also don’t know how my conscious mind would take to being stuck in a box, even a moving one; but I find the idea uncomfortable enough that I’d hesitate to do that to a copy, or a partial copy.

Well, thanks to Turing-completeness, any “general purpose” computer could be made to run a simulation program that could handle the intricacies of emulating the brain/mind and the environment it would be reacting to.

That simulation program would probably take more than one CD to install, though.

Like your infinite regress.

One theory (quite an old one now) is the idea that the regression of subjectivity is a ‘strange loop’; you only need a self-referential loop which can examine itself.
Hofstadter worked this idea out in Gödel, Escher, Bach back in 1979, and returned to it at length in I Am a Strange Loop in 2007.
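A trivial gesture at the idea, and of course nothing remotely like subjectivity: a program that examines its own execution state and source while it runs. All it shows is that self-examination isn’t off-limits to programs.

```python
# A crude self-referential loop: a function inspecting its own
# execution state and its own source code while it runs.
import inspect

def examine_self():
    frame = inspect.currentframe()
    print("Currently executing line", frame.f_lineno,
          "of", frame.f_code.co_name)
    print("My own source reads:")
    print(inspect.getsource(examine_self))

examine_self()  # run as a script; getsource needs the file on disk
```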

Perhaps it is only possible to create these self-referential loops inside a biological brain, or maybe only inside a human brain; but I do not see any reason to believe that, and I am surprised that you do.

You have the whole thing backwards. It is, ultimately, only the ability to interpret symbolic vessels that makes it possible to use any physical system to compute, or simulate, anything. So when you say that a system can be simulated, you’re already appealing to mental capacities.

Let’s leave out the middleman of computation, and talk about imagination instead. I can imagine robots, unicorns, trees, and even minds. Physical systems following physical rules—no problem there. By your argument, for a sufficiently powerful imagination, it should be possible to imagine an actual mind into existence.

So, well, imagination gives rise to minds! That’s that, then. Except of course nobody’s going to buy that: after all, I have just used a transparently mental capacity to explain the mental. That’s of course a no-go; but that’s exactly what computationalism does. That’s the point of my above example that’s being so studiously ignored.

Well, I’ve given the reason above. If you find fault with it, feel free to point it out.

(I probably shouldn’t tell him what it feels like as a writer to have your characters refuse to go along with the plot you’ve planned out for them.)

It’s patently obvious that that which happens in the imagination stays in the imagination. Similarly, your simulated person will have to have a simulated environment to run around in, or it will have nothing to interact with and probably go insane. I recommend simulating Vegas. What happens in simulated Vegas will stay in simulated Vegas - but that’s real enough for the simulated Vegans within it.

You still haven’t described why a human brain (which is a computer) can reflect upon itself, whereas a non-biological computer cannot.
Even if it turns out that electronic computers cannot support subjectivity (which is an unsupported assertion on your part), it should be possible to construct biological computers which can. But I doubt very much that will be necessary.

Please note as well that I do not actually think that mind uploading is a desirable thing: it could lead to a reduction in mental diversity if we could make copies of human minds, however imperfect.

I’m wondering if you’re familiar with the computational theory of mind and the fact that it’s currently considered to be a major foundation of modern cognitive science, although it has its detractors. If not, you might find the link interesting reading, or if you are, perhaps you can elaborate on your apparent view that it isn’t possible.

Or perhaps I’m misunderstanding what you mean by “computational”, but it’s fairly well defined in theories of cognition. In the briefest possible nutshell, CTM proposes that many or most (though not necessarily all) of our cognitive processes are computational in the sense that they are syntactic operations on symbolic mental representations. As such, these processes would have the important property of multiple realizability – the logical operations could just as well be realized on a digital computer.
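To illustrate “syntactic operations on symbolic representations” with an entirely toy example of my own devising: a forward-chaining rule applier that never knows what its symbols mean, yet performs logical inference under the obvious reading, and would do so identically on any substrate that preserves the syntax.

```python
# Purely syntactic inference over symbolic representations, in the
# CTM spirit: the procedure shuffles uninterpreted tokens, yet under
# the obvious reading it performs chained modus ponens.

def forward_chain(facts, rules):
    """Derive everything reachable from facts via (antecedent, consequent) rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

facts = {"it_rains"}
rules = [("it_rains", "ground_wet"), ("ground_wet", "shoes_muddy")]
print(forward_chain(facts, rules))
# {'it_rains', 'ground_wet', 'shoes_muddy'} - the same operations could
# run on any physical substrate that preserves the syntax; that's the
# multiple realizability claim.
```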

Non-biological computers “reflect on themselves” all the time. It just doesn’t seem interesting since their self-diagnostic and operating systems are pretty simplistic, straightforward, and not prone to flights of fancy because they’re not designed that way.

(And designing them some other way would probably be pretty complicated.)

Why do you think this is impossible? Human minds have many subpersonalities; it seems entirely possible to imagine a fully-rounded alternate personality that shares your head but is quite distinct. They call it Dissociative Identity Disorder nowadays.

That’s why I don’t expect that artificially sentient computers will become practical for a very long time. In fact they might not ever be built, since there are probably much more useful systems that will be built first.

Ok, so is my post above, where I gave an example of how computation is subject to interpretation, and of how there’s no objective sense in claiming ‘system x computes function y’, just invisible to everybody?
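Here it is again in miniature, as a toy of my own making: one fixed input-output table standing in for the physics of a gate, and two observers whose different voltage-to-symbol mappings read the very same device as computing two different Boolean functions.

```python
# One "physical" device, two incompatible answers to "what does it
# compute?", depending only on how an observer maps voltages to symbols.

# The physics: a two-input gate's raw behavior over voltage levels.
device = {("LO", "LO"): "LO", ("LO", "HI"): "LO",
          ("HI", "LO"): "LO", ("HI", "HI"): "HI"}

def interpret(mapping):
    """Read the same device through a voltage-to-bit mapping."""
    return {(mapping[a], mapping[b]): mapping[out]
            for (a, b), out in device.items()}

print(interpret({"LO": 0, "HI": 1}))  # under this reading the device is AND
print(interpret({"LO": 1, "HI": 0}))  # same device, same physics: now it's OR
```

Nothing in the device itself picks out AND rather than OR; the mapping does, and the mapping lives in the interpreter’s head.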

I read it. It made little to no sense. I read it again. It continued to make little to no sense. I read your post responding to me which seemed to be trying to explain it again, and it still made little sense - though I responded to what vague sort of sense I seemed to detect within it.

Why don’t you pretend I’m stupid and restate your notion of “computational” in really simple, clear, and straightforward terms? We can worry about how it relates to the brain later, just get clarity on your definition of the term first.

The CTM is one of those rare ideas that were both founded and dismantled by the same person (Hilary Putnam). Both were visionary acts; it’s just that the rest of the world is a bit slower to catch up with the second one.

What is it about the simulated residents of Vegas that makes them so averse to eating simulated animal products?