Downloading Your Consciousness Just Before Death.

By “functional” I just meant “it works at all”. A non-functional brain is one that doesn’t have a mind in it.

Don’t try to impute to me any particular cognitive model. All my position requires is the following three facts:

  1. Brains produce minds in the real world.

  2. Brains are entirely physical (and thus, mechanical) in their operation*.

  3. Computers with sufficient memory and processing ability can accurately simulate physical objects and their mechanical behaviors (there’s a toy sketch of this below).

That’s it. That’s all I need to prove that it’s theoretically possible to emulate minds - you may have to emulate everything in the world with even a tangential physical effect on the brain (which could include the entire universe!), but that’s still theoretically possible, and thus you provably can theoretically emulate minds on a computer.

Proven, that is, unless you can disprove one of the three premises. And note that I don’t care how the brains create minds. It doesn’t matter. I don’t care. It’s utterly irrelevant. All that matters to me is that brains produce minds at all.

  * Actually I don’t even require the minds to exist entirely in the physical realm - they just have to exist in some realm that has reasonably consistent mechanistic rules. Which isn’t really much to ask, because nothing can possibly work or even consistently exist in a realm without reasonably consistent rules.
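Premise 3 is the only one doing technical work, so to make it concrete, here’s a toy sketch - my own, nothing from the thread - of a computer mechanically stepping a physical system forward in time: two point masses under Newtonian gravity, with crude Euler integration. All the argument needs is that this kind of thing scales up, in principle, to brains:

```python
# Toy illustration of premise 3: step a physical system forward by applying
# its mechanical rules. Two point masses under gravity, crude Euler steps.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def step(bodies, dt):
    # bodies: list of dicts with mass "m" (kg), position "x"/"y" (m), velocity "vx"/"vy" (m/s)
    for b in bodies:
        ax = ay = 0.0
        for other in bodies:
            if other is b:
                continue
            dx = other["x"] - b["x"]
            dy = other["y"] - b["y"]
            r = (dx * dx + dy * dy) ** 0.5
            a = G * other["m"] / (r * r)
            ax += a * dx / r
            ay += a * dy / r
        b["vx"] += ax * dt
        b["vy"] += ay * dt
    for b in bodies:
        b["x"] += b["vx"] * dt
        b["y"] += b["vy"] * dt

earth = {"m": 5.97e24, "x": 0.0, "y": 0.0, "vx": 0.0, "vy": 0.0}
moon = {"m": 7.35e22, "x": 3.84e8, "y": 0.0, "vx": 0.0, "vy": 1022.0}
for _ in range(60):
    step([earth, moon], dt=60.0)   # one hour of simulated time
```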

Note: this is all distinct from the “normal” learning process that includes epigenetic changes to produce proteins that maintain the synapse.

Then it should be easy for you to point out what those differences are in the example I gave, where the physical system is completely identical in each case, and yet, can be taken to implement both f and f’.
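For anyone who hasn’t seen the example being referred to, here’s a rough reconstruction of the general idea (the table and names below are my own stand-ins, not the exact example from the earlier posts): one fixed switch-and-lamp device which, read under two different symbol assignments, comes out computing two different functions.

```python
# Physical behavior of the "device": (switch1, switch2) -> (lamp1, lamp2).
# Nothing about this table changes between the two interpretations below.
DEVICE = {
    (0, 0): (0, 0),
    (0, 1): (0, 1),
    (1, 0): (0, 1),
    (1, 1): (1, 0),
}

# Interpretation A: read each switch/lamp as the binary digit it displays.
# Under this reading the device computes f = addition of two one-bit numbers.
def f(a, b):
    hi, lo = DEVICE[(a, b)]
    return 2 * hi + lo              # e.g. f(1, 1) == 2

# Interpretation B: read every signal with 0 and 1 swapped.
# The very same state table now computes a different function f'.
def f_prime(a, b):
    flip = lambda x: 1 - x
    hi, lo = DEVICE[(flip(a), flip(b))]
    return 2 * flip(hi) + flip(lo)  # e.g. f_prime(1, 1) == 3, not 2

print(f(1, 1), f_prime(1, 1))       # 2 vs. 3: same device, different functions
```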

No. My claim is, a system only computes once it is interpreted as computing. Computationalism says that only computations can interpret things (or exercise any other mental faculty). Thus, in order to compute, some computation must interpret a system as computing. But this is a vicious circle: if physical system P1 must be interpreted as computing C1 by system P2, and the only way it can do so is by implementing C2, then P2 must first be interpreted by P3 via implementing C3 as computing C2, and so forth.

Obviously, P1 cannot interpret P1 as implementing C1, as to do so, it would have to already be implementing C1 to do the interpreting.

Minds are the only sorts of things we know of that are capable of interpretation, so yes, there’s a little bit (you know, its key point) about my argument that’s specific to minds.

Exactly. Just as C1 doesn’t exist before P1 is suitably interpreted, and hence, can’t be what does the interpreting.

There are lots of hidden premises in your stance, but for now, I leave you with the fact that IIT is a theory that satisfies your three premises, and on which it is nevertheless false that computation can emulate minds.

I don’t know what you mean by “identical”. If you mean “identical at the level of which electron is moving down which wire at the same time”, then you can’t be implementing both f and f’ at the same time and have them both produce the same single output. It’s literally impossible. It’s sort of like trying to pat your head and rub your belly button at the same time with the same hand. Or with two hands that are doing exactly the same actions on the same identical body, as the case may be.

Does your argument rely on something impossible?

Seriously, when we talk about calculations being a ‘black box’ where you can’t tell what’s going on inside it, it doesn’t mean that the internal calculations are experiencing multiple different program states simultaneously like Schrödinger’s cat. That’s bizarre.

So what you’re saying is that if everyone who knows about a computer dies, the computer magically stops working, because it depended on the “interpretation” (which only human brains can do) of someone who knew about it in order to function.

Yeah, kind of not feeling that.

That’s less a disproof of my argument and more a disproof of IIT - or your interpretation of IIT, as the case may be.

If there is a “driver’s seat view”, it would have to be the program’s top-level process that handles major data traffic control. Except even that process is just making decisions on abstract symbols that it receives from lower-level processes, without any real understanding of what they mean, apart from what the application parameters tell it about how to handle specific data. An entire program that seems so useful and sometimes even collaborative is just a massive rat’s nest of Half Man Half Wit’s box-of-switches-and-lights with all the switches and lights obfuscated into nanoscale silicon and metal traces. You can make it vast, but the fundamental structure is no different.
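To put the “abstract symbols, no understanding” point in miniature (toy code with invented names - ROUTING_TABLE, dispatch - no real application works exactly like this), here’s the kind of thing a top-level dispatcher actually does:

```python
# A toy "top level process": it routes opaque tokens according to a
# configuration table, with no notion of what the tokens mean.
ROUTING_TABLE = {"A7": "render", "B3": "store", "C9": "discard"}

def dispatch(symbol):
    # The dispatcher never "understands" A7 or B3; it only matches patterns
    # and hands them off, exactly as its parameters tell it to.
    return ROUTING_TABLE.get(symbol, "discard")

print(dispatch("A7"))   # -> "render"
```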

I put forth already that the most sensible view is that this elusive thing is merely a manifestation of the survival instinct that is hardwired into critters (of which we are). It would explain the whole “immortal soul” concept, as in, “I don’t want to die, so it makes me happy to believe in an ectoplasmic part of me that will persist forever in the Elysian Fields (or Asphodel, or Og forbid, Tartarus)”, and so far, I have yet to hear a more satisfying explanation.

We might find out once we are able to actually accomplish that, but as has been said, the function of a program is defined by externalities. Two exact copies of the same logic-processing system will only function exactly the same way for exactly the same set of inputs. Consider that the human brain is subjected to a constant, roiling stew of hormonal inputs, and unless you can replicate the biochemistry with precision, there will be distinctions. Not to mention the ever-present survival instinct.

When the word “mind” or “cognition” is used, is it assumed that consciousness is always included within those terms, or are “mind” and “cognition” ever assumed to be just the processing, less the conscious experience?

Our current programs keep things partitioned and isolated for simplicity and maintainability. That’s not strictly necessary - you could have a program’s main loop run straight through the input, processing, storage-updating, and output phases each time.

Or you could note that the thread(s) of execution do run through all those parts each time, constantly, moving in and out of all the layers and back again. It’s really a question of how (and where) the ‘seat of consciousness’ manifests, and what it ‘feels like’ from the inside - and whether the actual implementation being partitioned interferes with that at all. (Assuming it manifests in the first place, of course.)
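For concreteness, here’s roughly what that un-partitioned “one loop through all the phases” structure looks like (a toy sketch with invented names; the stub sensor is just a random number, and the “processing” is a trivial threshold check):

```python
import random

# A single thread of execution passing through each phase in turn,
# with no partitioning into separate modules or threads.
def run_forever(memory, steps=5):
    for _ in range(steps):                               # stand-in for "while True"
        observation = random.random()                    # input phase (stub sensor)
        decision = observation > memory["threshold"]     # processing phase
        memory["history"].append(observation)            # storage-updating phase
        memory["threshold"] = sum(memory["history"]) / len(memory["history"])
        print("act!" if decision else "wait")            # output phase (stub actuator)

run_forever({"threshold": 0.5, "history": []})
```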

But how does “survival instinct” actually “manifest”, physically speaking? Some part of the entity constantly checking the senses for threats, conferring with the memory about the threats, and prompting reactions to the threats?

Survival is just a goal. A goal that is conducive to there being future generations, yes, but the thing that has the goal is what I’m curious about. The thing that has the awareness of the situation to see threats and avoid them. How does it work?

Between “magic” and “program execution loop”, I’m thinking “program execution loop”.
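Meaning something like this (again, a toy sketch with made-up names - KNOWN_THREATS, sense(), flee() - not a claim about how brains actually wire it up): a recurring pass that checks the senses, confers with memory about threats, and prompts a reaction.

```python
# A minimal "survival instinct as program execution loop" reading.
KNOWN_THREATS = {"snake", "fire", "ledge"}

def sense():
    return ["grass", "snake"]          # stub perception

def flee(threat):
    print(f"avoiding {threat}")

def survival_check(memory=KNOWN_THREATS):
    for percept in sense():            # check the senses...
        if percept in memory:          # ...confer with memory about threats...
            flee(percept)              # ...and prompt a reaction

survival_check()
```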

If you’re going to emulate minds in the computer you have two choices: let the minds observe and react to the computer’s inputs directly, or build a virtual “Matrix” (referring to the movie) for them to exist within. I’m thinking the Matrix approach would be way more pleasant for them - computers themselves tend to be completely hamstrung with regard to input and output. Very ‘Have No Speaker and I Must Scream’ kind of thing. Plus of course the Matrix approach lines up nicely with the simplest, most brainless way to go about emulating minds - emulate the entire freaking room the person’s in, and get the brain (and mind) as a bonus. Expanding that to include a massive multiplayer world for them to walk around in is just a problem of scale.

Once you’ve got your Matrix for the minds to live in, the minds ought to be able to get all the inputs they’re accustomed to just fine, presuming you designed the Matrix properly and with sufficient detail. Though of course you’d eventually want to tweak a few things - the whole goal here was to let the minds live forever, after all - so once everything is properly uploaded into the simulation, you’d probably stop accurately simulating the ravages of time and all. This would of course cause the minds in the simulation to diverge in thought and action from the originals, but really, wasn’t that the point all along?

I can’t speak for anyone else, but I believe all the terms all refer to the “seat of consciousness” - the “I” in “I think therefore I am”. When we look out at the world through our eyes, the mind/cognition/consciousness is the thing doing the looking.

But it seems like we could have a brain that successfully models the environment and produces appropriate behavior without having conscious experience.

I never liked the whole zombie argument before, but I think I see what that argument is getting at.

I’m of the personal opinion that it’s incoherent to think that full human reaction and interaction can be achieved by a so-called “zombie” - the mere act of observing the world, interpreting the world, and reacting to the world in an ongoing and consistent way requires that the entity be aware of its environs, itself, and its opinions and memories, in the same way that a car needs some sort of engine to run. (Could be a hemi, could be a hamster wheel, but it’s gotta be something.)

I just remembered why I didn’t like the zombie idea: it requires the person to be identical except for experience.

But what I was picturing in my mind was that the operational attributes of the brain (e.g. modeling the environment, making decisions towards a goal, etc.) can be independent of the conscious experience, if the conscious experience is just a layer above that maybe only influences deciding on the goal (for example).
Responding to your post:
There are examples where people seem to operate correctly in their environment but don’t have awareness (e.g. sleep walking).

I dunno, I feel like my conscious influence on my actions is pretty strong. I don’t feel like a passive passenger in my own head (though I suppose my subconscious could just be pretending to let me lead, like my mom does with my dad).

There’s pretty good reason to think that the human brain has two actively running processes (at least) which occasionally compete for control. And apparently if you mess with the brain physically you can split it down the middle and get two different cognitions operating in the brain at once!

Brains be complicated, yo.

The more I ponder this stuff, the more I question that assumption. When I look at my behavior patterns and the behavior patterns of people I know, it sure looks like we are operating on machinery that is significantly and consistently driven by the same patterns over and over.

Even though it does seem like we can react to our environment and make choices, it sure seems like that choice process is much more of a formula highly constrained by the machinery (due to nature+nurture), as opposed to an open ended consciousness based selection process.

The only reason I would make a different decision (than the ones I typically make) is if there were an explicit input identifying that a non-standard decision is being targeted/tested, and therefore I should choose that path to prove it can be done (based on an internal motivation that would push me towards even evaluating that condition).

You say that like it doesn’t make perfect logical sense for people to fall into patterns. In actual fact I consciously choose to maintain my patterns, because I like the same stuff today as I did yesterday.

I’m not passing a value judgement on it, I’m just saying that my analysis is leading me to believe that the patterns that drive us seem like they are stronger and further below the conscious level than I previously assumed.

I guess my only real response is, brains be complicated, yo!
Though, to stay on-topic, complicated doesn’t mean uncopyable. It just means that when we replicate all the neuron-states and chemical soups, we might not know what they’re doing even as we move them over.

One fun (though debatably ethical) thing we could do once we were emulating the brains would be to sic an AI on them and have it make multiple copies of the emulated brains and selectively tweak/remove physical elements and compare ongoing behavior afterward. We could find out which aspects of our brain’s physicality are necessary for proper mind function real quick. (Hmm, removing all the blood had an adverse effect. Guess we needed that. Next test!)

Given enough whittling we might be able to emulate a mind without emulating the whole brain - just the parts and processes that actually matter. (With the brain matter’s physical weight or the skull enclosing it being possibly superfluous factors that could be removed, for example.) In this way we might be able to emulate minds more efficiently than by fully emulating all the physical matter in the vicinity.
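The whittling procedure would look something like this in outline (EmulatedBrain and behavior_score are placeholders I made up; nobody has an actual emulation API to point at):

```python
import copy, random

class EmulatedBrain:
    """Toy stand-in for an emulated brain: a bag of named components,
    each contributing some amount to a behavior score."""
    def __init__(self):
        self.parts = {name: random.random() for name in ("blood", "skull", "cortex")}
    def components(self):
        return list(self.parts)
    def remove(self, name):
        del self.parts[name]
    def behavior_score(self, stimulus):
        return stimulus * sum(self.parts.values())

def ablation_study(brain, stimulus=1.0):
    baseline = brain.behavior_score(stimulus)
    results = {}
    for name in brain.components():
        clone = copy.deepcopy(brain)   # copy the emulation, leave the original running
        clone.remove(name)             # tweak/remove one physical element
        results[name] = clone.behavior_score(stimulus) - baseline
    return results                     # near-zero deltas = candidates to leave out

print(ablation_study(EmulatedBrain()))
```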

If an artificial neural network like the ones used in AlphaGo starts from essentially zero skill and proceeds to play chess or Go at a championship level after a period of deep learning, where in the original base configuration can you find any “Go-like” strategy knowledge? Trivially, the game rules are built into the program, and trivially, one might guess that building neural connections might lead to some interesting phenomena, but what could you possibly see in the components in their initial state that could lead you to make confident predictions about skill at that particular game?

It would be fair to ask, of course, what the developers saw in it, and why they built it that way. The answer is that they saw only a general-purpose learning mechanism, not something that bore any of the specific primordial traits of what they hoped to achieve. Just the same way as they built a massively parallel general-purpose computer system to run it on. What actually came together was something qualitatively new, and something that many had believed was at least another decade away.
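The “nothing Go-like in the initial state” point is easy to see in miniature (toy code, not AlphaGo’s real architecture or training method): before training, the network is just random numbers plus a generic update rule that would apply equally to any game.

```python
import random

def make_network(n_inputs, n_outputs):
    # Initial state: random weights. Inspecting these tells you nothing
    # about which game, if any, the network will end up playing well.
    return [[random.gauss(0, 0.1) for _ in range(n_inputs)] for _ in range(n_outputs)]

def predict(net, board_features):
    return [sum(w * x for w, x in zip(row, board_features)) for row in net]

def learn(net, board_features, targets, lr=0.01):
    # Generic gradient-style nudge toward the targets; the same rule applies
    # whether the data came from Go, chess, or anything else.
    outputs = predict(net, board_features)
    for row, out, target in zip(net, outputs, targets):
        for i, x in enumerate(board_features):
            row[i] += lr * (target - out) * x

net = make_network(n_inputs=3, n_outputs=2)
print(predict(net, [1.0, 0.0, 0.5]))      # meaningless numbers until trained
learn(net, [1.0, 0.0, 0.5], targets=[1.0, 0.0])
```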

(Emphasis mine.) Oh, my. You most certainly have done exactly that, many, many times throughout this thread:
But if minds then have the capacity to interpret things (as they seem to), they have a capacity that can’t be realized via computation, and thus are, on the whole, not computational entities.
https://boards.straightdope.com/sdmb/showpost.php?p=21646134&postcount=18

Well, I gave an argument demonstrating that computation is subjective, and hence, only fixed by interpreting a certain system as computing a certain function. If whatever does this interpreting is itself computational, then its computation needs another interpretive agency to be fixed, and so on, in an infinite regress; hence, whatever fixes computation can’t itself be computational.
https://boards.straightdope.com/sdmb/showpost.php?p=21646502&postcount=34

The CTM is one of those rare ideas that were both founded and dismantled by the same person (Hilary Putnam). Both were visionary acts; it’s just that the rest of the world is a bit slower to catch up with the second one.
https://boards.straightdope.com/sdmb/showpost.php?p=21646610&postcount=59

Ignoring the incorrect assertions about Putnam that I dispelled earlier, waffling over “category errors” is disingenuous and meaningless here. The position of CTM isn’t that computational theories help us understand the mind in some vague abstract sense; the position of CTM is that the brain performs computations, period, full stop – as in the basic premise that intentional cognitive processes are literally syntactic operations on symbols. This is unambiguously clear, and you unambiguously rejected it. The cite I quoted in #143 says very explicitly that “the paradigm of machine computation” became, over a thirty-year period, a “deep and far-reaching” theory in cognitive science, supporting Fodor’s statement that it’s hard to imagine any kind of meaningful cognitive science without it, and that denial of this fact – such as what you appear to be doing – is not worth a serious discussion.

It’s a response to your accusation that “the main determining factor in whether or not a dead philosopher is worthy of deference is whether you agree with them”. No, it isn’t. Jerry Fodor was widely regarded as one of the founders of modern cognitive science, or at least of many of its foundational new ideas in the past half-century. Dreyfus wasn’t the founder of anything. I asked someone about Dreyfus some years ago, someone who I can say without exaggeration is one of the principal theorists in cognitive science today. I can’t give any details without betraying privacy and confidentiality, but I will say this: he knew Dreyfus, and had argumentative encounters with him in the academic media. His charitable view was that Dreyfus was a sort of congenial uncle figure, “good-hearted but not very bright”.

I’ve been absolutely consistent that computationalism and physicalism are not at odds, and I disagree with your premise that they are. Nor do I believe, for the reasons already indicated, that Chalmers’ view that what he calls “strong emergence” need be at odds with physicalism. My evolution on this topic is that I’m doubtful that there’s much meaningful distinction between “weak” and “strong” emergence.

It’s worse than that. Even talking about some Boolean formula or algorithm to emulate consciousness is silly because it isn’t even a behavior, it’s an assertion of self-reflection. The truth of the assertion may or may not become evident after observing actual behaviors. My guess, again, is that the issue is less profound than it’s made out to be. It wouldn’t surprise me if at some point in the future, generalized strong AI will assert that it is conscious, and we just won’t believe it, or will pretend it’s a “different kind” of consciousness.

How? Well, first we have to find out what it is. How do candles work? You have to understand several “what” vectors in order to arrive at a useful answer.

I suspect self-awareness/the survival instinct are analogous to something like hair: it is there, many of us are rather fond of it, it has its uses, but it does not actually do anything. It is an interesting feature.

But a feature of what? Hair is a simple thing that is a result of follicular activity. Self-awareness seems to be a rather complex feature that probably arises from disparate sources (some most likely chemical), and may not be localized (just like hair).

The point is, it is not evident that it actually does anything. Kind of like data tables which, in and of themselves are not active (in the way that program code is active), but our mental processes take note of it and adjust their results to account for it.

So, would self-awareness be a valuable feature for intelligent machines? Perhaps. Then again, maybe not. If we just want them to do what we need them to do, strict functionality might be the preferable design strategy. Unless uncovering the underlying nature of self-awareness is the research goal, in which case, they are probably best confined to a laboratory setting.