Downloading Your Consciousness Just Before Death.

No, a movie is not a process. To get closer to the area of discussion, we have ways of making a movie of the bits flowing through data paths in an integrated circuit. That movie of the process the computer is running is not a process. I’ll agree with you that this description is never going to lead to consciousness or intelligence, any more than an animated thing will ever come to life.

Why does a description have to be transferred? I described a system I built in a document. Is it not a description because no one read it? (Which is closer to reality than I like to say.)

Writing and reading the description are of course processes, so is emailing it.

Ah, that was a nice extended weekend. Hmm, the thread has run far in my absence; I’ll just make a few notes:

Take no solace, for this is wrong. I think it’s entirely possible to create, by running a computer program, a consciousness that perceives itself the same way that we perceive ourselves.

In fact I believe that any physicalist model of the universe (that is, any that doesn’t involve ghosts) requires that emulation of consciousness be possible. Physical reality follows regular rules and can be modeled. If the brain and everything else that creates a mind is contained in physical reality, then it’s a simple fact that the physical processes that create the mind can be reproduced, with all of their side effects (including a ‘seat of consciousness’), in a sufficiently detailed and accurate simulation.

I hate to have to point out something this obvious, but “WOW” is just “MOM” upside-down. By standing on the other side of the table the paper is lying on - by looking at the output from a different point of view - the output is interpreted differently.

If the notion of flipping the paper upside down is too complicated for you, consider a piece of paper with the following printed on it: 101. What does it mean? A hundred and one? Five, in binary? Two hundred fifty seven, in hexadecimal? Who knows? It’s a matter of interpretation, just exactly like and completely analogous to your box.
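
If you want that spelled out in code, here’s a trivial Python sketch (nothing hinges on it; it just shows the same three symbols coming out as different numbers depending on the reading convention the observer brings to them):

# One string of symbols, three interpretations.
s = "101"
print(int(s, 10))   # 101 - read as decimal
print(int(s, 2))    # 5   - read as binary
print(int(s, 16))   # 257 - read as hexadecimal

The ink on the paper never changes; only the convention in the reader’s head does.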

So the paper, just like your box’s output, is entirely static, but multiple interpretations are possible. The difference is only in the mind of the observer interpreting things. Same as your box example. The box itself is doing a single computation/calculation/whatever and its output is the same for a given input. There is no ambiguity in the computation/calculation/whatever that the box is doing - the same way there is no ambiguity in which areas of the WOW/MOM or 101 papers have ink on them and which ones don’t. The paper is directly analogous to your calculating box - in both cases there is only ambiguity in the eyes and mind of the observer/interpreter.

And in my opinion the ambiguity in the eyes and mind of the observer/interpreter has absolutely squat to do with the behavior of the box. Including the observer’s interpretations in your definition of “computation” is nonsense that I’m not playing along with, and your claims that the operation of the mind would be circular if done by a calculating machine rely on using that nonsensical definition of “calculation” and are thus also nonsense. The very structure of your example highlights your error by clearly separating the calculation from anything uncertain - and making it so that I can accurately reconstruct your argument around a piece of printed paper and then prove that the interpretation ambiguities in your argument have nothing to do with the calculation occurring inside the box.

Now, if we wanted to get away from your stupid argument and stop separating the interpretation from the calculation, we can see that interpretation is part of many calculations, but there is no ambiguity introduced by that, because the calculation process is not going to be fluctuating and changing how it interprets things midstream. That was the point of asking you if your wiring was deterministic or schroedingerian, which you confusedly interpreted as me thinking you needed to lay out the wiring precisely - the point is that because there’s no ambiguity in the calculation there is also no ambiguity in the way the calculation interprets things.

As I’ve noted, I’m a computer programmer. It’s extremely common for me to store encoded values. 0=invalid and 1=valid. 0=valid and 1=invalid. That sort of thing. Now, look at those two value mappings - they’re entirely contradictory. If you were using one to examine data that was stored the other way, you’d get everything wrong. But that doesn’t happen because the calculation knows which interpretation it’s using. There’s nowhere in the closed system of the calculation for the meanings to get lost because it’s a closed system, and a specific interpretation is correct because that’s the interpretation that the system happens to be using.
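
Here’s a toy sketch of what I mean, in Python (the names and the mappings are made up for illustration, obviously):

# Two equally valid, mutually contradictory encodings. Which one is "correct"
# is arbitrary; what matters is that writer and reader use the same one.
ENCODING_A = {"valid": 1, "invalid": 0}
ENCODING_B = {"valid": 0, "invalid": 1}

def store(status, encoding):
    # the writer commits to a convention...
    return encoding[status]

def load(bit, encoding):
    # ...and the reader applies the same convention, so nothing is lost
    return next(name for name, value in encoding.items() if value == bit)

assert load(store("valid", ENCODING_A), ENCODING_A) == "valid"
assert load(store("valid", ENCODING_B), ENCODING_B) == "valid"

Inside the closed system the meaning never wobbles, no matter which mapping happened to get picked.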

Which is to say that the determination of the correct interpretation isn’t circular; it’s arbitrary. There is a difference.

If you have a computer program that can be equally said to simulate a brain or a tornado (because the interpretation of the computation is up to the external agent), does consciousness exist even when we interpret it as simulating a tornado?

In a materialist system, if consciousness exists it exists within the point of view of the thing housing the consciousness. Which is to say, the consciousness has a ‘seat of consciousness’ and is independently aware of its existence and any surroundings its containing object gives it the senses to perceive.

If this is happening in your simulation, then it’s happening. It doesn’t matter if some outside observer is aware of it or not. You could interpret it as a brain, a tornado, a blizzard of 1s and 0s, and that won’t affect what the contents of the simulator are aware of.

If I look at a person and fail to recognize that they’re self-aware (perhaps due to their extremely convincing tornado costume), that doesn’t mean they’re not self-aware. The same goes for any self-aware simulations you might run across.

It seems like you agree with these two statements:
1 - The 1’s and 0’s of the system require an interpretation to decide whether it’s a brain simulation or a tornado simulation
2 - The interpretation does not impact whether consciousness has arisen or not

Which leads to this question:
How do we decide which sequence of 1’s and 0’s can create consciousness?
Is there any value in modeling it after the brain? It might be easier to just create a tornado simulation, or, even better, randomly generated code.

If randomly generated code does not seem like a good approach, then what is it exactly about the randomly generated code that is any worse than any other program when we are trying to create consciousness?

Well, the thing is, a function like f is something that we routinely take ourselves to have computed. We say that a calculator adds numbers; this refers to computing f to the exclusion of any other computation. We don’t say that a calculator takes numerals to numerals; we take a calculator that adds 3 to 4 and obtains 7 to have done fundamentally the same thing as one that takes III and IV and returns VII—both have added the same numbers, just expressed differently.

On your construal of computation, I just don’t see how that claim could ever come out to be true. Or are you claiming that it never is true? That we’re using some ‘folk notion’ of computation when we’re making such claims, which on closer analysis is seen to be false? And furthermore, that the ‘true’ notion of computation is just basically over the symbols sans interpretation?

But then, how do we ever get to meaningful symbols? How do we get, say, from pictures on a screen to the planets in the sky? No computer ever outputs planets, yet, we seem to be doing alright simulating the solar system.

What is the process by which we take what you claim are just manipulations on symbols with arbitrary semantics to concrete physical objects like planets, or abstract objects like numbers? Is it computational? If so, then why doesn’t the computer just take care of it by itself? And if it isn’t computational, then, of course, minds must be capable of doing something that’s not computational. So what gives? How come I can take a system and compute addition, or simulate the movements of planets, if the system itself can never do anything but shuffle meaningless symbols around, but I don’t do anything the system couldn’t do itself, as well?

Not quite, though. We have 2[sup]4[/sup] = 16 different input states, and 2[sup]3[/sup] = 8 different output states, so the number of different functions between them is 8[sup]16[/sup] ~ 2.8 * 10[sup]14[/sup], which is perhaps somewhat large-ish, but not really infinite. You’re right that one might relax this somewhat, but, in my experience, people are really resistant to ‘arbitrary’ interpretations (most would not agree that one can take one switch being ‘up’ to mean 1, while another’s ‘up’ may be 0, even though there’s no intrinsic problem with that).
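
(Spelling the arithmetic out in Python, in case anyone wants to check it:

input_states = 2 ** 4                        # 16 possible switch configurations
output_states = 2 ** 3                       # 8 possible lamp configurations
functions = output_states ** input_states    # one free output choice per input state
print(functions)                             # 281474976710656, roughly 2.8 * 10**14

Large-ish, but decidedly finite.)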

It really isn’t, though. It’s a well-worn stance in philosophy, known as unrestricted pancomputationalism.

As an analogy, take the word ‘dog’. It means something like a small four-legged furry domesticated animal, with various additional qualifiers to pin down the meaning more accurately. But there’s nothing about ‘dog’ (the word) that makes it mean that. It could equally well mean cat, or bird, or rock, or any of the infinitely many things that there are in the world: the association between a symbol and its meaning is arbitrary.

The analogy of your stance would then be that that’s clearly absurd, that ‘dog’ can’t mean infinitely many things. But there’s simply no reason to think so. In any given usage, it means whatever we take it to mean—in the same sense, a computer computes whatever we take it to compute. There aren’t infinitely many computations lurking in the shadows, any more than there are infinitely many meanings of ‘dog’. It’s just an instance of convention, of interpretation. All I’m claiming is that what holds for the symbols of natural language likewise holds for the symbols of physically instantiated computation.

Why that table, though? Why not this one:



 S11 | S12 | S21 | S22  || L1 | L2 | L3
---------------------------------------
  0  |  0  |  0  |  0   ||  1 |  1 |  1
  0  |  1  |  0  |  0   ||  1 |  1 |  0
  1  |  0  |  0  |  0   ||  1 |  0 |  1
  1  |  1  |  0  |  0   ||  1 |  0 |  0
  0  |  0  |  0  |  1   ||  1 |  1 |  0
  0  |  1  |  0  |  1   ||  1 |  0 |  1
  1  |  0  |  0  |  1   ||  1 |  0 |  0
  1  |  1  |  0  |  1   ||  0 |  1 |  1
  0  |  0  |  1  |  0   ||  1 |  0 |  1
  0  |  1  |  1  |  0   ||  1 |  0 |  0
  1  |  0  |  1  |  0   ||  0 |  1 |  1
  1  |  1  |  1  |  0   ||  0 |  1 |  0
  0  |  0  |  1  |  1   ||  1 |  0 |  0
  0  |  1  |  1  |  1   ||  0 |  1 |  1
  1  |  0  |  1  |  1   ||  0 |  1 |  0
  1  |  1  |  1  |  1   ||  0 |  0 |  1


Why not any of the other possibilities?
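
(Incidentally, the table above isn’t even an exotic choice. As far as I can tell, if you merely read the lamps with the flipped convention - a printed 1 as 0 and a printed 0 as 1 - those very same sixteen rows are ordinary binary addition of the two switch pairs. Here’s an optional Python check, with the table transcribed as printed:

# Optional check: the table above, with the lamp bits read in the flipped
# convention (printed 1 -> 0, printed 0 -> 1), is plain 2-bit addition.
table = {
    (0,0,0,0): (1,1,1), (0,1,0,0): (1,1,0), (1,0,0,0): (1,0,1), (1,1,0,0): (1,0,0),
    (0,0,0,1): (1,1,0), (0,1,0,1): (1,0,1), (1,0,0,1): (1,0,0), (1,1,0,1): (0,1,1),
    (0,0,1,0): (1,0,1), (0,1,1,0): (1,0,0), (1,0,1,0): (0,1,1), (1,1,1,0): (0,1,0),
    (0,0,1,1): (1,0,0), (0,1,1,1): (0,1,1), (1,0,1,1): (0,1,0), (1,1,1,1): (0,0,1),
}
for (s11, s12, s21, s22), (l1, l2, l3) in table.items():
    a = 2 * s11 + s12                                 # first switch pair, S11 as high bit
    b = 2 * s21 + s22                                 # second switch pair, S21 as high bit
    flipped = 4 * (1 - l1) + 2 * (1 - l2) + (1 - l3)  # lamps under the flipped reading
    assert flipped == a + b                           # holds for all sixteen rows

Pick a different reading of the symbols, and a perfectly respectable computation drops out of the very same rows.)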

On my interpretation, the change in symbols may or may not change the computation, because there’s a further fact of the matter (given by interpretation) regarding the question of what is being computed. On your interpretation, changing the output symbols must change the computation, since the output symbols are all that matters for individuating a computation.

Reductio ad absurdum doesn’t mean pointing out a consequence you think is uncomfortable, but rather pointing out an inconsistency. While it’s maybe strange, there’s nothing inconsistent about a system (potentially) implementing any computation whatsoever. Besides, I don’t actually think they do: there are constraints imposed by the structure of the system. But that’s an argument that’s still a ways down the road from where we are right now, I’m afraid.

You neglected the meaty part of my argument, though—once I compress a movie, its reproduction becomes a process. Do therefore the things shown in the movie gain reality?

So, are you no longer claiming, then, that “All that reality “computes” is particles moving around”? Because that’s diametrically opposed to there being a program such that it produces consciousness, and needs a claim (generally thought to be false, and certainly immensely problematic) that consciousness is just particles moving around, rather than, say, the functional properties of those particles.

There are several claims that are problematic, here. (I mean, problematic for the people who study this sort of thing, obviously; you seem to have the ability to just see what’s true and what’s not, so this is mainly for the benefit of those who, you know, rely on arguments and that sort of thing.)

For one, there are conceptions of physicalism on which it’s not the case that emulation entails realization (say, of mental properties). That’s for instance the case on the identity theory I’ve mentioned: if mental properties are identical to (say) neuronal properties, that provides no grounds to believe that they could be instantiated by simulation. (And again, of course, IIT forms an explicit counterexample to this claim.)

Further, the idea that a model instantiates all the properties of the thing it models is problematic. We agree (I presume) that a description doesn’t actually require, or even cause, the reality of the thing described. But it’s not clear where the terms ‘model’ and ‘description’ diverge—take my earlier example of just successively compressing a movie, until it basically becomes a simulation of the thing shown (this sort of thing can actually be done). So from this point of view, there is simply no reason at all to believe that a simulation of a brain would be conscious, or a simulation of a universe would itself be a universe, any more than to believe that a description of same (even an unbelievably hugely detailed description) would instantiate the requisite properties.

So the sort of conclusion you want to draw simply doesn’t follow: there are counterexamples, and no actual reason to believe your claims.

The point is that if I read it as ‘MOM’, it has different syntactic properties from when I read it as ‘WOW’. For one, if I were to just read it aloud, I would make different sounds, and my production of these sounds could be entirely described as being directly causally related to the way I take the piece of paper to be oriented.

Think about a box with a matrix of switches, say a 100 x 100 square. If I press them in a ‘WOW’ pattern, the box is likely to do something else than if I press them in a ‘MOM’ pattern. Likewise, the retinal and consequent neuronal activity of me seeing ‘MOM’ is different from me seeing ‘WOW’.

On this, I agree (it’s my ‘gift’ example from earlier, or the ‘dog’ example I’ve given multiple times now). The reason is that 101 stays in every case syntactically the same while differing in its semantics. That’s the core issue here.

This is true.

This isn’t. Because, again, we don’t take ourselves as computing symbol patterns, we take ourselves to be computing sums, say. But it’s only a sum once you fix an interpretation of the symbols.

Since you’re bumping up against the same confusion, let me just repeat what I’ve asked wolfpup above:

So how does anybody ever compute a sum? How does anybody ever compute a square root, or a rocket’s orbit? Your notion of computation would mean that all a computation ever produces are blinking lights. But blinking lights for me may be something entirely different from blinking lights for you.

That’s a nice attempt at a bit of good old revisionism, but I’ll just note that this:

Was something you explicitly disagreed with.

Yes, but no need to apologize, I’m not holding it against you.

The calculation reacts, of course, ultimately to voltage values, and it reacts the way you’ve told it to; and that it reacts differently to different voltages is really no surprise (but then that’s again the distinction between the ‘WOW’/‘MOM’ example and the ‘gift’ example you seem to have trouble with). It’s you who’s interpreting these voltage values as 0 and 1.


Anyway. I’ll be leaving for vacation today, so I probably won’t be back to this thread for a while. But I think that, by now, we’re pretty clear on what the fundamental problem is. The general stance is, against my claims, that the box I’ve proposed really only implements one computation, which is given by its physical evolution, and that my functions f and f’ are somehow irrelevant or hallucinatory embellishments of that physical evolution.

Of course, this is hugely problematic as a foundation for computationalism as a distinct stance in the philosophy of mind, but no matter. I think there’s quite another way to see the issues with this stance. Because typically, we think that we compute things like addition—like my function f. Additionally, f is a sensible computation on any formalization of computation.

So those that claim that computations can be uniquely instantiated physically, have a chance, here, to prove that claim: simply describe a machine that successfully computes f, in the same way that my box computes whatever you take it to compute. If nothing else, that will at least suffice to clarify the notion of implementation you hold to be the right one.

So that’s my challenge (in particular to begbert2 and wolfpup, but everyone can play): describe a machine to instantiate f. Else, if you can’t, try to explain how and why we take ourselves to compute f, if we don’t have such a machine. In that case, you’re stuck having to explain how we do so: either by computation—then, why can’t that computation be done by the box?—or not—in which case, computationalism is false anyway.

After all, it’s you who’s making a claim—that minds can be computationally instantiated—so it ought to be you substantiating it.

Now, of course, I have a pretty good idea of how this will play out: you can’t describe such a machine (after all, my box is exactly an example of a machine that one would consider to implement f), so you’ll either waffle, or refuse to play. So what I’m really interested in is to see the sorts of justifications you come up with to not have to meet my challenge. But, then again, maybe I’ll be surprised—who knows!

Technically speaking, if the system has consciousness it doesn’t require an outside observer to make decisions about whether it’s a consciousness or not, because it can do that itself! It will have its own opinion about whether it’s self-aware and it probably doesn’t care what you think, unless your opinion about its sentience is the only thing preventing you from incinerating it in a pot of Beezle-Nut oil. (“We are here! We are here! We are here!”)

But yeah, if I am walking along and see a human wearing one of those fake cardboard tree costumes you see in elementary school plays, and I glance at the costumed human and interpret him as being a real tree, then I indeed will have failed to recognize the human as being a conscious entity. And you’re correct; I’m of the opinion that my misidentification of things I observe doesn’t transform them into real trees. Misinterpretations by observers do not transform the observed things.

This is sort of like asking whether, when you’re attempting to bake a delicious cake, is it useful to model it on cakes you know about, or whether it’s better just to throw a bunch of random stuff in a pot and start stirring. Yes, it’s possible that tossing together the collection of knickknacks on top of your desk and stirring them will make a delicious cake, but it might not be the surest approach. (I’m actually being serious here; I’m not much of a cook and I don’t know what you have on your desk. So who knows? Maybe your desk is a cake in the making.)

If I wanted to make a delicious cake, I’d say that a way to be sure you’ve created a delicious cake would be to take an existing delicious cake and copy it at the submolecular level. (As one does.) In this way I don’t have to either invent or stumble onto a working framework for deliciousness; I’m copying something that already works. And as a computer programmer, copying something that already works is a way more certain way of getting what you want than figuring it out yourself.

And how exactly do you do that when you are trying to create consciousness by using a different physical medium than the original?

That is the key point: which aspects of the brain’s transformations cause consciousness, and how (or even whether) that can be mapped into 1’s and 0’s?
The problem is very different from the cake example, which duplicates the original in the same medium and at a very low (submolecular) level.

That comment was lampooning your argument, which is to say it is your argument. You’re stating that if somebody was to look at the universe (or, say, a living human) and announce “I’m going to just think of you as a pile of particles”, then the universe (or person) would cease to be anything other than a pile of particles. Or maybe the fact that there’s a person out there deciding to interpret reality funny causes some sort of infinite regression somehow and causes all of reality to disappear in a puff of logic.

If your argument can do it to a calculation, it can do it to reality - there’s literally nothing about your argument as presented that prevents the magic of variable interpretation from being applied to things other than calculations. Honestly it’s a miracle we’ve made it this far without somebody making a different interpretation of something and destroying all reality.

Yep, we’ve already discussed IIT, I believe. That’s the one where physical matter has souls, right?

Look, I get that people can declare that they refuse to believe that computers emulate consciousness, or deliciousness, or emotion. It’s a faith thing. I just don’t think that their baseless declarations will make a whit of difference to any simulated entities that we happen to create as they eat their delicious cakes and enjoy every bite.

I’m super-not interested in explaining to you what a simulation is, since you’re clearly having a problem with that. But I suppose I should make a token effort.

Simulations attempt to replicate behavior. Which behaviors and properties they emulate depend on what effects they’re trying to replicate - a 3D renderer attempts to replicate the behavior of light but not heat, mass, or gravity. When you emulate things you get emergent behavior of the things you’re emulating, like how you get shadows and reflected images as a side effect of emulating how light bounces, is blocked, and is absorbed. You don’t get emergent behavior of behaviors and properties you’re not emulating - the rendered image doesn’t show the things tumbling to the floor.

In your film example it of course doesn’t become “basically a simulation” of the thing shown - that’s transparently stupid for numerous obvious reasons, the first being that the movie doesn’t even capture a full image of the thing (like from behind), and it certainly doesn’t capture things like mass and innards.

Stupid counterexamples don’t support your case.

Yeah, the core issue is that your argument relies on merging the computation and the interpretation into a thing you (for some inexplicable reason) call a computation, and then thinking that by confusing the terms you can prove things that you most certainly can’t.

Tell you what - I’m just going to call “computation + interpretation” “computation*”. Without the asterisk it means the deterministic operations that are going on inside the box, which are trucking on completely unaffected by the observation. With the asterisk it’s the version where it’s an interpretation of the output of the box by the observer.

By clearly distinguishing which (re)definition of the word we’re talking about, we can hopefully cut down a bit on any bait and switch fallacies and sophistry.

Yep, that’s the part you got confused about. The internal wiring is utterly critical to which computation is taking place. It determines it! Sure, there are other ways to wire it that result in the same output mapping, but there are others that don’t, and there are others that do but don’t have the same internal properties.

I mean, we are still talking about consciousness, right? If you talk to somebody through an intercom, or if we play back a recording of what you said later to a different person, both you and the recording produce the same output but the internal behavior was different. Contrary to stupid counterexamples the recording doesn’t become sentient just because it sounds the same as you for a little while.

Nonsense - the program is explicitly recognizing that 1 (or 0) means “invalid” and reacts differently, carrying out its error handling routine. It’s very explicitly the error handling routine, and it remains the error handling routine even if somebody loftily says “I refuse to recognize that the computer exists as anything other than a pile of unrelated particles, and thus refuse to recognize that the error handler, or the program, or the computer it’s running on, even exist”. Lofty person can say anything they like, but the program knows differently and doesn’t give a crap what they think.

To put things in the terms we’re using, the program is the computation, and its interpretation of its 1s and 0s is its computation* about its previous computations. The lofty person’s silly interpretation is a different computation*. The lofty person’s computation* doesn’t change, disprove, or disintegrate-in-a-puff-of-logic the computation* carried out by the computation.

The box only implements one computation. Computation*s are produced by the observer on their own time when they look at the box, and there could be a different one for every observer.

Just to be clear, every time you say “computation” or “compute” in the above, you’re talking about computation*, not computation. Which is to say you’re talking about the behavior of the box plus the interpretation of that behavior by an observer.

Obviously, computation* depends on the observer, so it’s impossible to create a box that does the job of the observer. That’s like creating a delicious cake that makes somebody happy about its deliciousness when there’s nobody around.

…unless the cake eats itself.
…unless the box observes itself.

I’m not going to bother scrolling up and rereading your specific function f and figuring out what you want f to specifically mean, but I vaguely remember it involved the observer interpreting the lights as integers. Integers, of course, can be even or odd.

So suppose your box went through its computation to light up the lights - and then didn’t stop there. It then took which lights it had lit up and then did another computation on them to determine if the number was odd or even, and stored that result in an internal log (with 1=even and 0=odd, because I’m an ass). This computation would of course rely on the box interpreting its own output in a particular way - and the way it uses to do that interpretation is f. I believe that this interpretation of the outputs meets your cockeyed definition of a computation*, so this is a box that produces/employs computation* f.
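
Sketched in Python, just so the structure is clear (everything here is hypothetical - I’m not claiming this is your box’s actual wiring or your actual f, it’s just the shape of the idea):

# A self-contained toy: a box that lights its lamps AND interprets its own
# output, logging the parity of the number it takes itself to have displayed.
log = []

def compute_lamps(switches):
    # stand-in for whatever the box's internal wiring does to light the lamps
    s11, s12, s21, s22 = switches
    total = (2 * s11 + s12) + (2 * s21 + s22)
    return ((total >> 2) & 1, (total >> 1) & 1, total & 1)

def interpret_as_integer(lamps):
    # the box applying an interpretation (call it f) to its OWN output:
    # it reads the three lamps as a binary number
    l1, l2, l3 = lamps
    return 4 * l1 + 2 * l2 + l3

def box_step(switches):
    lamps = compute_lamps(switches)              # the original computation
    value = interpret_as_integer(lamps)          # the box interpreting its own lights
    log.append(1 if value % 2 == 0 else 0)       # 1=even, 0=odd (because I'm an ass)
    return lamps

box_step((1, 0, 0, 1))   # the lamps light up AND the parity gets logged internally

The interpretation step is just more deterministic machinery inside the box.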

Now, is this going to stop somebody else from wandering by and interpreting the lights differently? Nope! But it doesn’t really matter. Them doing so isn’t going to cause the box to disappear in a puff of logic.
Oh, and have a good vacation! I like vacations. Vacations are awesome. So enjoy your vacation!

When a person dies their cognition stops, right? That means that cognition isn’t an inherent property of the mass of the brain simply existing and being in that configuration, because a person can die and leave the brain intact. This means that cognition, like life, is something something does, not something it is.

Which means the medium doesn’t matter. Only the behavior matters - and simulations can perfectly emulate behavior.

As for which aspects of the brain’s behaviors cause consciousness, the whole point of emulating the whole freaking brain at the submolecular level is so we don’t have to know that. Just as one speaks of throwing out the baby with the bathwater, whole-brain emulation is recreating the entire bath just to make sure you get the baby. If we actually knew which operations created consciousness we could do it way easier and more efficiently; replicating the whole brain’s behavior is the brute force approach.
Oh, and do you wanna hear my silly theory of the day? Execution loops cause consciousness. All execution loops cause consciousness. Every computer program you’ve ever run creates and destroys one, dozens, or millions of threads of consciousness. It’s a slaughter!

Of course, most of these threads of consciousness aren’t given access to much in the way of memory, inputs, internal state - memories, senses, or thoughts. Terminating such a consciousness would be terminating something less than a bug, and the termination of course wouldn’t inspire anything analogous to pain either.

Of course there are people who think there are moral implications to terminating consciousness. But I had a slice of ham with dinner last night, so I clearly have no problems with consciousnesses being created and subsequently terminated entirely for my own personal benefit. So running and terminating computer programs is no problem for me!

Yeah - simulating is probably easier than physically assembling a real thing at the submolecular level.

Isn’t this what Chalmers’ theories boil down to in the end?

I still haven’t seen a convincing argument that uploading a brain is impossible. I also haven’t seen anything convincing me it will happen anytime soon as we really seem to suck at this stuff so far.

I haven’t read David Chalmers, but per his wiki page he seems to be a proponent of philosophical zombies, which would seem to be antithetical to the idea. It’s not even slightly conceivable that a zombie could function without continuously processing and reacting to its input, which would require an execution loop by definition. If literally every execution loop causes consciousness, then by definition there can be no such thing as a philosophical zombie.

Also, as a side note, the wiki page claims that his argument for the possibility of philosophical zombies is that because they’re conceivable they must be logically possible, which may be the stupidest thing I’ve ever read. I do hope he’s being misrepresented because that makes him sound like an idiot.

Most of the counterarguments seem to boil down to beliefs that cognition is literally magic, and thus isn’t a behavior that can be replicated.

That’s not his argument at all.

His argument is that there is no clear link that can be made between computation and consciousness because computation is the manipulation of symbols independent of their interpretation or meaning, and there can be multiple interpretations for any computation.

If there is no clear link, then you can’t state (or prove) that computation alone is sufficient to create consciousness.
He shows this with a box example that requires an interpreter to determine which function is being performed, and extends it to the human brain by pointing out that the thing doing the interpretation for consciousness must be external to the computation itself (per the definition of computation). Which means it’s not really just the computation that is responsible for consciousness; it also requires something to do some interpretation.

In terms of the materialist position, though, would localization be a requirement? I mean, that is the nature of consciousness with which we are intimately familiar, but, if there is the possibility of machine consciousness, why should we assume that it would assume a familiar form? Given that computing machines have rather different, perhaps more efficient methods of communication, one might guess that self-awareness could emerge, assuming emergence is how it develops, as a more diffuse, less singular property.

What if it were to emerge but we were not equipped to recognize it? And if machine consciousness is only compatible with a non-localized presence, would that ultimately make it impossible for us to perform transfers of our own consciousness to and from storage due to compatibility issues?

His argument is indeed that by separating the computation device from the interpreter you can introduce ambiguity. From this ambiguity (introduced by a separate observer) you can somehow produce an internal contradiction somewhere that disproves something - according to his argument.

In actual fact, of course, when you have a computation observing itself (which is what we’re talking about with brains), you do certainly have parts of the system interpreting the output of other parts. This is present in literally every computer program ever. The part where his argument collapses into rank, obvious stupidity is where he thinks that this need to establish interpretations causes a form of circularity that’s any kind of logical problem.

I mean, yes, from one point of view it’s ‘circular’ - the error checker expects 1 to mean ‘error’ because that’s what the function outputs, and the function outputs 1s for errors because it expects error handlers to look for 1s and treat them as errors. However this form of circularity isn’t “turtles all the way down” circularity, it’s “somebody has to pick something, we don’t care who does, and everyone else will go along with it” ‘circularity’.

It’s actually pretty analogous to when people get together to hang out and start playing the “What do you want to do?” “I don’t know, what do you want to do?” “I don’t know, what do you want to do?” game. Eventually somebody picks something and everyone goes forward, and if everyone really doesn’t care, the selection will be arbitrary - and then afterwards will be consistent for all the persons involved. The selection of interpretations to use about symbols in a system is arbitrary, but will be consistent thereafter. This allows meaning to be transferred.

If his argument held water, then the universe would implode or something when people played the “I don’t know, what do you want to do” game.

Or when two people chose to speak to one another in English as opposed to Spanish.

Or when a dog owner trained their dog that the spoken phrase “sit” is an instruction for the dog to sit.

Or when he ran any computer program ever, including whichever one he posts his comments through.

Honestly, the most annoying part about his argument is the way the disproofs of it are ubiquitous and yet he hews to it so strongly. Well, that and how he seems to misunderstand half of what I say.

I don’t see why localization would be a requirement - though all the disparate parts are going to have to be in near-continuous, real-time communication with one another to keep doing their jobs as parts of the consciousness, creators of thoughts, holders of emotions, triggerers of reactions, and so on.

Well, my silly theory of the (yester)day was that consciousness was emerging all over the place and we’re not recognizing it, so yeah. Also it seems highly probable that after true machine consciousness is demonstrated there will be a cadre of theists insisting they’re all philosophical zombies, because theism says souls are magic and special. So yeah - identifying these puppies and convincing people that they shouldn’t be murdered for fun (“I have to stop playing Halo?”) is going to be a challenge.

However, I don’t think it follows that a consciousness could only be created as a distributed entity - in all cases you could theoretically take all the disparate processors and memory stores and put them all in one room and call it non-distributed. (Unless the notion is that the hardware and memory requirements are just too big to put in one place, which I find unlikely. We have some pretty big server farms.)

As for compatibility issues, if all our machine consciousnesses have come into existence via random emergence it’s less compatibility issues and more that we still don’t know how to make a machine consciousness intentionally. Uploading minds definitely requires us to know how to do it on purpose.

Beyond that, though, I find it difficult to accept that there might be aspects to the human experience that are impossible to closely replicate in a machine mind once we’ve figured out how to make machine minds. Because that’s all these things are, of course - close approximations. They’re copies, not relocations of the original, and it’s not actually necessary for them to function the same ‘under the hood’ as humans do (though that’s the most brute-force way to do it). If you created a ‘The Sims’ character that had access to the full suite of memories, knowledge, opinions, and tendencies, and was also self-aware and believed themself to be me, that’s basically what you’re going to get out of brain uploading. The Sim would see a whole new simulated world (full of deadly ladderless swimming pools), but they would remember being me and believe they were me.

He didn’t separate the computation from the interpreter; that separation is the very basis of the definition. Even wolfpup’s favorite guy Fodor says the same thing: computation is symbol manipulation independent of the interpretation or semantic properties of the symbols involved.

In other words, if you have a system that has symbols (for example 1’s and 0’s), and some machinery that performs a sequence of operations on them (like a Turing machine), that is computation.

From the perspective of the academics that spend their time working on these ideas, “computation” is just that symbol manipulation process without any regard to what specifically us humans might consider to be the meaning.

Here’s an example:
Input: (1,0)

Steps to process the input:
If input=(0,0) then output=(1,0)
If input=(0,1) then output=(0,0)
If input=(1,0) then output=(0,0)
If input=(1,1) then output=(1,1)

Output: (0,0)

That is an example of a computation. Nowhere in that example did I explain why I wrote that computation, what the symbols represent, what is the meaning (to humans) of the computation and how it relates to anything. It’s just pure computation, devoid of interpretation or meaning (like computation is by definition).
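
If it helps, here is the same example in runnable Python form - still nothing but symbols going in and symbols coming out, with no meaning attached anywhere:

# Pure symbol manipulation: a lookup from input pairs to output pairs.
RULES = {
    (0, 0): (1, 0),
    (0, 1): (0, 0),
    (1, 0): (0, 0),
    (1, 1): (1, 1),
}

def compute(pair):
    return RULES[pair]

print(compute((1, 0)))   # (0, 0)

Whether those pairs “mean” truth values, switch positions, or nothing at all is entirely outside the program.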

The problem that HMHW is pointing out comes up when someone says “ya, sure, we can easily create consciousness on a computer, all we need to do is do the RIGHT computations.”

If your computer is based on symbols of 1’s and 0’s and has some memory and a processor, then that sentence is effectively saying “there are certain sequences of 1’s and 0’s in the machine’s memory or over time that create consciousness.”

Which results in the pretty obvious responses:
1 - Which sequences?

2 - How do you figure out which sets of 1’s and 0’s cause consciousness and which ones don’t?

3 - Given that any sequence of 1’s and 0’s could be interpreted in many different ways, are they all conscious, or only the sets of 1’s and 0’s that we are currently looking at and stating to ourselves “I interpret this sequence as a brain simulation and not the simulation of a tornado”?

From the standpoint of a dude who actually works with these “calculations”, you don’t do a calculation for no reason. They’re not abstract works of art; they’re implemented with a purpose. And yes indeedy, that purpose means that there is an intent to their output. An intended interpretation, if you will.

This is especially true when you talk about a calculation that’s part of a system that actually is purported to do something, like a brain or an artificial intelligence.

Sounds to me that this argument is sort of like saying “You can’t actually be sure that cars have wheels on them, because the wheels are removable as demonstrated by cars up on blocks in people’s yards. Therefore when you look at the cars driving down the highway there is no solid reason to believe that wheels are present.”

I feel it should be pointed out that “We don’t know how to do it” is not a proof of “It’s impossible to do”.

And, as I’ve obliquely hinted at once or twice, the whole reason to brute force the problem by emulating the whole frikking brain is because if we do that we don’t have to figure out how it works. Physical brains work; if you exactly duplicate their functionality with no errors or omissions then your copy will work too. The only reason your copy could possibly fail to work is if you failed to perfectly replicate some part of the functionality. In the discussion at hand that claimed missing part appears to be a theorized magical soul imparted by matter itself. For some odd reason I’m not buying that.

The thing to remember is that if one actually did have a simulated consciousness, then it’s going to be doing the interpretation of its own internal data the way it wants to. It doesn’t matter in the slightest if persons on the outside are unable to figure out how to read the data and follow what the consciousness is thinking, or even if they’re unable to determine that consciousness is going on, because the consciousness itself is carrying on and doesn’t care about outside opinions.

This is exactly equivalent to how if you were to get a printout of your computer’s memory while it was running your browser you’d be hard pressed to deduce that it even was running a browser. And yet the browser runs just fine, because while all the 1s and 0s are meaningless to you, they’re not meaningless to it - the program code is meaningful to the processor, and the stack and stored data are meaningful to the program code. The fact it’s meaningless to you doesn’t matter.

The only things that have to recognize the workings of a mind as being workings of a mind are the other workings within the same mind. Everyone else’s opinions and interpretations are utterly irrelevant.

I have some equivalent questions.

  1. Which sequences of 1s and 0s caused Watson to answer Jeopardy questions better than Ken Jennings or Brad Rutter?

  2. How do you figure out which sets of 1s and 0s produced the best answers?

  3. Given that any sequence of 1s and 0s could be interpreted in many different ways, are they all striving to produce really good Jeopardy answers, or only the sets of 1s and 0s that we are currently looking at and stating to ourselves “I interpret this sequence as a very good question-answerer and not the simulation of a tornado”?

The thing is, you would never be able to locate such a sequence of 1s and 0s in the Watson hardware, and neither would anyone else. This is in part because of how massively distributed and complex it is, physically running on 2,880 POWER7 processor threads and 16 terabytes of RAM, and logically composed of a dozen or so major logical functions each comprised of thousands of distinct software components. It’s also in part because no one of those software components, nor any distinctly identifiable hardware component, is the “location” of this skill. It’s the synergistic result of all of them working together, sometimes in sequence, sometimes in massive parallelism. And this is still a very simple system compared to the brain.

The moral of the story is that computational complexity gives rise to qualitative changes, otherwise known as emergent properties, and those properties are neither necessarily localized nor necessarily identifiable in the lower-level components – they may exist only in the aggregate of the states and connections of the integrated system.