Downloading Your Consciousness Just Before Death.

Yeah, but that’s quite obviously not what I meant. Rather, nothing in my argument entails that the ordinary elements of daily life—birds, trees, buildings and the like—depend on whether, and how, somebody interprets them. Only whether, say, that particular bird on the second wire represents ‘g’ does—as that’s an arbitrary interpretational gloss on the physical facts, just as whether a given voltage represents ‘1’ or ‘0’ is.

Sure. As long as you’re doing something computable, it doesn’t matter what computer you’re using (up to considerations of efficiency, and maybe complexity class, as in the difference between quantum and classical computers).

But of course, the claim in IIT is that consciousness isn’t due to computation, but rather, due to integrated information. So Turing equivalence has precisely no bearing on that. Computers may be conscious—but only if they show a sufficiently high degree of information integration. Computers as we build them today don’t.

Well, to say you’re arguing that would be something of an overstatement, but you’re certainly claiming it.

This depends on the premise that anything that’s simulated has the properties of the thing it is simulating—but that’s exactly the issue under discussion, so you can’t assume it like that. So for instance, a simulation of a brain on an ordinary present-day computer still wouldn’t have a high degree of integrated information, just as a simulation of a black hole on a present-day computer wouldn’t have a high amount of mass—unless, of course, you’re buying into the notion that it somehow has that ‘in the simulation’, whatever that may mean. But again, that’s the very issue under discussion.

OK. Let me try to put this more simply. Consider a pendulum—or two coupled pendulums, to make it a little less trivial. Then, make a movie of that. I think we’re in agreement that playing that movie, and setting the pendulums to oscillate, are two different things—the former is at best a kind of description of the latter.

Now, compress that movie, using your favorite codec. What’s typically gonna happen is that the algorithm will identify key frames, and encode successive frames as simply changes from those key frames. Thus, when you play back the movie now, what happens is that later frames will be computed, using certain rules, from earlier ones.
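Just to make the key-frame idea concrete, here’s a minimal sketch of that sort of delta encoding (Python, with toy 8x8 ‘frames’; the names and sizes are purely illustrative, not any real codec):

[code]
import numpy as np

def delta_encode(frames):
    # store the first frame in full (the 'key frame'), and every later
    # frame only as its pixel-wise difference from the previous one
    key = frames[0]
    deltas = [frames[i] - frames[i - 1] for i in range(1, len(frames))]
    return key, deltas

def delta_decode(key, deltas):
    # playback: later frames are computed from earlier ones plus the deltas
    frames = [key]
    for d in deltas:
        frames.append(frames[-1] + d)
    return frames

# toy 'movie': ten 8x8 greyscale frames
movie = [np.random.randint(0, 256, (8, 8)) for _ in range(10)]
key, deltas = delta_encode(movie)
restored = delta_decode(key, deltas)
assert all(np.array_equal(a, b) for a, b in zip(movie, restored))
[/code]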

Let’s take this a step further. To compress that movie further, use some general-purpose machine-learning suite like Eureqa. This will increase the compression, essentially by figuring out the laws according to which the system operates. Fundamentally, this is the same thing the earlier compression algorithm did: the ‘key frame’ is what physicists call an ‘initial condition’, and the movie will be generated from that using ‘laws of physics’—essentially, best-fit formulas for the data points given by the pendulum.
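As a toy stand-in for what such a suite does, one might fit a candidate ‘law’ to the pendulum’s data points and then regenerate the frames from the fitted parameters plus that law. A minimal sketch (Python with scipy; the damped-cosine form and all numbers are made up for illustration):

[code]
import numpy as np
from scipy.optimize import curve_fit

# stand-in for the 'movie': the pendulum's angle sampled at fixed time steps
t = np.linspace(0, 10, 500)
data = 0.3 * np.cos(2.1 * t) * np.exp(-0.05 * t)

def law(t, amp, omega, damping):
    # candidate 'law of physics': a damped oscillation
    return amp * np.cos(omega * t) * np.exp(-damping * t)

# the 'compressed movie' is now just a handful of fitted parameters...
params, _ = curve_fit(law, t, data, p0=[0.5, 2.0, 0.1])
print(params)  # roughly [0.3, 2.1, 0.05]

# ...and playback is re-computing the frames from those parameters
reconstructed = law(t, *params)
[/code]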

The latter will be, essentially, a simulation of the pendulum. Now, you claim that simulations somehow call into being, by some magic not further described, the thing being simulated—which is rather like a map, once drawn, calling into being the land it shows, but no matter. So in the continuum from movie, through successively compressed versions, to a full-on simulation, when does that happen? Does the simulated pendulum spring into existence fully formed once a certain threshold is exceeded, or does it gradually gain existence the higher the fidelity of the simulation becomes?

And what about the analogous case of a movie of the activity of a human brain? When does the movie become a simulation—when does anything in it start to feel?

Quite the opposite—my argument is based on f, f’, and everything else you could claim that the box computes being computations in exactly the same sense. Of course, I would not consider the mere physical evolution of a system a computation, on account of it making the notion of computation trivial and robbing it of any meaning whatsoever, but if that’s what you feel forced to accept to defend your position, then I guess we each must choose what hill to die on.

Jeez, again? Could you not, like, just bookmark that post? But fine, here it is once more:

The answer is, of course, that what the interpreter does is not just computation, and hence, every attempt you make to reduce it to computation—to computationally ‘interpret’ the box’s outputs—is just going to fail. That’s exactly the regress issue I’m highlighting here.

I mean, really, the crucial point is that if f can be computed in the same sense as you use ‘computation’ when you speak about what my box computes (which is really just its physical evolution—but no matter), then you ought to be able to produce a system that computes f in just the same way. You can’t, though; and you can either explicitly acknowledge that, or implicitly, by continually failing to address the point. Either way is fine with me, really.

Interpretation is what connects symbols with their meaning. If I read ‘gift’ as meaning ‘poison’, I interpret it one way; if you read it as meaning ‘present’, you interpret it another. The computation that a system performs is defined by what the symbolic vehicles it manipulates mean; thus, what computation is being performed depends on how it is interpreted.

It didn’t, though. It flashes a light if some other light flashes. The latter is only an interpretation of the former if it is suitably interpreted in turn—‘light on’ doesn’t mean ‘odd’, except if it is interpreted as meaning ‘odd’; it could as well mean ‘even’, if we interpret the box’s lights such that ‘on’ means ‘0’ and ‘off’ means ‘1’, thus making e.g. (on, off, on) into ‘2’, hence even.
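Spelled out as a toy example (Python; the light states and encodings are just illustrative):

[code]
lights = ('on', 'off', 'on')  # the box's physical output, nothing more

def as_number(states, on_means):
    # an interpretation: read the lamps as a binary number,
    # given what 'on' is taken to stand for
    bits = ''.join(str(on_means if s == 'on' else 1 - on_means) for s in states)
    return int(bits, 2)

print(as_number(lights, on_means=1))  # 5: 'odd' under one interpretation
print(as_number(lights, on_means=0))  # 2: 'even' under the other
[/code]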

But, to summarize: what you call ‘computation’ is nothing but the physical evolution of a system. On that definition, any system computes, and what it computes is just the sequence of states it traverses. You claim that computation is all there is, and thus, there should be a system that computes my f in just that sense, and which can’t be taken to compute anything else. So the work you have to do to demonstrate your claim is clear: show a system that implements that particular computation.

Your argument also entails the notion that the box or something about the box somehow cares about the “arbitrary interpretational gloss” of the observer, which is silly. The box doesn’t care about that any more than the bird does, and the simulation inside the box doesn’t care about that either.

One large error that your argument entails, really, is that you seem to think that the outside interpreter matters. It doesn’t. What matters is internal interpretation, which is not subject to interpretational wishy-washiness because the interpretation the system uses regarding its internal data is fixed and inherent within the system.

If a simulation is simulating cars being smashed into walls, an observer blithely declaring that the data is a cake recipe isn’t going to stop the data simulating the cars from being permuted in a way that the system interprets as collision damage.

Based on my readings of it, you can simulate a quantum computer on a conventional one. Shocking!

…It would just require way more memory and be slow as dirt, which defeats the entire point. Quantum computers are faster, but (as best I can tell) they are not capable of doing anything that a conventional computer can’t…supposing the conventional computer has all the time and memory it wants.
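For what it’s worth, here’s roughly what the brute-force approach looks like, and why the memory cost explodes (Python; a toy three-qubit example, nothing like a real simulator):

[code]
import numpy as np

# brute-force state-vector simulation: the whole quantum state is just
# 2**n complex amplitudes sitting in classical memory
n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                # start in |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I = np.eye(2)

# apply H to the first qubit by building the full 8x8 operator H (x) I (x) I
op = np.kron(np.kron(H, I), I)
state = op @ state
print(state)  # amplitude 1/sqrt(2) on |000> and |100>

# the catch: memory scales as 2**n; at 50 qubits that's
# 2**50 amplitudes * 16 bytes, around 18 petabytes, just to hold the state
[/code]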

It’s one thing to say that we probably won’t ever assemble a conventional computer fast and vast enough to simulate the Matrix. It’s quite another to say it can’t be done even with a theoretically infinitely vast and fast computer. The first statement is possibly true (and possibly false). The second statement is wrong, and presumes that magic happens when you arrange neurons or atoms in a particular way.

Dude, you’re on vacation. Calm down. De-stress. Don’t embarrass yourself with desperate-sounding ad-hominems.

Actually it depends on people having a damn clue what simulations do, what cognition is, and the fact that the only thing that has to recognize that the cognition is a cognition is the cognition itself. (Cogito ergo sum, and all that.)

I’ll first note that you can’t do this with the average movie because the cuts alone would result in discontinuities in your laws of physics that would make the movie uncompressible via the simulation approach you describe. You probably realize this, thus you swap in the pendulum movie instead.

You probably also realize that both the average movie and your pendulum example include insufficient information to “fully form” anything, because they only show one side of the thing at a time. If the camera is fixed you’ll only develop rules that spawn one side of them. If it’s moving you might get a full 3-D model of them - or you might get something wholly different that only looks like them from one angle (the angle of the original camera).

But even putting all this aside and presuming that the analysis program miraculously manages to infer the pendulum and laws of existence from the video clip, then it becomes a simulation when you use a simulator to run that state and those laws back. (You can call it an uncompressor if you like, but whatever.)

As for your “springing fully formed”, the pendulum will be a pendulum in the simulation at the point where the “compressor” deduces sufficient state and laws of physics to recreate the behavior of the original pendulum in the simulation. It’s worth noting that this is not a point on a continuum - there are myriad other ways to recreate the images, but only when the miracle occurs and you’re emulating the actual real-world mass and physics will your emulator be generating all the same emergent properties as the original. Emulations that are emulating, say, a sheet of points changing color based on various rules to recreate the original images will not have the same side effects, and there’s not a continuum from one emulation to the other.

(And of course the simulated pendulum never springs out of the simulation into the real world, because that’s not how simulations work.)

Filming a brain with your maximum-resolution x-ray 3-d camera, are you? And your “compressor” is magically creating an emulation of the real world physics and interactions, rather than some other unrelated thing that produces the same images?

When that happens, then you can be pretty sure it’s feeling. Because congratulations, you’ve made a working simulation of the brain.

Not the opposite because that’s not what I said. Your argument relies on there being a separation between the calculation and the external interpretation of said calculation - and the external interpretation being both variable and that variable external interpretation somehow mattering.

No, that can’t be it, because by that definition the box does calculate f. And f’. Simultaneously.

And of course if the box’s internal workings are relying on it calculating f, then the fact it also calculates f’ is an irrelevant curiosity of no import.

Of course the box computes f, you said so yourself. And you can claim that the interpretation is magic, but it’s an absurd claim worthy only of dismissal.

And, again, computational processes interpret the output of other computational processes all the time.

Yep. This is not a problem for computational systems. They interpret symbols all the time.

Actually it stored the value, not flashed a light, but that’s neither here nor there. It does do an interpretation, and your baseless naysaying isn’t impressive. The fact that it uses a mechanism to do its interpretations doesn’t make them not interpretations, any more than the fact that you have mechanisms in your brain refutes the idea that you interpret things.

Your baseless naysaying is exactly equivalent to a solipsist denying that the world exists, despite it demonstrating it does by its presence.

Bolding mine: this statement is stupid. The fact that to a layman your brain looks like an inert grey lump doesn’t mean that’s the only thing it exists as. The fact that a doctor can look at a chart of your brain activity and fail to interpret any part of it as proving consciousness doesn’t prove you’re a philosophical zombie.

So, ignoring that bit of rank stupidity, I have shown a system that implements a particular computation, and so have you. The fact that the boxes in question can also be interpreted as paperweights and doorstops has no impact on their ability to calculate f.

Okay, because it’s fun to do, I’ve come up with another way to approach this debate.

I’m conceding!

I concede that HMHW’s argument does indeed prove that the mechanisms within a box with lights and switches on it cannot cause or sustain consciousness or self awareness because observers can flip switches on the box and observe the box and come to different conclusions about the meaning of the lights on the box, and that thanks to all that there’s some kind of regression or infinite loop or whatever that proves that the mechanism within the box cannot cause or sustain consciousness, no matter what the mechanism inside the box is.

By the way, here’s a possible mechanism for the inside of the box.
Every switch on the outside surface of the box directly controls a single light on the inside surface of the box.
Every light on the outside of the box is directly controlled by a single switch on the inside surface of the box.
There’s a human being inside the box. (It’s a big box.) When she sees a sequence of lights come on on the inside surface of the box, she consults her human brain and decides which switches to flip on the inside of the box. This lights up the lights on the outside of the box. It doesn’t really matter what the intent of the inside human is - she could be thinking about f, f’, some other function, a simple lookup table, or even just randomly flipping the switches in a way that just coincidentally happens to cause the specified result. In any case, a human brain is the mechanism inside the box that determines which lights on the outside of the box are lit.

Because I have conceded and accepted HMHW’s argument, I now accept that it has been proven that the human brain cannot cause or sustain consciousness.

Honestly I didn’t know that human brains were incapable of that, and it really shapes my outlook on how I should interact with humans in the future. It certainly opens up a number of interesting possibilities…

In any case, thank you all for your time and have a great Sunday.

OK, I’d really appreciate if you could tell me how you got that from my argument, because there is just nothing about that that’s right, or even anywhere close to anything I’ve said. So it seems I must’ve expressed myself more ambiguously than I thought I had; if you could point out where, I’d be grateful.

Anyway, as you also seem to realize later on when you argue that the box, in fact, computes all the functions one could interpret it to compute, this is of course not the case: it’s actually the whole point of the argument that nothing in the box cares about how it is interpreted, that it’s physically undisturbed, and still able to be interpreted in all these different ways. Hence, its physical evolution does not fix a unique computational interpretation.

Except, of course, there’s no such thing as ‘internal’ interpretation. A light coming on in response to other lights doing so does not interpret those lights; it’s merely a further link in the chain of physical causality. It’s only once that further link gets interpreted—in whatever way—that one could talk, sloppily, about that light ‘interpreting’ the others.

The system doesn’t interpret anything; it gives out numbers, or graphics, then to be interpreted by the user of that system. These numbers/graphics/whatever do not carry an intrinsic meaning any more than the lights of my box do.

Of course you can, but, at least as far as is currently conjectured, quantum computers systematically outperform classical ones at least for certain problems. That’s what I meant by a difference in complexity class.

No. It merely presumes that not everything about physical systems is simulable—which, simulations just being a kind of generalized description, simply coincides with the claim that a description of a thing doesn’t have all the properties of that thing itself (even though everything may be describable). In other words: map != territory.

Thanks for your concern, but you got this the wrong way around—this is what I do to de-stress! Hence, I’m here less while on vacation.

As for ad hominems, I just mean stuff like the prior passage I quoted—you blithely assert that it’s wrong that you couldn’t simulate the Matrix, without even a pretend argument.

For the pendulum, at least, the observation of a single side will be enough to generate all the data needed. Even in more general cases, modern neural network approaches can guess at the 3D structure given a 2D image with high reliability.

I call it ‘codec’, because it’s just that bit of code that decompresses my movie for viewing.

Well, but that’s the question—how much is sufficient? What happens in the intermediate cases? Does being a bit closer to the real deal decide between its simulated existence and a mere movie? Is there a bound on the Hamming distance between the two?

You defined computation as follows:

That is, exactly, just the physics of the system. Just the sequence of states the system traverses.

No. By that definition—just read it!—the box computes what it is used to compute. Thus, when I use it to compute f, it computes f, and not f’; when I use it to compute f’, it computes f’, and not f.

In what way could the ‘internal workings’ rely on it calculating f? Can you give an example, or an argument, here?

The only magic in this discussion is the claim that simulating something—manipulating symbols—conjures it into being. This is, in fact, quite literally magical thinking: the belief that the right phrase, uttered in the right sort of way, has direct bearing on reality.

Well then, it’s easy! Just give an example where there is no further interpretation of the interpreting mechanism necessary. I’ll wait!

Just repeating something with emphasis doesn’t make it a better argument in any way. Without interpretation that a light, or stored bit, or whatever, means ‘even’ there is no sense in which your modification interpreted the lights as showing a number that’s even.

That’s entirely beside the point. The point is that if there’s no unique computation that can be associated to a system, then there also isn’t a unique mind—if mind is computation. So, in order for computationalism to have any plausibility, physical systems need to uniquely implement certain computations. There needs to be a fact of the matter that a given computation C[sub]M[/sub] gives rise to a mind M, and is performed by a physical system S. If S at the same time gives rise to different computations, there will in general likewise be different minds (anything else would require basically a cosmic conspiracy), but I don’t think you’d want to claim that your brain gives rise to other minds than yours.

Furthermore, this sort of move away from your earlier definition of computation—where it was just the deterministic operations inside the box—to a kind of computational pluralism where the box computes anything it can be taken to compute doesn’t actually help you: I still use the box to compute f, singling out one of the possibilities. If I (that is, my mind) am computational, then it should be possible to single f out computationally. Hence, there should be a physical system computing f—and only that. Consequently, my challenge still stands unmet.

Are you actually serious? Do you really believe that my argument would entail that the human in the box wouldn’t be conscious? Because if you do, then you still haven’t got the first clue what it is I’m actually talking about.

The box, with the human inside, could still be taken to implement all the computations the original box did. This has no implications whatsoever on the consciousness of the human inside it: after all, consciousness isn’t computational, and thus, all my interpreting does fuck all for it! It would only be a problem if I held that consciousness were computational, and furthermore, that which computation a system is taken to implement at any given time is the ‘objectively right’ computation to associate with that system. Neither of which I do think is the case.

This is the sticking point. You seem to be claiming that a computer carries out a vast number of alternative computations at the same time as the single one I am interested in, and nobody can tell the difference. But when I type FISH, then FISH appears on the screen. What is this? Some bizarre cosmic coincidence?

Or is it, perchance, that the program that causes FISH to appear is privileged, because it is the function that the computer and its program were designed to do. Just as your mind/brain/body system was designed (by natural evolution) to be conscious. The other untold millions of program possibilities you describe may very well occur, but they do not affect the output, which still says ‘FISH’. They are associated, to use your term. Do you deny this?

If you can’t answer that (and don’t say that you have, because you have not) then I too give up. I think you are using a definition of computation specifically chosen to exclude the possibility that consciousness is computational. Well done. I am more interested in the type of processing that does create a mind, not the closely restricted type of processing that you appear to be talking about, which (according to you) cannot.

It’s not a continuum, so these questions don’t make sense. I’m reminded of those simple puzzles you see in computer games sometimes, where you’re supposed to rotate blocks until the lines running across the blocks form a contiguous line from one point to another. There is no “almost right” that “almost works”. It either works or it doesn’t. Your system is either engaging in behaviors that generate consciousness, or it’s not.

And here we highlight how your argument is entirely and completely dependent on the absurd and fallacious misuse/abuse of the term “compute”.

What the box does doesn’t depend on the observer, and the notion that the box ceases to compute f when the observer elects to walk away and go to the bathroom is ridiculous. The computational process in the box is doing the same thing no matter who is watching. The observer observes whatever he wants to and interprets it however he wants to, but that doesn’t change the box. Calling the interpretation “computation” is nonsense - and it’s nonsense that the whole of your argument is dependent on.

You do realize that the average computer has more than one component in it, right? More than one chip, more than one board? Video cards are slotted into motherboards, and you can yank one video card and put in another? Heard of that? How about the communication protocols that the computer I’m typing on is expected to comply with in order for the internet routers to be able to communicate with the SDMB servers, which interpret the messages they get in a way that tells them to store posts, after which they interpret the signals from your device to determine which posts to send to your device, which then interprets what it gets from the server in order to show it in your browser? Do you believe these things are happening?

Computational processes interface with other computational processes literally all the time, and these interactions all rely on the one calculation doing things the way the other calculation expects it to, and on each interpreting the other’s outputs in a way that assumes the other is doing what it’s supposed to. Sure, you’re pretending this doesn’t qualify as “interpretation” because your silly argument relies on pretending this doesn’t happen. But the fact we’re communicating right now is utterly reliant upon the fact that it does.
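A toy version of the kind of fixed, spec-driven interpretation I’m talking about (Python; the four-byte length-prefix ‘protocol’ is made up, but it’s the same idea as any real wire format):

[code]
import struct

# the 'specification' both sides agree on:
# a 4-byte big-endian length, followed by that many UTF-8 bytes
def send(text):
    payload = text.encode('utf-8')
    return struct.pack('>I', len(payload)) + payload   # the sender's encoding

def receive(raw):
    (length,) = struct.unpack('>I', raw[:4])            # the receiver's interpretation,
    return raw[4:4 + length].decode('utf-8')            # fixed by the agreed format

assert receive(send('FISH')) == 'FISH'
[/code]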

You keep claiming that the box needs to “uniquely” implement the so-called computation, but that’s unfounded and stupid. Let’s go wild and pretend for a moment that physical brains are conscious. But, but, but they can’t be conscious! Brains are actually devices that turn oxygenated blood into less-oxygenated blood! That’s what they do! Which means they can’t do anything else!

Sorry, no, that’s nonsense. If a system is carrying out the activity of generating a consciousness, then the fact it’s doing something else too doesn’t change the fact that it’s generating a consciousness. It’s not a problem.

But you do raise an interesting point about multiple consciousnesses being generated simultaneously if the operations within the system happen to be generating two or more distinct consciousness computations simultaneously. I can actually give one or two real-life examples of this happening!

Firstly…ever heard of the subconscious? Are you certain the subconscious isn’t conscious from its own perspective? It seems to handle things like sleepwalking and such pretty handily for something that’s mindless…maybe it’s not mindless.

But okay, sure, you’re going to deny that the subconscious is conscious and I certainly can’t prove otherwise. But how about this:

Consider the system of particle interactions called “a room with two people in it”. This physical system is a complex sea of particles all interacting with one another. And this sea of particle interactions is generating -gasp!- two consciousnesses! One sea of particle interactions, two consciousnesses!

Amazing, I know.

What do you imagine that your silly box argument proves, then? What does it say about the contents of the box? Anything? Nothing? Whatever you wish it to say at any given moment and then the opposite thing the next moment?

No. My claim is that there is no unique computation objectively associated with a given physical system, but that a system can be interpreted as performing distinct computations. Furthermore, whenever we think of a system as computing any function, then we are silently invoking a specific interpretation of that system.

First of all, I wouldn’t really call that a computation—it’s not different from you writing ‘FISH’ on a piece of paper, and you wouldn’t typically regard yourself as having computed anything by doing so. But that’s not the issue.

The point is that all you’ve done is produced a certain set of symbols (F, I, S, H). This is just what the system physically does; it’s its reaction to certain causal stimuli (you pushing buttons). It’s, to recall my example, just the lamps lighting up in response to switches being flipped.

The computation, however, is in what we take these symbols to mean—what they signify. We don’t compute lamps lighting up, we compute, say, sums; to do so, the lamps have to be interpreted as signifying numbers. Now, all I’ve done is point out that this is, in general, not uniquely possible: for a system you take to compute sums, I can construct another interpretation such that it computes some other function, which stands on just the same footing as yours. So, there is no objective fact of the matter regarding which one is what the system really computes.

To take your example, every ‘computation’ associated with the system will produce the symbols ‘FISH’, but we’re not interested in the symbols, but in what they mean—to somebody fluent in English, they might mean an aquatic animal, or the act of catching them; in another language, it might mean something else, and to somebody not speaking any of these languages, it will just be a meaningless string of symbols.

The same thing happens with the supposed computations associated with a system. You might think that ‘light on’ means ‘1’, and ‘light off’ means ‘0’, but in another language—on another interpretation—this need not be the case. So if the system computes sums in your ‘language’, it computes something else entirely in another.
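To put some flesh on that, here is a toy version of the situation (Python; the lamp/switch table and the encodings are purely illustrative, not my actual example functions). The same physical input-output table reads as addition under one interpretation and as a different function under the other:

[code]
# the box's 'physics': which lamps light for which switch positions,
# stated purely in terms of on/off -- no numbers anywhere in here
box = {
    ('off', 'off'): ('off', 'off'),
    ('off', 'on'):  ('off', 'on'),
    ('on', 'off'):  ('off', 'on'),
    ('on', 'on'):   ('on', 'off'),
}

def read(states, on_means):
    # an interpretation: take a tuple of lamps/switches to denote a binary number
    bits = ''.join(str(on_means if s == 'on' else 1 - on_means) for s in states)
    return int(bits, 2)

for switches, lamps in box.items():
    a1, b1 = (read((s,), 1) for s in switches)   # interpretation A: 'on' means 1
    a0, b0 = (read((s,), 0) for s in switches)   # interpretation B: 'on' means 0
    print(f"A: f({a1},{b1}) = {read(lamps, 1)}    B: f'({a0},{b0}) = {read(lamps, 0)}")

# under A, the table is addition, f(a,b) = a + b;
# under B, the very same table is f'(a,b) = a + b + 1 -- a different function
[/code]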

There is a favoured interpretation, though - the one which was incorporated into its design. All the other interpretations are garbage. Take the Voynich manuscript, for instance - I’ve read several bullshit interpretations of that text, but there is a favoured interpretation out there somewhere, namely the interpretation that the author intended. Even if that meaning is in fact just meaningless drivel, or even if we never decode it, there is an intended solution.

I can see that this problem of interpretation of symbols will make the decoding of ‘mentalese’ inside the brain a mind-bogglingly-vast problem, but in no way does it disprove the computation hypothesis. At the end of the day, the mind/brain/body system does manufacture a (mostly) coherent set of behaviours, rather than a series of random actions; this shows that there is a favoured interpretation, even if we never decode it.

It’s probably worth noting that if one defines computation as “there’s only one possible interpretation of the output and no other interpretations are even theoretically possible”, then computation is impossible. It never happens and cannot happen. There is nothing that cannot be interpreted multiple ways. “That means nothing to me” is an interpretation, after all.

So I handily accept that using that specific definition, computation cannot create consciousness – because “computation” can’t happen at all. However this conclusion says nothing about whether computers can create consciousnesses in simulation, because computers don’t do that specific sort of “computation” - nothing does that specific sort of “computation”.

Great. So when does it? What are the criteria? When does the simulation become simulation-y enough to count, to magically conjure up real stuff?

Then what happens when I use the box to compute f—to compute the sum of two inputs? What happens in that case that doesn’t happen when I use it to compute f’? How do I choose between these two interpretations? Is that choice computational? If so, what would a computer look like that only computes f?

After all, I, using the box, can exclusively compute f. When I do that, I compute the sum of two inputs; no consideration of f’ enters into it, and there’s a definite nature to what is being computed. If all that I’m doing is computation, as well, then there ought to be a definite way for some box to compute f, too.

Again, this lands you in a double bind. Either, what I’m doing using the box isn’t computation—then, there’s something that a mind can do that doesn’t boil down to computation, and computationalism is wrong. Or, what I’m doing using the box is computation—then, there ought to be a computer that only computes f, since I, using the box, only compute f.

Of course. None of that has any bearing on my argument, however. Adding more maps to the original map doesn’t make the whole thing any less of a map—and any less dependent on interpretation.

I’ve posed you a simple question, on which you however have nothing to offer but variously either calling things you don’t understand ‘silly’ or throwing up rhetorical chaff like the above. So can you, or can’t you actually demonstrate a process that interprets, say, my box’s lights, without itself needing interpretation?

Of course, your waffling makes it obvious that you can’t. I’m just wondering whether you’ll eventually cop to that, or are just trying to drag this thing out till I lose interest.

All that a further component reacts to is a voltage level. What that voltage level means, and what, consequently, is being computed, never enters into it—until, of course, you have somebody look at it and consider it, say, a sum of two inputs; or not, as the case may be.

No. The fact that we’re communicating—to the extent that we are—relies on the shared interpretation we imbue upon the symbols that the machines in front of us manipulate. The machines themselves have no knowledge of, interest in, or indeed, way of getting at this meaning; all they’re concerned with is voltage levels and what they cause them to do.

I’ve never claimed that uniqueness means to the exclusion of anything else. Think about other objective aspects of a physical system, such as its mass: that a system has mass m means that it uniquely has mass m, but that doesn’t mean that it can’t also have charge c. But if one can take it to have mass m[sub]1[/sub] as well as m[sub]2[/sub] and m[sub]3[/sub], then mass is just not an objective property of the system. It’s the same with computation.

At best, that’s a case of two parts of a system implementing two different minds—which is not problematic at all (obviously). In that case, distinct computations supervene upon distinct degrees of freedom.

The problem arises if distinct computations are connected to the same degrees of freedom. Then, like mass, above, computation simply fails to be an objective property of the system—different observers can validly disagree about what is being computed. Taken to the level of minds, that then means that different observers could validly disagree about what kind of mind is associated with a physical system, or whether any kind of mind is associated to it at all.

Alternatively, you might want to claim that simply all of the possible interpretations apply equally (although then, you still need to explain how it is possible to single one out, computationally). But this runs into the ‘Hinckfuss’s pail’ problem: a single bucket of water in the sun contains interactions as complex as anything a brain produces; there will be interpretations according to which it gives rise to the computation that produces a mind, as well as virtually any other computation. But then, the vast majority of minds will, in fact, be daydreaming puddles of water, rather than the sorts of beings they take themselves to be. This, to most, is an unacceptable consequence; although if you decide you can live with that, then yes, computationalism might be an option.

It says the same thing I’ve claimed from the start of this: that computation is not an objective property of a physical system, and depends on interpretation, which means it cannot act as a foundation for mind (and its interpretational faculties, in particular).

But how does ‘intention’ suffice to fix, say, the computation that produces your mind? Whose intention, at any rate? That of evolution? Evolution doesn’t see the symbolic contents of minds—it merely acts at the level of causes and effects, of button-pushes and lamp-lights.

And do you really want to claim that, if a system both supports an interpretation according to which it computes a mind, and one according to which it fails to do so, whether the system possesses a mind depends on the intentions of whoever put it together? How, actually, is that intention supposed to make its power felt? Computationally? Then, we just have the same problem once more. In any other way? Then, computationalism is already wrong.

Sure. Objective properties can’t be interpreted any other way. Mass, for instance. You can’t interpret something as having mass m[sub]1[/sub] when it in fact has mass m[sub]2[/sub]; if you claim the former, you are quite simply in error.

The claim of computationalism is that the computation a system performs is an objective fact about the system in just the same way—that’s the only sensible way to read it, as otherwise, whether the system has a mind or not, and what mind, would not be an objective property of it. You can be mistaken about whether a system has a mind, but it either does, or doesn’t. To make that possible, on computationalism, the system either must implement that computation, or not: there must be an objective fact of the matter. But there’s not, as it seems you’re beginning to realize. So computationalism is just wrong; chuck it into the bin of mistaken approaches to consciousness, and start anew.

Furthermore, I can use the box to compute sums. Uniquely. I’m not doing anything else while I compute sums. That’s just the same claim as saying that when you use your calculator to calculate something, you aren’t using it in any other way. If computationalism is right, then it must be able to explain this uniqueness. If computation is inherently relative, as you now seem to claim (which is, by the way, a far cry from it being the deterministic operations inside the box, which are completely, well, determinate and objective, and not subject to interpretation in the slightest), then it does not have the resources to do so.

The fact that you have no goddamn clue what simulations are or how they work is not my problem.

“Conjure up real stuff”. Good god.

(Suffice to say, “cognition” is not a “stuff” anyway.)

And as has been fucking noted dozens of fucking times, it’s extremely standard for computer components to interpret the output of other computer components in specific ways. This is how computers work. You are not doing a damn thing that computers don’t do.

Literally every computer on the planet interprets the output of some other part of itself in a specific way. There are trillions of calculating devices that are as exclusive as you are regarding interpretation aka “computation”.

I’m in no bind. You haven’t shown anything special about the mind here - the interpretation/computation you’re talking about is something computers do all the time.

All that’s happening in your head is voltage levels, and other assorted low-level physical phenomena. If being based in physics disqualifies the box’s interpretations from being interpretations, then you have disqualified your brain’s interpretations too.

Unless you’re citing magic, which of course you are, but I’m wondering whether you’ll eventually cop to that, or are just trying to drag this thing out till I lose interest.

I actually don’t have to accept that buckets of water can sustain consciousness, because consciousness is an ongoing process and there’s no reason to believe that the water in the bucket will maintain any ongoing process that could sustain consciousness without randomly deviating from the consciousness process basically instantly.

As for singling the consciousness “interpretation” out individually, the entire deal with consciousness is that I don’t have to - if consciousness is being generated it will be aware of it itself, because that’s what self-awareness is all about. If there are other, non-conscious ways to interpret the process those other ways don’t matter, because they won’t be self-aware.

The spinning wheels of a car aren’t an objective property of the car, but the fact that they move the car forward is not dependent on interpretation. Mechanical processes can have consequences that are not dependent upon outside observation.

Of course, this does apply specifically to things that are doing something. But that’s okay, because consciousness is behavior, not a static particle or whatever.

The interpretation that matters is the interpretation done by the self-aware mind itself, obviously. You do realize that self-aware minds are aware of themselves, right?

I strongly disagree that deterministic things can’t do interpretations, because I strongly believe that human minds are deterministic in every meaningful way. (I know what uncontrolled random perturbations do to operating processes. It’s not pretty, and nothing about human behavior, decision-making, or ‘interpretation’ resembles it.)

And honestly, the better analogue to consciousness is not whichever random interpretation you’re making of the illuminated lights, but rather which lights are illuminated. The lights are of course not inherently illuminated; whether they’re illuminated or not depends on the momentary state of the inside of the box. And if the innards of the box end up lighting the light or not, that’s what’s happening regardless of outside interpretation; if a car is spinning its wheels and pushing itself down a road then where it ends up is not a matter of outside interpretation; if a computer or physical brain is running a self-aware process, then that’s happening regardless of outside interpretation.

Well, it’s what your beliefs entail. Let’s recap.

A movie of my brain activity—say, in full 3D, down to whatever resolution you deem necessary—while I’m being kicked in the shin won’t create any feeling of pain anywhere upon being played back. A simulation, which reproduces that movie by computing later brain states from earlier ones, however, will.

Now, the latter can be obtained from the former by mere compression: finding patterns in the data, and using them in order to minimize hard disk space at the expense of having to do some computation. Consequently, the simulation is a (one might conjecture, maximally) compressed version of the movie. When one plays back the movie from the compressed version, the computation that takes place is exactly the same as what takes place if one simulates my brain as I’m being kicked in the shin.

But, between the two—the movie that doesn’t lead to any pain being felt, and the simulation that does—there exist various intermediate stages, of the movie being less than maximally compressed. Hence, my question: when, in these intermediate stages, does the feeling of pain arise? And what makes it arise?

Basically, I think this sort of thing is just an artifact of granting a computation properties it simply doesn’t have (and which we have no reason at all to suppose it should have). It would be much more consistent to simply view the movie and the simulation (complete with a given initial condition) as fundamentally the same kind of thing, namely, a description—a highly efficient, economical one in the case of the simulation, but a description nonetheless. And just as me describing somebody’s being in pain does not actually produce any pain being felt anywhere in the world, so neither does the simulation. Descriptions have no claim to having all the same properties of the things they describe; when I describe a planet circling its star, that doesn’t mean that the so-described planet feels the effect of any mass—it circles because I say it does, not because of Newton’s law of gravitation. It’s the same with a simulation.

Of course, there are rules that the simulation follows—rules on what can be said, given what has been said, so to speak. That’s just how it can come to be so efficient, so very highly compressed. But those rules don’t manifest in any simulated reality as its laws of physics, or what have you; they simply govern what is being described, and under which circumstances.

So there’s no reason at all to suppose that a simulated brain should be conscious. To claim it is, is exactly to claim that the right sort of signs, manipulated in the right way—the right sort of spell spoken with true conviction—produces something that possesses an independent reality. But no: those signs are just placeholders for whatever meaning we fill them with.

I have a feeling you’re having trouble with how this ‘debate’ thing works. Usually, you don’t simply state your opinion, and stomp your foot huffily for emphasis, so as to make damn sure that it’s clear that this is how it is, period, but rather, you’re expected to support your position by argument—which I’ve invited you again and again to do, and you simply refuse.

But, for clarity, and because I’m just such a damn affable chap, here’s again why the above is simply false: no computer ever has done anything because a bit in its memory register meant 1 or 0; whatever a computer does is dictated by whether it holds a charge, or not (or however else it is physically realized). A graphics card, to use your example, does not interpret the voltage pattern it is sent, and then produce intelligible output; the voltage pattern simply causes it to make certain pixels light up in a certain way. Only upon the pixels themselves being interpreted can we talk about the meaning of the bits in the register.

I have introduced a simple example to illustrate this sort of thing, as one quickly gets confused considering complicated systems such as real-world computers. You’re as always cordially invited to make use of that example system in your future arguments.

But, to make the point more concrete, consider the box’s ‘graphics card’: it’s whatever device is used to translate a voltage level into a lamp light—in the simplest case, a wire, but we could use a more complicated setup of a voltmeter, relays, and battery, perhaps. This takes a voltage level, and sets a light in response; this is exactly what a graphics card does: take some voltage levels, and produce pixel patterns in response.

Of course, there is no interpretation at all that’s happening in this case. It’s simply a causal connection: voltage-high —> lamp-on. Only upon interpreting the lamp’s light as, say, ‘1’, and perhaps the 2[sup]1[/sup]-bit of a binary number, does the voltage level likewise acquire an interpretation. Otherwise, no interpretation happens.

This shows that you still fundamentally misunderstand the argument I’m making. This would be true if I regarded the mind as being a computation; but I don’t. It’s just what the brain physically does, like the colon does digestion. The colon doesn’t compute the food, it digests it; the brain doesn’t compute a mind, it produces it.

That would still mean that it is conscious in this given instant (after all, you’re conscious right now, even if that ‘ongoing process’ just stops due to somebody thinking that shiny red button sends a tweet). And furthermore, the next instant of your conscious experience would still much more likely be one of a pail of water, or the plasma swirls in a star, or whatever else.

But without the interpretation, there just is no consciousness to self-interpret. The next level of the regress must be completed before the prior one—that’s what makes the regress vicious. We’ve arrived again at the omnipotent being magicking itself into existence.

The fact that its wheels are spinning relative to the ground, which is what creates the motive force, is very much an objective property of the car. There’s nothing that says objective properties can’t be relational—Alice’s being taller than Bob doesn’t preclude her from being shorter than Charlie, and both are perfectly objective matters of fact (that is, it would be wrong to say ‘Alice is taller than Charlie’).

Of course, having conscious experience be a relational property is going to introduce a whole new host of issues—such as a conscious entity only being conscious relative to certain other sets of affairs, and thus, by changing those, one could change whether the entity is conscious without changing its physical state. So I have my doubts that relational facts can support conscious experience, but you’re welcome to try and make an argument to that effect.

Well, without even going into the whole rat’s nest of issues that identifying consciousness with behavior is going to add to the matter (see: the downfall of behaviorism), this is at least problematic. I could very well be sat statically before a blue wall, experiencing nothing but that blueness, and still be conscious—without any processes thereby occurring.

The problem is that there is no self-aware mind, on computationalism, without the right interpretation; so if that mind itself were to supply that interpretation, then we have just the situation of the omnipotent being magicking itself into existence.

That’s not what I have said. I merely pointed out that you seem to have shifted your position from computation being the deterministic processes going on inside the box to it being interpretation-relative. That is, the deterministic processes themselves are not a matter of interpretation.

As for the claim that the human mind is deterministic, I think that’s unlikely: randomness, in communication theory, is a very valuable resource, and certain computations depend on random perturbations (like genetic algorithms). It’s very likely that the brain uses the thermodynamic fluctuations it’s going to experience anyway to its advantage—perhaps in a similar way to how the copying of genetic information uses it as a ‘ratchet’ to facilitate the process.
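Just to illustrate what I mean by randomness doing computational work, here’s a bare-bones genetic algorithm, where random mutation is what drives the search (Python; the all-ones fitness goal is a toy stand-in):

[code]
import random

def fitness(bits):
    return sum(bits)            # toy goal: evolve a string of all ones

def evolve(n_bits=20, pop_size=30, generations=100, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]           # selection
        children = [[1 - b if random.random() < mutation_rate else b for b in p]
                    for p in parents]           # random perturbation does the work
        pop = parents + children
    return max(pop, key=fitness)

print(fitness(evolve()))        # typically at or near 20
[/code]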

Possibly. But that’s of course not computationalism, then, which is what I’m arguing against, but rather, some form of identity theory (which is problematic for other reasons).

This is simply false though. The behavior of a system doesn’t have to be “interpreted” to be the behavior of a system.

If we have a computer controlling the gates on a dam to regulate the water level, that’s a particular behavior that the voltage is being turned into. Human behavior is the same thing.

Okay, first I reject the notion that you can in any realistic way compress a movie into an accurate simulation. That’s silly because the compression would have to generate data it doesn’t have based on information not available to it. Basically it’d be like when a TV show cop says “Enhance the image!” and four pixels become a detailed license plate. Yes you could have a “compressor” that randomly generates extra details and maybe comes up with the same rules that reality does, but the odds of that are about the same as they are of randomly adding pixels and getting the right license plate.
And secondly, simulations aren’t mere descriptions; they generate additional results from their rules. That’s pretty much the whole point. And the way they accomplish this is by genuinely recreating the actions and interactions within the simulated worlds.

I honestly despair of ever getting you to accept this, but this is how it works. Consider that simulation of the solar system that simulates mass and gravity and how mass and gravity cause the planets in the simulation to interact. You will say that the planets don’t have mass or generate gravity. I will say that the planets do have mass - it’s a number stored in a variable. And the simulated universe does have gravity; it’s a formula that plugs in the masses of things and changes their movement vector.

In the simulation the simulated planets are drawn together and fall into orbits just like the real things do. You call this a description of reality - but that’s utterly inaccurate, because nothing in the simulation depends on reality. You can throw an extra planet into the simulation and the simulated planets will react, deviating from the behavior of the real planets. The simulation describes nothing - the objects in the simulation are moving independently, based on gravity acting on their mass. You can wrap scare quotes around “gravity” and “mass” if you want due to them being implemented differently from how reality implements them, but within the simulation they exist and act on things and react to things just like things in the real world do. You can call the gravity rule just a description, but it has actual effects, which would play out differently if you changed the rule.
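Stripped to the bone, that kind of simulation is just this (Python; a toy two-body ‘sun and earth’ with rough real-world numbers, not any particular program):

[code]
import numpy as np

G = 6.674e-11   # gravitational constant

# each body: a mass stored in a variable, plus position and velocity vectors
bodies = [
    {'mass': 1.989e30, 'pos': np.array([0.0, 0.0]),      'vel': np.array([0.0, 0.0])},
    {'mass': 5.972e24, 'pos': np.array([1.496e11, 0.0]), 'vel': np.array([0.0, 29780.0])},
]

def step(bodies, dt):
    # the 'gravity rule': accelerate every body toward every other, then move them
    for b in bodies:
        acc = np.zeros(2)
        for other in bodies:
            if other is b:
                continue
            r = other['pos'] - b['pos']
            acc += G * other['mass'] * r / np.linalg.norm(r) ** 3
        b['vel'] = b['vel'] + acc * dt
    for b in bodies:
        b['pos'] = b['pos'] + b['vel'] * dt

for _ in range(365 * 24):        # a year of hour-long time slices
    step(bodies, dt=3600.0)
print(bodies[1]['pos'])          # the 'earth' ends up back near where it started,
                                 # having gone once around the 'sun'

# add a third body to the list and the others react to it -- the behavior
# comes from the rule acting on the stored masses, not from any recording
[/code]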

You can assert your rejection of this, but I won’t care - I know how simulations work, and you’re not likely to convince me otherwise with ignorant and inaccurate analogies to movies and compression and such.

Okay, see, I actually USE computers, and WRITE software, and I know for a rock-solid fact that when these things are created they are positively brimming with specific meaning. When a video card slot is designed, the specific meaning and purpose of each pin is specified with extreme clarity in a document called a “specification”. Sure, these intentions and interpretations are implemented using physical matter arranged in some way that manipulates electrons, but that doesn’t change the fact that this manipulation is definitely done with intent or the fact that specific interpretations of the electrical signals coming through the pins are definitely occurring.

You choose to deliberately ignore this obvious and well-established fact, of course, because it destroys your argument.

I understand quite clearly that your argument is an example of special pleading where the physical processes in the human brain are not subject to the same rules as the physical processes within computers. You’re wrapping up the operations within the computer with the label “computation” (and then instantly destroying the meaning of that label by including the brain’s interpretive operations within the definition), but the use of this term is utterly fallacious sophistry because you have done absolutely nothing to demonstrate that the human brain’s operations are qualitatively different than the operations within the computer. They are different, but they are still cascading causal interactions of a largely electrical nature. The brain doesn’t “just physically” do jack shit - and the belief that it does has horrific implications, because if consciousness was magically inherent in some bit of brain matter it would persist after death unto perpetuity as the body rotted around it. Is that really what you’re going for? Because it’s an inevitable side effect of self-awareness being an inherent property of the appendix’s physical matter. (Or wherever you think that magical matter is seated; I don’t know.)

You don’t regard the mind as a computation, but I do. It’s the side effect of electrons and other particles moving around according to specific rules, guided by how the electrons interact with how the physical matter in the brain is arranged. Just exactly like how computer behavior is the side effect of electrons being routed around by the physical matter of the computer, and how digestion is just the process of the food particles being routed around by the physical matter in the colon. It’s all the same - the only difference is the different ways things are being moved around.

They’re all just physical processes moving matter and electricity around. Calling some of that movement “computation” and some not is just labeling, and your choice to not label the brain’s operation as calculation is pretty damned arbitrary since you don’t know how it works. And then of course using your unfounded definition alongside a fallacious argument based entirely on special pleading is really sending you down the flusher.

Regarding the first sentence: Given that I think that consciousness fairly evidently requires an ongoing execution loop, not really. But even if it was true, so what? I’m extremely willing to posit that literally every single windows program ever has generated one, several, or perhaps thousands or millions of consciousnesses, all of which were summarily terminated not long afterward. I’m not sure how a few more consciousness being created and instantly wiped out in water buckets is supposed to bother me.

On the other hand, regarding the second sentence - this is batshit insane and demonstrates no understanding that brains (and computers) are wired in specific ways and bound by the physical matter that composes them not to suddenly rearrange their physical molecules to emulate the “experience” of a pail of water or star or whichever. The pail argument is that free-moving molecules can in theory randomly construct a brain for a moment; not that everything that free-moving molecules can end up shaped like has to be free-moving too.

This is a different fallacy - or rather just a flat-out error. But at least it’s a different flat-out error than your special pleading, so it deserves some extra attention.

(Okay, actually it’s also special pleading up the yin-yang, because the exact same argument can be applied to brains too. How do brains get the ability to interpret their interpretation as interpretation? Oh right, because of magic, because brains are special and magic and have magic matter stored somewhere. (The appendix, probably.))

Anyway.

The answer to your “regress” nonsense is that arbitrarily selected “interpretations” are possible - and as long as they result in a workable behavior, any arbitrary interpretation will do. Of course once a self-sustaining interpretation implementation has been established it can adapt, mutate, and refine itself, perhaps altering the details of the “interpretation” in the process, building up to something more and more complex as it goes.

And while I’m sure you’ll flatly deny that arbitrary interpretations can exist, I feel obliged to point out that I use them literally all the time in my work.

Er, human minds change what they’re conscious of all the time without changing their physical state - it’s called “thinking” and “coming up with new conclusions”.

And people die peacefully in their sleep with no visible signs of harm all the time.

What, and you think that if you point a camera at a blue wall it mechanically shuts down? The fact that you’re experiencing the same thing continuously doesn’t mean your execution loop stops; it just means you’re not giving it interesting inputs to process.

(Unless you happen to find blue walls intensely interesting, that is. You do you.)

I like how you slapped “on computationalism” in there - it really highlights the special pleading going on here.

Interpretations can be arbitrarily selected (and by “selected” I mean “the mechanics of the implementation can fall into place by sheer random happenstance”), and once one “interpretation” mechanism is in place the “interpretation” will work just fine. You’re not talking about creating matter from nothing, you’re talking about creating information from nothing - and information can indeed be generated by random events.

Your argument depends on constantly moving the goalposts on what “calculation” and “computation” mean. My position is stable - that interpretations most definitely occur within physical and electrical systems, and that those interpretations, though possibly arbitrary, are stable and self-establishing. Outside observers can go wild interpreting the living crap out of whatever they want, but internal encodings of meaning are interpreted how the system arbitrarily (and in any stable, ongoing system, consistently) chooses to.

You’re presuming that thermodynamic fluctuations are nondeterministic, then? I’m not sure that the laws of physics agree with that. Not that it really matters.

I’ve been watching human behavior for like, dozens of minutes, and have observed that humans tend not to spend their time flopping and quivering on the floor like dying fish. At a very minimum, the vast majority of our actions are carried out in a determined way. Choices made are determined by reasons, emotions, opinions - not cognitive coin flips. I’ve seen nothing in human behavior that requires or even implies that brains are employing randomity.

But in any case this is all beside the point - you can include a random number generator in a computer too. There’s nothing magical or brain-specific about randomity - and you need something magical and brain-specific to justify your special pleading.

Whichever version of “computationalism” you’re talking about - the one that is occurring entirely within the box and not also within the brain of the human interpreter? The one that must include an outside human brain to do the interpreting (but not more than one simultaneously because your argument would explode)? Who knows!

A simulation is time-sliced and discrete, just like a movie. A simulation can represent values in an N-dimensional system, while a movie is 3D (2D + time) and a projection. They aren’t very different.

Let’s walk through an example.

My planet simulation is implemented as follows:
1 - the planets of our solar system are represented as flat pieces of paper sitting on the floor (as if projected onto a screen)
2 - the movements are calculated by my brain and are executed by me pushing the pieces of paper to their new respective positions at each time slice

Where is the gravity in this simulation? In the papers (size and relative distance from each other)? In my brain (formulas for movements)? In the actual movements themselves? All of the above? None of the above?

But a simulation isn’t values in an N-dimensional system; its output/state is, but that’s not the part that makes a simulation a simulation. The part that makes a simulation a simulation is the set of rules which are used to transform the state from state N to state N+1.
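To put that concretely, here’s a minimal sketch (Python; the numbers and names are made up purely for illustration). A “movie” is just a stored list of states, while a simulation keeps the rule that turns state N into state N+1:

```python
# A "movie": the later states are simply stored; nothing generates them.
movie_frames = [100.0, 90.0, 81.0, 72.9, 65.61]   # pre-recorded values

# A "simulation": an initial state plus a rule mapping state N to state N+1.
def step(state, decay=0.9):
    # Hypothetical update rule: each new state is 90% of the previous one.
    return state * decay

state = 100.0                # the initial condition (the "key frame")
generated = [state]
for _ in range(4):
    state = step(state)      # later states are computed, not looked up
    generated.append(state)
# generated matches movie_frames (up to floating-point rounding)
```

The step rule is the simulation; the stored list is just the movie.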

That’s the part that doesn’t exist at all within the frames of the movie, and it’s the part that matters. Now, HMHW’s argument is that he can make a “compression algorithm” that infers the rules from the output. Which is to say that he’s positing that by “compressing” a video of a human staring unblinking into a camera for five minutes, his compressor can generate a simulation that emulates the human accurately enough that we can expect it to possibly replicate sentience. Which is to say his “compression” will infer not only a complete three-dimensional map of the objects and particles in the area, accurate down to the molecular level along with their current velocities, but also the complete laws of physics that are operating in the area, all just based on the images in the frames of the movie.

I hope his video camera has a really high resolution.

He also seems to be presuming that he can reliably generate the *specific, unique* physical model and laws of physics that match the reality that was being filmed - and he’s doing this right alongside another argument he’s making where the mechanics that produce an output explicitly cannot be inferred from the output state. Which is to say that if his movie compression example is coherent, it instantly disproves his main argument.

He also apparently thinks that this “compression” process will work like literal compression, like crushing all the frames of the movie together tighter and tighter in a vice, until they’ve been smooshed down into basically a single frame. At the end of this continuous ‘smooshing’ process, the simulation - which is a 3-D map of the space plus the laws of physics - is presumed to have magically been determined somehow. He then asks me to tell him how smooshed the frames have to be before the simulation is accurate enough to simulate cognition. If I fail to accurately inform him exactly where in his imaginary, undefined simulation this turnover point occurs, this is apparently some sort of argument against simulated intelligence.

Putting aside the smooshing for a moment, this is a bit like me saying that the middle of a highway in Kansas is rural and the town hall of New York City is urban, and him demanding that I tell him at which mile marker the change from rural to urban occurs, with me not knowing which route he’s thinking of, or being familiar with the city or state of New York at all. And apparently the fact that I can’t is supposed to be a disproof of the notion that New York is urban.

There are problems with his movie/simulation argument, is what I’m saying.

“Gravity” is the stuff that causes objects with mass to move towards one another. You’re moving the paper planets around, so you’re gravity in this model. Presumably your brain is a factor in causing the behavior of gravity to be consistent, but the force of gravity is the whole mechanism that is causing the objects to move.

The really interesting thing about the planet example where objects with mass are moved around by gravity is that (as far as I know), in real life we don’t actually know how either mass or gravity works. We don’t know why some particles have mass, or why they have the specific mass values that they have. We don’t know the mechanics for why gravity pulls them towards each other. We know it all operates extremely consistently and predictably and we know it’s been doing so for a very very long time, but if I were to posit that every subatomic particle has a number arbitrarily assigned to it and that some outside actor was looking at those numbers, consulting a chart, and then manually pushing things around based on the chart, you not only couldn’t prove this was wrong, you couldn’t provide a better answer.

Well, nevertheless, something like that is already possible (see my earlier links). The reason is that, basically, lots of the data is highly redundant, and can be generated from a limited viewpoint. Also, note that I explicitly specified a 3D movie of sufficiently high resolution.

And the firing rules of neurons are not, ultimately, that complex. So it’s not all that unlikely, really.

This is what I said; these are the constraints that are put on the possible descriptions. Basically, the claim is that a movie of my brain activity is reproduced by a set of general rules, plus an initial state. This is how compression works (with key frames of movies). It’s also how a simulation works.

Now, of course, you could create a different movie by using the same ruleset and a different initial state; it’d be like replacing one key frame with another.
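Purely as an illustration (a toy sketch in Python; the numbers and names are invented), the key-frame idea amounts to this: store one frame plus the changes, and every later frame is computed rather than stored; swap in a different key frame, and the same machinery yields a different movie.

```python
def reconstruct(key_frame, deltas):
    # Rebuild a sequence of frames from one key frame plus per-frame changes.
    frames = [key_frame]
    for delta in deltas:
        frames.append(frames[-1] + delta)   # each frame derived from the previous
    return frames

deltas = [2, 2, 3, 1]                  # the stored changes (the "rules")
movie_a = reconstruct(10, deltas)      # [10, 12, 14, 17, 18]
movie_b = reconstruct(50, deltas)      # same ruleset, different "initial condition"
```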

And the idea that finding the rules of some system is equivalent to data compression isn’t original to me, but very well accepted, going back to Leibniz.

Of course I won’t accept this, because as stated, it’s wrong. Mass is not a number stored in a variable; it’s a property of a physical system that makes itself felt by, for instance, curving spacetime. A number stored in a variable doesn’t curve spacetime (well, not to an appreciable degree). So mass isn’t a number stored in a variable.

A different claim is that this number acts as mass within the simulation, curving spacetime there. But this claim already relies on what you’re trying to establish—namely, that a simulation is equivalent to the thing it simulates. So since I’m denying this, there is no reason at all for me to accept that claim.

Think about a map. Now, you won’t claim that the blue line is, actually, a river; you wouldn’t, for example, hold that it has carved the surrounding terrain into the bed it occupies now. It’s just a line on a piece of paper, nothing more; and what it represents depends, once more, on interpretation. It could equally well be a road, for example, or a vein in someone’s body, or something much more abstract—its shape could be a glyph for ‘gift’, for example.

Now, animate that map. Have it represent changes of the terrain over time. What changes about the line on the paper? Nothing of consequence. It will still remain that: a line on paper, a glyph, a symbol, albeit one that changes over time—no matter how those changes come about. It will not suddenly become a river, no matter how much detail you add to the map. By adding more map to a map, or changes of the map, or what have you, you don’t change the character of the map; it’ll remain a map, a description, and its elements don’t actually take on a life of their own at some ill-defined point of requisite complexity.

It’s the same with any other simulation.

These effects are just rules of the game you’ve decided to follow. Think about, perhaps, a ‘choose your own adventure’-style book. You can adopt a rule according to which you choose the next passage; changing that rule will change the storyline. But that doesn’t change its fundamental character as being, simply, a description. Neither does the fact that it describes things that may not actually ever have happened. Indeed, a good indication that this isn’t anything ‘real’ is the fact that it’s not even bound by consistency—things that flatly contradict themselves can occur in a description, but not in reality. So descriptions aren’t ‘real’ in that sense.

Your bizarre attempt to appeal to your own authority really isn’t doing you any favors here. I wouldn’t really apply the lofty title of ‘computer programmer’ to myself, but I do program as part of my daily job requirements; furthermore, I’ve got a PhD in quantum information theory, which required me to study information and computer science to a quite mind-numbing level of detail. I’ve also published a theory of how the mind generates meaning in the peer-reviewed literature. That one even won me a prize.

And what does all this amount to? Fuck all. I’m just some random guy making arguments on the net; those arguments need to stand or fall on their own merits, not on my authority. So I suggest you drop the ‘this works as I say it does because I know it does’-line of reasoning, and instead come up with some actual arguments.

Sure. I can also write a specification for my box, according to which it computes the sum of two inputs. I just have to label what switch positions and lamp-lights stand for, and that’s that.

But of course, that doesn’t change anything about my argument. The specification can be quite happily ignored, and the box used to compute f’ instead. Or, think of unconventional approaches to computation, like DNA-based computation: nobody ever wrote a specification for that.

But, more importantly—and I’m honestly kind of flabbergasted that this point still is lost on you—the specification still assumes an interpretation. In order to read it, you must speak the same language as the person who wrote it; you must interpret the glyphs and signs used to write the specification in the same way. But there’s nothing about those signs that forces that specific interpretation. You could take the glyph ‘1’ to represent the concept of 0, and the glyph ‘0’ to represent the concept of 1, for instance; you could then understand my specification of the box as computing f in such a way that it computes f’, instead.
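To illustrate that with a toy model (a sketch only; the table and labels are invented for the example, not a description of the actual box): the physical facts are exhausted by which switch positions lead to which lamp states, and which function gets “computed” only appears once a labeling is chosen.

```python
# The box's physical behaviour: switch positions in, lamp states out.
# The physics itself never mentions numbers.
physical_table = {
    ('down', 'down'): ('off', 'off'),
    ('down', 'up'):   ('off', 'on'),
    ('up',   'down'): ('off', 'on'),
    ('up',   'up'):   ('on',  'off'),
}

# Interpretation A: up/on mean 1, down/off mean 0  ->  the box computes f (addition).
label_a = {'up': 1, 'down': 0, 'on': 1, 'off': 0}
# Interpretation B: the labels are swapped          ->  the same box computes f'.
label_b = {'up': 0, 'down': 1, 'on': 0, 'off': 1}

def computed_function(labels):
    table = {}
    for (s1, s2), (l1, l2) in physical_table.items():
        inputs = (labels[s1], labels[s2])
        output = 2 * labels[l1] + labels[l2]   # read the two lamps as one binary number
        table[inputs] = output
    return table

f       = computed_function(label_a)   # {(0,0): 0, (0,1): 1, (1,0): 1, (1,1): 2} - addition
f_prime = computed_function(label_b)   # {(0,0): 1, (0,1): 2, (1,0): 2, (1,1): 3} - a different function
```

Same physical facts either way; which function is computed depends entirely on the labeling you adopt.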

There simply is no way to get around this need for interpretation. Any computation involves the association of abstract objects—things like numbers, truth values, and the like—to physical systems. Since physical systems don’t have abstract properties, that will always require an instance of interpretation.

I agree, it would—should you be successful in actually producing an instance where a computational process interprets another. That you’ve ceased to even try really tells me everything I need to know about the validity of your stance.

This is wrong, and shows you still haven’t even understood the basic argument. The physics of computers is exactly the same as the physics of brains. I have even been careful to point out that it’s entirely possible for ‘computers’—that is, the certain kind of physical system usually used to compute—to become conscious. They just can’t do so by performing the right sort of computation, since computations, like the meanings of words, are not an objective property of physical systems, while being conscious is.

This is confused. I don’t intend to demonstrate that the operations of the human brain are qualitatively different from those of a computer; I have demonstrated that whether one interprets them as computation or not can have no relevance to whether the system is conscious.

Consider digestion. It’s an objective property of certain physical systems that they digest food. Now, suppose somebody came up with a ‘computational theory of digestion’, claiming that digestion is just something that is performed by executing the right program. Then, my argument would be that one can interpret physical systems as computing various different functions; but that that doesn’t make a difference regarding whether they digest, and that consequently, no physical system digests via computation (and indeed, there is no real meaning to that idea). It’s the same for consciousness: it’s something physical systems do, but it’s quite simply independent of what the system computes—or whether it computes at all.

It’s not (and I do really wonder whether you can actually believe these silly little digressions you keep throwing up like so much flim-flam). That a system has an objective property in a given state does not entail that it has that property in every state, or that it can’t lose that property. Take, again, digestion: it’s an objective fact of the matter whether a system digests; but it very obviously doesn’t follow from there that digestion would ‘persist after death unto perpetuity as the body rotted around it’. If the state of a system changes, its properties change—after all, a change is nothing but the properties that a system has.

So, then, how come I can interpret a system as performing different computations, but not as digesting differently? If both are the same, why is one objective, and the other not?

The answer is, of course: computation pertains to abstract objects, digestion to physical matter; hence, the former requires, like any description, interpretation, and the latter doesn’t.

So. If I switch the simulation of a conscious brain on just for a short time, it won’t be conscious? That is, even though the state of the brain at a given instant when it’s on for longer is identical to the state of a brain in the instant I switch it on, one brain-state would support conscious experience, and the other wouldn’t?

Well, the usual reasoning is that in this case, you’re more likely to be one of those consciousnesses yourself, rather than the sort of creature—a human, typing away at a laptop—you take yourself to be. In that case, everything you think you know is most likely false—just a hallucination of a few milliliters of water in the sun. Most people don’t think that’s an acceptable consequence—in part, of course, because it would rob all of the arguments they’ve used to come to that conclusion of their basis, since whatever they were derived from was likewise just a hallucination. Hence, such a position is often called ‘epistemically self-defeating’.

No. The pail argument says that in a pail of water, enough complex interactions take place that they can be interpreted as any computation whatsoever, including that which gives rise to your conscious experience at this very moment. This has nothing to do with randomly constructing a brain.

No. The brain doesn’t ‘interpret its interpretation’, whatever that may mean. The brain thinks, feels, experiences, and so on, just like the colon digests. Interpretation is only necessary if you suppose that thinking, feeling, experiencing and the like were computational. They aren’t; so no interpretation is necessary.

I don’t have any idea what you’re trying to say here. Whatever is a ‘self-sustaining interpretation’? Do you want to say that systems sometimes just randomly start interpreting themselves? If so, then again, this just starts at the wrong end: the interpretation must precede anything the system does, if it is supposed to do it computationally, as without interpretation, there simply is no computation, and thus, nothing that could do the interpreting.

Well, that’s great! Then just give an example of the last such interpretation you used, and we can all go home!

If that were true, then physicalism would be wrong, and you’re positing something akin to dualism. Any change in the conscious content of the mind must be precipitated by a change in its physical state; otherwise, the same physical facts would be associated to different mental facts, and consequently, the physical facts wouldn’t suffice to fix the mental facts.

No. I’m just pointing out that there’s such a thing as a conscious state, and that it’s a property of that state to have some sort of phenomenal content. And, while I acknowledge that this may be the wrong sort of picture, it’s at least a widespread one, and nowhere near as easily dismissed as you claim.

No. It’s a necessary qualifier, because other approaches to consciousness don’t suffer from this problem, quite simply. Leaving it out would thus be wrong.

OK, then once more: give an example of an interpretation mechanism ‘falling into place by sheer happenstance’! Just any single old example will do.

No. You seem to keep forgetting my definition of computation—despite my suggestion of bookmarking the post—so here it is, once more:

Everything I’ve said is in line with that definition.

You, on the other hand, variously define computation as the deterministic processes inside the box, or as some interpretation of them; you sometimes claim that all of the interpretations are what is being computed, sometimes that some interpretation is singled out by sheer happenstance, and sometimes that there are only particles moving around.

As I said, randomness is generally a valuable resource for complex systems, and it would be very surprising if the brain did not make use of it. And it seems that it does, in various ways:
Our Brains Really Do Make Lots of Random Decisions
A Surprising Use for Randomness in the Brain
The Noisy Brain: Stochastic Dynamics as a Principle of Brain Function
How the brain ‘plays’ with predictability and randomness to choose the right time to act

Did you notice that I introduced my argument by means of a simple box, not a brain? Do you think that box does something brain-specific? No, of course not. The argument applies to completely generic systems. Once again, I simply can’t even begin to imagine how you could believe that an argument made without even a cursory reference to brains could depend on something that’s unique and specific about brains.

The only computationalism I’m talking about is the position in the philosophy of mind that says that minds are created by the computation of the brain.

This is how my body calcs and executes the movement:
1 - Walk around and pick up each planet
2 - Walk back to the northwest corner of the room where I have my pencil and paper
3 - Perform some calcs to determine exact location for each planet at next time slice
4 - Walk to the new coordinates and place the planets where each one belongs for the new time slice

In between time slices the planets are moving all over the place, but if you only look at the official time slices then they approximate gravitational effects.
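For concreteness, the “calcs” in step 3 could be something as simple as one Euler step of Newtonian gravity per time slice - this is only a rough sketch, and the units, names, and integration scheme are stand-ins, not a claim about how anyone actually does it:

```python
import math

G = 6.674e-11   # Newton's constant; any consistent value would do for the sketch

def step(bodies, dt):
    # One "time slice": update each body's velocity from pairwise gravity,
    # then move every body along its new velocity.
    # Each body is a dict with 'mass', 'pos' = (x, y) and 'vel' = (vx, vy).
    for a in bodies:
        ax = ay = 0.0
        for b in bodies:
            if a is b:
                continue
            dx = b['pos'][0] - a['pos'][0]
            dy = b['pos'][1] - a['pos'][1]
            r = math.hypot(dx, dy)
            acc = G * b['mass'] / r**2          # acceleration of a toward b
            ax += acc * dx / r
            ay += acc * dy / r
        a['vel'] = (a['vel'][0] + ax * dt, a['vel'][1] + ay * dt)
    for a in bodies:
        a['pos'] = (a['pos'][0] + a['vel'][0] * dt,
                    a['pos'][1] + a['vel'][1] * dt)
    return bodies
```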

So, it seems my body, which is causing the movements, isn’t exactly gravity, because there is a lot of movement that doesn’t fit. How do we nail down where gravity actually is in this simulation?