Downloading Your Consciousness Just Before Death.

Without the hardware, you don’t know that it computes the square root until you exhaustively test it. Remember the famous Intel FDIV bug? Everyone knew the logic worked - until they found that it didn’t in all cases.
If it were as easy as you think to prove that a circuit implements a function, we could have saved a lot of electricity running simulations on thousands of processors.

Your two output mappings are isomorphic. I don’t care what you call them; they implement the same function.
I’ve got a better example for you. Add 1 to 65, and you get 66, right? Well, if you are interpreting these things as ASCII, A + 1 = B. But if you did this on the LGP-21 I learned to program on, which doesn’t use ASCII, you don’t get A at all.
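To make that concrete, here is a minimal sketch in Python; the comments are mine, and the LGP-21 character set is only gestured at, since I don’t have its code chart handy:

```python
# The arithmetic is fixed: 65 + 1 is 66 no matter what character set you use.
n = 65 + 1
print(n)                  # 66

# Under the ASCII interpretation, 65 is 'A' and 66 is 'B'.
print(chr(65), chr(66))   # A B

# On a machine with a different character code (like the LGP-21), the very same
# bit pattern 66 would name some other character, or nothing printable at all.
# The sum is the same; the reading of it isn't.
```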

No, you have the same computation. Or equivalent computations, if you must. A function is not defined by what it does (like ‘addition’); it is defined as a mapping from its input space to its output space, and in your case it would be easy to prove f and f’ equivalent. I forget the proper term, it’s been a while since I took this stuff.

The ALU on your calculator produces a binary number, right? This is fed into a display which is supposed to show numbers. Say the display has a defect which causes it to isomorphically distort the numbers 0 - 9 to gibberish. Has your calculator stopped doing arithmetic?

Is this the Copenhagen interpretation of arithmetic? If a tot pushes buttons on the calculator, he is doing arithmetic despite the fact that he can’t interpret what the display shows.

Easy. Same computation. And for f’ you wired up your display wrong (I’ve done it myself), but that changes nothing.

If you’re in the “the universe is a simulation” camp, then your simulation of gravity would be gravity to people living in the simulation. But yeah, simulations only go so far. You can’t produce gravity outside the simulation, though I’ve read some sf stories that kind of do this thing. Not very good ones, though.

I’m not in that camp, I was just trying to understand begbert2’s position.

Sorry, didn’t mean to imply that you were. But that’s the only situation I can think of where simulated gravity acts like real gravity.

A simulation of a black hole simulates the processes that occur when a black hole exerts gravity. It does not create real mass, or real gravity.
A simulation of a mind (if it existed) would simulate the processes in a mind. If the inputs and outputs of a simulated mind were the same as the inputs and outputs of a biological mind, then the mind would have been successfully simulated, by the terms of the hypothetical. Don’t fight the hypothetical.

You assume lots of things. But actually, I was in error—I didn’t clarify my notion of computation in response to RaftPeople, but rather, in response to you (I quoted it in response to RaftPeople). So while I have no reason to believe it’ll stick with you any better than the first time around, here it is again:

It’s a good reflection of the validity of the computationalist stance that in this thread, two people ostensibly defending the same idea feel forced to resort to mutually exclusive, albeit equally ridiculous, stances—you, now essentially denying that anything ever computes at all, there being only the physical evolution (which, as pointed out copiously now, trivializes computationalism and makes it collapse onto identity theory physicalism), and wolfpup, who resorts to mystical emergence of some novel stuff that’s just gonna magically fix the problem. I suppose I can take some solace in the fact that the both of you at least seem to realize that the straightforward ‘vanilla’ version of computationalism—a physical system executes a program C[sub]M[/sub] which produces a mind M—isn’t going to work.

But of course, neither of your strategies—reducing the execution of a program to merely being a redescription of the physical evolution of the system, or claiming that something else will happen along once things get sufficiently complicated—is going to help computationalism any; in fact, both abandon it explicitly.

In particular, on your response, no physical system has ever computed a square root, since square roots aren’t ‘particles moving around’; yet of course, square roots are routinely computed. So tell me, what do you think somebody means when they say that they have computed the square root of 81 using their calculator?

You’re also reneging on your earlier assertion that the internal wiring would make it obvious that ‘only one function’ is used to transform inputs to outputs:

Now, you’re claiming that the box doesn’t actually implement a specific function at all. In particular, you seem to have completely lost sight of the fact that the internal wiring was supposed to put matters of underdetermination to rest, rather now claiming that it’s the input/output mapping after all that determines the computation:

For this mapping, of course, the internal wiring matters not one bit. The two quotes are thus in direct contradiction. So what, really, did knowing the internal wiring allow you to conclude?

Because that example trades on mistaking the form of the symbol, rather than interpreting its content differently. There is a syntactical difference between MOM and WOW, such that the same system, being given either one, may react differently; the point I’m trying to make is, however, related to semantic differences—see my earlier example of the word ‘gift’.

For the box, your MOM/WOW example would be analogous to re-wiring the switches to the various boxes—thus, changing the way it reacts to inputs. But that’s not what this is about.

I have not backpedaled on anything, and frankly, your attempt to tar me with covertly shifting my position makes it seem like you’re childishly trying to score a cheap ‘win’.

My position, as outlined in the very first post I made here, is that the CTM, as entailing a claim that the mind is a computational entity, is indeed wrong, because there is at least one aspect of the mind that cannot be computational without lapsing into circularity, that of interpretation. I have, as soon as you started to claim that the falsity of CTM would undermine lots of current cognitive science, pointed out that that’s fallacious (in post #103)—the fact that one can model certain aspects of the brain computationally does not depend on the claim that, as the SEP article puts it, ‘the mind literally is a computing system’.

In other words, even if the mind is not (wholly) computational, computational modeling can be very useful. This is the position I’ve consistently held to during this whole discussion: CTM (‘the mind literally is a computing system’) wrong, computational modeling useful.

I don’t think I understand what you mean by a ‘class of problem’-distinction. But it’s completely clear that they’re different computations as defined via partial recursive functions. The typical understanding of computation would hold that since my f and f’ fall into that class of functions, and are distinct members of that class, they’re distinct computations, period.

Oh, you’re right, it absolutely doesn’t—it was begbert2 who insisted that showing the box’s innards adds something to the issue, to the point of making it ‘trivial’ to decide which one it is. Now, he’s back to claiming that it’s solely the input/output table that characterizes the computation, but well, neither’s a problem for my position, so which way he flip-flops isn’t really of much consequence to me.

That’s why I chose my example such that both f and f’ follow from the same consistent interpretation of voltages and gates, yet still implement different computations. So no, having a consistent assignment is not enough to specify one single computation.

Of course, the ‘consistency’-requirement is wholly arbitrary. I could easily consider one switch’s ‘u’ to mean ‘1’, and another’s to mean ‘0’. I could also just flip the interpretations of the lamps, leading to yet more computations. And so on. The key factor is always, as repeated at the top of this post, whether I can actually use a system to perform a computation. If I can, then, in my opinion, claiming that the system doesn’t really implement that computation is meaningless sophistry.
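To spell that out, here is a minimal sketch in Python; the two-switch, one-lamp box is my own toy example, not the box from earlier in the thread:

```python
# Physical behaviour of a toy box: two switches ('u'/'d'), one lamp ('on'/'off').
# This table is pure physics; no numbers appear anywhere in it.
physical = {
    ('u', 'u'): 'on',
    ('u', 'd'): 'off',
    ('d', 'u'): 'off',
    ('d', 'd'): 'off',
}

def read_table(switch_code, lamp_code):
    """Read the physical table under a given interpretation of switches and lamps."""
    return {(switch_code[a], switch_code[b]): lamp_code[lamp]
            for (a, b), lamp in physical.items()}

# Interpretation 1: u=1, d=0, on=1, off=0. The table reads as AND.
print(read_table({'u': 1, 'd': 0}, {'on': 1, 'off': 0}))
# {(1, 1): 1, (1, 0): 0, (0, 1): 0, (0, 0): 0}

# Interpretation 2: flip only the lamp reading (on=0, off=1). The same physics
# now reads as NAND, a different Boolean function realised by the same box.
print(read_table({'u': 1, 'd': 0}, {'on': 0, 'off': 1}))
# {(1, 1): 0, (1, 0): 1, (0, 1): 1, (0, 0): 1}
```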

That’s not right, actually. The hardware on its own will tell you very little, in general, about what is computed—the reason being Rice’s theorem: every non-trivial semantic property of a program is undecidable. (Just think about the question of whether the computation will halt: just knowing the hardware won’t tell you in general—other than by explicit simulation.)

So basically, a computation is individuated by its mapping of inputs to outputs. After all, that only makes sense: a calculation likewise is just the result of some set of mathematical operations. The calculation doesn’t differ regarding what process you used to arrive at the result; whether you’ve used the Babylonian method or the digit-by-digit method, in both cases, you’ve computed the square root.
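As a concrete illustration of that last point, here is a minimal sketch in Python of the two methods; the helper names are mine, and the digit-by-digit routine works on integers just to keep it short:

```python
def babylonian_sqrt(x, iterations=40):
    """Babylonian (Newton's) method: repeatedly average a guess with x / guess."""
    guess = x if x else 0.0
    for _ in range(iterations):
        if guess == 0:
            break
        guess = (guess + x / guess) / 2
    return guess

def digit_by_digit_isqrt(n):
    """Binary digit-by-digit method: build the root one bit at a time."""
    root, bit = 0, 1 << (n.bit_length() & ~1)
    while bit > n:          # find the highest power of four not exceeding n
        bit >>= 2
    while bit:
        if n >= root + bit:
            n -= root + bit
            root = (root >> 1) + bit
        else:
            root >>= 1
        bit >>= 2
    return root

# Different internal processes, identical input/output mapping on perfect squares:
# by the input/output criterion, the same computation, namely the square root.
for n in [0, 1, 4, 81, 144, 10**6]:
    assert digit_by_digit_isqrt(n) == int(round(babylonian_sqrt(n)))
```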

I think the opposite—there is no fact of the matter regarding which function a circuit implements, unless it is suitably interpreted.

If that were the case, then all functions with the same domain and codomain would be the same function, since there is always an isomorphism (a trivial permutation of elements) linking any two of these functions. Then, there would be no computing the square root, for instance, as computing the square root is trivially isomorphic to ‘computing the square root + 1’, and yet, if we require a student to calculate a square root in an exam and they take out their calculator to do so, we will mark them down if they calculate the square root + 1.
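Just to pin the example down, a minimal sketch in Python, with math.isqrt standing in for ‘the square root’ on perfect squares:

```python
import math

f       = lambda x: math.isqrt(x)        # 'compute the square root'
f_prime = lambda x: math.isqrt(x) + 1    # 'compute the square root + 1'
g       = lambda y: y + 1                # a trivial bijection on the outputs

# The two input/output tables are linked by the bijection g (f' is g after f),
# so in the 'isomorphism' sense they are interchangeable...
assert all(f_prime(x) == g(f(x)) for x in range(1000))

# ...and yet they are plainly distinct functions, and only one of them is
# what the exam asked for:
print(f(81), f_prime(81))    # 9 10
```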

So no, isomorphism does not make the computations the same, without robbing the word ‘computation’ of all its usual meaning. Furthermore, if you intend to identify all such computations, then again, what you’re left with will simply be a restatement of the box’s physical evolution—of which lamps light up in response to which switches are flipped. In that case, as already pointed out, the claim that distinguishes computationalism from simple identity theory physicalism evaporates, and the theory developed to address grave problems with the latter (such as multiple realizability) collapses onto it.

Say my box has a defect in one lamp that changes the way it’s wired up. Thus, its input-output behavior changes. Does it still compute the same function?

Say you’ve never seen a calculator before, and find one with a defective display. Do you know what it computes, anyway?

We recognize the calculator as defective, because it fails to fulfill its intended function. But, provided you don’t want to argue that what computation a system implements teleologically depends on the intentions of its maker (a position that would be disastrous for the computational theory of mind), once you strip that intention away, a system that fails to compute some function may well successfully compute another—it’s just that that’s not one we’re necessarily interested in.

It always gives me pause that nobody ever acknowledges the sheer insanity of the position that a simulation literally creates some sort of mini-pocket universe with its own laws of physics and the like. Think about what that would entail: somehow, the right pattern of voltages in a desk-sized apparatus suffices to conjure up, say, an entire solar system, complete with things like gravity, radiation, electromagnetic fields and all—where, even, does the energy to create all that come from?—which nevertheless is somehow hermetically closed off from us, except for some small window which somehow allows us to peek in on it, while still screening off all that gravitation and so on.

The universe from the simulation is thus both connected to ours—we can, after all, look in on it, which is the whole point of the simulation—and completely disconnected—there are no effects from gravity outside of the simulation.

But it gets weirder than that. We could also envision a computer built not from something as ephemeral as voltage patterns, but rather, a mechanical device that does nothing but shuffle around signs on paper (as in a Turing machine)—yet, the right sort of signs somehow suffice to conjure up stars and planets and all. This is almost literally the idea of magic—write down the right formula, and just by virtue of that, stuff happens.

Moreover, I trust that where a ‘simulation’ of a universe is held to be a universe, in some sense, a simple recorded movie of that simulation won’t be one. Otherwise, we’re back to a simple description being all that’s needed to make something real, in which case certain crime/horror writers would have a lot to answer for. So we’ll stipulate that some mere ‘description’ like a movie doesn’t ‘call into being’ a universe the way its actual ‘simulation’ does.

But where does a description end, and a simulation begin? In a simulation, one might hold, the various states of the system are connected by an internal logic, such that the state at t + 1 follows from the state at t. In a movie, however, I can, in principle, shuffle around the frames every which way; there’s no ‘computational work’ being done to connect them.

But then, suppose I compress the movie. Typical compression schemes will take certain frames of the movie (key frames), and compute successive frames as changes from those, to save space encoding each frame. Then, there is computational work being done to connect the different frames.
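Here is a minimal sketch of that key-frame idea in Python; the ‘frames’ are just lists of numbers, and the function names are my own simplification of what real codecs do:

```python
def delta_encode(frames):
    """Store the first frame in full (the key frame) and each later frame as
    the element-wise change from its predecessor."""
    encoded = [list(frames[0])]
    for prev, cur in zip(frames, frames[1:]):
        encoded.append([c - p for p, c in zip(prev, cur)])
    return encoded

def delta_decode(encoded):
    """Rebuild each frame by applying the stored changes in order; this is the
    'computational work' that now connects the frames."""
    frames = [list(encoded[0])]
    for delta in encoded[1:]:
        frames.append([p + d for p, d in zip(frames[-1], delta)])
    return frames

movie = [[0, 0, 0], [1, 0, 0], [2, 1, 0], [3, 2, 1]]
assert delta_decode(delta_encode(movie)) == movie
```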

In fact, a simulation can be seen as merely a highly compressed version of a movie. The formulas computing the orbit of planets, say, can be abstracted from simply observing how they move around—as, in fact, they have been. So it’s entirely reasonable to imagine that a highly developed compression algorithm could compress the movie showing our simulation of a solar system in such a way as to infer the original simulation’s basic equations from it—in fact, exactly this has already happened: ten years ago, a machine inferred Newton’s laws from observations; these sorts of endeavors have blossomed in recent years, with neural networks now routinely inferring physical laws.

But then, must I be wary when compressing a movie, lest I compress it too much and it turns into a simulation, thus accidentally calling a whole new universe into being?

I think this is an absurd consequence, and the correct answer is that no matter how much I compress, or simulate, I always end up with what merely amounts to a description of something, and no description entails the reality of the things that it describes. So a simulation of a universe doesn’t call a universe into being any more than writing about a universe does. Likewise, a simulated pain isn’t any more a real pain than a pain written about is.

Sure, there are certain kinds of things that are equivalent among descriptions and the things described. For instance, if I write ‘Alice calculated that the sum of 2 and 3 is 5’, then the sentence contains that fact just as much as any real calculation does; but it does not follow, from there, that when I write ‘Alice saw a tree’, there is actually a tree that is being seen by Alice, nor does ‘Alice had a headache’ imply that there is somebody named Alice who is in actual pain. And exactly like that is it with computer simulations, as well.

A fully simulated pain would be just as painful as a real pain, because it would be a real pain.

Exactly. The only computation we are interested in is the one relating the inputs we put in to the outputs that come out. All the other computations are garbage. I do not claim that they do not exist, just that they are irrelevant.
If I type FISH into my laptop, the word ‘FISH’ appears on the screen - this is its teleological function. None of your garbage computations affect that, so they are irrelevant. If the computer fails to perform the one that I want, I do not care if the computer is still capable of performing other garbage computations - if it doesn’t do the one I want, it is useless.

Good to hear. A question begged is a question answered, I guess.

As I do not seem to tire of pointing out to you, that’s barking up the wrong tree. Inputs and outputs are the same for all these computations; again, that’s the very point of my examples. There is not just one computation that produces these outputs (in terms of symbols on a screen) from those inputs (in terms of switches flipped, or keys pressed); rather, they all do.

That’s neither an example of teleology, nor of a function, any more than writing ‘fish’ on a piece of paper is. But that’s actually beside the point. The issue is with what a given set of symbols—such as FISH—means. A difference in the interpretation of these symbols—which, once more, will be the same for each of the computations under discussion—will lead to a difference in computation. Just as there is nothing about the symbols FISH that relates in any way, shape or form to actual aquatic animals, there is nothing about a certain lamp that makes it mean 2[sup]2[/sup] rather than 2[sup]0[/sup], say. But with that difference, the computation performed will likewise differ.
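To see the positional point concretely, a minimal sketch in Python; the three-lamp pattern is my own toy example:

```python
lamps = ('on', 'on', 'off')    # the box's physical output: three lamps
bits  = [1 if lamp == 'on' else 0 for lamp in lamps]

# Read the leftmost lamp as the 2^2 place...
msb_first = sum(b << (len(bits) - 1 - i) for i, b in enumerate(bits))
# ...or read that very same lamp as the 2^0 place instead:
lsb_first = sum(b << i for i, b in enumerate(bits))

print(msb_first, lsb_first)    # 6 3: same lamps, different number, different computation
```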

You are confusing processes with physical objects again, I see. If we can simulate the process of feeling a pain, then we would be simulating a real pain. If we could simulate the process of consciousness, we would be simulating real consciousness. Pain and consciousness are in a different category to mass and temperature, because they are processes, not physical characteristics.

The computer was designed to display FISH when I type, just as the piece of paper was designed to display the marks that I put on it.

Fish is also a card game, so the word could refer to that. Or it could be a person’s name. The interpretation in this case is irrelevant, just as irrelevant as your alternate computations.

No, because the interpretation is the computation. Everything else is just the physical evolution of the system. But what makes my box compute the sum of two numbers—which it unambiguously does, as I can actually use it to find the sum of two numbers I didn’t know before, which is all that computing something means—is my interpreting its inputs and outputs properly.

Else, you’re welcome to tell me what my box actually computes without interpretation that’s distinct from its mere physical evolution, yet independent of interpretation. Of course, like everybody else spouting big talk about this in this thread so far, you’re not gonna.

So if I type FISH but never look at the screen, no computation has been performed? Balderdash.

Rice’s theorem only applies to non-trivial properties. If I’m understanding it, many of the computations of interest would be trivial in those terms.
I also think you are misunderstanding the halting problem. The fact that there is no general procedure for deciding whether an arbitrary algorithm in a domain halts does not mean that it is impossible to prove that a specific algorithm in that domain halts.
And the hardware, in many cases, tells you a lot about what is being computed. A full adder, for example - though the interpretation “addition” is one of many logically equivalent ones.
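For what it’s worth, a minimal gate-level sketch of a full adder in Python; the structure is the textbook two-XOR/two-AND/one-OR arrangement, and the ‘addition’ reading is just the usual consistent interpretation of its wires:

```python
def full_adder(a, b, carry_in):
    """Gate-level full adder: two XORs, two ANDs, one OR."""
    partial_sum = a ^ b
    total       = partial_sum ^ carry_in
    carry_out   = (a & b) | (partial_sum & carry_in)
    return total, carry_out

# Under the usual reading (high = 1, low = 0, sum bit weighted 1, carry weighted 2),
# the wiring realises a + b + carry_in for every input combination:
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            total, carry = full_adder(a, b, c)
            assert total + 2 * carry == a + b + c
```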

Assuming you can prove them equivalent, either by exhaustively comparing inputs and outputs or by proving equivalence of the internals.

Is there such a thing as an uninterpreted output? If not, then what you say is trivially true. But the output domain is part of the definition of the function, and if you don’t consider that to be interpreted then what you say is not true.

Clearly that’s not what I said at all. The mapping is between the outputs for specific inputs, just as in your example. The output domain in your example is not defined, which is unimportant, since it clearly at least includes the listed outputs.
Now adding one to the output represents a functional transformation. If you want to call that a different function, be my guest, but it is typically not considered as such.

For trivial examples without state you can derive a function and computation given an exhaustive listing of outputs and inputs.
Explain please why you think multiple realizability is an issue here. I’ve read the wiki article on it and it does not seem to be a resolved issue.

The lamp is the interpretation. The computation takes place and produces outputs which are inputs to the lamp. Thus the computation is the same in either case. Now, if the lamp is defective in a way which maps several ALU outputs to a single lamp output, then the function has changed from the perspective of the total calculator, but that is because the function of the lamps has changed. If the defect preserves the 1-1 mapping of ALU outputs to lamp states, then the function has not changed in the total calculation, even if the interpretation has been changed.
Say a person is mute. Is the operation of his brain fundamentally different because he communicates in ASL versus a person who can speak?
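A small sketch of that ALU-plus-display picture in Python; the three toy displays are my own examples:

```python
def alu_add(a, b):
    """The ALU proper: produces a number, no glyphs involved."""
    return a + b

familiar_display  = lambda n: str(n)                  # one-to-one, familiar digits
scrambled_display = lambda n: chr(0x2460 + n)         # one-to-one, but unfamiliar symbols
broken_display    = lambda n: '8' if n % 2 else '0'   # many-to-one: information is lost

def calculator(a, b, display):
    return display(alu_add(a, b))

print(calculator(2, 3, familiar_display))   # '5'
print(calculator(2, 3, scrambled_display))  # an odd glyph, but the mapping is invertible,
                                            # so the addition underneath is untouched
print(calculator(2, 3, broken_display))     # '8'; many sums now collapse to one symbol,
                                            # so the calculator as a whole no longer
                                            # realises addition, though the ALU still does
```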

Depends on how you define the calculator. As a whole, including the computation and the interpretation, it does not meet its specifications. But you’d be foolish to replace the ALU. Replacing the interpretation function, the lights, would restore it to its intended function. Unless you consider the lights to be fundamental to its computation, the computation is still being done even if the interpretation is defective.
Hell, turn a working calculator upside down. The interpretation of the output must change. Is the computation different?

Well, thank you, I guess, for so clearly repeating the previous wrongness. Take that last sentence, which I agree lays out your position quite clearly. It is directly contradicted by the quoted bit from the Stanford Encyclopedia of Philosophy, which actually goes out of its way to very explicitly define classical CTM as being precisely the theory that the brain is literally a computer (though I think most theorists today would prefer to say that mental processes are literally computational), and that CTM is not some mere modeling metaphor, say the way we model climate systems to better understand them. We are under no illusions that climate systems are computational in any meaningful sense, but the proposition that mental processes are computational is mainstream in cognitive science.

If you don’t believe the SEP you might note that CTM proponents in cognitive science describe CTM as the idea “that intentional processes are syntactic operations defined on mental representations” (i.e.- symbols). This view precisely reflects the computational paradigm, specifically the one set out by Turing, and that’s no accident.

I’m sorry if I was getting snippy, but I’m not trying to score “cheap” debating points; I am frankly deeply annoyed that you dismiss as “wrong” and impossible one of the foundational ideas in cognitive science today, manage to somehow misinterpret what it means, and base this conclusion on what is essentially a trivial fallacy about what “computation” really is. It’s well said that challenging well-established theories requires a correspondingly persuasive body of evidence. What you’ve provided is a silly homunculus fallacy.

Show me where I said the system “doesn’t implement” either your f or f’ computation. I said it implements both of them. More precisely, it performs a computation that solves both classes of problem.

On your point that both functions (in your box example) obtain with the same interpretation of bit values, that’s another sleight-of-hand performance. Yes, in the case of logic gates, one needs only a consistent view of the mapping of voltage levels to bit values to define their function. In your box example, the nature of the problem being solved also requires a consistent view of the positional values of the bits, as in the binary number system vs the one you invented. A different view leads to a different (but computationally equivalent) description of the problem being solved. This is what I meant by the “class of problem” distinction.
One finds the same distinction in the AND-gate and OR-gate argument. One can trivially observe that the two gates are fundamentally the same: both have AND-behavior and OR-behavior. They are indistinguishable from each other if both inputs are the same, and when the inputs differ, one consistently produces H and the other consistently produces L. They’re effectively doing the same computation, and how we define H and L (TRUE and FALSE, or vice-versa) determines what we call them (i.e., what problem they’re solving).
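That AND/OR point can be spelled out in a few lines; a minimal sketch in Python, with the H/L encoding being the obvious one:

```python
# The physical behaviour of one gate: output is H exactly when both inputs are H.
def gate(x, y):
    return 'H' if (x, y) == ('H', 'H') else 'L'

def read_as(bit_for_H, x, y):
    """Interpret H/L as bits one way or the other and report the gate's output."""
    to_bit   = {'H': bit_for_H, 'L': 1 - bit_for_H}
    from_bit = {v: k for k, v in to_bit.items()}
    return to_bit[gate(from_bit[x], from_bit[y])]

# Reading H as 1: the truth table is AND.
print([read_as(1, x, y) for x in (0, 1) for y in (0, 1)])   # [0, 0, 0, 1]
# Reading H as 0: the very same gate's truth table is OR.
print([read_as(0, x, y) for x in (0, 1) for y in (0, 1)])   # [0, 1, 1, 1]
```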

Gravity outside the simulation won’t affect the inside of the simulation, and vice versa.
However, I agree with you about the absurdity of the “it’s simulations all the way down” hypothesis, and that is because of energy. Information takes energy, and a full simulation of a universe requires energy for the information contained in that universe. (Which accounts for any possible compression, of course.) A simulation inside the simulation requires energy for that inner simulation as well as energy for the rest of the universe being simulated. Multiple universe simulations are even worse. Plus, our simulation’s time step seems to be the Planck time, and each step will take at least a Planck time to simulate (actually much longer), so you’d need a really long research grant.
A full universe simulation requires that we are able to simulate intelligence, but simulating intelligence does not require a full universe simulation, so I don’t think we have to worry about this issue.

There is a vaguely interesting matter of interpretation here. The original Asteroids game (the big dedicated unit found in arcades and convenience stores) did not use pixels at all. Its graphics were scribed as lines under direct vector control of the CRT electron beam. You have probably seen personal computer renditions that do in fact use pixels, because vector displays have become essentially non-existent.

The game itself has not changed, just the manner in which it is displayed, which means the modern version is very clearly a simulation. I can look at Asteroids on a computer and see that it has only a functional resemblance to the original but the graphics, to my Luddite eye, are never as pleasing.

So how accurate does a simulation have to be to be indistinguishable from the original? Is there any wiggle room?