Do you believe it's possible the world is a simulation?

No matter if you want to call it ‘definition’ or ‘interpretation’—what matters is that an arbitrary choice must be made as to which physical signals correspond to what logical states. And the arbitrariness means that a different choice can be made with equal justification.

Of course—if you already have intelligent entities to begin with. This is exactly what makes it impossible to have an intelligent entity whose existence depends on interpretation—or definition, if you will: the intelligent entity needs to pre-exist in order to come to an agreement on how to interpret certain signs—voltages, letters, you name it. Hence, if we’re trying to create an intelligent entity, we can’t start with something that depends on interpretation—as computation unavoidably does.

That’s right: a painting is a physical thing, and as such, objective. What computation a physical system implements is not objective.

Any computation can be understood as a map from binary strings to binary strings; any such map can be implemented using Boolean logic. But which map is being implemented by a given set of gates depends on how you interpret the signals—the symbols—that are being manipulated: if you interpret low voltage (whatever you choose that to be—again, this is a level of interpretation I’m giving you for free, and still the problem remains unsolvable!) as 1, and high voltage as 0, then my above system is an OR-gate; if you interpret things the other way around, it’s an AND-gate.
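If it helps to see that laid out, here is a minimal sketch in Python (‘L’ and ‘H’ are just labels for the low and high voltage levels, and the behaviour table is the one from my example):

```python
# The physical behaviour is fixed: high out only when both inputs are high.
behaviour = {('L', 'L'): 'L', ('L', 'H'): 'L', ('H', 'L'): 'L', ('H', 'H'): 'H'}

def as_truth_table(labels):
    """Read the same physical table under a given assignment of logical values."""
    return {(labels[a], labels[b]): labels[out] for (a, b), out in behaviour.items()}

print(as_truth_table({'L': 1, 'H': 0}))  # {(1, 1): 1, (1, 0): 1, (0, 1): 1, (0, 0): 0} -- OR
print(as_truth_table({'L': 0, 'H': 1}))  # {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1} -- AND
```

Nothing about the device changes between the two calls; only the labelling does.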

I readily concede that you have the greater expertise when it comes to things like microchip design and testing, simulation, and programming and computer architecture in general. That’s in fact what makes these conceptual points so hard for you to grasp: if you’ve been a fish all your life, you have a harder time noticing water than if you’ve just jumped in and noticed the splashing. You’re used to things wearing their interpretation on their sleeve, because that’s how they’re designed, with the human user in mind; but the interpretation is just as arbitrary, and if one abstracts away from the human user, it’s completely clear that computation is not fixed by the physical system used to perform it.

Exactly. But between whom is this agreement if the computation is supposed to instantiate a mind?

That doesn’t mean that it can’t be changed in principle—and it’s this in-principle malleability that the whole argument rests upon. There is an intended interpretation, sure; but one, for there to be an intended interpretation, you need pre-existing intelligent beings to intend that interpretation, and two, that doesn’t mean you can’t change the interpretation afterwards.

But it doesn’t interpret—it doesn’t take a physical system to be computing, say, the value of pi. That, however, is what’s needed to make a physical system compute a certain function.

Certainly not, which is the point I’m trying to make: there is no physical difference whether a system implements one computation, or another. You can interpret it as having performed computation A; you can equally well interpret it as having implemented computation B. I can use the system I outlined to compute the AND of its inputs; you can use it to implement the OR. Both of us will get the correct result for the function we chose.

That’s nice, but why is 0 volts a logical 0, and 5 volts a logical 1? Only because of an arbitrary choice that was made. Without such a choice, all that’s there is either 0 or 5 volts.

What you transport is akin to a blueprint; what you get out is what happens if something is assembled according to that blueprint. The assembled thing is not identical to the blueprint—you can’t live in an architect’s design.

If the computational theory of mind is right, then what your subconscious is doing is itself just a computation; but that needs itself interpretation in order to be any particular computation at all. Hence, it can’t be the interpreting entity, since it itself would need to be interpreted—which is just the homunculus regress.

Also, a feedback loop doesn’t help—computation A can’t interpret computation B as computation B, while computation B interprets computation A as computation A. It’s still possible to imbue a different interpretation, making computation A into computation C, and computation B into computation D—all you need to do is, say, flip your interpretation of high voltage from being logical 1 to logical 0.

A gut digests without computation; a stone rolls down a hill without computation; planets orbit the sun without computation. A consciousness is just as much a physical process as any of those; and all computation ever is, is a model of such physical processes, which is as different from the processes themselves as an architect’s design is from a house.

And yet, a Turing machine’s operation, and hence, any computation, is fully characterized by syntax. Thus, no semantics can ever come from a computation.

If you interpret it the right way—which means that it’s you who imbues this thing with semantics. On a different interpretation—flipping 1s and 0s—the program won’t be doing anything remotely related to ‘understanding a story’.

Yes, that’s exactly right! I’m using the principle of charitable interpretation, which roughly says that you should always interpret somebody’s arguments in their strongest way—otherwise, you weaken your own position, since you may end up attacking a weaker argument than the one being made. You’re doing the opposite of that—constructing a nonsensical straw-man argument to push over with great aplomb.

No. I am saying that every Turing universal system can compute every computable function.

Which is, of course, just another interpretation of the physical processes the real machine performs.

It’s sufficient to illustrate that a Boolean network with n inputs can be interpreted as computing any function of n variables—which is all I said.

I can just as well interpret the system I described as implementing a NAND gate.

I’m going to respond in this way from first principles, since the mistakes in the post above are too widely scattered for a traditional response to make sense.
Boolean algebras are rigorously defined. The meaning of 0 and 1 in a Boolean algebra is also rigorously defined - as are the functions of that algebra: AND, OR, NOT.
By the way, the contention that you can call an OR gate a NAND gate and they are in a sense equivalent is provably wrong. You can implement any Boolean function with just NAND gates - you cannot with just OR gates. (Try inversion.) So calling the distinction arbitrary just shows ignorance of the subject.
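If you want that in code, here is a two-line sanity check (plain Python, with 0/1 standing in for the Boolean values): a NAND gate with its inputs tied together is an inverter, while an OR gate with its inputs tied together just hands the input back.

```python
def NAND(a, b): return 1 - (a & b)
def OR(a, b): return a | b

for a in (0, 1):
    assert NAND(a, a) == 1 - a   # NOT built from a single NAND gate: inversion comes for free
    assert OR(a, a) == a         # OR can never lower its output, so no amount of ORs gives you NOT
```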
Okay, now that you have 0s and 1s, you can map other symbols onto them. You can make H a 1 and L a 0, or L a 1 and H a 0 - it does not matter at all. However, once you have defined the mapping you must keep to it.
Now for voltages. The assignment of logic values to voltages is also not arbitrary, but is a function of the physical implementation of the Boolean algebra. Say I call 0 volts a 0 and 1 volt a 1. If I put a 1 volt signal into a NOT gate in today’s logic, I will get 0 volts out, as expected. If I put it into 1973 logic I’d get 5 volts out, since that logic would probably interpret 1 volt as a 0. Note that 5 volts isn’t even defined in today’s logic. And if I put 5 volts into today’s logic, I’d probably get smoke and a burned out part.
And even in a single logic family, as I said, voltage assignments are not arbitrary but are based on various requirements and the ability of the technology. People who design new silicon technologies are not making arbitrary choices.
Now, you can certainly design a chip where 0 volts is a 1 and 1 volt is a 0 as viewed in the outside world - all you have to do is to invert all inputs and outputs. But once you design any circuit you can’t arbitrarily reassign or reinterpret input or output values.
Now, if you decide to call 10 volts a 1, then your circuit becomes effectively a base 1 circuit, where the only value is a 0 and all functions map 0s to 0. (I’m the world’s leading expert in Base 1 logic. It is very power efficient.) But you run into all sorts of problems.
I may have been too long in my sea, but I do know that if I call a gill a lung, and decide I can get out of the water and breathe with my new lung, I’m going to be flopping around and in big trouble.

I think what HMHW is saying when he says that you can take any collection of random characters and call it the complete works of Shakespeare, is that if you ran the complete works of Shakespeare through a one-time pad encryption you’d get what looks like gibberish. But if you interpreted it using the one-time pad, suddenly the complete works of Shakespeare emerge from the random strings of characters.

And so, hey, any collection of random numbers contains the complete works of Shakespeare, if only you interpret it correctly.

But…the only way to interpret it correctly is to have that one-time pad to decrypt the message. Or another way is to take the supposed gibberish, figure out what you’d need to add to the n-th character to get the n-th character in Shakespeare, and say “when you reach the n-th character, add value y”.

But you can’t retrieve the works of Shakespeare from the random strings unless you’ve either got a one-time pad that decrypts the message, or you have the works of Shakespeare so you can recreate the supposed one-time pad that would have created Shakespeare from the random strings.
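In code, the point is almost embarrassingly short. A sketch in Python, with a placeholder line standing in for the complete works:

```python
import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"To be, or not to be, that is the question"   # stand-in for all of Shakespeare
pad = os.urandom(len(plaintext))                            # the one-time pad
ciphertext = xor_bytes(plaintext, pad)                      # looks like random gibberish

assert xor_bytes(ciphertext, pad) == plaintext              # the right pad gives Shakespeare back

# And for *any* random string of the right length there is a pad that "decrypts" it
# to Shakespeare -- but you need Shakespeare in hand to construct that pad:
random_junk = os.urandom(len(plaintext))
retro_pad = xor_bytes(random_junk, plaintext)
assert xor_bytes(random_junk, retro_pad) == plaintext
```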

I suppose the objection could be that if you have a plain text that seems to be the complete Shakespeare, how can you be sure it’s really the complete works, unless you happen to have another copy of the complete works to compare it to? I mean, maybe on page 246 there’s a missing comma? And this means you don’t have the correct text, just something that seems a lot like it?

But suppose we dug up a parchment from some desert tomb, and it’s in ancient Greek, and it seems to be a lost work of Aristotle. Are we supposed to say that, since we don’t have the original, there’s no way to interpret the text? We can’t say it’s the lost copy of Aristotle’s Poetics, since we can’t compare it to an existing copy? It could be a collection of random strings, and if we applied the right key it would really be Harry Potter fanfiction?

But agreement on definition means that interpretation is not arbitrary. Clearly a simulated intelligent entity can only be understood as such by seeing its output, which might require learning whatever language it used. But this learning is not arbitrary. I can’t shuffle a French-English dictionary and say I can understand the resulting translation.

So, is the output of a computation that is a jpg file which can be viewed as two coke cans a physical thing or not? Sure, you interpret the binary in the file a certain way, but our brains interpret the signals from our eyes a certain way also.

You are assuming that all truth tables are symmetric. You can convert ANDs to ORs - but it involves inversion, not simple renaming.

With no one observing it is unknowable if a computation instantiates a mind. And if we don’t have a definition of mind we couldn’t know either. If I interpret the process of addition inside a computer as consciousness, then I can claim I designed a conscious mind. All interpretations are valid, right? But I wouldn’t be saying anything interesting.

Any designer knows that definitions don’t just get changed ‘in principle.’ To get the desired result from the implemented specification, you have to change the implementation also. If someone wants a new feature, you unfortunately can’t tell them to just look at the output you already have under a new interpretation.
And there is another problem. Simulated beings clearly can’t exist without existing intelligent beings to create the simulation. However, consciousness in principle does not require pre-existing consciousness, since our consciousness arose without it. (I’m agreeing that we are not simulated here.) We might, in principle, create evolution inside a machine to allow a conscious mind to evolve. We wouldn’t know whether it did unless we observed, but we might do it with enough time. If our conscious minds evolved without observers, so could the simulated one.

A Lego kit has a blueprint and materials. Give a kid a Lego kit with just the blueprint and see what happens. My transporter kit includes both the blocks and the blueprint. If I take my building down, pack it back into the box, move it, and rebuild it someplace else, do I have the same building? Clearly I don’t if I just put the directions in the box, unless I can get the material from someplace else. Is it the same building if I get all the pieces from a Lego store?

Observation only demonstrates it is a computation, but lack of observation does not prove it is not. My subconscious is good at anagrams. Clearly when I look at a Jumble and it pops the answer out, it is doing a computation. But what if it looks at a letter pattern and does not pop the answer out - since it didn’t solve it yet. Is it not doing the computation? Does it start the computation at the exact instant I become aware of the answer?
And you didn’t respond to my point about generating web pages no one ever sees.

Why not? My old genius dog reasoned things out and solved problems - but was not aware that he did. When he figured out a treat machine, was that a computation when I was there to see it, but not when I wasn’t around?

Irrelevant - I was talking about the cell’s interpretation of stimuli, which forced a meaning, in one sense, on the stimuli. What happens to the food particle after ingestion is irrelevant.

A Turing machine is not just a tape, it includes the program that interprets the tape.

Nor will it if you insert random bits inside the code. That you can destroy the meaning and function shows that they are not arbitrary.

What do you think exists in the program of a Turing machine? A Turing machine, or computer, can of course compute any computable function with the right programming. A particular one cannot. As I said, we use Turing machines since they are easier to prove things about.
However his argument does not require a Turing machine at all. Any computer is equally good - it can compute any computable function. So why did he use a Turing machine? To show off that he really understands this computability stuff? Didn’t work for me. Or to trivialize computers, since his readers might be interpreting the output of their computers as being intelligent behavior?

You’re completely misunderstanding the argument I’m making.

Let’s try again. I give you a physical system; your task is to decide what computation this system is implementing. To this end, you make various measurements. Ultimately, you discover that when you apply a high voltage—say 5 volts for definiteness and old time’s sake—to both inputs, you get out a high voltage at the output; otherwise, you’ll get a low voltage—let’s say 0 volts.

Now, what do you know about the computation—the Boolean function—being implemented by this system? Can you tell me which function is being computed?

The answer is, of course, no. You know nothing about what computation occurs, because the system doesn’t come with an interpretation. You can fix an interpretation, calling 0 volts ‘logical 0’, and 5 volts ‘logical 1’, and then you’ll have an AND-gate; you can equally well call 0 volts ‘logical 1’ and 5 volts ‘logical 0’, and you’ll have an OR-gate. (You can also call 0 volts ‘0’ and 5 volts ‘1’ if applied to the inputs, and 0 volts ‘1’ and 5 volts ‘0’ if applied at the output, and then you’ll have a NAND-gate. But, baby steps!)

Is this clear so far? Do you understand that, without changing anything about the system, we can use it to compute both the logical AND and the logical OR of its inputs? That it’s just a matter of interpretation—of mapping physical states to logical states, to states of the computation?

If not, then tell me how you’ll solve the general problem: I give you a physical system, built using some fixed principles that I’ll however not divulge to you. Can you tell me what that system computes? If yes, how do you do that? If not, why not?

Rather the other way around: if you have some jumble of symbols—like, for instance, my previous post—without knowing the code, the map that takes symbols to certain meanings, you have no way of knowing if it’s gibberish, Shakespeare, or a desperate internet rant trying to make some points about the arbitrariness of computation. Because without the code, it is none of these things—or rather, it could be any of them: you could use a code that takes it to Shakespeare, Voyager one that takes it to gibberish, and myself one that takes it to late-night philosophy.

Neither of us is right in any objective way, because the meaning is not in the symbols—the syntax doesn’t fix the semantics—but only in the interpretation of the symbols. The same goes for symbols manipulated by some computing machinery—there is no sense in which you can say, ‘those symbols mean such-and-such’, and be objectively right. Whenever you do so, you’re invoking convention, definition, and interpretation.

Tangentially related, sometimes I wonder if my suicide attempt in '90 was actually successful and my own personal version of hell is to keep going as if nothing happened.

Remember when I said that there is an arbitrary mapping from the values used by a Boolean algebra to low, high, or voltages? Without knowing that mapping you cannot say you know the truth table. You can build two truth tables - one with 5 volts being 1, one with 5 volts being 0. And you can easily derive the Boolean function for either case. But when you withhold the voltage specification, either can be equally true.
It is kind of like Tom Lehrer’s example in “New Math.” The whole problem is very different once you tell us that we’re working in base 8.
We deal with this uncertainty all the time, actually. Defects cause a gate to exhibit different behavior from a good gate. When you try to diagnose a failure, sometimes you come up with a list of possible causes.
In any case, in your example you will get exactly two possible truth tables, and neither are arbitrary. Lack of information leads to more possibilities - nothing surprising about that.

Say you have an encrypted string. It may look like gibberish (though you can measure the information in the string and see that it isn’t). If the mapping was arbitrary, any decryption key would be equally valid. Do you think this?
You can have one word which means very different things depending on which language you are reading it in. Do you think this is arbitrary?
I read the book by the guy who developed the languages used in Game of Thrones, and language development is far from arbitrary - much less so than I had thought, actually.
And to repeat, we interpret all our physical sensations. If you think “yellow” is any more real than simulated yellow you are fooling yourself. We arbitrarily assign the name to a portion of the spectrum, but what we see when we see yellow is far from clear. So a simulation need be no more arbitrary than real life.

Allow me to try to intervene once again. It may be that we’re just talking past each other, and I was pretty much resigned to that at this point, but it seems to me quite obvious that there are, not one, but a whole handful of objections to your recurring argument about the interpretive nature of computational syntax. Here are a few that come to mind:

[ul]
[li]The same objections apply to our interpretation of our sensory inputs, and the same objections could be brought to bear on the apparently arbitrary nature of how neurological activity could be interpreted. Our concept of reality exists as a viable and commonly shared viewpoint because those interpretations have become fixed by an interpretive framework endowed by evolution and experience. It is, however, quite arbitrary. We think we see the world, but we only see it in a very narrow range of the electromagnetic spectrum, in a narrow range of physical scale (the quantum world is invisible to us, so is any true perspective on the scale of the galaxy), and in a narrow range of temporal scale (we cannot see a beam of light moving, nor can we see tectonic processes that move mountains and continents). More fundamentally, perhaps we are constantly branching into an infinity of multiverses as per the Everett “many worlds” interpretation, but are only ever aware of one solitary path. So what is “reality” except an agreed-upon view that depends on this fixed, arbitrary genetically endowed interpretive framework? Meanwhile the “arbitrary” nature of neurological activity has become fixed by a common but arbitrary interface to motor activity which allows us to speak and write our thoughts.[/li][/ul]

[ul]
[li]In exactly the same way, a computational intelligence is arbitrary at the trivial level of the interpretability of its operational mechanics, and at the same time fixed by the interface we have built to it, which functions as the common interpretive framework that governs all our interactions with it. We are not concerned with high and low voltages, we are concerned with the unambiguous interactions at the interface. To cite your example of agreement about the difference between a chair and a whale, if a computer presents us with a picture of both, we can agree on what they are with no more and no less ambiguity than we can on our retinal images of the same objects.[/li][/ul]

[ul]
[li]On a point directly related to the last one, you are quite wrong in alleging that my hypothetical philosophy book written by my hypothetical simulated human is a “matter of interpretation”. Of course in a sense it is, because we have to agree on language and its written symbols, but this is no different than would apply to any book in the real world. My point is that by reading and copying the book from the simulated world, it has been instantiated as an actual thing, notwithstanding your complaint that the virtual world doesn’t really exist.*[/li][/ul]

[ul]
[li]And finally, I would argue that the alleged “homunculus fallacy” is just silly because, depending on one’s philosophical framework, it either applies equally to both human and artificial minds or it applies to neither of them, for the reasons above. There are in fact credible if controversial reasons to suppose that the computational theory of mind, while a wonderfully capable explanation and the best such theory by far for many mental processes, is not by itself a complete theory for all such processes, but such arguments have nothing to do with the “interpretability” argument.[/li][/ul]


  • A mundane analogy to illustrate this point. My favorite income tax preparation program no longer runs under my preferred OS. But I can run a virtual machine that runs a different OS and install the program there, and then export the result to my real machine and print or e-file it. The virtual machine had no physical existence and when I shut it down it vanished altogether, but the tax authorities aren’t going to quibble that what I sent them is merely an arbitrary interpretation of NAND gate voltages from some non-existent virtual reality!

BTW, I’ve just started reading Permutation City. I’m just in the first chapter, but it’s quite interesting. I thought it was going to be about people in a simulation accidentally discovering that fact, but (at least judging by the first chapter) it’s about early and imperfect simulated reality which people can voluntarily enter as brain-scanned “Copies” of their originals.

One more thing. You’ve over-specified things by saying 5 volts. What you would actually do is to slowly ramp one of the input voltages until the output changes. The switching voltage is not arbitrary, but is set as I mentioned based on technology and the process.
Plus, the exact value of a “1” will differ for different instances of the chip. We have a thing called spec search where you apply a test at decreasing voltages until it fails. The last passing voltage tells you Vmin, the minimum voltage at which this chip has the logic function in the spec. It is theoretically possible to find the exact gate which changed its function to make the test fail - but we never do. So we are quite aware that logic function depends on the voltage. But none of this is arbitrary.

And one more thing [/Columbo]
One of the things you learn in logic design is transformation. One of the most popular is converting an AND gate to a NOR gate with both inputs inverted (or an OR to a NAND.) I should have remembered this earlier, but it’s been decades since I’ve designed anything and in any case logic synthesis tools do it now.
This is exactly the transformation being discussed. The two are of course logically equivalent. So it isn’t very interesting and certainly not arbitrary.
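For completeness, the equivalence is easy to check by brute force over the four input combinations (plain Python):

```python
def NOT(a):     return 1 - a
def AND(a, b):  return a & b
def OR(a, b):   return a | b
def NAND(a, b): return NOT(AND(a, b))
def NOR(a, b):  return NOT(OR(a, b))

for a in (0, 1):
    for b in (0, 1):
        assert AND(a, b) == NOR(NOT(a), NOT(b))   # AND as a NOR with both inputs inverted
        assert OR(a, b)  == NAND(NOT(a), NOT(b))  # OR as a NAND with both inputs inverted
```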
Why do people (and synthesis tools) do this? First, NANDs and NORs are cheaper than ANDs and ORs, since the implementation naturally inverts the output. Or you may have inverted inputs already.
But it is not something to base much of anything on.

Actually, you can build more than two truth tables—any mapping of input/output voltages to logical values is just as valid, like the one I gave above, resulting in the NAND-gate.

Anyway, if I understand you correctly here, you’re granting me two things:
(1) The physical system alone does not fix, in an objective way, the computation it performs
(2) In order to have a physical system compute something, we must fix an interpretation taking physical states to logical states
Furthermore, you hold the following to be true:

(3) Mind is the result of an appropriate computation: any system instantiating the right computation, instantiates a mind

Would you agree with these statements? If so (I’m not going to spring some lame ‘gotcha’ trick on you here, just trying to make sure, for once, that we’re on the same page), these two corollaries follow:

(4) From (1) + (3): The physical system alone does not fix, in an objective way, whether a mind is instantiated or not
(5) From (2) + (3): In order to determine whether a physical system instantiates a mind, we have to fix an interpretation in an arbitrary way

Are you with me so far? If not, where do you disagree?

Shannon information, which I presume you allude to here, is only an average measure, computed using the probability of all the code words—i.e. something you need to know the code to compute. Even if you’re talking about algorithmic information, which gives you a valid notion of ‘information content’ for a single string, it’s still the case that very low-information strings in the code text can map to high-information strings in the plain text—provided you have an equal amount of high-information code strings mapping to low-information plain text.

That is, if ‘101’ maps to the entire text of Hamlet, at some point, something must happen, like a very long string mapping to ‘he’, or something.

Natural languages are subject to natural constraints; codes aren’t. Any mapping f(x) = y, where x is a plain text string and y is cipher text, is a valid code; any such mapping can be imagined as a (theoretically infinite, in practice just universe-shatteringly enormous) code book, with plain text entries in one column, and cipher text in the other.
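To make that concrete, here is a toy code book of that sort (the entries are placeholders I made up; nothing constrains their lengths or contents to match entry by entry):

```python
# Plain text in one column, cipher text in the other; any such table is a valid code.
code_book = {
    'the complete text of Hamlet ...': '101',                    # short cipher string, long plain text
    'he': '00101101' * 1000,                                     # and, to compensate, a long one mapping to 'he'
    'attack at dawn': 'colourless green ideas sleep furiously',
}
decode = {cipher: plain for plain, cipher in code_book.items()}
print(decode['101'])   # the 'information' lives in the book, not in the three symbols
```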

No, we don’t; a sensation is a physical occurrence, and is just exactly that physical occurrence, without any interpretation (like a chair is a chair, and not a whale, without interpretation). Anything else leads to a picture where sensations are displayed on some kind of internal screen, for the purpose of our little homunculus to interpret: which is a viciously circular picture.

[quote=“wolfpup, post:170, topic:755543”]

[ul]
[li]The same objections apply to our interpretation of our sensory inputs, and the same objections could be brought to bear on the apparently arbitrary nature of how neurological activity could be interpreted.[/li][/ul][/quote]

Again, we don’t interpret either our sensory inputs, or our neural activity—this directly leads to the homunculus. Rather, our neural activity is exactly that physical process that it is, and nothing else, just as the movement of stars in the galaxy just is what it is, without need for interpretation (whereas a galaxy-simulation unavoidably needs one).

What happens if we have a perception, say, see something, is (something like; this is just a cartoon) the following: a photon impinges upon the retina, exciting some rhodopsin molecule; this then de-excites, triggering some sort of cascade release of chemical triggers, crossing various synaptic clefts, exciting various neurons that change the frequency of their spiking, eventually reaching the thalamic nuclei, then being forwarded to the visual cortex, causing patterns of excitations here and there. Real patterns, mind—there is no interpretive uncertainty about them. And that’s, ultimately, just what perception is—though how exactly that works, we don’t know yet.

Now think about what would happen in a computer: something that you could interpret as the simulation of a photon (or not) impinges on what you could interpret as a simulated retina (or not), triggering what you could interpret as a rhodopsin excitation (or not), and so on (or not). This is not something that just is what it is; it is what you interpret it to be.

But clearly, such interpretation can’t play any role in the actual mind: it immediately invokes the homunculus, which looks at these processes on its inner screen and takes them to be something other than what they are—i.e., interprets them. But how does it do so? After all, all that goes on inside it (if the computational theory is sound) is just more computation in need of interpretation. So it needs its own little homunculus, with another little homunculus, and so on. And worse yet, this infinite ladder actually must be traversed: the n+1st homunculus must be properly interpreted, before it can in turn interpret the nth.

That’s not a matter of interpretation, though, it’s a matter of interaction: we don’t see x-rays, because we have no appropriate interactions with them; likewise, interaction becomes negligible far above the level of size of individual quantum systems; and so on. The fact that we only interact in the domains we do is, again, quite independent of any interpretation.

Even if that were right, then again, it’s just a matter of interaction: we are entangled with, say, a measuring apparatus; hence, we only see the ‘spin up’ result if we are in an appropriate ‘see spin up’-state. Again, interpretation has no role to play here.

Reality is simply that about which objective statements of fact are possible: the thing is a chair, not a whale. (In quantum mechanics, before you bring that up, these objective statements are the expectation values of measurement operators.)

Again, there’s nothing arbitrary (at least, not in an interpretational sense) about it: it’s not that a muscle, say, tabulates the inputs it gets, in order to then interpret it as the command ‘flex’ or ‘relax’; it’s due to the way the muscle is built that a given electrochemical signal causes it to flex. In the same way, a billiard ball pushes another one, causing it to move—the second ball does not ‘interpret’ the ‘signal’ it gets from the first ball, it’s just an ordinary chain of causality. That exactly this chain of causality is realized in the motor system is, of course, a calcified accident of evolution, a contingency; but that doesn’t make it an interpretation.

Yes, this is one of the most pernicious cases: we’re not doing any interpreting here, it’s just obvious what’s being shown! But that’s not so. The interpretation is simply very fine-tuned to our biology, and our expectations, so that, like air, we hardly notice it. But consider Magritte’s insight: ‘Ceci n’est pas une pipe!’

Think about hieroglyphs: you might see pictures—say, eagle water fish reed. You might then concoct a scenery in which the eagle catches the fish from the water by the reed. But to somebody knowledgeable in hieroglyphs, it might just mean ‘Rama-Un wuz here’.

Images are treacherous. Someone could use an extremely sophisticated version of hieroglyphics, such that what you take to be a depiction of a chair, to them, means something very different—say, ‘the enemy attacks at noon’.

After all, we don’t take the image of a chair as that what it is—a collection of pixels, forming the likeness of a chair, i.e. a two-dimensional image—but we interpret it to mean chair. And while this seems a very obvious interpretation to us, that doesn’t mean somebody else couldn’t use a different one.

This is in fact the root of the whole confusion that I’m trying to clear up here: in most cases, the ‘right’ interpretation seems so obvious to us, that we don’t notice we’re doing any interpretation at all; hence, we suppose that the thing we’re looking at just is the thing we’re interpreting it as. But nothing could be further from the truth: an image of a chair is not a chair, as you will quickly discover by trying to sit in it (or by trying to smoke Magritte’s pipe). A chair is a chair. (Or, if you want to be nit-picky: that particular collection of molecules called ‘a chair’ is that particular collection of molecules which gives rise to all the—real, objective, non-interpretation relative—properties that make up ‘chair-hood’.)

The thing is, though, that you first must interpret the computation you’re performing as being that of a simulated world in which a book is being written. Without that interpretation, there simply is no book—there are symbols on a screen, as unintelligible as hieroglyphs. So it is only upon the right interpretation that this book comes into being; a different interpretation would result in a different book or no book at all!

Well, as I said, it was the interpretability argument that convinced the founder of computational functionalism that it must be wrong, so there’s that.

As for the homunculus fallacy, it’s a very well established death knell for any theory of the mind to contain a homunculus, because it simply indicates vicious circularity. The argument ‘it’s silly because I can’t think of a way to avoid it’ really doesn’t cut any ice, here. The problem is simply that if you have a little man in your head to ‘look at’ your perceptions, or interpret your sensory inputs, or neural excitation patterns, then either there is some mechanism inside the homunculus that makes its perceptions/inputs/excitation patterns definite without relying on interpretation—but then, the homunculus was never needed in the first place. Or, it has in its head another homunculus, which does the same thing for it that it does for you. But then, the n+1st homunculus’ perceptions/etc. must be definite before the nth one’s can be, and hence, we would have to traverse an infinite tower of homunculi before anything is ever perceived or interpreted, which is of course absurd.

Of course: after all, you have interpreted the program’s output as being tax statements; or, if you just sent these outputs to the IRS, then the person responsible for your tax account there will make the same interpretation—after all, while the interpretation is arbitrary, that does not mean we can’t share it with others. But it also doesn’t make it any less arbitrary: somebody else could, if they so desired, interpret the program you ran as something else entirely.

Indeed, that’s exactly what the virtual machine does: it’s a layer of interpretation, interpreting (provided it is itself being interpreted in the right way) your OS’s operations as those of the OS that your tax program needs.

After all, all that happens at the machine level is just voltages being shunted around; your OS makes them into a repertoire of instructions; the VM then interprets the instructions of the old OS in terms of the new ones.

For most of your objections, it suffices to look to Integrated Information Theory to rebut them: after all, there it is actually the case that a simulated mind will not be conscious, while a real brain (or an appropriately built analogue) will be—the simulation does not produce enough integrated information, which is a readily measurable and calculable quantity. Hence, all arguments that what can be done in the real world must be possible in computation/simulation fail, by explicit counterexample.

I actually made a mistake earlier on—I linked to an excerpt from Egan’s novel Diaspora, rather than one from Permutation City. I hope I haven’t accidentally misled you with that. (But anyway, Diaspora is also excellent, and explores many similar themes—the chapter ‘Orphanogenesis’ is still a better defense of computationalism than I’ve read in much of the philosophical literature! Plus, lots of fun with the theory of fiber bundle spaces…)

We need to distinguish the mathematical truth table from a physical truth table. In the mathematical truth table you have 0 and 1, and you can rename a 1 as One, one, High, hi or even low and still have equivalent truth tables. Voltage has nothing to do with this version.
For a physical truth table, voltage values are mapped into logic values. There is a range of voltages which cause the gate to see a 0 and another range that lets it see a 1, and yet another range which cannot be represented by the Boolean truth table. (The Z value I mentioned before.) In particular, the voltage range which works as a 1 or a 0 varies between processes and even between parts manufactured in a process. Part of the engineering is figuring out how to balance the desire for low voltage 1s (which use less power and are faster, since it takes less time for a 0 to become a 1 volt 1 than a 5 volt 1) and yield and reliability, since higher voltages have more room for process variation.
Our work is to make the physical implementation look as much like the mathematical truth table as possible, and still meet requirements for speed and power which aren’t represented in the truth table at all.
None of this involves any kind of interpretation.
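Roughly, and with made-up threshold numbers for some hypothetical logic family, the physical-to-logical mapping looks like this (Python, purely as an illustration):

```python
def logic_value(volts, v0_max=0.8, v1_min=2.0):
    """Map a voltage to a logic value: one range reads as 0, one as 1,
    and the band in between cannot be represented in the Boolean truth table."""
    if volts <= v0_max:
        return 0
    if volts >= v1_min:
        return 1
    return 'Z'   # the undefined value mentioned above

print(logic_value(0.3))   # 0
print(logic_value(3.1))   # 1
print(logic_value(1.4))   # 'Z'
```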

No - each instance of a physical system, at a certain moment in time, does fix the computation. Variation involves variation in inputs - which might be hard to see, and not all inputs are traditional inputs you would see in a truth table. A defect in a power input, not represented at all by Boolean algebra, can affect the performance of a chip, for instance.
Different instances of a physical system are different. Unlike software, we don’t know how to make exact copies. (If we did I’d be out of a job.) A defective system has a different truth table from a good one. (I should say faulty system - there are defects which are undetectable.)
I said at a moment in time since there are factors which can cause defects after a certain amount of time, so chip 2020 might not perform in the same way as chip 2016.
But if you take physical system to include the environment, the physical nature of a single chip does fix its computation, in an objective way.

You are confusing interpretation with specification, and acting as if you found the chip on a beach somewhere. The physical to logical mapping is instantiated in the design and specification phase. You can design so that 5 volts is a 1, or that 1 V is a 1, but once you do that no interpretation is necessary, unless the data sheet is really bad.
Now for purely combinational logic you can invert everything if you wish, and call 1s 0s and 0s 1s. But you can’t for more general logic.
Take an asynchronous clear input on a flip-flop. When you power up, you don’t know what state the flop is in so you call it X. You can design an input to the flop so that if you put a 1 on the CLR line the output of the flop goes to 0, while a 0 does not change the output at all. This line is clearly held at 0 except during initialization of the circuit.
Since it is all a matter of interpretation, say you just flip all the bits. You now hold this line at 1 except during initialization. And instead of the output of the flop changing during normal operation, it stays at 1 (formerly 0). It is not even remotely the same computation. (And don’t get me started about what happens to the clocks and non-periodic control signals.)

Now that I agree with, with the proviso that I have no idea of what the appropriate computation is.

Irrelevant now since I don’t agree with (1) and (2). However, different physical systems will instantiate different computations and different minds. But we see that already with the brain. Brain defects can cause certain people to do very different computations - for instance becoming paranoid or sociopaths. And we know that the brain changes over time, leading to different computations. And of course we can, with drugs, change the computation. Do you consider the sociopath’s mind, or the Alzheimer’s patient’s mind, equivalent to a “normal” mind?

A string of N ones, N being large, has less information than a string of N random bits. No code words are involved. No code word probabilities are involved.
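You can see this with any off-the-shelf compressor, using compressed length as a rough stand-in for information content (Python; bytes stand in for bits here):

```python
import os, zlib

n = 10_000
ones = b'1' * n               # a string of N identical symbols
random_bytes = os.urandom(n)  # N random bytes

print(len(zlib.compress(ones)))          # tiny: hardly any information
print(len(zlib.compress(random_bytes)))  # close to N: essentially incompressible
```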

If you are talking about the information on a channel, the 101 cannot map to Hamlet (and thus represent a large amount of information) unless Hamlet has been transmitted previously.

What constraints are these? Natural constraints, that is, not constraints from our human limitations.

If you ever read anything about vision work in AI (which was just starting when I was in college) you’ll know that distinguishing a chair from a whale is not so easy, and even recognizing a chair is not so easy. You have a model of a chair, but is a box a chair if someone is sitting on it? If you hammer a back onto it? My office mate had an ergonomic chair which looked nothing like a normal chair.
And surely you have both seen things in a scene which were not there, and not seen something which was there. Or seen the optical illusions which make two equally long lines seem different in length. Saying that we sense a perfect representation of the physical world goes against at least a century of psychology.

It’s not a simulation, it’s a dream. My dream! Best of luck to you all when I wake up.

OK. So, first and foremost—all of the details about actual chip construction, voltage tolerances, etc., serve only to obfuscate. We’re concerned with a very simple question here: given a physical system, what does it compute?

I maintain that there is no objective answer to this question; that you can’t discover the computational properties of a system the way you can discover its physical properties, say, its mass, its charge, and so on. This seems quite obvious to me, and I’m honestly struggling to see what your hangup is. It is, after all, nothing but the statement that you can’t find the meaning of a coded text, if you don’t know the code, and that there’s no universal code breaker.

But anyway. Let’s take a step back again. I have the physical device I outlined above, showing behavior as outlined above. Now, you’ll agree, barring caveats on implementations and the like, that such a physical system is possible: that I can with sufficient reliability apply low voltage to the inputs, and get a low voltage out, and apply high voltage to the inputs, and get high voltage out, and so on. Within certain tolerances (which we can freely choose to make things work as we want them to), I can thus create a physical device that shows the behavior described above. With me so far?

OK, assuming that you are, now I write down, in my ledger, the following ‘l : 0, h : 1’—or, if you prefer that, ‘1 +/- 0.5 V : 0, 5 +/- 0.5 V: 1, else: error’.

Now, I give this device, together with the page from my ledger, to somebody (person A) who desperately wants to know what the logical AND of 0 and 1 is. They can then use my device to implement that function: apply 1 volt to the first input, 5 volts to the second, and observe the output of 1 volt, to within acceptable tolerances. They consult my ledger, and find out: 1 AND 0 = 0!

Alright, after being done with that, they hand me back the device. I keep it exactly as it is. However, now I write in my ledger: ‘l : 1, h : 0’ (or again, the above mentioned specification thereof, if you prefer). Then, I give the whole thing to person B, who wants to know what the logical OR of 0 and 1 is. So, they apply the requisite voltages, observe the outputs, look it up in the ledger, and find: 1 OR 0 = 1!

We do the same thing again. I write in my ledger: ‘For inputs: l : 0, h :1; For the output: l : 1, h :0’. I give it to person C, who—you may have guessed it—wants to know what the NAND of 0 and 1 is. He measures, looks up and—1 NAND 0 = 1!
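If it helps, the whole setup fits in a few lines of Python; the voltages and tolerances are the ones from my ledger above, and the ‘device’ is the idealized behaviour I described (high out only if both inputs are high):

```python
def device(v1, v2):
    """The physical box: ~5 V out iff both inputs are ~5 V, otherwise ~1 V out."""
    high = lambda v: abs(v - 5.0) <= 0.5
    return 5.0 if high(v1) and high(v2) else 1.0

def read(volts, page):
    """Look a measured voltage up on a ledger page, e.g. {1.0: 0, 5.0: 1}."""
    for level, bit in page.items():
        if abs(volts - level) <= 0.5:
            return bit
    raise ValueError('out of spec')

page_A = {1.0: 0, 5.0: 1}     # A's page: l : 0, h : 1
page_B = {1.0: 1, 5.0: 0}     # B's page: l : 1, h : 0
page_C = {1.0: 1, 5.0: 0}     # C's output page: inputs read as on A's page, output flipped

out = device(1.0, 5.0)        # the same physical run every time
print(read(out, page_A))      # 0  -- A: 1 AND 0 = 0
print(read(out, page_B))      # 1  -- B: 1 OR 0 = 1
print(read(out, page_C))      # 1  -- C: 1 NAND 0 = 1
```

Same device, same voltages, three different computations, depending only on which page of the ledger you read it against.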

Now, do you agree that A has computed the AND of his two inputs, B has computed the OR, and C has computed the NAND? If not, what must happen in order for them to do so?

Presuming you agree that, indeed, A, B, and C have performed the computation they wanted to perform, let us continue. Now, you’ll note that the physical device they used to perform the computation was, in all three cases, the same one. It showed exactly the same behavior. Yet, it implemented different computations. Hence, what computation is being performed is not fixed by the physical device. Do you agree?

You can’t observe a physical device and conclude, say, that it enumerates the digits of pi. It’s not something you can discern by careful study of the device. It’s not a property of the device like, e.g., its mass. What you can do is assign a mapping from the states of the system to the logical states of a computation—but that involves, always, the making of an arbitrary choice. You can equally well assign a mapping that takes the states of the system to those of a chess game it plays against itself.

You talk about ‘design specifications’, ‘faults’, and so on—none of this matters. Whatever the intended interpretation, I can take a different one—the intended interpretation is not a natural law. Thinking that whatever the maker of a chip intended to be the interpretation of the chip’s physical states in computational terms makes it now somehow the objectively right interpretation is like thinking the US government could make gravity illegal, and have the universe obey its command.

The situation is exactly as if we had found a chip on the beach: only if there is an objectively right interpretation of its states in computational terms do we not need to make an arbitrary choice in assigning computational states. But if we do need to do that—which we unavoidably have to—then there is no objective fact of the matter whether a given physical system gives rise to a mind. And with that, the computational paradigm collapses.

Thou’rt the Red King! Sleep long and peacefully! (“Heed no nightly noises.”)