Do you believe it's possible the world is a simulation?

However, in a sim, you don’t need to “input” anything; the observer’s gaze is one of the simulated properties. It could easily be a maintained variable.

(Worse, it could easily be a manipulated variable. If there’s something wrong with the Moon during some program upgrade, well, no one seems to want to look at the Moon for a couple hours. We wouldn’t even wonder about it. Or…just high clouds. Shrug.)

If we are in a sim, it’s quite an advanced one, because it does allow us to wonder if we’re in a sim. More primitive models probably lacked this capacity.

(I was mentioning all of this to a friend, and he said he thinks reality is manipulated. Not long ago, he heard a song on the radio that he hadn’t heard in a good forty years. Then, shortly afterward, he heard it again. That proves manipulation…right?)

You’re still missing the problem: there is simply no objective sense in which the physical system you’re looking at implements a simulation of anything at all, be it arithmetic or an FSA. This is all due to interpretation on the part of the user.

Again, I give you a physical system that returns a high voltage if and only if a high voltage is applied to both of its inputs. Does it implement an AND-gate? An OR-gate? Some other, arbitrary function of two bits?

Each of these answers is just as right as any other; it all depends on the meaning you assign to ‘high voltage’ and ‘low voltage’. What is being computed is not a property of the physical system implementing the computation, but of the mapping from physical states to computational states; and this mapping ultimately resides within the user’s mind. There is no computation without such interpretation, and the interpretation—and the interpretation alone—determines what is being computed.

Deciding whether something implements the AND- or OR-function is much simpler still, yet completely impossible—simply because no ‘right’ answer exists.

This is circular, and viciously so: in order for our computed mind to observe the computation, and make it thereby a computation of a mind, the computation needs to already be a computation of a mind—since if it’s not, there is no mind to do the observing. It’s the homunculus fallacy all over again.

But meaning—intentionality—certainly is a very important part of consciousness. If we can’t get meaning, we can’t get consciousness.

What is it that makes one interpretation wrong? Which interpretation of the input-output system is right: that which makes it an OR-gate, or that which makes it an AND-gate?

I’ve nowhere said that a teleported entity would not be conscious—in fact, I believe it would be. It’s just that that fact doesn’t entail that consciousness is, in some sense, informational—it’s not just information that is being teleported, but matter, concrete stuff with intrinsic properties.

The distinction between teleportation and modeling—or transferring mere information—is roughly the distinction between the set of John’s paternal ancestors and the set of books on his shelf: the information contained in the ancestor relation can be ‘teleported’ onto the books, but that doesn’t mean that Moby Dick becomes John’s great-great-grandfather. There are intrinsic properties not reducible to the relational properties encapsulated in informational structures; if that weren’t the case, then everything we could ever know of a set of things is how many there are, by Newman’s objection.

And any state of a computational system at any given point is just a sequence of characters, hence, it has no meaning without interpretation—while a state of mind, at any given point, contains lots of different meanings, without need of interpretation (again, positing a little interpreter in the head just runs down the homunculus regress).

Of course conscious aliens plausibly exist, and I’d test them the same way I ‘test’ other humans: by conversing with them, for example. The problem with a simulated consciousness simply is that there is no such thing: any simulation you care to present to me can equally well be considered to be a simulation of the detailed topography of my navel; it is a conscious mind exactly as much as it is my navel: not at all.

That’s not even close to any argument being made. What is being argued (quite explicitly) is that all computers are equivalent (in the limit of infinite resources) to Turing machines, that Turing machines are syntactic engines, that syntax does not suffice for semantics (a point, incidentally, I remember you making yourself on this board), and that hence, no computer can ever capture the semantic properties of a mind: a perfectly cogent argument.

It may seem cogent, but it’s a stupid argument, and one largely dismissed in mainstream cognitive science. But I want to get back to our discussion, and though it may be futile, I want to take another crack at it in response to this:

You seem hung up on this notion that computation is equivalent to “thinking about” or “describing” something, which cannot bring the described thing into being. This is fundamentally misguided because computation is a process that is capable of producing observable behaviors. Those behaviors are in many cases all that is necessary to define an entity such as a chess player or an able Jeopardy player or, arguably, a conscious being. That a particular syntax is necessary to derive this meaning makes no difference and applies equally to human communication.

Consider the case of a good chess playing program, something that is no longer controversial. As an aside, Dreyfus once argued that “mere” symbol manipulation could never produce a program that played with human-level strategic skill, but only one that basically obeyed the rules of chess and did simple evaluations. How wrong he was! Searle and Azarian are both in the same boat. But anyway, according to your position, someone observing the internal operations of such a program would have no idea what it was doing, and might make the interpretation that it was just computing the value of pi. The program has not “instantiated” anything, you claim. Yet by putting the appropriate interpretation on its symbolic output, we see that it’s playing excellent chess – we have derived meaning from the symbols and extracted genuine semantic value.

It’s another version of my argument against your monkey analogy – perhaps the monkey is just typing gibberish, but if we derive an interpretation of good writing that continues to be empirically supported, then by golly the monkey really is an excellent writer and yes, we should give him a book contract. What we’ve effectively done is understand its language. I don’t care to argue whether this interpretation of the literary monkey or the chess playing machine is objectively correct or the only possible interpretation; I would argue only that it fully meets the expectations of a particular behavior and therefore instantiates to the observer the entity that is defined by that behavior.

If the behavior in question is consciousness, and if it can be computationally instantiated in an advanced simulation, then as I said, we can expect to observe autonomous intelligent beings unable to determine whether or not they are in a simulation. We know they’re not real, we are capable of putting other interpretations on the simulation such as wiring it up to a light board to produce pretty patterns, but they do not and cannot know that they aren’t real for the following crucial reason: because in their world, from their conscious perspective, they’re as real as you or I!

To repeat a key point: if you grant that it is possible to use computational methods to create a conscious machine that interacts with the real world, then it’s impossible to avoid the corollary that we can then create one or many such machines that interact with a simulated world. It’s as “real” as anything else; I might come back in a year of sim time and find that one of my creatures has written a book of philosophy ruminating with considerable angst about the nature of its existence. If this is all a chimera, where did the book come from?

When you or I do arithmetic, we are subject to the same constraint - after all, our mental image of the world is very different from the actual world. So I don’t see the difference between the real world and the simulated world in this context. Now, if you are an extreme skeptic and claim that we can’t know the real world, you certainly are consistent in saying there is no real simulated world either.

There is insufficient evidence to determine the function.
Let me give you a real world example. When you are testing a chip, what you are basically doing is putting an XOR gate across the outputs of the perfect chip (a model) and the real chip, which may have defects. If the output of the XOR gate is a 1, then you have proven that the circuits are different. We have a concept called coverage, which says how many of a certain set of defects have been detected by a test. Putting a 1 1 (both inputs high) into the OR/AND gate does not provide enough coverage to show whether it is an OR or an AND, or one of these with defects.
Provide an exhaustive set of inputs, and then you do know. One of the things security people worry about is the ability of a user to reverse engineer a chip if they have access to the combinational logic - you might think this is an impossible problem, but it is not hard to do in practice, though it is NP-hard.
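To make that concrete, here's a rough Python sketch of the idea (the names are just illustrative, not any real test tool): XOR the model's output against the device under test, and note that a single 1-1 pattern can't separate AND from OR, while an exhaustive set of inputs can.

```python
# Rough sketch: XOR-compare a reference model against a device under test (DUT).
# All names here are illustrative.

def and_gate(a, b):          # the reference model
    return a & b

def maybe_defective(a, b):   # the DUT; here it secretly behaves like an OR
    return a | b

def miscompare(model, dut, a, b):
    """XOR of the two outputs: 1 proves the circuits differ on this input."""
    return model(a, b) ^ dut(a, b)

# A single 1-1 pattern gives no coverage for telling AND from OR:
print(miscompare(and_gate, maybe_defective, 1, 1))   # 0 -> no difference detected

# An exhaustive set of inputs settles it:
for a in (0, 1):
    for b in (0, 1):
        if miscompare(and_gate, maybe_defective, a, b):
            print("differ on input", (a, b))         # fires on (0, 1) and (1, 0)
```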

Again, a gate is defined by its truth table. You may call a tail a leg, but it is still a tail. If you misimplement something, you can prove that the bad cell does not match its truth table. This verification is done all the time during computer design.
So, unless you contend that we don’t know anything, this is nonsense. We of course have to agree on our terminology, but I hope this isn’t a “do you see green when someone else sees red” problem.
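A toy sketch of what that kind of verification looks like (made-up names; real equivalence checking is far more involved): the truth table is the spec, and the cell either matches it on every input or it doesn't.

```python
# Toy sketch: the truth table as the spec, and exhaustive verification of a
# candidate cell against it.

AND_SPEC = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

def candidate_cell(a, b):
    # the implemented cell we want to verify
    return a & b

def violations(cell, spec):
    """Inputs on which the cell fails to match its defining truth table."""
    return [inp for inp, expected in spec.items() if cell(*inp) != expected]

print(violations(candidate_cell, AND_SPEC))   # [] -> the cell matches its definition
```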

My friends at Intel would be amused at this statement. You can’t decide whether 1 + 1 = 2 if you reject the axiomatic definition of arithmetic also.

No, our mind computes things unobserved - at least mine does. And Damon Knight’s. And this unobserved computation of mind interacts with the outside world - like when we are driving without thinking about driving. So all you have to do is establish an input that observes our thought. It is certainly circular, in a sense, but feedback loops exist all over computation. It is the mind observing itself, or rather part of itself, so no homunculus required.

Non-conscious things have intentions also. If you don’t believe me, meet my dog. As for meaning, you’ll need to define what you mean by that. (Hah!) Even primitive creatures assign meaning to certain kinds of stimuli, such as “this is good to eat” or “that is something to get away from.” We can go from there all the way up to philosophical meaning. I doubt that the first conscious proto-humans were studying philosophical meaning.

The definition of each - duh.
I’m just flashing on what would happen if the extreme skeptics from my Theory of Knowledge class got plunked down into my Logic Design class. And answered a question on a problem set about what outputs will result from certain inputs for a given circuit with “we can’t ever know for sure, since the truth tables you gave us have no real meaning.” They’d last about a femtosecond.

I gave the definition of what my teleporter does. And what information are you talking about beyond the information sufficient to build an exact replica? We can compute the information content of the body being teleported and show that only this information is sent on the channel. How are you sending additional information? Which channel are you using?

Forget books - the box on the family tree for John’s great grandfather is not John’s great grandfather either. Whether an exact copy of John’s great grandfather is John’s great grandfather is another matter. And, to get back to the point, our simulated conscious entity is not a copy of anything.
I read all 80 zillion pages of Korzybski’s Science and Sanity in high school (to understand what van Vogt was getting at in World of Null A) so I’m quite aware that the tag is not the thing.

First, you are again ignoring the next-state function. Second, I’d strongly disagree that our internal mental state does not need even internal interpretation. If you have ever seen someone interpret their internal mental state very differently while on certain drugs - both recreational and anti-psychotic, say - you’d know this.
Hell, tiredness does this also.

I trust that you don’t talk to your navel - unless you worked for the Office of Navel Contemplation with Tom Lehrer, that is. But if you talk to a simulated consciousness, and it answers in the same way as the alien, why treat them differently? Oh, I forgot to tell you that the alien was actually a strong AI in alien skin.

I suspect I made that argument with reference to the Chinese Room problem which has no state and no semantics. Computers do semantics quite well - writing a compiler involves dealing with both the syntax and semantics of the language. They are not the same, and are in fact dealt with in different parts of the compiler. So Turing Machines can handle semantics. In fact language understanding and translation must involve the understanding of semantics also.
So saying that Turing machines cannot do semantics is just plain wrong. Since the argument is the same whether you talk of computers or Turing machines, the only possible reason I can imagine him bringing Turing machines up is an attempt at a reductio ad absurdum of something with a box and a tape being intelligent. It is kind of like calling computers nothing but big adding machines. Which is ignorant, since it ignores programmability. So sorry, it’s a cheap debating trick by someone who has read about Turing machines but doesn’t really understand what one is.
Why have them? Because they make it much easier to prove theorems about what you can and cannot compute.

BTW, to get back to the point of the OP, if one denies the possibility of simulated consciousness you’ve pretty much defined away the possibility that we live in a simulated world. My objection to the simulated world hypothesis assumes that simulated intelligence is possible, but that practical considerations make it extremely unlikely that the large number of simulated worlds required for the hypothesis exist.

What you’re describing seems not-inconsistent with the ‘many worlds’ interpretation of QM.

Well, if that’s the case, it seems it should be easy for you to rebut it, rather than just scoff and appeal to authority… A dubious authority at that: after all, even if he’s seemingly not famous enough to count for you, Azarian is a cognitive scientist, who does make that argument.

But that behavior is entirely due to the observer’s interpretation. Under a different interpretation, the behavior differs—that’s where the analogy to a description lies: a description only describes a certain thing if it is understood—interpreted—in the right way; under a different interpretation, it describes something else entirely.

There is no such ambiguity about the things themselves; hence, they differ from both simulations and descriptions. Moreover, minds can’t depend on interpretation, since interpretation is itself a mental act, and thus, this posit runs into the homunculus fallacy.

And with a different interpretation, that same symbolic output would correspond to, e.g., playing go, or phrasing answers in the form of a question. So what does the program do: play chess, play go, or play Jeopardy?

The correct answer is, of course, none of the above: it manipulates symbolic structures, which can be interpreted in a certain way so as to pertain to chess moves, go, or Jeopardy. Nothing about those symbolic structures has any intrinsic connection to chess, go, and Jeopardy.

The problem is, though, that there exist infinitely many such interpretations: under one, he’s Shakespeare, under another, he’s Hemingway! So, which is the monkey’s ‘right’ language?

Also, even if you haven’t found a code decoding its output to something meaningful after time t, it could be that a code you’d already dismissed yields something meaningful at a later time—in general, a code is just a mapping between plain- and ciphertext, and a quite arbitrary one at that (as long as you’re not worried about Shannon optimality and the like).

But only to some particular observer. It’s again as with descriptions: they mean one thing to one observer, and another thing to a different one. But having minds be observer-relative leads, as I hope is clear by now, to nonsense: how is the state of mind of the observer supposed to be fixed, if only a certain state of mind fixes the interpretation of a computation?

But they only exist—if they do—as long as we apply the right interpretation to the computation. Do you really think that a world could exist only as long as somebody is watching the monitors in the parent universe? Because that is just the same thing as calling a being into existence through just reading a description—in some magical way, the exalted gaze of the observer confers Being to his interpretation of a bunch of symbols. To me, that’s a reductio ad absurdum—but of course, one man’s absurdity is another’s profound discovery.

Well, I don’t believe that it’s possible to use computational methods to create a conscious machine; as I said above, I believe it’s possible to create artificial minds, but their consciousness does not derive from computations going on in their heads—it can’t, because computation needs interpretation, which needs a mind, which means that you’d need a mind to interpret a computation as computing a mind.

Even if that weren’t the case, though, your assertion wouldn’t follow: it’s very well possible that a computational mind in contact with the real world might be ‘grounded’ in some way, but lose this grounding when thrust into a virtual, interpretation-relative world.

From your choice of interpretation, naturally, since if you chose a different one, there would be no such book; and yet another one would include the complete works of Shakespeare, etc.

I’m certainly not a radical skeptic—I am, after all, arguing for a difference between real and simulated worlds, which would make no sense whatsoever from a skeptical perspective.

I gave the complete characterization above. Let me repeat it (l: low voltage, h: high voltage, I1: first input, I2: second input, O: output; again, as I pointed out above, let me just mention that tremendous amounts of interpretation have already gone into making these identifications—but we’ll take all that for granted, it comes for free!):



   I1   |   I2   |   O
-------------------------
    l   |    l   |   l
    l   |    h   |   l
    h   |    l   |   l
    h   |    h   |   h


So, is this an AND- or an OR-gate? There is, of course, no right answer—it could be both, depending on whether you make the interpretation “l = 1, h = 0” or “l = 0, h = 1”.

In fact, it can be any Boolean function of two variables: a general interpretation takes each row of the table above to an assignment of values. That makes the interpretation a little more complex, but that’s not an obstacle. Hence, what is being computed by a physical system is wholly and solely determined by interpretation.
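To make the relabeling explicit, here's a small Python sketch (purely illustrative): the very same physical table comes out as AND under one voltage-to-bit assignment and as OR under the opposite one.

```python
# Purely illustrative sketch: the same physical table, two interpretations.

PHYSICAL_TABLE = {            # (I1, I2) -> O, exactly the table given above
    ('l', 'l'): 'l',
    ('l', 'h'): 'l',
    ('h', 'l'): 'l',
    ('h', 'h'): 'h',
}

def interpret(table, bit_of):
    """Translate the physical table into a Boolean one under a voltage-to-bit mapping."""
    return {(bit_of[a], bit_of[b]): bit_of[o] for (a, b), o in table.items()}

print(interpret(PHYSICAL_TABLE, {'l': 0, 'h': 1}))   # the AND truth table
print(interpret(PHYSICAL_TABLE, {'l': 1, 'h': 0}))   # the OR truth table
```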

This carries through to more complex systems: any computation can be described as a Boolean function taking m variables to n variables—in fact, it suffices for n to be 1, but let’s take the slightly more intuitive case right now. Using the above method, I am free to choose interpretations for all of the logical gates (in fact, I only need to fix an interpretation of the inputs, which I can then leave fixed as the values propagate through the network).

Hence, a Boolean network with m inputs and n outputs can be interpreted as implementing any Boolean function from m to n variables: what is being computed is completely left open, if we only regard the computing machinery itself. (Needless to say, the point applies just as well to all other models of computation.)

Computation is not in the computer: it’s in the user.

Only because the chip must fit into computational architectures already on the market, and thus, its inputs and outputs come with a fixed interpretation; if you didn’t know that, then it would indeed be hopeless, as I hope is made clear by the above discussion.

There is no such thing as unobserved—or rather, uninterpreted—computation. That doesn’t mean you have to do real time observation; but you do have to, at some point, map the symbols—syntactical entities—to a meaning, imbue them with semantics, use them as representation. At some point, you have to say whether ‘l’ means ‘1’ or ‘0’.

Without observation, there is no mind there that can observe itself. The observation is logically prior to the mind’s existence—without it, there’s nothing that is being computed: low voltages are just that, low voltages. Hence, the mind needs to exist before there is any computation, and thus, if the mind is the result of the computation, the mind needs to exist before the mind exists. A contradiction.

Sorry, I was using ‘intention’ in the slightly skewed philosopher’s way of referring to something like ‘the content of your thoughts’ or ‘what your thoughts are directed at’, etc.

That they react to stimuli is no indication that those stimuli have meaning. A wall reacts to a sledge hammer by caving in; doesn’t mean the sledge hammer had any meaning to the wall. (And meaning, in the sense I’m using it, is that particular relation that stands between a representation and that which it represents.)

Yes, well, I presume you can cash in on that ‘duh’ now and tell me if my example above is an AND- or OR-gate…

I’m not sending additional information; I am building a human being out of, presumably, physical stuff, which has properties beyond those of the information about physical stuff. An architect’s drawing is not a house you could live in: you need physical stuff to make one. Again, if all that is to the physical is information, then everything we could ever tell of any set of things would be how many there are.

But then, what does the interpreting? You’re again positing a homunculus (the one sound, solid death knell for any theory of consciousness that has so far been found).

Those aren’t different interpretations of the same mental state; they’re different mental states.

And if you think that the possibility would somehow shock me or argue against my stance, then I think you haven’t been reading carefully enough: I have no problem with conscious machines. I have a problem with consciousness being due to computation, since computation is only that particular computation thanks to interpretation, and hence, positing that consciousness is due to computation lands you in a vicious circle.

If I remember correctly, it was while discussing Gödel, Escher, Bach—someone suggested that Hofstadter in it made an argument that one could get semantics from syntax, and you thought that unlikely, based on the fact that you didn’t remember throwing the book across the room in frustration (or something along those lines). I haven’t been able to locate the post, though.

All of the semantics in this case, though, is imposed onto the computer by the one programming the compiler, for example. Also, semantics in this case doesn’t quite mean what I mean when I use the term—you essentially interpret the manipulations carried out on whatever lowest level you’re considering if the computer is handed a certain instruction, in order to map them to instructions in a different language.

But what I mean is ultimately what the symbols of some computation correspond to—what is being computed. A computer has no access to these semantics: it takes an input string, and outputs a different string; the meaning of the string has no influence on the operations it carries out on it—those are dictated purely by the syntactic properties of the string. In this sense, Turing machines, and all computers, only know about syntax, and not about semantics.

A cheap debating trick that, somewhat fittingly, only exists in your interpretation.

Well, the MWI assumes that a mental state supervenes on the appropriate physical state, thus giving rise to different experiences (in different ‘worlds’). If it were really the case that any physical system implements any computation whatsoever, then this would be wrong: a given conscious state would supervene on a rock, a teapot, a cloud formation, basically anything; and, in fact, any conscious state would supervene upon each physical system. So, this mental/physical supervenience would be destroyed, and we could basically just get rid of the material world, and physics, altogether, seeing as how we’re probably just a figment of a rock’s imagination.

If you’ve not read *Permutation City* by Greg Egan, I highly recommend it.

The premise is almost exactly that. The universe accidentally computes every possible world using the same matter and energy. It turns out that the arrangement of matter, energy, and time is just a point of view.

Yes, Permutation City is awesome… So much so I already recommended it to wolfpup in this very thread! :wink:

Actually, I mentioned upthread that I planned to read it because it was mentioned in the New Yorker article and sounded interesting, and HMHW highly recommended it! :slight_smile:

ETA: Bah! Ninja’d. For some reason I didn’t notice HMHW’s latest post.

I thought I already had, and was just re-emphasizing that fact. But I’ll try to further support my view below that Azarian is not only “not famous”, he’s an idiot.

(Emphasis mine.)
How is there “no ambiguity about a physical object”? If there is something about a physical object that gives it an independent existence outside of our observation of it, how do we know this? Aside from this irresolvable metaphysical dilemma, doesn’t quantum mechanics tend to falsify this notion anyway? No matter. My real issue here is that you appear to be asserting that our perception of the physical world doesn’t require interpretation. This is absolutely not true. Not only does it require it, but indeed the same comments you’re making about the “interpretability” of computation apply exactly to the physical world: we interpret what we see (and hear, and feel, etc.) based on years of accumulated experience that creates a particular mental model of the world. Someone lacking that model, like a person blind from birth enabled to see for the first time at an older age, would have to learn those interpretive skills from very basic principles. This is no mere argumentative technicality – someone without the correct learned interpretive faculties would literally not be able to survive, even if – as Pinker and others have posited – we are genetically endowed with “mental modules” that facilitate such learning.

Moreover, exactly like the interpretation of the chess-playing program’s symbolic outputs, only one of many possible interpretations maintains empirical consistency. We like to call these empirically consistent interpretations “reality”. There really is a “right” interpretation from that standpoint, and the others are nonsensical. We cannot plausibly interpret the output of the chess program as also being a program that is playing Go or composing poetry and have that empirically confirmed by future behavior – or if we can, then the program really is doing all those things.

Turing himself was a proponent of the idea of machine thinking and invented the term “mechanical rationality”. Computational theories of mind have been refined since then (and since Putnam’s original hypothesis) and today embrace ideas like Fodor’s representational theory of mind in preference to (or more accurately, in addition to) classical machine-state functionalism of the Turing kind. Fodor posits that thinking takes place in terms of mental symbols that he calls the “language of thought”, and he argues that mental activity involves Turing-style computation on this representational mental symbology. The fundamental idea that has come to be widely accepted, if still controversial, is that mental activity is syntactically structured and can be described in terms of Turing-type operations. Many highly regarded cognitive scientists support the view that Turing-style computation on mental symbols is the best foundation for scientific theorizing about the mind.

Granted, what I’m supporting here is the computational theory of mind, not explicitly the creation of artificial consciousness, but it seems to me to be part and parcel of the same thing, with the latter an emergent property of the former.

On the other side of this, statements presuming to relate human cognition to biological processes are fundamentally misguided – what is it about a “biological process” that cannot be computationally modeled? Moreover, what is it that decrees that the mind must be modeled at all in order to achieve the required level of cognition or sentience, as opposed to different and perhaps superior computational methods? And finally, statements about consciousness needing to be grounded in a “real” world are equally misguided – what does “real” mean when all we do is process sensory inputs and activate motor responses? Reality is indistinguishable from anything else that provides those inputs in sufficient detail and conforms to our mental model of the world.

The arguments that purport to ridicule computational consciousness or the simulation argument, or even computational cognition, as some kind of chimeral “magic” fail because fundamentally that’s what they’re arguing themselves – that “real” thinking, “real” consciousness and “actual” reality are different than the computational kind because biological systems and “actual reality” are imbued with some kind of inexplicable magic, like a soul or some other unfathomable mystery!

And I disagree with that, too. See, the point I’m trying to make here is that for all of your assertions that the simulation doesn’t actually instantiate either a world or any beings in it, if I created such a world with a suitable advanced simulation and one of the sentient beings wrote such a book, I could in theory access it, copy it over to the “real” world, and maybe have it published and have it praised as a great work of philosophy. It would be an actual thing, somehow magically emerging from a computational abstraction.

What does the word “supervene” mean? I don’t know this one from physics.

Sorry - I’ve been dipping in and out of this thread and must have missed that. It’s one of my favourite books. I really hope you like it.

You and I get handed the same physical object. You and I will eventually converge on agreement about what this object is, by using a method of hypothesis and falsification (of course, in most cases, it’ll be obvious). This isn’t possible with a computer program: both of us can intelligibly maintain distinct interpretations, without either of us being any more right than the other. You can interpret the physical system I described above as an AND-gate, I as an OR-gate, and both of us are equally right. If you describe a physical object as a chair, and I as a whale, one of us is wrong.

The difference is that with respect to the physical world, there are objective facts to be discovered. With respect to computation, all the facts come from the interpretation.

There is a right and unique interpretation for the physical world; there isn’t one for computations—see the above proof that any Boolean network taking m inputs to n outputs can be interpreted as implementing any Boolean function from m to n variables. This captures all that can be done computationally, as Boolean networks—of arbitrarily many inputs—are computationally universal. Hence, every computation can be interpreted as any other computation—indeed, that’s exactly what’s done with, e.g., emulators: we use one physical architecture (say, an x86 PC) and interpret it in such a way as to function as another (say, a C64).

I can explicitly write down the interpretation that makes a Boolean network compute any function you desire.
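For instance, here's a toy sketch of the row-by-row construction (the names are mine, just to illustrate the point): take the physical table from above, pick any target function you like, and read each row's output as whatever value the target demands on that row; under that interpretation, the system 'computes' the target.

```python
# Toy sketch of the row-by-row re-interpretation (illustrative names).

PHYSICAL_TABLE = {('l', 'l'): 'l', ('l', 'h'): 'l', ('h', 'l'): 'l', ('h', 'h'): 'h'}
INPUT_BITS = {('l', 'l'): (0, 0), ('l', 'h'): (0, 1), ('h', 'l'): (1, 0), ('h', 'h'): (1, 1)}

def xor(a, b):                       # an arbitrary target function of two bits
    return a ^ b

def build_interpretation(table, target):
    """Map each (physical inputs, physical output) row to the target's value there."""
    return {(ins, out): target(*INPUT_BITS[ins]) for ins, out in table.items()}

interpretation = build_interpretation(PHYSICAL_TABLE, xor)
print(interpretation)
# Under this reading, the (('h','h'), 'h') row means 0, and the 'l'-output rows
# mean 0 or 1 as XOR demands: the same physical system now "computes" XOR.
```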

Yeah, and many others don’t. In earlier times, many supported the idea that the mind worked like a mill, like a hydraulic system, like a telephone switchboard, or whatever else was the hot topic in tech those days. These days, more and more are already moving towards quantum computation.

The issue is not how many people support the orthodoxy at any given point, or how fervently they do so, but how well their arguments stack up.

I’ve answered that (although I don’t see anything fundamental to biology at all)—computation models structural properties, not the intrinsic properties that actually make stuff the stuff that it is; a simulated mind is not a real mind for the same reason that the set of books is not the set of paternal ancestors, an orrery is not the solar system, and so on.

Think about sheet music: in order to become actual music, it needs to be played, on physical instruments; real horns must be blown, real pianos played, real air needs to vibrate. Sheet music is not the symphony; a description is not the thing it describes; a simulation is not the thing it simulates.

Certainly. But still, it must be those inputs, unequivocally; in a simulated world, the simulation could be interpreted as yielding those inputs to a mind; but it could also be interpreted as a long recitation of The Rime of the Ancient Mariner.

I’ve given you an explicit example of a theory drawing only on information theoretical notions, in which a simulated mind fails to be conscious (Integrated Information Theory).

Yes—thanks to your choice of interpretation. You can also choose to interpret ‘blaargh’ as the complete works of Shakespeare, publish those, and get rich. I trust you will not be misled to think that ‘blaargh’ is the works of Shakespeare by that; yet, the simulation issue seems to make you believe exactly this.

In the end, it all just comes down to this: there are properties of physical systems that aren’t transferred to models of those systems—thankfully enough, as otherwise, modeling would lose all its purpose, since the model systems would just become the systems we intend on modeling. Those properties pertain to the structural facts of the system to be modeled, the relations in which its parts stand to one another: if we build an orrery, we place its parts such as to emulate the distance relations between the planets, moons and the sun.

Now, nobody is going to try and send a rover to one of the little pieces of an orrery: because it’s clear that the thing that stands for Mars, isn’t actually Mars. There are further properties that make one thing a part of an orrery, and the other a planet in the solar system. Those properties don’t make the transition from the system to the model—they’re like what is lost from the music when it’s not played, when it is just inert sheet music, which is an informational, hence structural, model of the real music.

Note that what structural properties a model instantiates is always a question of interpretation. Again, think about how I can use the books to stand for the line of paternal ancestors, or the ordering of richest persons in the world. Or, think of how we can re-interpret an OR-gate as an AND-gate, or any Boolean gate, really.

Computers are so good at what they do because they are capable of embodying any structure under a suitable interpretation—they are universal modeling systems, and you can model the solar system as well as the financial market or the weather with them. But a computer doesn’t thereby become a solar system, anymore than an orrery is. The reason we know that is precisely the freedom of interpretation: we can interpret the computer as modeling a solar system; but we can’t interpret a real solar system as being anything else but that. We can, of course, use it to model other systems—with a clever enough interpretation, we can use it to model the financial market, for instance. But it’s still uncontroversially a solar system—that’s what you, me, and aliens from Tau Ceti IV will discover when we examine it—not a financial market.

Furthermore, structural properties can’t be all that there is to the things around us—as I already pointed out, all that structural properties allow us to settle are questions of cardinality, and I have the strong impression that we can say more about the things in the world than just how many there are.

Hence, there are properties other than the structural ones—intrinsic (but perfectly physical) properties, that make a solar system a solar system and an orrery an orrery. And those properties do not transmit to models of a system—that’s what makes a model a model, and not the system it models.

But then, a model of consciousness is not a consciousness; it’s a model of consciousness, and nothing more. Thinking otherwise is just to confuse the map for the territory, sheet music for a symphony, and an orrery for a solar system.

A mind isn’t, and can’t be, the sort of thing that needs to be interpreted to be what it is—otherwise, we are led to immediate vicious circularity; but everything only instantiating structural properties needs to be interpreted. Hence, computation can never be sufficient to produce a mind.

It’s a philosophical term: A supervenes upon B if whenever you change A, necessarily B also must change, but changes in B may leave A invariant. Think about a set of pixels and the picture they produce: if the picture changes, necessarily the pixels must have changed; but you can change a few pixels here and there without making much of a difference to the picture.

Many worlds assumes that a given physical state yields a given conscious state, and change in the conscious state means the physical state must have changed; hence, under a view that makes what consciousness a physical system gives rise to dependent on the interpretation of that physical state, supervenience is violated.

Err, I messed up that bit: the structural properties are the ones that do get instantiated in the model of a system.

Now this is just stupid. We are talking definitions here, not interpretation. If you, for instance, decide to call an l 0.001 volts and an h 0.002 volts, you’ve described a wire - no Boolean function at all. l and h are just tags for ranges of voltages. Back when I did TTL a high was 5 volts; now it might be 1 volt.
Now, I could just as well say that I can redefine the meaning of the letters in your response above to mean “I am totally wrong, and you are totally correct” - but any communication between intelligent entities involves agreement on definitions.
Interpretation can play a role at a higher level. If I present you with a painting of two coke cans, you should be able to say that it is two coke cans, not two horses. You can interpret it as a rage against sugar and commercialism or for it, but that uncertainty is because there is no definition of what the picture of the cans means in this context. When you see a Coke can in the store you probably know what it means, but only because of laws and common sense - it could have orange soda inside.

Totally incorrect. You left out the notion of time. Computing is not combinational logic, which is what you just described. It involves sequential logic. Now, you can try to get around this by unrolling the logic and making copies for each time frame in the computation. But since, as you know, we cannot guarantee that all computations will terminate, you may have an infinite number of inputs and outputs.
This concept is extremely important in my field of endeavor. It doesn’t really affect the incorrectness of your argument, but you understand this area not quite as well as you think you do.
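A minimal sketch of the difference, with made-up names: a circuit with state is a next-state function applied over time, and "unrolling" means one copy of the combinational logic per time frame.

```python
# Minimal sketch (illustrative names): state plus a next-state function,
# and unrolled evaluation over a finite number of time frames.

def next_state(state, t):
    """A 1-bit toggle flip-flop: flip the stored bit whenever the T input is 1."""
    return state ^ t

def unroll(initial_state, inputs):
    """One copy of the combinational logic per time frame."""
    state, trace = initial_state, []
    for t in inputs:
        state = next_state(state, t)
        trace.append(state)
    return trace

print(unroll(0, [1, 0, 1, 1]))   # [1, 1, 0, 1]: the output depends on history,
                                 # not just the current input, so no single
                                 # combinational truth table captures it
```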

If no one sees a result, is the computation performed? Clearly the meaning of a computation involves agreement between the user of it and the programmer who wrote it. But what about computations an observer would be unable to do?

But the “interpretation” (really the definition) is fixed before the architecture is designed or implemented. Same with logic. You don’t experimentally determine what a low or a high is, and you don’t establish this by consensus, and you don’t vote on it. Transistor-level and detailed design are done after you decide on this.
BTW, there is a range, and modern microprocessors in advanced systems are tested to see what is the minimal voltage that will give the desired speed. This is not interpretation - this is based on testing, and the value is saved to be used by the system. The definition is in the eFuses in the processor. If you decide this is just interpretation and change voltages, what you get is not a processor doing the work defined in its specifications.

A misunderstanding. A light-sensitive single-celled animal observes with no mind. I go that far back since I don’t know if you think a dog has a mind or not.

My code produces several web pages a day (mostly revised), most of which nobody ever looks at. Is my code doing computation? If someone looks at a page 12 hours after my code writes it, is the computation being done then? Or when cron ran my job? If you measure the power and resources consumed, there is no difference between an observed computation and an unobserved one.

Just shows you don’t really understand what is beneath these all that well. When I was an undergrad and looked at signals I sent to the logic I built, I saw nice clean 0 volts for 0 and 5 volts for 1, in a nice square wave. Today, nothing is that clean. Plus, real gates are not just 0 and 1. In some cases we need to worry about what happens when a voltage between the assigned 0 and 1 arrives. We basically don’t know what happens, and it isn’t important, so there is another row and column in the full truth table saying that if a Z (high impedance) arrives at an input of an AND gate where the other input is 1, you have an X (unknown) at the output. What that X really is does not depend on interpretation but rather on implementation.
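Here's a toy Python version of that kind of multi-valued simulation (the rules follow common practice, roughly Verilog-style 4-valued logic, but this is just an illustration, not any particular tool):

```python
# Toy 4-valued AND: '0', '1', 'x' (unknown), 'z' (high impedance).

def and4(a, b):
    if a == '0' or b == '0':
        return '0'     # a 0 forces the output regardless of the other input
    if a == '1' and b == '1':
        return '1'
    return 'x'         # any X or Z together with a 1 (or another X/Z) -> unknown

print(and4('1', 'z'))   # 'x' -- a floating input with the other input at 1
print(and4('0', 'z'))   # '0' -- still fully determined
```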

Send a picture of the person and you won’t get the person either. The transporter, by my definition, sends physical information - both the location of every atom and the atom itself (or a duplicate). So you are not responding to the example at all well.

No I am not. Is my subconscious mind a homunculus? I’d think not. That is there already - I’m just positing that consciousness just establishes a feedback path so our pre-existing subconscious mind (which animals clearly have) can observe itself. I’m not saying this is what really happened, just that if it did, we can explain consciousness without additional observing minds.

How would conscious machines be conscious without computation? This makes no sense at all. (I assume we’re not talking about the fairy from Pinocchio here.)

I actually kind of remember that. I’d stand by it. You clearly can’t have semantics without syntax, since you couldn’t understand the pieces of the language, but you don’t get semantics from syntax.

True for compilers, which are relatively simple. But there are some AI programs which understand - in a sense - the semantics of stories. By understand I mean they are able to answer questions about the reasons people did things in the story. These semantics are not programmed in, but computed by the program.
I rather suspect WATSON can compute semantics also, since I know from experience that you can’t answer Jeopardy questions solely by looking at the syntax, and you certainly cannot produce a file of semantics for them.

Well, since he did not label the paragraph “cheap debating trick” I’ll agree. And that it isn’t only exists in your interpretation.

This is totally wrong - and I wrote several emulators in grad school as part of my research. You seem to be saying that all Turing computable functions are equivalent - which is nonsense.
There are two ways of creating emulators. One is basically simulating the second computer - but this requires a level of software on top of the existing architecture to create a virtual machine. The other way is done with microprogrammable architectures, where you replace the firmware for instruction set architecture A with the firmware for instruction set architecture B. In this case the instruction-level machines are both virtual machines in a sense, but while the microarchitecture stays the same, the first instruction-level architecture has disappeared.

BTW, you cannot make a Boolean function of n variables into one which requires m > n variables. Certainly a large enough network can implement all functions of n variables - but that is not the same thing. And certainly not the same as saying an AND is an OR given the implied definitions of 0 and 1, or low and high.
BTW, you should use NAND, since you can implement any Boolean function with just NAND gates, but not with just ANDs and ORs.
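A quick sketch of why NAND suffices (illustrative only): NOT, AND and OR all fall out of NAND alone.

```python
# NOT, AND and OR built from NAND alone, which is why NAND is functionally complete.

def nand(a, b):
    return 1 - (a & b)

def not_(a):
    return nand(a, a)                    # NOT x = NAND(x, x)

def and_(a, b):
    return not_(nand(a, b))              # AND = NOT(NAND)

def or_(a, b):
    return nand(not_(a), not_(b))        # OR, by De Morgan

for a in (0, 1):
    for b in (0, 1):
        print(a, b, and_(a, b), or_(a, b))   # reproduces the AND and OR truth tables
```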