The key point is that the conscious entity that does what you posted (“create a virtual model of another computing device and query that”) cannot itself create consciousness using that same computing technique, because that introduces the infinite regress.
The conscious entity can of course do what you are saying; that is how we make use of calculators and the like: we map the abstract computation we want to perform onto the machinery and its symbols.
Yes, you’re right. It’s actually a bit of a dodgy example, in that if you knew the full relational structure among all the actors in a film, you’d likely be able to uniquely identify the film, and with it, its actors—but that’s because you’ve already been given the domain over which the relations are supposed to be implemented, namely, that of films.
In my defense, those examples were merely supposed to give an overall ‘feel’ of the Newman problem; introducing it formally would’ve meant adding yet more pages to an already pretty long essay.
Well, it’s computation if it’s interpreted as computation. Without such an interpretation, it’s just a bit of physics; the computation happens in the abstract space that the physical states of a given system are mapped to via the representation relation.
In principle, every cell implements the von Neumann mechanism—the ‘tape’ being the DNA, the ‘constructor’ being the various proteins coded for, that themselves also duplicate the DNA. But that doesn’t mean that a single cell, or a paramecium, is conscious—the von Neumann replicators within the brain of some agent comprise perceptual content (in a wide sense that includes, e. g., past experience), which is brought within conscious experience by the self-reference of the von Neumann process.
That’s perhaps made more clear by comparing it to another approach to conscious experience, the so-called Higher-Order Thought (HOT) model. In this model, any thought is conscious if there’s a higher-order thought that is ‘directed at’ that thought, has it as its content, or is ‘about’ it in some way. The von Neumann process then enables a ‘thought’ to be its own higher-order thought, having itself as its content; only then does it lead to conscious experience.
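To make the dual role of the ‘tape’ a bit more concrete, here’s a deliberately toy sketch (my own simplification for this thread, not von Neumann’s actual cellular-automaton construction; all the names are made up): the description is used twice, once interpreted by the constructor to build the offspring’s machinery, and once copied blindly.

```python
# Toy sketch of the von Neumann scheme (a simplification for this thread, not the
# actual cellular-automaton construction): a machine is (body, tape), where the
# body contains a constructor and a copier, and the tape describes that body.

def construct(description):
    """Universal constructor: build the machinery the tape describes.
    Here, 'building' just means looking up and instantiating the named parts."""
    return {name: PARTS[name] for name in description}

def copy_tape(description):
    """Tape copier: duplicate the description *uninterpreted*."""
    return list(description)

PARTS = {"constructor": construct, "copier": copy_tape}

def replicate(machine):
    tape = machine["tape"]
    return {
        "body": construct(tape),   # the tape used as instructions (DNA being expressed)
        "tape": copy_tape(tape),   # the tape used as plain data (DNA being duplicated)
    }

parent = {"body": {"constructor": construct, "copier": copy_tape},
          "tape": ["constructor", "copier"]}
child = replicate(parent)
grandchild = replicate(child)
print(grandchild["body"] == parent["body"],
      grandchild["tape"] == parent["tape"])   # True True: the line breeds true
```

The two uses of the tape correspond, roughly, to the DNA being read out into proteins and the DNA being duplicated wholesale; the regress of descriptions-of-descriptions never gets started, because the description is only ever copied, never itself described.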
In my view, it is having that model in mind that is constitutive of the computation that the device is performing; without having such a model, there wouldn’t be any computation. This is as in the abstraction/representation-account: without some agent holding some theory of a system by which to furnish a representation relation, there’s just no computation going on.
Given this, as RaftPeople has pointed out, the ‘having a model in mind’ can’t itself be computational—for in that case, you’d be appealing to that very capacity in order to explain it: in order for your brain to implement the computation that leads to you ‘having a model in mind’, some other entity would have to have a model in mind of that computation, and so on.
Hence, this ‘having a model in mind’, for the regress to bottom out, can’t be computational.
I finished the paper this morning (I’m a slow reader on these types of things). The early part with the instantiation relation and implementation relation and model and physical stuff, all of that makes sense.
The later part with the von Neumann replicators requires a second slow read. The linkage between the non-structural attributes and the model is currently just a hazy notion in my mind, and the example of the self-referential replicator only helps in a very abstract way (i.e. just as this weird thing can happen (a self-replicator), there is another weird thing that can happen too (non-structural attributes linked to the model somehow)); it’s not concrete enough for me to really understand or even ask a good question (yet).
“At a conference a while back a poster presentation described a neural net that used only pulse width modulated wave forms. No numerical operations were performed but the system components produced the same results as their hardware and software equivalents. Is this computation?” Crane#79
“Well, it’s computation if it’s interpreted as computation. Without such an interpretation, it’s just a bit of physics; the computation happens in the abstract space that the physical states of a given system are mapped to via the representation relation.” HMHW#82
OK - then didn’t you just define computation as ‘the process of mapping information from one system to another using a representation relation’? The ‘process’ in this definition is structure-independent, so doesn’t that make these bits of physics computational?
Consider a seesaw with four occupants - an adult on one side and three children of different sizes on the other. With the board as a datum line, the children can space themselves to exactly balance the adult and the board will become level. This can be illustrated as (mapped to) the membership functions of a fuzzy logic problem or the terms in a center-of-gravity equation in Excel. Excel is numerical, the fuzzy logic version is graphical, and the seesaw is a physical analog computer: three models of the same thing that yield the same result.
So, for this discussion do we assign the term computational to all three?
I too am having difficulty with the von Neumann replicators.
The paramecium replicates itself by recursion - it uses itself as all the material to copy itself. It is completely self-referential, to the point of being immortal.
Can’t all the arguments used here also describe the human brain? It is a machine, built of cells and using neurons to run computations. When certain electrical inputs from its sensory organs trigger the brain, it processes the information in clusters of neurons, then passes electrical output to the limbs in reaction. It is an incredibly complex machine, far beyond our current capacity to build from scratch. But we have already made enormous progress integrating with this machine. We can use digital cameras to create our own electric signals, in a format that the brain can use. We can build prosthetics that read the brain’s output signals.
That part unfortunately isn’t totally self-contained, see the two other papers I linked to above (if you can’t access either of those, the second one was also an entry in the 2016 FQXi essay competition, see here).
I think the basic gist to get is how the self-referentiality of the von Neumann replicator underlies my account of intentionality—the ‘aboutness’ of mental states. This has two parts. First, the evolvability of the design, I suggest, leads to replicators becoming adapted to the ‘landscape’ set up by sensory data (essentially, the pattern of excitations in the brain), which means that they come to represent it in a sense—think again about the dolphin’s streamlined form representing its aquatic surroundings. Second, while for the dolphin, ‘decoding’ that reference needs an external agent, the von Neumann replicator has access to its own shape, so to speak, and can draw arbitrary conclusions about it (mathematically, it can prove certain things about itself).
Hence, such a replicator becomes a ‘self-reading’ symbol standing for some sensory data, and with that, the environment. That’s what I mean above by the analogy to Higher-Order Thought theories: it’s ultimately its own higher-order thought, which, under such a construal, would suffice to bring itself to conscious attention.
However, the above still remains at the structural level. But the Gödelian phenomenon shows that the structural description is really a kind of shadow of what goes on in the structure-transcending—the Gödel sentence says of itself, informally, ‘I am not provable’. We can interpret this, using our knowledge of what this is supposed to mean, and deduce that the sentence must assert a truth—but this requires reasoning outside of the system, so to speak. In the same way, reference in the von Neumann construction might be thought to require similarly external reasoning, after all.
But in the non-structural, the Gödel sentence is either true, or false, depending on which model of the Peano axioms is used. So the truth of the Gödel sentence doesn’t require any external reasoning; merely inferring it does. Then, the idea is that this is the same for the von Neumann replicator, which is in an important sense equivalent to the Gödelian construction.
However, you’re right that this needs some fleshing out, and in the end, I kind of hedge my bets in the article—another option would be to fall back to neutral monist or panpsychist ideas, simply asserting that structure-transcending properties have the requisite experiential character to play the role of ‘structure-bearers’ in conscious experience. And in a way, it’s clear that one can’t do very much better: constructing a model (or theory) of precisely how these properties underlie phenomenal experience would mean to try to compute the non-computable, as to me, model-building is essentially a kind of computation (or the other way around). So there’s unfortunately always going to be a kind of lacuna in our understanding, here—the only thing that’s gained then being that this lacuna doesn’t signal a breakdown of materialist/physicalist ontologies, as is often asserted.
I am, however, working on a somewhat more formalized theory of how von Neumann reference works. The key to this is Löb’s theorem, and in particular, the notion of modal fixed points. Essentially, the idea is that one can talk formally about what a system can prove by means of a certain kind of modal logic, known as ‘provability logic’. In this system, the modal operator ‘□’ is interpreted as ‘is provable (in some formal system)’, and it can be shown that for every formula F(x) with one free variable x, there is a sentence p such that p <-> F(□p)—that is, p is a fixed point of F once its argument is ‘modalized’ (brought under the aegis of the □-operator).
The Gödel sentence is then a simple example of such a fixed point: p <-> ~□p (where the tilde denotes negation)—i. e. the sentence p is equivalent to the assertion ‘p is not provable’, and is a modal fixed point of the negation operation F(x) = ~x.
If that sort of thing happens (and it does happen with von Neumann replicators, which, recall, can prove stuff about themselves), we can in fact eliminate p, ending up with a formula involving only the □-operator and the logical constants (truth and falsity). In that way, the Gödel sentence can be shown to be equivalent to the proposition asserting the consistency of the system (which is just ~□⊥, where the inverted ‘T’, ‘⊥’, stands for ‘falsity’); that the system can’t prove this equivalent of its own consistency is just Gödel’s second incompleteness theorem.
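For the curious, here’s that elimination spelled out; this is standard provability-logic bookkeeping rather than anything specific to my paper (GL is the provability logic, ⊥ is falsity):

```latex
% Fixed point of the Goedel sentence, worked out in the provability logic GL
% (standard material; shown only to make the elimination step explicit).
\begin{align*}
\text{fixed-point equation:}\quad & p \;\leftrightarrow\; \neg\Box p \\
\text{claim:}\quad & \vdash_{\mathrm{GL}}\; \neg\Box\bot \;\leftrightarrow\; \neg\Box\neg\Box\bot \\
\text{(i)}\quad & \vdash \Box\bot \rightarrow \Box\neg\Box\bot
    \quad\text{(from the tautology } \bot \rightarrow \neg\Box\bot\text{, by necessitation and distribution)} \\
\text{(ii)}\quad & \vdash \Box\neg\Box\bot \rightarrow \Box\bot
    \quad\text{(L\"ob's axiom } \Box(\Box A \rightarrow A) \rightarrow \Box A \text{ with } A = \bot\text{)} \\
\text{hence}\quad & \vdash \neg\Box\bot \;\leftrightarrow\; \neg\Box\neg\Box\bot,
    \quad\text{i.e.\ the consistency statement } \neg\Box\bot \text{ solves } p \leftrightarrow \neg\Box p .
\end{align*}
```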
So I hope to eventually be able to use this machinery to cook up what a given von Neumann replicator refers to, by using it to supply the values of ‘x’ in the formulas of my paper—remember, those are what ‘ties down’ any given structure to a definite computation (or model). But that’s still gonna require a fair bit of work.
No, that’s not the process of computation, itself, but rather, the process of implementing a computation—that is, connecting a physical system with some formal object (the computation). The computation itself is then the image of the mapping in the abstract space, such that the system, as it evolves, implements each of the computational steps (under the representation relation).
Only the images of the mapping are computational—that is, you have the physical system (the people on the seesaw), which gets mapped to two different computations, the fuzzy logic problem and the center-of-gravity calculation.
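Maybe a toy example makes the ‘images of the mapping’ point vivid. The following sketch is just an illustration I’m improvising here (the gate-with-two-readings example is a standard one, not something from the paper): one and the same physical input-output behaviour, read under two different representation relations, implements two different computations.

```python
# Toy illustration: one physical process, two computations, depending on the
# representation relation we bring to it. (Standard AND/OR duality example.)

# 'The physics': a device whose output voltage is high exactly when both inputs are high.
def device(v1: str, v2: str) -> str:
    return "high" if v1 == "high" and v2 == "high" else "low"

rep_a = {"high": 1, "low": 0}   # representation relation A: under it, the device computes AND
rep_b = {"high": 0, "low": 1}   # representation relation B: under it, the same device computes OR

for v1 in ("low", "high"):
    for v2 in ("low", "high"):
        out = device(v1, v2)
        print(f"A: {rep_a[v1]} {rep_a[v2]} -> {rep_a[out]}   |   "
              f"B: {rep_b[v1]} {rep_b[v2]} -> {rep_b[out]}")
# Under A the truth table is AND's; under B, the very same physical behaviour is OR's.
```

Nothing about the device itself privileges one reading over the other; the AND and the OR live in the abstract space the mapping points to.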
Again, yes, it does; but the paramecium isn’t sensory data, it’s just a paramecium. Hence, the von Neumann process in this case doesn’t induce any kind of ‘other-directedness’ or ‘aboutness’, and thus no mental content.
Yes, the human brain is exactly the target of my arguments, which then show that while it’s a machine built of cells and so on, it doesn’t actually do any computation. What it does is entirely physical, and the physical only becomes computational by being suitably interpreted. Hence, consciousness can’t be computational.
Think about a much simpler kind of machine, say two meshed gears of different sizes. If you turn one, the other will rotate at a speed given by the gear ratio; but it would at best be an irrelevant gloss to say that the assembly ‘computed’ that rotation speed by taking the gear ratio as ‘input’. Yet every time we speak about the brain—or anything—as computing, we do it in exactly this sense: as a gloss on physical processes. This may be useful to us, in particular when we use this gloss to solve certain problems, as in using a calculator to compute sums, but it’s a confusion of the map and the territory to say that because we can use a system in this sense, this computation is what it does.
What it—the brain, that is—does is rather to set up a certain pattern of neuron excitations, which, if my model is even remotely accurate, will instantiate the von Neumann process (as to how, precisely, I have some ideas—essentially building on the ‘active blackboard’-theory of David Mumford—but like the above discussion of reference, they need to be fleshed out more fully).
Perhaps I should also note that the von Neumann replicator part isn’t really a key focus of this paper. What I intended to do in the paper was really twofold—one, to provide an account of what it means to implement a computation, and two, to provide a rebuttal to the classical antiphysicalist arguments, such as the zombie- and knowledge-arguments.
The idea is that the former necessitates a commitment to non-computational features of the mind, which, if they play a role in experience, then allow an answer to the latter—zombies are consistent, because the non-computational properties can’t be derived from the computational ones, but that doesn’t mean zombies are possible; Mary can’t derive all the facts of subjective experience from her knowledge about vision for the same reason.
Thus, the whole theory (or theory-sketch) of how non-computational properties come to be relevant to phenomenal experience is really a bit of an add-on, just sort of an appetizer of where this whole thing might eventually lead.
So, is the significance of the von Neumann replicator that it would have to perceive itself in order to replicate itself? Darwin pointed out that all we are is derived from the lower animals. It’s just a matter of degree. Paramecia have fewer distractions.
I like your gear example. But the difference is that the brain is a self-organizing system. The pattern of interconnection and the weights of the synapses are the result of experience, and therefore the structure itself is sensory information. The mechanical gear ratio is deterministic, like the steps of a program in a numerical computer. By contrast, the weights, and even the interconnections, of the synapses are unique to the experience of each brain. They are fluid and change with use or neglect.
Does the RNA ‘know’ what it is doing when it creates a new copy of the paramecium? It’s all a matter of degree. The answer requires an interpreter and the interpreter is consciousness. Interesting regression: consciousness = consciousness.
Not at all—I have to thank you and the others for the questions being asked, which have forced me to think about this stuff in new ways.
The difference is that the von Neumann replicator is essentially a function of mental content, including sensory data received from the outside world. So its evolution occurs in response to the state of the world, and it thus comes to be ‘about’ that state, in the sense of intentional content.
With the paramecium, on the other hand, there’s not really any such dependency—at least, not on the level of the individual. Sure, on evolutionary timescales, paramecia acquire an evolutionary dependency on the state of the world; but suppose this were to induce a conscious state: whose consciousness would that be? That of the genus Paramecium? That of the family? The entire phylum, perhaps?
Ah, but then, what makes the replication process going on in my head be my consciousness, specifically? The answer, as I think you hint at, is that the mental state doesn’t just depend on the state of the world, but also on my past experiences. We’re not aware just of sensory data, but of our memories, our desires and beliefs, and perhaps most importantly, our sense of self.
So I’d say that it’s this thread of shared experiences that knits the individual von Neumann replicators together into a unified stream of consciousness, which is not something that happens in the case of the paramecium. Another option, suggested in the paper, would be that it’s not just individual, single replicators that acquire self-referential properties, but rather, entire populations of them, the whole ‘net’ making up a state of mind, with elements of it perhaps changing, adapting to new data, but other elements, large-scale properties, remaining constant.
But that, as above, is really a matter of how the theory is eventually fleshed out in full.
The brain models a von Neumann processor. But the recursive processes are not software routines that call themselves. The recursion is due to the brain storing sensory input by modifying its own structure. Some of the map IS the territory (sorry, Korzybski).
If the von Neumann replicator had binocular vision and lost one of its video cameras, it would have to wait for a repairman to replace the component. Lab animal experiments show that a brain will rewire its circuitry to increase the effectiveness of the other eye, and use experience to update its stored program. One of my daughters has vision in only one eye. Obviously she has no depth perception, but she drives a car. She is a computer programmer, so we have discussed this in computer terms. She judges distance by the rate of change in size of perceived objects. So, she perceives and interprets the world differently than I do. She has depth perception only when she moves. If you map it to the ‘Mary sees red’ problem - if Mary sat still, she wouldn’t see red, whatever it was.
There’s an axiom that the components of an intelligent system do not have to be intelligent. That’s true but it is also axiomatic that the components of the structure of an intelligent system have to be capable of creating intelligence. I propose that the ability to create intelligence is not a property of adding machines because their structure is not the result of experience. I lack any ability to provide an elegant proof.
If you put a map of an island down on that island, there will be a point where the map and the territory agree—if the map is perfectly detailed, it will show a tiny copy of itself at the location where it’s placed, which will have that same tiny copy within itself, and so on; this is in fact analogous to the homunculus regress, and, to me, is just the point where modeling breaks down. For consider that we have a ‘map’ of our surroundings including ourselves (meaning, our physical bodies) in our brains; there must then be some similar point where the map maps to itself (a fixed point). I think that ultimately, that’s where we imagine the map’s user to reside, the infinite regress getting papered over by a level at which the map just seems interpreted—the self, in other words, which then is just a sort of artifact of how we model the world.
About the von Neumann architecture (i. e. his design for a computer, as opposed to the replicating devices—the man really has made too many contributions to too many fields, it gets hard to keep them apart!), I think that the conscious, model-based part of our thoughts plausibly can be understood in such a way (albeit of course on a background of non-computational properties giving the mental computations their definite character), but there’s another mode of thinking that’s more like a neural network’s implicit facility of recognition. Psychologists speak of ‘System 1’ and ‘System 2’, with ‘System 1’ being the automatic, implicit, intuitive and unconscious ‘neural network’-style cognition, while ‘System 2’ is the conscious, explicit, deliberate process of step-by-step reasoning we most often associate simply with ‘thought’. Both, I believe, have their role to play; I discuss this, as well as the ideas on the self above and some more (and how all that relates to notions of Buddhist philosophy, of all things), in this article, if you’re interested.
Replicators evolve; if something happens to one, reducing its ‘fitness’ but not destroying its ability to self-replicate, later generations may overcome that defect.
And of course, in a more general sense, there’s nothing that prevents any machine from being self-healing or -repairing to some degree, and even adapting its data processing. If you cut some connections in a neural network, its performance will decrease initially, but upon further training, will increase again, as it adapts the weights of other connections based on its success rate.
This is an interesting example, in that it connects a clearly functional capacity—calculating distances based on visual data—with an element of phenomenal experience, that of a three-dimensional world. It appears that the functional distinction between one-eyed perceivers who are able to perform that computation and those who aren’t would lead to a difference in depth perception. I need to think about this more.
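(As an aside, one can put a simple formula to her strategy; this is just the standard ‘looming’ observation from visual perception, nothing from my paper: the apparent size of an object and its rate of change already encode distance in units of one’s own speed, with no second eye needed.)

```latex
% Distance from the rate of change of apparent size (one eye suffices):
% object of size S at distance d, approach speed v, visual angle theta.
\[
\theta \approx \frac{S}{d}, \qquad
\dot{\theta} \approx \frac{S\,v}{d^{2}}, \qquad
\frac{\theta}{\dot{\theta}} \approx \frac{d}{v}
\quad \text{(time to contact, independent of the unknown size } S\text{)}.
\]
```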
I do think experience has its role to play—I hint at it in the article, in that experience (in the sense of ‘past experience’, rather than ‘phenomenal experience’) may be what yields the possibility of associating the same sensory input (say, of Alice’s box) with different computational structures. I think one needs a mechanism to integrate this experience with current perceptual data, though, which is where the replicators come in.
Thanks for the link. Reminds me of my courses in General Semantics.
I believe “Everything is water” explains a lot. I was a member of the team that built the voice recognition system (Shoebox) for the Seattle World’s Fair. Part of the process was to investigate the system built by an IBM engineer named Clapper. The Clapper system used an array of measurements with the output of each one connected to a light bulb. So whatever was being measured either turned a light on or off for any given word (1-10). The lights were arranged in a panel about 2x3 feet, and there must have been a hundred of them. You spoke into a microphone and you got a pattern of lights. After you ran through the vocabulary three or four times, you could tell what word was spoken by the pattern displayed. In fact the thing worked very well. You could always tell what word generated any pattern. The problem was that no individual light always turned on for a given word. If you repeated ‘six’ over and over, you would get different lights each time, but the pattern of the lights would be similar.
We reproduced the Clapper system using a giant multi-channel strip chart recorder. You could easily recognize the patterns visually, but there was no unique combination of outputs that would identify each word. Viewed together they were consistently recognizable. It was like water. You could see the waves but you couldn’t grab one.
I believe the mind works something like that. The data is not a discrete value but looks more like a puzzle piece shaped like a normal distribution curve with attachment shapes on both ends. Everything is water.
HMHW, I’ll have to read and read to understand all that you are saying, it is incredibly interesting. My mind keeps simplifying your concept of non-computational down to the inclusion of random factors (abundant as input in the real world), and when incorporated into multiple otherwise computational processes there will be no regression because they will all be different. If I keep reading maybe I can figure out why this is wrong.
I think this is not terribly far from the ‘System 1’-kind of thinking I’ve mentioned. There is not ultimately any (short) system of rules that governs, for example, when a given neural net will make any specific identification—neural nets can’t be easily transformed into expert systems. That’s in part the reason for the drive behind explainable AI methods.
As for the noncomputable, your intuition is, in fact, spot on: any noncomputable function can be realized by an algorithm augmented by a fixed (algorithmically) random infinite bitstring, so the noncomputable is exactly characterized by ‘computable + random’ (this is the Kucera-Gacs theorem, basically). But of course, the notion of randomness is inherently a computational one—it just means that there exists no program capable of producing the digits of the string. Which is in a sense trivial when talking about the noncomputable.
Besides, such randomness is far from useless. Take the following cop-and-robber game: there are two houses; in the beginning, the cop is in one, the robber in the other. In each round, either can choose to switch, i. e. move to the other house, or stay in the one they’re in right now. If they ever end up in the same house, the cop catches the robber.
Suppose the cop follows a fixed algorithm that dictates whether to stay or switch. Then, there will always be at least one robber capable of eluding her indefinitely: the one that knows the algorithm, and is thus capable of anticipating the cop’s moves. In a sense, this is a simple analogy to Gödelian phenomena.
But now suppose the cop has access to a source of randomness. Now, any robber who follows any algorithm whatever will eventually (with probability one) be caught; moreover, even a robber that has access to randomness themselves will almost surely be caught (that is, with probability one in the limit of many rounds). That is, the randomness has transformed the game, from one where there were strategies for the robber to always win, to one where the cop always eventually triumphs. That’s not just a win for justice, but also, I think, an important insight: randomness can enable quite definite capabilities (here, the all-but-certain catching of the robber) that no fixed algorithm could provide!
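If anybody wants to see this in action, here’s a quick-and-dirty simulation sketch (mine, just for this thread; the particular robber rule is arbitrary, since any fixed algorithm meets the same fate): the coin-flipping cop has a 1/2 chance of landing on the robber each round, no matter what the robber does, so the catch comes almost immediately.

```python
import random

def robber_algorithm(cop_history):
    """An arbitrary fixed robber rule (any deterministic algorithm will do):
    switch houses whenever the cop stayed put last round, otherwise stay."""
    if not cop_history:
        return "stay"
    return "switch" if cop_history[-1] == "stay" else "stay"

def play(max_rounds=1000):
    """Two-house pursuit: coin-flipping cop vs. algorithmic robber.
    Returns the round at which the robber is caught, or None if never."""
    cop, robber = 0, 1          # they start in different houses
    cop_history = []            # the robber gets to see the cop's past moves
    for t in range(1, max_rounds + 1):
        cop_move = random.choice(["stay", "switch"])
        robber_move = robber_algorithm(cop_history)
        cop = cop if cop_move == "stay" else 1 - cop
        robber = robber if robber_move == "stay" else 1 - robber
        cop_history.append(cop_move)
        if cop == robber:       # same house: caught
            return t
    return None

results = [play() for _ in range(1000)]
caught = [r for r in results if r is not None]
print(f"caught in {len(caught)}/1000 games, "
      f"after {sum(caught)/len(caught):.1f} rounds on average")
# Expect something like: caught in 1000/1000 games, after 2.0 rounds on average.
```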
Moreover, suppose the cop eventually gets tired of robber-catching, and chooses to confer her skills to the next generation, taking up teaching at the academy. There’s no way she could just tell anybody—there’s no recipe, no method to share. The only way to learn would be to have some input from the environment—some source of randomness. But this sort of inability to communicate something is just what we see with subjective experience, as well—you can never tell a blind person what red looks like.
However, I’ve been so far exclusively talking about the noncomputable. In my view, this is distinct from the noncomputational—the noncomputable is still structural, it’s still a function that one can pose within the same realm as the computable ones, it just so happens that no computer can actually compute it. The noncomputational, on the other hand, transcends structure—it’s the intrinsic properties needed to ground structure. This stands to the structural in the same relation as the set of natural numbers stands to the Peano axioms. From the point of view of the Peano axioms, undecidable statements are random in the algorithmic sense—indeed, the quintessential random number, Chaitin’s halting probability, is characterized by the fact that all but a finite number of its digits correspond to undecidable (in the Gödelian sense) statements.
However, from the point of view of the natural numbers themselves, undecidable statements are simply either true or false—the Gödel sentence, for example, simply states a true proposition. Hence, this ‘randomness’ is ultimately a consequence of the fact that in describing something, we really only have access to the structural, whence computable.
Just parenthetically, I believe that the noncomputable—the structural facts about the world we can’t derive from the axioms, so to speak—has its role to play, as well: as I’ve argued, it yields the origin of quantum phenomena. Here, we circle back to the idea of describing the noncomputable as ‘computation + randomness’: an intelligent agent, in a noncomputable world, will typically find that a part of the data it collects is susceptible to compression—has some deterministic law behind it—while another part is irreducibly random. That’s, however, exactly what we see in quantum mechanics.
One day, I’ll learn the value of brevity. Today is not that day. But to try: yes, the noncomputable is essentially the computable + randomness; however, it may enable capacities that deterministically outstrip those of computable agents. Furthermore, the computable is just the structural shadow of an underlying, noncomputational, intrinsic substrate, and apparent ‘randomness’ just an artifact of computable description. Also, mumble mumble quantum mechanics mumble.