Why Consciousness is not Computation

I should clarify that—by the above, I mean that an A-gate behaves such that if one, but not both, of its inputs are ‘h’, its output will be ‘h’; a B-gate behaves such that if and only if both its inputs are ‘h’, its output will be ‘h’; and a C-gate behaves such that if either or both of its inputs are ‘h’, its output will be ‘h’. Calling them XOR/AND/OR-gates of course is already an interpretational matter, taking ‘h’ to mean ‘1’ and ‘l’ to mean ‘0’.
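To make that concrete, here is a small sketch of my own (the gate functions just restate the definitions above; the two interpretation dictionaries are simply the two obvious ways of reading ‘h’ and ‘l’ as bits): the very same h/l behavior, read through different interpretations, yields different Boolean functions.

```python
# Small illustration (mine): the gates as defined above, in terms of 'h' and 'l' only,
# read through two different h/l-to-bit interpretations.

def gate_A(x, y):  # 'h' iff exactly one input is 'h'
    return 'h' if (x == 'h') != (y == 'h') else 'l'

def gate_B(x, y):  # 'h' iff both inputs are 'h'
    return 'h' if x == 'h' and y == 'h' else 'l'

def gate_C(x, y):  # 'h' iff at least one input is 'h'
    return 'h' if x == 'h' or y == 'h' else 'l'

def truth_table(gate, interp):
    """Read the gate through an interpretation mapping 'h'/'l' to bits."""
    rev = {bit: level for level, bit in interp.items()}
    return {(a, b): interp[gate(rev[a], rev[b])] for a in (0, 1) for b in (0, 1)}

high_means_1 = {'h': 1, 'l': 0}
high_means_0 = {'h': 0, 'l': 1}

print(truth_table(gate_B, high_means_1))  # the truth table of AND
print(truth_table(gate_B, high_means_0))  # same gate, same behavior: now the table of OR
print(truth_table(gate_C, high_means_1))  # OR
print(truth_table(gate_C, high_means_0))  # AND
print(truth_table(gate_A, high_means_1))  # XOR
print(truth_table(gate_A, high_means_0))  # XNOR
```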

I don’t think your usage is standard here for the specific context I’m referring to. But that’s fine, I’m not out to quibble with definitions.

I just think an overwhelming majority would say that two people who click on the same icon on the same computer desktop in order to compute two different functions would be using the same program. There are language issues here.

I don’t think “compute sums” is some Platonic ideal of reality.

We call a set of creatures “horse” when each and every one of those animals has a different number of atoms. Our brain generalizes, and then on an intuitive, emotional level treats the abstraction as something ontologically real. From simple introspection, it’s dead easy to see why Plato fell into the belief in a world of ideal forms, outside and above any real horse, because our brains rely so heavily on the abstraction that it literally feels as though there is a tangible reality behind it, irrespective of any “real” horse. The mental generalization feels real because it is so incredibly useful.

This is the same reason behind the interminable 0.999… = 1 discussion. Naive learners have in mind an abstraction, or a generalization, or interpretation, or something, of decimal representations of numbers, and that naive interpretation immediately becomes something that feels tangible and ontologically real inside their heads. And according to that interpretation, 0.999… =/= 1. This is a key point: A lot of these Platonic abstractions feel real inside our heads, but that’s not indicative of them actually corresponding with physical things (or with the formally defined real number system).

Ultimately a calculator is a set of atoms (quarks, etc.). Does it calculate? If a squirrel walks up to one and accidentally presses 6 followed by + followed by 5, did it “calculate” when a pattern that looks like 11 shows on screen?

What actually happened was that voltages shifted. How I personally describe the situation is going to depend on what level of generalization I find most useful at the moment. Most of the time, I’m going to fall into the intuitive feeling that there is a Platonic form of “implements sums”, and so I’ll be happy to say that a calculator calculates. If the context of the discussion changes, then I’ll stop talking about calculators calculating and start talking about voltages.

The fundamental reality is voltages (atoms, quarks, etc.).

This is why I don’t know what other people think. I don’t know how “real” they treat the generalization inside their minds. In my experience? Most people treat that generalization as the One Real Thing, despite whatever discordance with reality as we understand it can be pointed out. For the overwhelming majority, “reality” is purely intuitive. Anything that feels real, is real. As we dig deeper, we’re stuck with intuitions that are in direct opposition, and have to think our way through which intuition seems more reliable. But most people don’t do this.

Philosophy is almost always a matter of competing intuitions. I’m relying on one intuition, and maybe that leads to one of these contradictions. Then I have to rely on a separate intuition (the strength of a logical argument), and balance which one I think is reliable and which one I think is steering me wrong in this case. This is why I say I “lean” one way on this, not that I’m definitely, totally right.

“Here’s a specific situation where people don’t speak that way. In this situation, people clearly don’t speak in this manner.”

“Here’s a different context where people do speak in this manner, therefore people do speak this way.”

I wasn’t making a statement about the universe. I was using a specific example of how people use language in a specific context. In the specific context I was referring to, people do not distinguish “program” in the manner you are relying on here. When I point that out, you seem to 1) deny the commonness of usage in my context (which is a mistake), and then 2) bring up a different context (where you’re correct). There are language issues here.

Let me do another example.

I build a robot body, with my own designed robot hands, eyes, skin, etc. The hands take sensory inputs, transfer those into voltages. I do the brain scan, and set up a sequence of voltages that seems to match my brain. The pattern of transistors firing depends on the inputs, voltages from the hands, from the eyes, etc., according to how the sensors on the hands turn tactile contact with the outside world into volts. Then the transistors fire, voltages get sent back out, which eventually end up moving gears, motors, robot legs. The robot moves around, walks, talks, fries an egg, plays some basketball. Reads my favorite book.

My neighbor builds a robot body, based on similar principles, but with their own proprietary robot hands, eyes, skin, etc. The hands take sensory inputs, transfer those into voltages. Neighbor does a brain scan, creates a system of voltages that seems to match. The pattern of transistors firing depends on voltage inputs coming from the hands (which work differently from my hands), eyes (same), etc. This robot gets up, looks around, moves around, talks, pours some milk into a glass, then pours it out when it realizes it doesn’t need it, plays some soccer. Watches the neighbor’s favorite TV show.

After a day of operation, we shut down and compare.

The sequence of voltages that happened inside my robot exactly matched the sequence of voltages that happened inside the neighbor’s robot. Exactly. From different tactile contacts, the hands sent out the same volts, which caused the same transistors to fire, which caused volts to be picked up by the legs… at which point the motors took different actions, because the motors themselves responded to identical volts differently. The eyes saw different things, but fired the same volts on seeing different things. The motors picked up the same volts, but moved differently on receiving those volts.
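In case it helps to pin the scenario down, here’s a toy sketch (every encoding, voltage value, and action below is made up for illustration): the same internal dynamics, fed identical voltages by differently built peripherals, yields the same internal trace but different outward behavior.

```python
# Toy sketch of the scenario above (all mappings invented for illustration):
# two robots share the same internal voltage-to-voltage dynamics, but their
# peripherals translate world <-> voltage differently, so an identical
# internal trace accompanies different outward behavior.

def core(voltage_in):
    """The shared internal dynamics: voltages in, voltages out."""
    return (voltage_in * 7) % 5   # arbitrary stand-in for the transistor network

# Robot 1: how its hand encodes touch events, and how its motors decode output voltages.
encode_1 = {'touch_book': 3, 'touch_ball': 1}
decode_1 = {1: 'turn page', 2: 'dribble'}

# Robot 2: different peripherals, hence different encodings and decodings.
encode_2 = {'touch_glass': 3, 'touch_remote': 1}
decode_2 = {1: 'pour milk', 2: 'change channel'}

for stim_1, stim_2 in [('touch_book', 'touch_glass'), ('touch_ball', 'touch_remote')]:
    v_in_1, v_in_2 = encode_1[stim_1], encode_2[stim_2]
    assert v_in_1 == v_in_2              # different stimuli, same input voltage
    v_out = core(v_in_1)                 # hence the same internal trace
    print(decode_1[v_out], '|', decode_2[v_out])   # but different actions
```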

Is this what you’re talking about?

wit,

Thanks again.

The processes discussed by RaftPeople in #46 are massively parallel. Von Neumann placed the program and data in the same memory space, but the process is still serial.

Seeley postulates that his bees act as neurons during the planning and migration of the swarm. ‘A million bugs with brains the size of match heads rise and fly (navigate) several miles to a place they have never seen’. The processes that created the event were serial and transient. No map of the process was created for conscious interpretation. The swarm is not conscious.

I cannot state it as elegantly as you, but consciousness must be an emergent property of structure, not merely process.

I haven’t read the whole thread, but I enjoyed the OP.

I suppose I agreed with the premise just from the thread title. I began thinking of consciousness as a touch screen monitor a few years ago. Mostly what it does is give a tiny echo of all the processing that actually happened under the hood. You might touch the screen to offer a small input, and watch as it completely modifies that processing echo. The monitor isn’t doing any processing; it’s just accepting moderate inputs and feeding the user a small view of the universe that is the activity within the CPU.

The key, though, is that the monitor and the CPU are mostly independent of each other.

Well, it’s standard enough that a couple of referees for a journal whose scope ‘explicitly encompasses philosophical aspects of computer science’ didn’t find anything worth remarking upon, which I think is just going to be good enough for my purposes.

How would they even know it’s supposed to be the same program? If I click on something, and the computer does x, and you click on something, and it does y, are you really saying that the reasonable stance is that the computer is probably really doing the same thing?

From another point of view, an American and a German both read ‘gift’. Do you think it’s the same word for both?

So, I can’t really figure out what you’re saying. Either you’re saying that there’s no such thing as sums, so nobody has ever computed any, and we’re just grouping ‘2 + 3 = 5’ and ‘II + III = V’ under the same heading by, what, force of habit? Convention?

Or, you’re saying that ‘sums’ somehow just are categories of physical stuff, maybe something emergent, or what have you.

Both points of view throw the idea of the computational theory of mind right out—the first case leads to eliminativism, the viewpoint that there really isn’t anything we talk about when we talk about ‘mental states’ or ‘subjective experience’ and the like, and the second leads to identity physicalism, by essentially again just saying that ‘computation’ is just a fancy word we use for ‘physics’ every now and then, for no clearly specifiable reason.

Sure. But if you call some assembly of atoms ‘horse’, and I call that same assembly ‘boat’, then one of us is wrong. ‘Horse’ may be an unsharp category, and there may be issues of vagueness, but that’s just a question of how we group things; but given some set of conventions, what’s a horse and what’s not is a perfectly sensible, empirically decidable question. But in my example, exactly the same atoms, at exactly the same time, in exactly the same way, compute entirely different things.

Perhaps take a look at my post to Sam Stone above. I asked him what the box, as described, computes. I’d like to ask you the same. Is there an objective computation associated with it? Is it one of Alice’s, Bob’s, or Charlie’s? Or is it another one entirely?

Or, let’s take the abstraction/representation-account of computation as given. Do you agree that then, there are different computations that can be associated with the same physical system, in a perfectly clear and intelligible manner? Why then do you think that’s such an out-there idea?

OK, so then computations are maybe generalizations of physical behavior. That works just as well as ‘interpretation’—the same conclusion, that then, the faculty of generalization itself can’t be a computation, applies. Because then it would mean that the computation we use to generalize some physical system’s behavior into a computation would itself have to be a generalization (from our brain’s physical behavior, presumably), which sets us off into infinite regress—that generalization again having to be generalized from something else, and so on—or bottoming out in the non-computational.

Are you asking me? Because of course, my answer is going to be ‘no’. Nobody is interpreting the calculator; in terms of the A/R account, nobody has any theory of that calculator, that leads to its states being abstractly represented. Hence, there’s no computation, there just is the system’s bare physical evolution.

If bird droppings on the hood of your car form the shape of the letters ‘gift’, does it say something about a present? Or does it maybe say something about poison? Absent interpretation, it says nothing at all—it’s just a particular physical pattern. Likewise, absent interpretation, the succession of states a machine traverses don’t compute anything; it’s just some particular physical pattern stretched out across time.

But that’s just what I’m saying. Without you applying that level of generalization—utilizing some interpretation, I would say—there’s just no computation happening. Which then means that computation isn’t an objective property of a physical system; it’s something that depends on somebody generalizing, interpreting it. Which then means that computation can’t be the ground for mind, because whether something has a mind, is conscious, doesn’t depend on whether you generalize it, or interpret it, in the right way.

You said that ‘as a matter of plain language, people do not speak that way’. But it’s perfectly plain language to speak of a calculator as computing sums. Hell, you yourself make the point that it’s the ordinary, unreflective intuition to speak that way and believe that to be the case.

I’m not relying on any such distinction. I’m talking about how you or I or Joe Q Public, when using their calculator, is liable to say ‘I used my calculator to compute the sum of 5 and 6, which turned out to be eleven’. In that perfectly ordinary manner of speaking, Alice, Bob, and Charlie will consider the box to perform different computations. The A/R-theory explains how, and my article is (in part) an attempt at providing a theory of how the representation relation it relies on might actually work.

No. That’d be just a matter of having the actuators and sensors wired up to the internal electronics in a different way—i.e., the voltage pattern that makes one actuator, say, pick a flower would make the other extend a friendly greeting, or what have you. That’s no different than, for instance, having the same engine, running in the same gear, drive a millstone, or a generator, or a car—just a difference in the translation, so to speak. It’s simply a concrete physical difference in the implemented mechanism.

The point of my example is that there’s precisely no such difference in the box. Its behavior is as specified above; yet, without being ‘hooked up’ to the world in any different way, it can be used to implement different computations. Implement them, in the sense that Alice can go to the box with the thought ‘Hmm, I wonder what the sum of 3 and 2 is’, pose that question to the box, and come away with ‘Ah, it’s 5!’. That’s all I mean by computing something, and in that very plain everyday sense, Bob comes at the same box with the question ‘Hmm, I wonder what f[sub]B[/sub](0, 1) is’, and, after exactly the same operation that Alice performs, perhaps even done at the same time, comes away with the answer ‘Ah, so it’s 2!’.

This is the sense of computation we use when we say ‘a calculator computes sums’, and in this sense of computation, there’s no definite fact of the matter regarding what a given physical system computes.

Ah, sorry, I should’ve pointed out that I’m not referring to von Neumann’s computing architecture when I make references to the ‘von Neumann process’, but rather, to his proposal for self-replicating, evolving systems. This plays a key role in my theory, essentially using the intrinsic properties I view as underlying, or bearing, the relational structure (of, for instance, computations) to give them meaning—thus being a proposed solution to the problem of intentionality, the question of how thoughts, beliefs, desires and so on can be ‘directed at’ or ‘about’ things beyond themselves.

Well, I believe that consciousness, in fact, must transcend structure—it must be related to the intrinsic properties of stuff, not merely emerge from the relations stuff enters into, so to speak. That’s why it’s so hard to characterize scientifically: in agreement with Russell’s structuralism, I take science to essentially only tell us about relations (and, to avoid Newman’s problem, it thus needs to be grounded in intrinsic properties that must remain outside of it).

This is somewhat off-topic, but such representationalist theories, going back to Locke’s ‘camera obscura’, where the light of the outside world enters the mind as a ‘dark room’ being ‘projected’ on some wall, run into the trouble of the homunculus fallacy—because if the world outside is represented to us in some way, then how is that representation itself perceived? Does it need to be represented again, via another little viewer?

This is in fact just the problem I’m appealing to the von Neumann replication process to solve. Because the general problem of replication suffers from an analogous issue: to replicate, it seems, a system needs to have a copy of itself within it (this was the doctrine of ‘preformationism’). But a copy built on the basis of that blueprint wouldn’t actually be a copy, because it would lack that blueprint itself—so the copy ought to have its own tiny copy inside itself, within which there needs to be another copy, and so on, to infinite regress.

Von Neumann realized that a ‘two-tiered’ dynamics of interpreting and copying the blueprint could solve this issue; the blueprint is treated semantically, as meaning a certain assembly, and syntactically, as being just a string of symbols, in two separate steps of replication. That way, indefinite replication becomes possible.
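For anyone who’d rather see the two-step scheme spelled out, here’s a toy sketch of my own (the data structures and names are invented for illustration; this is not von Neumann’s actual constructor): the blueprint is used once semantically, to build the offspring, and once syntactically, to be copied into it.

```python
# Toy sketch (my own illustration, not von Neumann's actual construction):
# the blueprint gets used twice, once semantically and once syntactically.

def construct(blueprint):
    """Step 1 (semantic): read the blueprint as *describing* a machine, and build that machine."""
    return {'parts': blueprint.split(), 'blueprint': None}

def copy_in(blueprint, machine):
    """Step 2 (syntactic): treat the blueprint as a mere string of symbols and copy it over."""
    machine['blueprint'] = blueprint
    return machine

def replicate(machine):
    """Construct an offspring from the parent's blueprint, then hand it a copy of that blueprint."""
    return copy_in(machine['blueprint'], construct(machine['blueprint']))

ancestor = {'parts': ['constructor', 'copier'], 'blueprint': 'constructor copier'}
child = replicate(ancestor)
grandchild = replicate(child)
print(grandchild)  # replication goes on indefinitely; the blueprint never has to describe itself
```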

I then propose to adapt that scenario to get rid of the homunculus in representationalist theories, as well—essentially, by creating representations capable of ‘looking at’ themselves (that’s basically section 3.6 of the paper).

By the way, I missed this earlier. I hadn’t heard of reservoir computing; I’ll have to educate myself on that, it’s an interesting concept. Thanks for the pointer!

Alright.

Rather than force you to continue to go in circles, I’ll just re-read things from here and chew on the idea.

Thank you for the effort in these posts.

HMHW, in table 1 the headings are: x1,x2,Fa(x1,x2),Fb(x1,x2),Fb(x1,x2)

Should the last one be Fc instead of Fb?

HMHW, in section 2.4, i’m confused by this sentence:
“An important point here is that with the theory, and hence, the representation relation, the implemented computation likewise changes.”

It seems like the physical system changed, and the abstract object represented by the physical system changed (in an abstract sense) and the change from Mp to Mp’ is the “computation”. It seems incorrect to say that the “computation likewise changes”.
What am I misunderstanding?

HMHW:
Section 2.5, p,p’ and q,q’ and m,m’

Note:
Not sure how to type in proper symbols
For that tilde-looking bar over the R I’ll put “~” in front of it; others I’ll just type in the same sequence of chars

Diagram has:
m’=>~Rtp=>q’
m=>~Rtp=>p

But if I consider something like an adder, implemented on a PC (“p”) and on a VAX (“q”), it doesn’t seem like ~Rtp can be used in both places. The instantiation relation between the model and the two different physical implementations would not be the same.

Is the diagram only valid if p and q are the same physical system? Or does the relation represent some level of abstraction prior to the point where the VAX’s and the PC’s internal hardware diverge? I was assuming that relation was at a fairly low level of detail.

As I read further, it seems the relation only needs to go down to the level of detail that is required to adequately map between the model and the physical, and no further (e.g. books and relative ordering of ancestors).

That makes sense, so ignore the previous question.

HMHW, congratulations on the publication of your paper. I’ve been corresponding on everyday trivia with a well-known cognitive scientist who is a strong proponent of the computational theory of mind. He is retired and in a frail state of health, and I said in a recent email that considering his worries about his health and the current pandemic situation and all the other worries he currently has, he may not be interested in critiquing a philosophical paper just now. I would have loved to get his input (and probably so would have you) but he wrote back to say I was right about that, and he had far too many other things to worry about right now. Not the least of which is that his hand-eye coordination is shot to the point that he can barely type on a keyboard and does most of his email via voice-to-text. So in these stressful times celebrate not only the publication of your paper, but your health, enthusiasm, and I presume relative youth.

I don’t have much to add to this discussion, having pretty much said my piece in the other thread. But these two things struck me as needing clarification or rebuttal:

I agree. The inherent fallacy of the lookup table argument is that it appeals to our “common sense” intuition that a lookup table is inherently trivial and limited. But if we choose to ignore the fact that a lookup table that emulates a human brain would take more resources than exist in the universe, and that the preconditions necessary for building it (e.g., knowing all possible inputs) are equally impossible, and merely consider the logical and not the physical aspects, as HMHW chooses to do, then you’re postulating a lookup table that is nearly infinite in extent and a lookup processor that is nearly infinite in speed. If such a magical, impossible thing could exist, such that it acted precisely as a particular human would to all possible inputs and stimuli, then it would be indistinguishable from that particular person and could correctly be said to possess all the same qualities that we attribute to a human mind.

We already had a similar discussion before, and I’d point in particular to this post, and to this sentence fragment: “… if real estate is cheap enough [and] the lifespan of a psychological model is predictable, and all its possible inputs are known, then that model may be optimized into a bizarre but still recognizable form: the humongous-table program. If the model has mental states then so does the HT program.”

This is related to my previous claim that a sufficient quantitative increase in the capabilities of a system leads to a fundamental qualitative change in its properties. This is a pragmatic concept and not a formal philosophical one. Simply put, a simple lookup table may give you simple answers to a very limited set of simple inputs. A more complex one may lend an appearance of intelligence without actually having any – indeed, a very early AI program by Joseph Weizenbaum called ELIZA that attempted to engage in conversation in a primitive attempt to pass the Turing test was actually little more than a table lookup with some randomizations. But the impossible theoretical concept of a near-infinite lookup table with every possible series of inputs predefined could well constitute a system that not only acts intelligent, but genuinely is intelligent, and self-aware, or at least completely indistinguishable from a biological system like a human that is indisputably so.

I certainly agree that such a machine could eventually be built. But I’m obviously not understanding your argument because it seems self-contradictory – and indeed seems to contradict your thread title – as such a machine would almost certainly be made up entirely of computational components. So where did your infinite-regression argument go, where every computation requires interpretation?

A similar argument could be put a different way. Imagine we have the technology to replace a neuron in the brain with a silicon chip that performs exactly the same functions computationally. We then proceed to do the same with more and more neurons until each one has been replaced with an artificial chip. Will there be some qualitative change in the brain or the person’s behavior? At what point would that happen? Would consciousness still arise in that computational network? I think the answer is obvious.

For it to be an interesting claim, especially with respect to consciousness, you would need to support the claim in some way and show how it leads to consciousness.

I think that the thing to focus on, if you’re inclined to follow this further, is really the box-example. Think about what sort of claim is being made when Alice says ‘this box computes sums’, and how that claim relates to ‘this calculator computes sums’, or ‘this PC computes a simulation of the weather’. Then try and figure out how, only given the box, you could figure out whether Alice’s claim is right. Then the same for Bob’s and Charlie’s claims.

Alternatively, start with the A/R-account, and my example of how to use it to realize Alice’s or Bob’s functions for the box. Then ask yourself whether that isn’t just a perfectly sensible way of describing what happens when we use something to compute, and if not, what’s actually different in real-world examples.

Or, well, don’t, I mean, I don’t want to assign you homework here :slight_smile:

Yes; sorry.

Within any one given diagram, the representation relation has to stay the same, both at the start (where the physical system is in state p) and the end (where it’s in state p’). That’s how you implement a specific computation—you encode an abstract object into some state p by means of the inverse of R[sub]T[/sub]; then, you let that system evolve, under its physical dynamics H(p), into the final state p’; from that state, via R[sub]T[/sub], you read off the outcome of the computation.

With a different representation relation R[sub]T’[/sub], furnished by a different theory T’, you then get the same system, under the same physical evolution, to implement a different computation—that would be a different diagram. The example for that is the two diagrams on page 13: under R[sub]A[/sub], the box going from the initial to the final state implements f[sub]A[/sub], and under R[sub]B[/sub], it implements f[sub]B[/sub].
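If it helps, here’s a schematic sketch of that recipe (the states, dynamics, and representation relations below are invented for illustration; they are not the example from the paper): the same physical evolution, read through two different representation relations, implements two different computations.

```python
# Schematic sketch of the recipe above (states, dynamics, and representation
# relations invented for illustration; not the paper's example).
# Encode an abstract input via the inverse of the representation relation,
# let the physics run, then read off the result via the relation again.

def H(p):
    """The system's physical dynamics: a fixed state-to-state map (here, a cyclic shift)."""
    return (p[-1],) + p[:-1]

def R_T(p):
    """Representation relation of theory T: read the three 'voltages' as a binary number."""
    return p[0] * 4 + p[1] * 2 + p[2]

def R_T2(p):
    """A different theory's representation relation: read the bits in reverse order."""
    return p[2] * 4 + p[1] * 2 + p[0]

STATES = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def encode(x, R):
    """Inverse of R: pick a physical state that R maps onto the abstract input x."""
    return next(p for p in STATES if R(p) == x)

def run(x, R):
    """Encode, evolve under H, decode: the computation implemented relative to R."""
    return R(H(encode(x, R)))

print(run(3, R_T), run(3, R_T2))  # same box, same physics, different results (5 and 6)
```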

Sorry, that’s another typo. The relation taking m’ to q’ should be ~R[sub]Tq[/sub]. Seems like I could’ve done a better job proofreading… :rolleyes:

Thank you!

That still seems like an obviously false claim to me. When I understand a sentence, I parse it, I sometimes have to circle back, I form an understanding of its elements, and how they relate, I might mull it over, engage in counterfactual explorations (“but what if…”), see what claim is made, how it relates to other items of knowledge, and so on.

The lookup table does none of that; it just checks whether the particular string of signs fits a given pattern, and, if it doesn’t, checks the next one, and does that over and over again, until it finds a hit. Granted, it does this a lot of times, but that changes nothing—or at least, we have no reason whatever to believe it does.
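To make the contrast concrete, a toy illustration of my own (not meant as a model of any real system): one responder scans a precomputed table of question/answer pairs, the other actually parses the question and combines its parts. Over the table’s domain, the two are behaviorally identical.

```python
# Toy contrast (my illustration): a lookup table vs. a system that parses and
# evaluates. Over the table's domain, both give the same answers.

# The "humongous table": every question it will ever be asked, precomputed.
TABLE = {f"what is {a}+{b}?": str(a + b) for a in range(100) for b in range(100)}

def table_answer(question):
    # Just pattern-match against stored strings until one fits.
    for stored_question, stored_answer in TABLE.items():
        if question == stored_question:
            return stored_answer
    return "no entry"

def parsing_answer(question):
    # Break the sentence into parts, identify the operands, combine them.
    left, right = question.removeprefix("what is ").removesuffix("?").split("+")
    return str(int(left) + int(right))

print(table_answer("what is 17+25?"), parsing_answer("what is 17+25?"))  # 42 42
print(table_answer("what is 300+1?"), parsing_answer("what is 300+1?"))  # no entry 301
```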

You could make the following inductive argument: 1. a lookup table with two elements has no mental states; 2. adding another element to a lookup table with no mental states will not suddenly grant it mental states; leading to 3. for any n, a lookup table with n elements does not have mental states.

To defeat it, you’d either have to show that a lookup table with two elements does have some mental states, or that adding one more element to a lookup table without mental states can lead to it having mental states. If you do that, you’ve got the beginning of an argument.

Still, though, any such putative ‘qualitative’ change is predicated on the properties of the individual elements that increase in quantity. Take, for instance, stellar fusion: two hydrogen atoms don’t undergo fusion, yet a sufficiently large pile of them will. Hey, qualitative change from quantitative increase! Doesn’t this invalidate my argument above?

Let’s see: 1. two hydrogen atoms don’t undergo fusion; 2. adding another hydrogen atom to a set not undergoing fusion won’t cause it to undergo fusion, hence 3. for any amount n of hydrogen atoms, it doesn’t undergo fusion.

Yet, it does! But, as stipulated, the reason it does is exactly the reason the inductive argument fails: for one, there is in fact some incredibly tiny probability for two hydrogen atoms to undergo fusion, by a tunneling process. But more importantly, whether hydrogen atoms fuse is determined by whether they possess enough energy to overcome their electrostatic repulsion; if they do, fusion may occur. The energy needed to do so is provided by the gravitational potential energy, which increases the more hydrogen atoms you pile up. Hence, for every further hydrogen atom you add, the difference between the energy needed and the energy present becomes smaller, until it is overcome, and fusion occurs.
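A throwaway numerical caricature (mine, and of course not real astrophysics) of why the inductive premise fails: each added element shifts a quantitative property of the pile until a threshold already written into the individual constituents is crossed.

```python
# Caricature only (not astrophysics): premise 2 of the induction fails because each
# added atom shifts a quantitative property of the pile until a threshold that is
# already written into the individual atoms (the barrier) gets crossed.

BARRIER = 1_000_000.0          # stand-in for the per-atom Coulomb barrier
COMPRESSION_PER_ATOM = 0.01    # stand-in for the gravitational energy each atom adds

def fuses(n_atoms):
    return n_atoms * COMPRESSION_PER_ATOM >= BARRIER

n = 1
while not fuses(n):
    n *= 2
print(f"'fusion' switches on somewhere between {n // 2} and {n} atoms")
```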

Thus, it’s due to the properties of the individual elements that the ‘qualitative’ change occurs—which, from this point of view, isn’t a qualitative change at all; it’s just bringing about sufficient conditions for the manifestation of qualities inherent in the elements. Any sort of ‘emergence’ works this way, ultimately (or at least, no convincing example of ‘strong’ emergence that doesn’t has so far been found). Consequently, any argument that a certain quality may emerge upon an increase in mere quantity must first locate the necessary conditions for the manifestation of that quality within the individual constituents (or appeal to magic).

But this is actually a tangent not really related to my arguments. I’m not making the Chinese Room argument: that argument presumes to show (fallaciously, in my opinion) that a certain system following a program P capable of replicating the behavior of a system possessing understanding itself won’t possess understanding; but my argument is already aimed at there being no objective fact of the matter regarding whether a system follows a certain program P.

Well, it’s in the snippet you quote: the machine may be conscious, but not by virtue of the computation it implements. In the paper, I give a mechanism that, I think, is both sufficient and necessary (in some form) for conscious experience; that’s however not a computational mechanism. This mechanism (the von Neumann process), I think, is rather needed to explain how we can implement any computation at all; hence, its non-computational nature precisely follows from the regress argument.

I think the analogy to formal axiomatic systems and their models may be helpful. A system, such as the Peano axioms, can be instantiated in a mathematical object, such as the natural numbers. Anything a computer can say about the objects instantiating some formal system follows from the Peano axioms; yet, famously, those don’t suffice to decide every question one might have about such a given instantiating object (the usual term here is ‘model’, but as I’m using that term in a slightly different way in the paper, I’m trying to avoid it). In particular, the Gödel sentence G will be true in some such objects, and false in others, and no computer armed with the Peano axioms could ever figure out which.
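In standard notation (my summary; this assumes PA is consistent and, for the second unprovability claim, sound):

```latex
% The Goedel sentence G is undecidable from the axioms, so it is true in some
% structures satisfying them and false in others (assuming PA consistent and sound).
\[
\mathrm{PA} \nvdash G
\qquad\text{and}\qquad
\mathrm{PA} \nvdash \lnot G,
\]
\[
\mathbb{N} \models G
\qquad\text{while}\qquad
\mathcal{M} \models \lnot G \ \text{ for some nonstandard } \mathcal{M} \models \mathrm{PA}.
\]
```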

In my terms, the axioms correspond to some structure, and the model itself corresponds to the structure-transcending, or intrinsic properties. If the analogy is apt, then there are properties of the structure-transcending that outstrip those of the structural—in other words, the same structure can be fulfilled by different intrinsic properties. Computation, then, is something limited to the structural: the representation relation of the abstraction/representation-account is something that instantiates a structural equivalence. Consequently, there are properties that go beyond the computational; these are what (by means of the von Neumann process) are present to us in subjective experience. (On something like a panpsychist view, these properties would themselves be experiential, perhaps.)

Consciousness would still plausibly arise, but not due to the computation being performed—just as it didn’t before the change. You’re just replacing one sort of building block with another, but that doesn’t mean the computational properties are the only ones the two have in common.

Besides, the same threshold problems can be levied at the lookup table: at what point does the qualitative change occur? When does ‘fusion’ spark up, so to speak, and why does it do so at that point? Will a lookup table, increasing in size, grow from an imperceptibly small spark of awareness into full-blown knowing itself, or will that just blow up on the thirty googolionth element being added?

Ok, that’s what I thought you were saying, I just got tripped up a bit by that sentence, it seemed to imply there was a change in the single diagram.

Actually, that’s not quite right—the claim ‘If the model has mental states then so does the HT program’ as such might be perfectly true, but if so, would merely constitute a reductio of the idea that the model has mental states. So if computationalism entailed that such a lookup table must have mental states, then it seems to me that that would be computationalism refuted right there.

A non-essential thought, section 3.3:
A chain of ambiguous relationships does frequently provide a key for recall, eventually. But the dictionary-and-word example is a good one; I’ve struggled with that exact thing many times.

  1. “Why Consciousness is not Computation”

But, isn’t the dividing line still the definition of ‘computational’?

At a conference a while back, a poster presentation described a neural net that used only pulse-width-modulated waveforms. No numerical operations were performed, but the system components produced the same results as their hardware and software equivalents. Is this computation? Timed pulses seem closer to the description of neuronal activity than do adding machines.
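For what it’s worth, here’s a rough sketch of my own of the general idea (not the system from that poster, and as a digital simulation it inevitably reintroduces numbers behind the scenes): the ‘values’ are just pulse durations, and the unit fires when the accumulated on-time of its inputs crosses a threshold.

```python
# Rough sketch of the general idea (mine, not the poster's system).
# Each signal is a pulse train whose 'value' is just its duty cycle; the unit
# fires when the accumulated on-time of its inputs crosses a threshold.

def pwm(duty_cycle, period=100):
    """One period of a pulse train: high for a fraction of the period, low for the rest."""
    high_ticks = int(duty_cycle * period)
    return [1] * high_ticks + [0] * (period - high_ticks)

def pwm_unit(pulse_trains, threshold_ticks):
    """Accumulate how long the inputs are high; emit a full pulse if the total crosses the threshold."""
    charge = sum(sum(train) for train in pulse_trains)
    return pwm(1.0) if charge >= threshold_ticks else pwm(0.0)

inputs = [pwm(0.7), pwm(0.4)]                       # two inputs, encoded purely as pulse widths
output = pwm_unit(inputs, threshold_ticks=100)
print("fires" if any(output) else "stays silent")   # sum-and-threshold, with no explicit arithmetic in the modeled parts
```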

  2. Does a Paramecium meet the von Neumann requirement?

If biological activity is computational then the paramecium is a computer that can copy itself indefinitely. A gallon jar of water, a piece of lettuce and a few days will provide a repeatable demonstration of that fact.

I have not read and digested everything, but I think this is where I part company with you. When you think about the computation that a device is performing, you create a model of the computation in your mind. There is no reason to suppose that a computing device cannot create a virtual model of another computing device and query that. There is no “infinite regress”; each interpreter creates its own interpretation of the computation.