Conscious experience is self-representational access to intrinsic properties

I am somewhat scandalously pleased to announce that my new paper, “Self-Reference, Self-Representation, and the Logic of Intentionality”, has just been published in Erkenntnis (see here for a freely-available preprint version).

It’s the culmination (for now) of work on applying an idea of John von Neumann on self-reproduction to the question of intentionality—broadly, how symbols acquire their meanings. In the paper, I present what I think is the first mathematical theory of intentionality, the nicest result of which is that the mind-state inducing an action aimed at bringing about a certain goal has that goal as its object—and moreover, can prove that fact (to itself).

Furthermore, I argue that conscious experience is essentially self-representational access to intrinsic properties—in other words, that when what Eddington called the ‘inner un-get-atable’ nature of matter is brought under the purview of the self-referential von Neumann process, that’s what we call ‘consciousness’ or ‘qualia’ or ‘phenomenal experience’. This leaves the Hard Problem unsolvable (them things being un-get-atable and all), but gives qualia a proper job in the world: at certain points, the von Neumann process encounters questions that can’t be decided by any theoretical model of itself, but whose answers it can introspect.

We’ve discussed parts of this model earlier in this thread, so I thought I’d just throw this up here to see if there’s any interest. There were some questions left open that I hope are addressed in the current article—most notably, the question of how exactly the self-reference of the von Neumann construction is turned into ‘outward-directed’ reference (I think in particular raised by @RaftPeople(?)), which is proposed to be accomplished by the fixed-point property of modal provability logic (if you want to dig into that, especially as applied to self-modifying machines, a good introduction can be found in this pdf).
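For anyone who wants the bare statement, the property being invoked is the fixed-point theorem of the provability logic GL (due to de Jongh and Sambin); this is the textbook version, nothing peculiar to the paper: if the sentence letter $p$ occurs in $A(p)$ only within the scope of the provability box $\Box$, then there is a sentence $H$, not containing $p$, such that

$$\mathrm{GL} \vdash H \leftrightarrow A(H),$$

and $H$ is unique up to provable equivalence. It’s this guaranteed, unique fixed point that the construction leans on to turn self-directed reference outward.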

Also, I have written up some (hopefully) accessible summaries for my column on 3 Quarks Daily:

Thanks, I’ll give it a try. I might have rated 60% comprehension on your last publication.

I’m up for a dive into anything that doesn’t embrace the oversimplifications of “determinism” or “free will” as oppositionals.

If you have any questions, I’d be happy to try and answer them!

This particular work doesn’t really get into the issue of free will, though I do think it’s compatible with a more refined take on the relationship between laws of nature and free action. But that’s really another topic.

So this hasn’t gotten a ton of traction yet, but I’m bumping it to say that my popular-level overview of the issues explored in the paper is online now at 3 Quarks Daily:

@Half_Man_Half_Wit is this the thread where we should continue the discussion that was off topic to the AI thread?

Yeah, this is the one.

Alright, perfect.

So, here’s my issue. I think your model is interesting, and makes for a good read. But it postulates and requires the existence of an inner un-get-atable nature, something which I propose isn’t required to explain any of our observations and whose existence we cannot test for.

Let’s use a chair as an example. When I look at a chair, what is actually happening? Photons from the sun or a nearby light bulb travel towards the chair. When they hit it, they bounce off. This has nothing to do with the chair being a chair; it has to do with the way particles (like the ones the chair is made of) interact with light, and the pattern of particles as they are arranged in the chair, etc.

The reflected light then travels outwards, until two separate batches of photons enter my eyes, one batch per eye. Each lens focuses this blurry light into a distinct image on the retina. This is converted into electrical signals by an algorithm created by random iteration during evolution (in fact, that’s how the lens and retina and everything else came to be too); the signals are passed along the optic nerve to the brain.

From there, various parts of my brain process the information. It combines with other sources of stimulus (our five generally accepted senses, but also balance, the sense of where each part of our body sits in 3D space relative to the others, etc.); together, these stimuli combine to create experience.

A “chair” is simply a set of socially defined properties that an object we come across may have. There is no “inner un-get-atable” concept of “chair-ness”; a chair is a social construct. This should be obvious: we can imagine a culture where no distinction is made between “chair” and “couch” and “stool”, or one in which there’s no distinction between an artificial chair and any flat-topped rock or log that could be used as a seat.

I think that assuming that a “chair” is a real thing that is more meaningful than the individual particles that make it up or the “office space” that the chair is a part of is unfair wetware bias. Our monkey brains evolved to allow us to survive and reproduce on the savanna. Our ability to perceive the world and our place in it is a survival strategy, and comprehending quantum mechanics or the size of the universe is simply not something our minds ever faced selective pressure to do. But that doesn’t mean that this perspective is privileged or any more real than any other perspective; it’s just the perspective we happen to have.

So I guess my issue is that I don’t understand how anything can have an “un-get-atable” nature intrinsic to it when saying that a “thing” exists is just a matter of perspective. This implies that the “un-get-atable nature” exists only as an interpretation in the mind of a conscious observer, unless I’m missing something key.

I have not given your model enough of a read to completely follow it, but to the level I have, it seems to evoke some aspects of Hofstadter’s strange loops? Would you consider that an accurate take?

I also have not read the thread that led to this being bumped, but I can understand the strange loop model as a metric for degrees of self-awareness; how would your model apply to artificial or alien candidates of intelligence?

Jumping this discussion to this thread, if that’s alright -

I’m gonna need slightly more convincing than that. I don’t find the fact that you, a human of a certain culture, identify certain collections of stimuli as “cat” to be somehow more meaningful than an embodied chatbot like PaLM-E being able to do the same, and I still don’t understand what “inner un-get-atable nature” you can get at through your memory of all the past times that a collection of stimuli was identified as a “cat” that PaLM-E doesn’t also have access to.

This isn’t in any way a consequence of my model. I think the analogy with formal mathematical systems may be helpful. The natural numbers are (abstract) things. We have a theory of them, given by a set of axioms. These axioms give the structure of the natural numbers, but, like all structure, are insufficient to fully specify them: there are multiple objects that fit that structure, and on the basis of pure structure, we can’t decide between them. But when we talk about ‘the natural numbers’, we mean a particular one of these objects. The natural numbers themselves then transcend the structure given by the axioms.

So my model is absolutely neutral with respect to what we call ‘real’. If there are particles, fields, strings, or loops: it doesn’t matter. What does matter is that any theoretical access to these entities—and all that is constructed from them—is incomplete: it suffers from structural underdetermination, like a paint-by-numbers image where you don’t know what number corresponds to which color. Overcoming this underdetermination is what the intrinsic properties are supposed to do: to make the abstract concrete, in a sense. Like picking out the natural numbers (mathematically, the ‘standard model’ of the Peano axioms) from all the possibilities left open by the axioms.
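(If you want the concrete version of that underdetermination claim for the natural numbers, the standard argument is by compactness: add a fresh constant $c$ to the language of arithmetic, together with the axioms $c > 0$, $c > 1$, $c > 2, \ldots$. Every finite subset of $\mathrm{PA} \cup \{c > \underline{n} \mid n \in \mathbb{N}\}$ is satisfiable in the standard numbers, so by compactness the whole set has a model: one containing an element larger than every numeral, and hence not isomorphic to $\mathbb{N}$. The axioms alone can’t tell the two apart.)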

So I’m not attaching any special reality to chairs, or anything at all. What’s real is left completely open in my model, it’s just that whatever it is, theory alone doesn’t fully specify it. This is not, in itself, a particularly new or shocking point—discussion of quiddities goes back to Aristotle at least. My approach is broadly one of Russellian monism:

Russellian monism can be seen as combining three core theses: structuralism about physics, which states that physics describes the world only in terms of its spatiotemporal structure and dynamics; realism about quiddities, which states that there are quiddities, that is, properties that underlie the structure and dynamics physics describes; and quidditism about consciousness, which states that quiddities are relevant to consciousness.

The new thing I bring to the table is to give a mechanism—the von Neumann process—by which these intrinsic properties become present to the mind, which in turn gives them a ‘proper job’, namely, to ‘nail down’ reference of mental symbols to concrete things in the world (via the modal fixed point theorem).

There’s a similarity there. In fact, I recently discovered that in I Am a Strange Loop, Hofstadter briefly speculates about the von Neumann mechanism as being at the heart of the emergence of mind; in a way, my model just makes that more explicit.

This is, I suppose, where Hofstadter and I part ways: my model is explicitly uncomputable; hence, any attempt to give rise to consciousness by computation alone won’t work. That doesn’t mean artificial consciousness is impossible, but there would have to be a physically instantiated mechanism equivalent to the von Neumann process.

Actually, one upshot of the model is that one ought to be able to detect the substructure of consciousness within the physical makeup of a system, in the form of certain recursive re-entrant processes. In fact, I have speculated that the thalamo-cortical loops in the brain might serve as a realization of the von Neumann mechanism, with the ‘tape’ being set up in the thalamus by perception, which ‘constructs’ a cortical activation pattern that in turn reads and modifies the thalamic activation, similarly to the active blackboard model of David Mumford.
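To make the mechanism a bit more tangible, here’s a toy sketch in Python (emphatically not the paper’s formalism, just an illustration of the general scheme): von Neumann separates a constructor, which builds whatever a description (the ‘tape’) specifies, from a copier, which duplicates the tape without interpreting it. Fed its own description, the aggregate reproduces itself:

```python
# Toy illustration of von Neumann's self-reproduction scheme (not the
# paper's formalism): the tape is used twice, once as instructions
# (construction) and once as uninterpreted data (copying).

def build(description):
    # Constructor: build the machine the tape describes (here, just a dict)
    return {"parts": description}

def copy_tape(tape):
    # Copier: duplicate the tape without interpreting it
    return str(tape)

def reproduce(tape):
    # Constructor + copier acting on a tape describing themselves
    child = build(tape)           # tape read as instructions
    child_tape = copy_tape(tape)  # tape read as data
    return child, child_tape

tape = "build + copy_tape + reproduce"  # stands in for a full self-description
parent = build(tape)
child, child_tape = reproduce(tape)
assert child == parent and child_tape == tape  # exact self-reproduction
```

The dual use of the tape (read once as instructions, once as mere data) is the self-referential kernel the model builds on.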

Well, the difference is the concept, the meaning associated with the stimulus: the term ‘cat’ refers to something to me, but not to ChatGPT, or any AI that has access only to relationships between tokens (i.e. only to structure). To make this possible, the intrinsic properties of the von Neumann construction need to be present to it, as only then is it able to overcome the limitations of structural underdetermination.

I don’t understand the sense in which you write that your model is a “physical” model. I’ll also warn that I can’t comprehend the symbols in section 3 (a box, a squiggly arrow pointing right, a double sided arrow, a Greek phi, a Greek phi with a squiggly on top, an upside-down T).

As an abstract concept, I can understand modeling the mind as a von Neumann-esque automaton. Not the brain. The brain doesn’t physically reproduce and replace itself every few seconds. It seems to have a much more efficient method of self-modification.

I struggle to comprehend Gödel’s incompleteness theorem, but I will note that unlike, say, a formal abstract mathematical system that includes the set of all natural numbers, there are only a finite number of states in any physical machine. There are no infinite sets in the ‘language’ of a brain, only ‘lazy’ mental constructs that are filled in on an as-needed basis. We may never be able to enumerate all the possible configurations of neurons for a human brain, but in theory, there is a finite number (bounded by the size of the skull, if nothing else - in theory, even the halting problem could be solved by mapping each state to its successor state, as sketched below). What I don’t understand, and maybe you can tell me, is what implications this has for attempting to apply Gödel’s incompleteness theorem as you seem to have done.
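To make concrete what I mean by that parenthetical, here’s a rough sketch (assuming deterministic transitions, which may be too strong for a brain): for a machine with finitely many states, halting is decidable, because any run longer than the number of states must revisit a state and therefore loop forever.

```python
# Sketch: halting is decidable for a deterministic, finite-state machine.
# Run it and record every state; a repeated state before halting means
# the machine is stuck in a loop and will never halt.

def halts(initial_state, step, is_halted):
    """step: state -> next state; is_halted: state -> bool."""
    seen = set()
    state = initial_state
    while not is_halted(state):
        if state in seen:  # revisited state + determinism = eternal loop
            return False
        seen.add(state)
        state = step(state)
    return True

# Counter mod 7 that halts on reaching 5: reachable from 0 in steps of 3.
print(halts(0, lambda s: (s + 3) % 7, lambda s: s == 5))  # True
# Counter mod 6 in steps of 2 can never reach 5 from 0.
print(halts(0, lambda s: (s + 2) % 6, lambda s: s == 5))  # False
```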

~Max

It’s a materialist model in that there is only ordinary matter, nothing ‘extra’—no souls, no ghosts in the machine, no substance dualism. In calling it a variety of physicalism, I follow the usage of Strawson (link goes to pdf download), who distinguishes between physicalism and physics-alism. The former just means that everything in the universe is physical, whereas the latter includes a further commitment that everything can be made sense of by means of physical theories. This is a genuine further metaphysical hypothesis: even if everything is physical, there is no reason to think that everything should be captured by what a largely hairless primate on an unremarkable rock orbiting a small star calls a ‘theory of physics’.

The situation, as alluded to above, parallels that of mathematics. There, the theory of the natural numbers—the Peano axioms—fails to exhaustively describe its intended object, i.e. the natural numbers as a mathematical entity: there are statements that are true of the numbers which the theory can’t decide. Likewise, I contend that there are similar lacunae in physical theories, which thus fail to fully capture the nature of matter. But the natural numbers are still mathematical entities; likewise, matter is still physical. In this sense, my model is physicalist, but not physics-alist.

Well, I’m not appealing to Gödelian incompleteness directly, but to Löb’s theorem, which is a more general notion (in fact, Gödel’s second incompleteness theorem follows from Löb’s as a special case). This yields a sort of ‘upper’ boundary to what any given system could prove about itself, even in the limit of infinite computing time.
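For reference, the statement itself (the standard one, nothing bespoke to my model): for any sentence $A$ and any sufficiently strong, recursively axiomatized theory $T$,

$$\text{if } T \vdash \mathrm{Prov}_T(\ulcorner A \urcorner) \rightarrow A, \text{ then } T \vdash A.$$

Taking $A = \bot$ gives Gödel’s second incompleteness theorem as the special case: if $T$ proved its own consistency, i.e. $\mathrm{Prov}_T(\ulcorner \bot \urcorner) \rightarrow \bot$, it would prove $\bot$ and thus be inconsistent.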

Essentially, these phenomena force a sort of threefold partitioning of the properties present in nature, if viewed through a theoretical lens (so this is just an epistemic, not an ontic, phenomenon: it’s only due to the limits of our theories that this splitting occurs; reality is—or can be, as far as the model is concerned—unified). First, there are the structural, decidable (or computable) properties. These are, in the analogy to math, what can be proven from a set of axioms. Then, there are the undecidable properties. These are still on the level of structure, in that they can be phrased in the language of the axioms, but can’t be decided on that level. Finally, there are the properties of the object of the theory—of the natural numbers themselves, so to speak. If you had access to the natural numbers, you wouldn’t have any need to theoretically decide their properties; you could just see what’s true or false about them, in the same way you can just see that a car is red, and don’t have to derive that fact from some axioms about it.

Hence, access to non-structural properties allows leapfrogging the boundary discussed above: the von Neumann replicator doesn’t have to prove that something is true about itself to itself, it just has to be true. Due to the self-referential nature of these statements, their truth is equivalent to access to their truth; hence, the von Neumann replicator knows things about itself that can’t be proven by any theoretical means.
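The cleanest stock illustration here (a standard example, not one taken from the paper) is the Henkin sentence $H$, constructed so that $T \vdash H \leftrightarrow \mathrm{Prov}_T(\ulcorner H \urcorner)$: by Löb’s theorem, $H$ is then outright provable. For such self-referential sentences, being true and being accessible as true collapse into one and the same thing.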

I’m going to pull this discussion over to this thread, because it’s getting way off topic in the ChatGPT one.

First of all, I think it’s a category error to require testability for metaphysical notions. Testability is appropriate for scientific theories, but only because the attendant metaphysics is taken as given (and often enough, just accepted without examination). There’s no science without metaphysical baggage, there’s only science with unexamined metaphysical baggage, which it is typically all the worse for. And the metaphysical basis for statements like ‘all statements about the nature of reality should be testable’ is itself not testable, of course. This is the problem the positivists never quite got round to answering.

As for whether there’s a problem at hand, of course, opinions credibly differ on that front. I’m not convinced by the arguments of those that claim there isn’t, but if you are, that’s fine—you won’t have any need for something like my model, which really only arose out of a sort of desperation: wanting to keep what I think is good and right about metaphysical naturalism, while enabling it to confront the problem of consciousness head-on. I think that this is likely to be the most conservative way of achieving both goals; and anyway, it’s the only way I have seen so far that, to me, seems like it could actually work.

People seem to think that I’m introducing something mystical without need, but that’s the opposite of what I’m doing. I started out trying to find a naturalized theory of intentionality, and found that, in order to make it work, I had to find some way to go beyond the merely structural. Thus, the intrinsic properties I’m proposing answer a concrete need I have seen no other way to meet. If that doesn’t agree with a computationalist intuition, well, nobody has any guarantees that the universe should work according to their predilections—I had to be dragged away from it kicking and screaming myself, so I know what that’s like. Everything I’m proposing is there just because I saw no other way to make things work.

You may be interested in reading some of Stephen Grossberg’s work: Adaptive Resonance Theory.

https://www.sciencedirect.com/science/article/pii/S0893608016301800?via%3Dihub

Trying to parse out your model, I readily admit I get confused right off. It seems to me that it explains the conscious mind by first assuming one, as something has to hold the (incomplete) theory, to create it, in the first place?

DO they? What is the natural number “4” without a number line and without 4 individuals to count?

“4” cannot exist in a vacuum. The structure of the number line is what makes “4” 4.

Here’s my problem. Where do the intrinsic properties of a “chair” come from, when there intrinsically is no such thing as a chair, just properties resulting from the interactions of lower-level processes, which together create a set of features that we can measure (through sight, touch, sound, etc.); and a certain set of properties which, when taken together, matches the socially constructed idea of a “chair” that we have in our mind?

So propose that the theory is incomplete, not the existence of an ethereal, immeasurable, untestable property called the ~~soul~~ un-get-atable inner nature?

I’ve never seen a proboscis monkey in person. I’ve seen them in documentaries and movies, and I’ve read about them, but I’ve never seen one.

Why do I, based only on video and text, have an “understanding” of a proboscis monkey, but PaLM-E - an AI that can read, see, and hear - does not? Why do I have access to the magical inner un-get-atable nature of the majestic proboscis monkey, but PaLM-E does not?

An “un-get-atable” nature is either an emergent property of its physical components, in which case it’s not “un-get-atable” at all (you simply need to understand how it arises from those components); or it IS extra.

It’s a matter of practicality. If your theory is untestable, that certainly doesn’t mean it’s wrong. It COULD be true that God created the world in seven days and planted a bunch of fossils that already looked millions of years old just to fool us. It COULD be true that there ARE no laws of physics, that every object has an Animist spirit living within it, and that when a rock falls down accelerating at 9.8 m/s^2 this is not due to gravity but due to a conscious decision by the Rock Spirits. It COULD be true that objects have an “inner un-get-atable” nature. But since none of these theories are falsifiable, I have no reason to believe any of them over the others.

I think you can recognize that as similar to many folk story explanations of natural phenomena?

Just because I see no explanation other than a god pulling a chariot across the sky does not mean that an explanation beyond my current understanding does not exist.

To me, “minds” have bootstrapped into existence from less sentient forms of consciousness, both evolutionarily and ontogenetically.

These forms of sentience emerge as a consequence of embedded layers of pattern recognition that serve prediction, from completing shapes out of noisy inputs to higher-level abstract concepts and complex theories at higher levels of sentience.

Models that explain how minds evolved from early organisms to sentience, how they develop from the more reflexive responses of infants to adult philosophers, and how such models might apply to putative forms of sentience other than our own, be they machine, cetacean, or possibly one day truly off-world alien, hold the most interest for me.

Does your model apply to those questions?