Conscious experience is self-representational access to intrinsic properties

But the question isn’t whether the model bears any relationship to reality, just whether it in principle works. I have no idea if the model is realized anywhere in the world, much less in human brains, and make no claims to that effect; I’m just claiming that it’s one way to solve a particular kind of problem, namely, that of how symbols acquire meanings, in a way compatible with metaphysical naturalism. Appealing to falsifiability here is as misguided as appealing to falsifiability regarding the question of whether there are infinitely many twin primes. My reasoning is either sound, or it’s not; if it is, the mechanism is either realized in the world, or it’s not. But that’s not what I’m concerned with here.

And I would have no issue with that, were it not in a thread whose very title is a scientific claim.

Nor is it then relevant to any claim about putative consciousness in AIs.

Other than I would also have no issue. But also, no offense, any interest.

Well, there are lots of people who would disagree with you on that, but I suppose that’s philosophy for you. Also, of course, there’s more to this thread than its title, and one sort of assumes that people will take things in context. But that assumption has let me down before…

Well, there are really two issues here. One is that the model as it stands is explicitly uncomputable, and hence, not achievable by AI. That in itself doesn’t mean no computable model is possible, of course—although any argument trading in this mere lack of refutation teeters close to arguing from ignorance: if two roads diverge in a yellow wood, and we know one leads to our destination, and don’t know whether the other one does, that’s hardly an epistemically balanced situation.

But as part of the background for the model, I also develop an argument that computation can’t lead to consciousness in the general case—essentially, because computation itself depends on the capability of interpreting symbols, which is however part of the repertoire of abilities of the mind computation putatively explains: which is just the homunculus-problem. If that argument is right, then there is no consciousness that arises from computation, no matter whether my model works. This, too, is not a scientific argument—obviously. Neither is the argument that computers can’t produce random numbers, or can’t decide whether a set of matrices can be multiplied together to yield zero. But that doesn’t change anything about its impact.

And those hypothetical people are free to make their point and explain their argument here.

Sure, but the title is either consistent with the claims in the OP, or it isn’t. Which is it?

Right now it appears that you wish to make a bold claim, but when pressed on how we might test that claim, it becomes the far weaker claim of your model being “compatible with” naturalism.

I don’t mean all this to sound too antagonistic. People use flashy but slightly misleading titles all the time; I probably have, too.

Well, first of all, I think your philosophical/metaphysical definition of computation is very different from any meaningful real-world one.

Second, honestly, I am not convinced your road leads to any destination. It may be that the meta-phenomenon of consciousness arises from the process you describe, or one similar, or one different. I’d require some evidence to actually consider the case.

Third, this returns to your, to my mind, peculiar definition of computation. No, computation does not require interpreting symbols, even if our minds do such at a conscious level. Computers do computation all the time. In a sense an individual neuron is computing as it adds up various weighted inputs and fires or does not as a result. Assemblies of cells then compute based on various inputs back and forth from other assemblies of cells. At some level those dynamic computations, the processes, are experienced as symbols and consciousness arises. Else you believe in the ghost in the machine.

Simply put I do not see any reason to believe your argument here is right.

The point is that you’re appealing to a historically contentious philosophical point, that claims about consciousness are scientific claims, without offering any argument to that effect, seemingly taking it for granted. Unfortunately, most of the people who have offered arguments to the contrary are by now long dead (although their consequent appearance in this thread would itself go a long way to substantiate their position, in some cases). More modern arguments are Chalmers’s philosophical zombies and Jackson’s tale about Mary.

My own argument has been given in this thread (and in the paper, and in the explanatory posts on 3QD):

Essentially, physics—or science in general—gives us the structure of things, but not their ‘inner nature’. So, claims about what conscious experience is aren’t scientific claims, simply because the object of science is that which can be modeled—and what can be modeled is just structure: indeed, modeling is the recreation of a particular structure within a distinct substrate (think of constructing a Lego model of the Eiffel Tower, for instance).

I don’t see what claim you think I’m making. As the OP notes, what I claim to offer is

In other words, I claim that I have a bit of mathematics that models how symbols acquire meanings, which leads to undecidable questions as a consequence, the solution of which can’t be captured in any formal model, since any model is limited to the structural level. This yields a theory of consciousness on which it is furnished by self-representational access to intrinsic properties. So what claim do you think I’m making that I’m backing off from? Is it just the ‘is’ in the topic title you object to? But this is Great Debates, so isn’t it usual to offer up the topic under debate as a title? Should I preface it with ‘Resolved:’?

I don’t know what a ‘meaningful real world’ definition of computation might be, but it’s a particular species of implementation relation that has a long pedigree, generally known as ‘mapping’ accounts, which associate steps of some formal process (giving the actual ‘computation’) with states of a physical system in its evolution (i.e. the computer). In the article, and also in this thread and this essay, I give a detailed argument that there is always an ambiguity in what computation is thus mapped to a physical system, and that this ‘implementation relation’ is indeed a matter of choice—thus, the same system can be used to compute different functions by different users. Hence, computation is not an objective notion: there’s no objective (‘scientific’, if you like) fact of the matter regarding what a given system computes; it needs an act of interpretation to pin down that computation. (This line of argumentation also has a long philosophical pedigree, originating with Putnam (see the SEP article)—ironically, the philosopher who had originally proposed the computational theory of the mind in the first place.)

Whether my argument works is of course the topic under discussion: I have presented my reasoning, you’re free to poke holes into it—or, if you can’t find any, accept it. As for ‘evidence’: what would such evidence look like, in your opinion? What evidence do you need to accept that computers can’t figure out whether a set of matrices can be multiplied together to yield the zero matrix (or any other of a host of problems we know to be unsolvable by computational means)?

That’s a physical process, not a computational one. A basin may be said to ‘add up’ streams of water and then overflow if the total ‘sum’ exceeds a given volume, but of course, it doesn’t bother with any adding—it just overflows if it can’t hold the water anymore. There is a substantial question how such physical processes are connected to the formal process of computation—indeed, that question largely parallels that of how brain processes are connected to mental ones. That’s after all the attraction of the computational picture: in computers, the concrete seems to have a way to reach into the abstract; thus, making the brain a computer might adduce this capacity, too. But this really gets things the wrong way around: the capabilities of computers really piggyback on the interpretational capacities of the minds of their users, not the other way around.

Consider the following simple physical device, consisting of two ‘input’ wires and one ‘output’ wire. If both ‘input’ wires are at a voltage above 5V, then so, too, is the ‘output’-wire. What function does this system compute?

Well, there’s no unique answer to that question: we might consider ‘high voltage’ to mean ‘1’, and ‘low voltage’ to mean ‘0’: in this case, it computes the logical ‘and’ function (and we can actually use it to compute that function). Or, we might consider ‘high’ to mean ‘0’, and ‘low’ to mean ‘1’: in this case, it computes the logical ‘or’, and again, that’s how we could actually use it. So there’s no real fact of the matter regarding whether it computes one function or the other, it’s just a matter of interpretation. This generalizes to more complicated functions readily.

But there’s in fact a further instance of subjective choice in the example, which is singling out that particular level of behavior of that particular physical artifact as our ‘computer’, or our substrate for computation. We might look more closely and see that there’s more detail to its behavior: the output could be between 0V and 2.5V or between 2.5V and 5V depending on the input, so that the ‘low’ state is actually two states we had lumped into one. Now, the device can be seen as computing a function from three input to three output values (and again, which one will be open to choice).

So what the system computes is a matter of singling out and lumping together behaviors. But this singling out and lumping together is itself just a function of how we conceptually slice up the world.
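To make that concrete, here is a minimal sketch (my own illustration, not taken from the paper) of the two readings of the device above. The names device, high_is_1 and high_is_0 are just ones I’ve made up; the only point is that one and the same physical mapping yields the AND truth table under one encoding and the OR truth table under the other.

#include <stdio.h>

/* The physical device: its output wire is 'high' iff both input wires are 'high'. */
typedef enum { LOW, HIGH } voltage;

static voltage device(voltage a, voltage b)
{
    return (a == HIGH && b == HIGH) ? HIGH : LOW;
}

/* Two rival ways of reading the very same voltage as a bit. */
static int high_is_1(voltage v) { return v == HIGH; }
static int high_is_0(voltage v) { return v == LOW; }

static void table(const char * label, int (*read)(voltage))
{
    voltage a, b;
    printf("%s\n a b | out\n", label);
    for (a = LOW; a <= HIGH; a++)
        for (b = LOW; b <= HIGH; b++)
            printf(" %d %d |  %d\n", read(a), read(b), read(device(a, b)));
}

int main(void)
{
    table("reading high as 1 (the table of AND):", high_is_1);
    table("reading high as 0 (the table of OR):", high_is_0);
    return 0;
}

Nothing about the voltages themselves picks out one of the two tables; that choice is supplied by the user.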

Except of course I’ve given a detailed theory of how this arises without any ghost in the machine, albeit in a non-computable way—as it must: to explain the emergence of computation in terms of computation is just circular.

As is your prerogative, of course. But as of yet, you haven’t really engaged with my argument, but simply tried to find proxy reasons to discard it out of hand—‘not falsifiable’, ‘ghost in the machine’, ‘no evidence’. It’s of course hard work to engage with unfamiliar topics on their own terms, but I find that whenever I think I have a simple answer to a complicated question—such as ‘only falsifiable notions deserve my attention’—then what’s really at the root of this is me trying to shift the cognitive burden of actually engaging with the ideas as presented.

Apologies in advance as I fear that this response may come off as snarky or even rude, but I don’t see any way to state my thoughts without that risk.

I honestly don’t see anything to poke holes at, because I read nothing that actually substantiates the claim. How the experience of consciousness emerges out of the process you describe, or why, is not there on my reading; there is just a claim that it does. The basic argument reads to me as a trite tautology, even if it is one with a long pedigree within philosophy.

You define computation as something that is done with symbols that have meaning attached, something that requires conscious minds, and you say minds emerge from the handling of symbols. But then minds are also non-computable. And you are supposedly explaining how symbols, which you tell me must have meaning in order to be symbols, acquire the very thing they must already have in order to be symbols.

There certainly are many people who are interested in understanding conscious minds in ways devoid of any evidence, who believe that understanding consciousness lies outside the scope of scientific claims, that science is the wrong approach for understanding how minds come to be … I could go to many places of worship to find them and to hear those discussions from the most fundamentalist of many faiths.

They, like you, are entitled to their thoughts on that.

Personally, however, any discussion about consciousness that fails either to provide any evidence or to make any predictions that are even potentially testable in the future is not of interest to me.

My personal interest is not in debating whether or not the experience of consciousness is somehow emergent from physical processes, but in how it is, how it evolved, how it develops in individuals from our prenatal nothingness through newborns to adults, and how we would potentially recognize it in something other than ourselves.

To the latter, I find being able to give outputs that look or sound like a human mind to be a very insufficient answer. Adequate behavioral output does not to me prove the internal experience is present, AND there may in this universe be forms of consciousness that have evolved to meet very different demands than ours have, and therefore look very different from ours. Such a consciousness may be able to solve and respond to problems very different from those I can, problems I cannot even appreciate are problems to solve, while being unable to do things that I consider simple.

I don’t even have to go off the planet for that, to my best guess. I strongly suspect that large cetacean brains are solving problems of managing location in large three-dimensional volumes, coupled with various values attached to dynamic objects in that space, that are completely beyond any human brain to manage. And FWIW, if not being able to multiply various sorts of matrices is evidence of lack of consciousness, then I have none too!

To me a model of consciousness potentially helps us in many ways. It offers promise of understanding normal brain function, how that function comes to be, and how things sometimes work differently, divergently, or abnormally. It offers promise of recognizing other sorts of consciousness.

I erred in reading the title of the OP and the bump of it and thinking that the model presented might address subjects of my interest. I take responsibility for the error and will move along.

This is the realization I am coming to as well.

I mean… There’s an entire mathematical argument that precisely describes how a self-referential structure comes to refer to something beyond itself, i.e. how a symbol acquires its symbolic properties. You can claim it’s wrong, you can claim it doesn’t interest you, but you can’t claim it’s not there.

I don’t think I do any of these things. Unfortunately, you don’t provide any argument that or pointers to where I allegedly do, so it’s not easy to understand where this comes from. My definition of computation is perfectly ordinary, going back to Turing. The question I’m addressing is how physical systems connect to the formal objects that provide this formalization. This is a substantial question: why should a particular configuration of atoms be connected to, say, a number?

All the rest is just consequence, and argued for in detail, both here and in the further links I’ve given.

Again, appealing to evidence is just a category confusion. Let me ask you again: do you need evidence to be persuaded that computers can’t produce random numbers? If so, what sort of evidence do you think ought to suffice?

And well, that’s what I’ve produced a theory of. You seem to be dismayed that it doesn’t fit into a particular preconceived form you feel such a theory needs to take; but then again, nobody guarantees you that the world should conform to your preconceptions.

I don’t know how that got so garbled. No, the example was just to drive home the fact that not every meaningful proposition is such that it can only be decided on an evidential basis, and that dismissing such an argument on the basis that it lacks falsifiability is like dismissing the plans for constructing a submarine on the basis that it’ll never fly: simply a completely misguided criterion.

Sure, but in order to get to that point, there’s lots of groundwork that needs to be done. There are conceptual issues to the scientific study of consciousness that make it unique—I can’t put your conscious experience under the microscope, for instance. These issues need to be clarified, or we risk starting out from misguided assumptions (say, about the nature of computation), which will then lead to shoddy science.

If the model as I’m proposing it is sound (big if, I grant), then one can think about ways it might be implemented in physical brains. I have previously (if without eliciting much response) pointed to the reentrant activity between thalamus and cortex as a possible realization of the self-replicating patterns my model necessitates—these would then be the neural correlates of conscious experience, and one could for instance study whether they persist under anaesthesia, which would offer evidence against that hypothesis. One could also look for functionally equivalent patterns in non-human brains, and propose it as a possible signature of consciousness.

These would be proper, scientific hypotheses that are readily falsifiable; but they would need to be offered by somebody with a much stronger background in neuroscience to be at all credible. And before that could happen, the model has to reach a stage of maturity that at least means it’s not obviously nonsense; and that’s the point I’m at right now.

If that’s not something that interests you, that’s obviously perfectly fine—we have all different interests, after all. Although I’m not completely sure what to do with these declarations of disinterest—it’s not like I forced anybody to read, much less respond. As I said above, this is just an offer: there are, after all, people to whom the philosophical questions surrounding conscious experience are of interest. If you’re not among them, then well, that’s that.

No, I am disputing that that is a philosophical point. Obviously consciousness is a phenomenon so mysterious that much of it still lies within the realm of philosophy.

But “Consciousness *is* <name_of_model>” is a declarative claim about the whole phenomenon.

It’s like if I said “Entanglement is K”, where “K” is my new model for said phenomenon. If someone were to ask me how to test my model, or what predictions it makes, or heck, why I believe I have a solution, I can’t throw up my hands and say that that question is itself a superposition of states. Because a superposition is a characteristic of the phenomenon being explained, it’s nothing to do with the assertion of my model’s explanatory power.

Similarly, the fact that consciousness is subjective, and, as I say, some aspects remain purely philosophical, doesn’t mean that we cannot evaluate claims being made about (all of) consciousness.

Okay, I was so far off that I misidentified the main idea. Let me try again… I’m still struggling to wrap my head around your example, which is the conscious experience of intentionality in a command that has been read.

For instance, the words ‘turn your head to the left’ facilitate the selection of a certain replicator that would, if put into the control seat, indeed cause your head to be turned to the left; but that, moreover, knows this about itself.

I read the words, ‘turn your head to the left’ and am aware that the words ‘your head’ refer to my own head. I can only comply with the command by physically turning my head to the left because I am conscious that the words refer to my head (intentionality). We also assume sensory experiences can be modelled by mental symbols (representationalism). Finally you assume a monism of the physical substrate (physicalism).

* I don’t see how an assumption of physicalism is necessary for or relevant to your model.

You claim to model an agent’s conscious experience (in this case, my awareness) of intentionality with a number of coexisting von Neumann automatons. Each von Neumann automaton contains instructions sufficient to create a copy of itself, as well as the ability to perform calculations using the instructions. This, you argue, amounts to self-awareness. As for environmental or external awareness, if an automaton ‘evolves’ so that its shape reflects the environment, then self-awareness of its shape amounts to awareness of the environment.

#include <stdio.h>
char s[] = { '0', '}', ';', '\n', '\n', 'c', 'o', 'n', 's', 't', ' ', /* etc. */ 0 };

const char DATA[] = { /* arbitrary */ }; // this is the 'shape' of the automaton

quine(fn)
char * fn;
{
    FILE *fp;
    int i;

    fp = fopen(fn, "w");
    fprintf(fp, "#include <stdio.h>\nchar * s = { ");
    for(i = 0; s[i]; i++)
        fprintf(fp, "%d, ", s[i]);
    fputs(s, fp); /* s itself supplies the terminating "0};" and the rest of the source */
    fclose(fp);
}

bootstrap(fn)
char *fn;
{
    /*
     * compile and run intermediary program which,
     * after a short delay, compiles and runs the program
     * from source file fn
     */
}

main(void)
{
    int obsolete = 0;
    while(!obsolete)
    {
        // do things, make changes to s as necessary; set obsolete once it's time to be replaced
    }
    quine("s.c");
    bootstrap("s.c");
}

For example, according to your model, at least one automaton will evolve a new shape which represents my sight of the words ‘turn your head to the left’. This particular automaton is aware that I see the words because my sight is part of the automaton’s shape, and it has access to a blueprint of its own shape. We know the shape of this automaton evolves rapidly - at least as rapidly as documented human perception, 30+ images per second.

I don’t understand why the automaton needs to be able to prove any arbitrary theory of itself in order to decide whether or not to replicate. All you have to prove is that the symbol theory used in the model for self-representation (the instructions or blueprint, or in this case, a global string of its source code) is capable of encoding any arbitrary information you are attempting to model. I see no reason the automaton itself must have the capability of proving this point, or proving that replication will be successful before attempting it. In computer science, a compiler usually performs a series of heuristic tests (i.e. syntax checks) before compiling a source file, but these are strictly optional. Compare with genetic and epigenetic cancers in biological cells. Why shouldn’t von Neumann automata be subject to similar diseases? If I recall correctly, mutations affecting the universal constructor for the original cellular automata are usually fatal.

[...]

checkvin(void)
{
    int i, dirty;
    for(i = 0, dirty = 0; i < sizeof(DATA); i++)
    {
        int c = fgetc(vin); // I'm imagining a visual peripheral here; int, so EOF can be distinguished
        if(c == EOF)
            break;
        if (DATA[i] != c)
        {
            s[28 + 3 * i] = c; // s[26] is the '{' in "DATA = { ", so the values start at s[28], 3 characters apart (sketch arithmetic)
            dirty = 1;
        }
    }
    return dirty;
}

main(void)
{
    int obsolete = 0;
    while(!obsolete)
    {
        obsolete |= checkvin();
    }
    quine("s.c");
    bootstrap("s.c"); // compilation includes heuristic tests
}

Moving on, the automaton wouldn’t necessarily know that the words were words as such, as opposed to a meaningless arrangement of light and dark. Even if it could recognize words from noise, it wouldn’t necessarily know the particular meaning of a particular word, i.e. the difference between ‘head’ the body part and a ‘head’ of cabbage. If the only thing encoded is visual sense, there won’t be any notion of meaning and therefore no awareness of intentionality. This can be remedied by having the shape of the automaton reflect not only visual sense, but also a list of rules by which internal representations of external symbols can be recognized and translated into mental symbols - which can be modelled with something as simple as a binary search algorithm. The development of this list of rules would then be your analogy to language recognition and acquisition.

[...]
/*
 * anything fixed at compile-time is part of the 'shape' of the automaton
 * this includes algorithms and global constants, for example
 */
const char * VISUAL_DATA = { /* arbitrary */ };
const char * LANGUAGE_DATA = { /* arbitrary */ };

[...]

extract(img, o, olen)
const char * img;
char *o;
{
    /*
     * Computer vision / text recognition algorithm
     * recognized text is written to o, zero terminated
     * returns nonzero if text is written to o
     */
}

translate(i, o, olen)
const char * i;
char * o;
{
    /*
     * Translates external-language (i.e. English) string i to mentalese
     * i is assumed zero terminated, o is guaranteed zero terminated
     * returns nonzero if mentalese is written to o
     */
}

readtext(img)
const char * img;
{
    char t[30], m[60]; // arbitrary buffer lengths
    if(extract(img, t, 30) && translate(t, m, 60))
    {
        fopen(mout, "w"); // I'm imagining a mentalese output stream
        fputs(m);
        fclose(m);
    }
}

main(void)
{
    char canvas[sizeof(data)];
    int i, obsolete;

    for(i = 0; i < sizeof(data); i++)
        canvas[i] = data[i];
    preprocess(canvas);

    obsolete = 0;
    while(!obsolete)
    {
        postprocess(canvas); // for example, fill in blind spots (optical illusions)
        readtext(canvas);
        obsolete |= checkvin();
    }
    quine("s.c");
    bootstrap("s.c");
}
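
The translate() function above is only a stub. A minimal sketch of the binary search I mentioned, assuming a sorted table of (external word, mentalese) pairs with arbitrary placeholder entries, might look like this:

#include <string.h>

/* One acquired rule: external-language word -> mentalese token (placeholders). */
struct rule {
    const char * word;
    const char * mentalese;
};

/* Must be kept sorted by word for the binary search to work. */
const struct rule RULES[] = {
    { "head", "M_BODYPART_HEAD" },
    { "left", "M_DIRECTION_LEFT" },
    { "turn", "M_ACTION_ROTATE" },
    { "your", "M_OWNER_SELF" },
};

/* Binary search over RULES; returns the mentalese token, or 0 if the word is unknown. */
const char * lookup(const char * word)
{
    int lo = 0, hi = (int)(sizeof(RULES) / sizeof(RULES[0])) - 1;
    while(lo <= hi)
    {
        int mid = (lo + hi) / 2;
        int cmp = strcmp(word, RULES[mid].word);
        if(cmp == 0)
            return RULES[mid].mentalese;
        if(cmp < 0)
            hi = mid - 1;
        else
            lo = mid + 1;
    }
    return 0; /* word not yet acquired */
}

translate() would then tokenize its input and push each token through lookup(); acquiring a new word amounts to inserting a new rule, i.e. a change to the automaton's shape.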

We have a couple of problems. First, most people read much slower than they can see words. Second, the split-brain experiments showed that visual processing is physically distinct from language processing - patients knew they saw a word in their left visual field, but weren’t aware which word it was. A model merging the two functions in a single automaton is therefore inadequate.

So you have to model one automaton that captures (in its shape) the visual experience of the left eye and language recognition, and a separate automaton that captures the list of rules pertaining to language processing. The two have to operate asynchronously to reflect the reality that reading is slower than seeing.

I’m thinking of a channel for communication from the sight automaton to the language automaton - severed in split-brain patients. The language automaton polls the line, or specific parts of it corresponding to the center of the visual field, less frequently than the sight automaton updates its status. By doing so the language automaton captures visual data in its shape. A savant who reads the whole page in an instant might be modelled with a language processing automaton that captures the entire visual field.

// Left eye automaton
[...]
main(void)
{
    char canvas[sizeof(data)];
    int i, obsolete;

    for(i = 0; i < sizeof(data); i++)
        canvas[i] = data[i];
    preprocess(canvas);

    obsolete = 0;
    while(!obsolete)
    {
        postprocess(canvas);
        if(extract(canvas))
            forward(canvas, tout); // tout is an output stream of uninterpreted text
        obsolete |= checkvin();
    }
    quine("s.c");
    bootstrap("s.c");
}
// Language processing automaton
[...]
const char * LANGUAGE_DATA = { /* arbitrary */ };
const char * TEXT_DATA = { /* corresponds to tout above */ };
[...]
readtext(void)
{
    char m[60]; // arbitrary buffer length
    
    if(translate(TEXT_DATA, m, sizeof(m))) // slow!
    {
        fopen(mout, "w"); // I'm imagining a mentalese output stream
        fputs(m);
        fclose(m);
    }
}

main(void)
{
    int obsolete = 0;
    while(!obsolete)
    {
        readtext();
        obsolete |= checktin(); // corresponds to tout above
    }
    quine("s.c");
    bootstrap("s.c");
}

This all seems consistent enough to me, though as usual I’m not sure if I’m getting it, and I definitely don’t understand the need to set up the model this way. If an automaton is definitely being replaced, why have it construct a child ex nihilo rather than perform self modifications directly? In software that would be editing the loaded program in memory rather than recompiling from source. For a physical machine it would be making changes to the existing machine rather than building a new machine, or at the very least, recycling parts. It seems to me you can have representationalism, intentionality, and physicalism, without automata that replicate themselves wholesale, and without running into any issues based on what the automaton can or can’t prove.

If you’ll entertain the notion of a Chinese room rather than a mechanical or abstract automaton, it’s easy to see there’s usually no pressing reason to build an entirely new room rather than modify the instruction booklets and go from there. Knowledge of the external world is still indirect and intrinsic to the Chinese room. I just don’t see how the capability to prove a theory or verify a blueprint is strictly necessary.

~Max

What is the basis for this statement?

If I have a machine which, by means of weights and pulleys, will distribute a weight of gold equal to the square root of the weight of its operator, isn’t it necessary for the machine to compute a square root? The machine determined the amount of gold to release, and to compute is literally to determine an amount (or number). But neither the machine nor its operator needs to interpret any symbols. A goat which happens to trigger the machine would lack the capability of interpreting symbols (which symbols aren’t present anyways), yet the machine still works.

Just because you cannot identify a unique function out of two sets of two identical functions*, it does not follow that the system does not perform computations. You could even conclude that the system necessarily computes a logical function.

* The identical functions are:

A \land B \leftrightarrow \neg A \lor \neg B
A \lor B \leftrightarrow \neg A \land \neg B

~Max

I still don’t get the point you’re making. Evaluating such a claim is the very reason for this thread? I offer up a thesis for debate, that makes an assertion about consciousness. You can accept that thesis or, if not, attempt to rebut it. How does that entail that we can’t evaluate claims made about consciousness?

Best guess is that you think that saying ‘consciousness is…’ somehow commits me to some empirical claim or another. But clearly, that’s not the case. ‘Composite numbers are products of prime numbers’ doesn’t make any claim you could go out and falsify. Only if conscious experience were something amenable to empirical investigation would such a commitment follow. But that’s the contentious philosophical thesis I was talking about, that’s in fact false on the model I’m proposing. So how can you take me to task for not doing something that the thing I’m arguing for explicitly claims can’t be done? That doesn’t really leave me a whole lot of outs: if I do it anyway, then I lose, because the model is wrong. And if I don’t do it, I lose, because you get to thumb your nose at the lack of empirical relevance!

Also, I don’t get what you’re driving at with the ‘entanglement is…’ example. ‘Entanglement is the inability to decompose a state vector into a tensor product of local state vectors’. Does that now entail any falsifiability? Do I have to present evidence for this? It’s just a mathematical property of a certain theory, namely, quantum mechanics. That theory might or might not describe anything about the real world, but even if it doesn’t, the proposition is perfectly meaningful.
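In the usual notation, for a pure state of two systems A and B, that is just the statement that there exist no local states |\phi\rangle_A, |\chi\rangle_B such that

|\psi\rangle_{AB} = |\phi\rangle_A \otimes |\chi\rangle_B

which is a purely mathematical property of the formalism, independently of whether anything in the world is described by it.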

Likewise, on my model, consciousness is self-referential access to intrinsic properties. Does the model apply to reality? I don’t know; but that doesn’t detract from the fact that it’s a perfectly meaningful proposition.

Can you put forward an example of a conscious experience that the proposed model does not fit?

Implicit is the assertion that if the model does not fit, it is not a conscious experience (specifically, conscious of intentionality). Either that or the model is flawed. That’s where your explanatory power and falsifiability are found.

I tried a couple upthread (asynchronous piano playing, split brain), which led to a more refined understanding of the model and a positive personal experience in this discussion thus far.

~Max

It’s the other way around: my broader metaphysical outlook is physicalist, because I think most other options suffer from fatal problems. So I have an interest in showing that physicalist metaphysics can account for notions historically viewed as being problematic for it, such as intentionality. Hence, my development of this theory.

Not quite, if I understand you right. Rather, intentionality is produced by a replicator’s awareness of its own intrinsic properties—so it’s conscious experience that produces intentional inexistence, by essentially overcoming the problem of structural underdetermination (by adducing a concrete substrate within which the abstract structure is realized).

Because it needs to ensure that the reproduction (or modified version of itself—structurally, both are equivalent) can act adequately to bring about whatever goal is being pursued. To do so, it must be able to prove only things that are actually true, so as not to ‘believe’ it is acting in accordance with a certain goal, while in fact acting in opposition to it. But proving this of an automaton with reasoning capabilities equal to itself is impossible, by Löb’s theorem. Hence, the automaton would either need access to a stronger theory—which would mean that the reasoning capacities of each successive generation of automata would become progressively weaker—or it needs to leapfrog that problem. This ‘leapfrogging’ can be accomplished formally by adducing an explicit model of the domain (the automaton itself and its theory), which represents the automaton’s access to its intrinsic properties.
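For reference, Löb’s theorem says that for any sufficiently strong formal theory, with \Box standing for its provability predicate,

\Box(\Box P \rightarrow P) \rightarrow \Box P

is a theorem for every P. Consequently, the theory only proves the soundness statement \Box P \rightarrow P for those P it already proves outright, and an automaton reasoning within such a theory cannot establish that whatever its equally powerful successor proves is actually true.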

I don’t understand why you would make these assumptions. Whatever reaches the replicator is just patterns of neuronal activity that are correlated with the environment through the various sensory modalities. There’s nothing about light and dark, etc., in there, just patterns of neurons firing. This has a certain structure (it’s not random). This structure is ‘painted in’ by the replicators’ self-awareness of their intrinsic properties. It’s those that are present to us in consciousness, furnished with the aboutness imbued upon them by the fixed-point property.

Think of intrinsic properties as the Lego pieces that the replicator arranges into a model of the sensory data, with the fixed point property furnishing labels regarding what the parts of this model refer to.

All of this must in some way be reflected in patterns of neural activity. Those are reflected by the models furnished by the replicators. So I don’t see any particular problem; however, it’s also important to note that my model is perhaps best viewed as a toy model: something that can do what we need to account for intentionality, but which is unlikely to be realized in precisely this form in human brains. Think of it mostly as a proof of principle, perhaps.

Logically, there is not much difference between the two. The model originated in the structural similarity between the notions of ‘meaning’ and ‘construction’: just as a symbol S means s to a symbol-user U, a blueprint B facilitates the construction of b by a universal constructor C. Thus, a symbol meaning something to itself—and thus, eliminating the homunculus—is analogous to a constructor constructing itself.

Well, without interpretation, you simply don’t know what’s being computed. Think of a pocket calculator adding two numbers: you input ‘4’, ‘+’, ‘5’, ‘=’, and it outputs ‘9’. In order for that to be the computation known as ‘addition’, those symbols have to have a specific meaning, referring for instance to numbers rather than cats. Thus, only upon properly interpreting these symbols do we obtain any specific computation at all.
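Just to make that vivid with a toy example of my own (nothing more than a relabeling exercise): treat the calculator’s behavior as a fixed mapping from key-glyphs to a display-glyph, and read the glyphs under two different assignments of numbers to numerals. Under the standard assignment it computes addition; under a permuted one, the very same behavior computes something else.

#include <stdio.h>

/* The calculator's physical behavior on single digits (ignoring carries):
 * two key-glyphs in, one display-glyph out. */
char press(char a, char b)
{
    return '0' + ((a - '0') + (b - '0')) % 10;
}

/* Standard interpretation: the glyph 'd' denotes the number d. */
int std_reading(char g)
{
    return g - '0';
}

/* Alternative interpretation: same glyphs, but '0' and '1' swap denotations. */
int alt_reading(char g)
{
    int d = g - '0';
    if(d == 0) return 1;
    if(d == 1) return 0;
    return d;
}

int main(void)
{
    char out = press('1', '1');
    printf("standard: %d and %d give %d\n",
           std_reading('1'), std_reading('1'), std_reading(out)); /* 1 and 1 give 2: addition */
    printf("alternative: %d and %d give %d\n",
           alt_reading('1'), alt_reading('1'), alt_reading(out)); /* 0 and 0 give 2: not addition */
    return 0;
}

Which function the gadget ‘computes’ thus depends on which reading of its symbols we bring to it.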

If you have a water basin that overflows upon a certain volume being exceeded, does this basin compute the volume inside? I think such a conception of computation just collapses the notion to that of mere physical evolution, which would make it trivial.

But two functions are not identical just because they are isomorphic. ‘And’ and ‘or’ are logically distinct operations. Anyway, this is just a simplistic example; the one I give in the paper is more to the point.