Conscious experience is self-representational access to intrinsic properties

The point is that you’re appealing to a historically contentious philosophical position, namely that claims about consciousness are scientific claims, without offering any argument to that effect, seemingly taking it for granted. Unfortunately, most of the people who have offered arguments to the contrary are by now long dead (although their consequent appearance in this thread would itself go a long way toward substantiating their position, in some cases). More recent arguments include Chalmers’s philosophical zombies and Jackson’s tale of Mary the color scientist.

My own argument has been given in this thread (and in the paper, and in the explanatory posts on 3QD):

Essentially, physics (or science in general) gives us the structure of things, but not their ‘inner nature’. So claims about what conscious experience is aren’t scientific claims, simply because the object of science is that which can be modeled, and what can be modeled is just structure: indeed, modeling is the recreation of a particular structure within a distinct substrate (think of constructing a Lego model of the Eiffel Tower, for instance).

I don’t see what claim you think I’m making. As the OP notes, what I claim to offer is:

In other words, I claim that I have a bit of mathematics that models how symbols acquire meanings, and which as a consequence leads to undecidable questions whose solution can’t be captured in any formal model, since any model is limited to the structural level. This yields a theory of consciousness on which conscious experience is furnished by self-representational access to intrinsic properties. So what claim do you think I’m making that I’m backing off from? Is it just the ‘is’ in the topic title you object to? But this is Great Debates, so isn’t it usual to offer up the topic under debate as a title? Should I preface it with ‘Resolved:’?

I don’t know what a ‘meaningful real world’ definition of computation might be, but the notion I work with is a particular species of implementation relation with a long pedigree: the so-called ‘mapping’ accounts, which associate the steps of some formal process (the actual ‘computation’) with the states of a physical system in its evolution (i.e., the computer). In the article, and also in this thread and this essay, I give a detailed argument that there is always an ambiguity in which computation is thus mapped to a physical system, and that this ‘implementation relation’ is indeed a matter of choice; thus, the same system can be used to compute different functions by different users. Hence, computation is not an objective notion: there’s no objective (‘scientific’, if you like) fact of the matter regarding what a given system computes; it takes an act of interpretation to pin that computation down. (This line of argument also has a long philosophical pedigree, originating with Putnam (see the SEP article), who, ironically, had originally proposed the computational theory of mind in the first place.)
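
To illustrate what a mapping account amounts to, here’s a toy sketch in Python (my own illustration, not anything from the paper; the states and labelings are invented for the example): a four-state physical system that simply cycles through its states, and two equally legitimate labelings of those states as numbers. Which function the system ‘computes’ depends entirely on the labeling one chooses.

```python
# A physical system whose states cycle A -> B -> C -> D -> A.
physical_step = {'A': 'B', 'B': 'C', 'C': 'D', 'D': 'A'}

# Two equally legitimate ways of reading those states as numbers.
labelings = {
    'labeling 1': {'A': 0, 'B': 1, 'C': 2, 'D': 3},  # reads as x -> x+1 mod 4
    'labeling 2': {'A': 0, 'B': 3, 'C': 2, 'D': 1},  # reads as x -> x-1 mod 4
}

for name, label in labelings.items():
    # The 'computed' step function is just the physical step, seen through the labeling.
    induced = {label[s]: label[physical_step[s]] for s in physical_step}
    print(name, dict(sorted(induced.items())))
# labeling 1: {0: 1, 1: 2, 2: 3, 3: 0}  -- increment mod 4
# labeling 2: {0: 3, 1: 0, 2: 1, 3: 2}  -- decrement mod 4
```

The physical evolution is identical in both cases; only the interpretive gloss differs, and with it, the function computed.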

Whether my argument works is of course the topic under discussion: I have presented my reasoning, and you’re free to poke holes in it, or, if you can’t find any, accept it. As for ‘evidence’: what would such evidence look like, in your opinion? What evidence do you need to accept that computers can’t figure out whether a set of matrices can be multiplied together to yield the zero matrix (the so-called matrix mortality problem), or any other of a host of problems we know to be unsolvable by computational means?
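
To see what that unsolvability looks like in practice, here’s a sketch of the best any program can do with the mortality problem (my own illustration, assuming NumPy; the function name is made up): a brute-force search that will confirm a zero product if it stumbles on one, but where no finite amount of searching can ever establish that none exists.

```python
import numpy as np
from itertools import product

def search_for_zero_product(matrices, max_length):
    """Semi-decision procedure: try all products of the given matrices
    up to max_length factors. A hit is conclusive; exhausting the search
    space proves nothing."""
    for length in range(1, max_length + 1):
        for combo in product(matrices, repeat=length):
            result = np.linalg.multi_dot(combo) if length > 1 else combo[0]
            if not result.any():  # all entries zero
                return combo
    return None  # inconclusive: NOT a proof that no zero product exists
```

Feeding it the single nilpotent matrix np.array([[0, 1], [0, 0]]) turns up a zero product at length two, for instance; but a None return never tells you anything, and no general algorithm can close that gap.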

That’s a physical process, not a computational one. A basin may be said to ‘add up’ streams of water and then overflow if the total ‘sum’ exceeds a given volume, but of course, it doesn’t bother with any adding; it just overflows once it can’t hold the water anymore. There is a substantial question as to how such physical processes are connected to the formal process of computation; indeed, that question largely parallels that of how brain processes are connected to mental ones. That’s, after all, the attraction of the computational picture: in computers, the concrete seems to have a way of reaching into the abstract; thus, making the brain a computer might confer this capacity on it, too. But this really gets things the wrong way around: the capabilities of computers piggyback on the interpretational capacities of the minds of their users, not the other way around.

Consider the following simple physical device, consisting of two ‘input’ wires and one ‘output’ wire. If both ‘input’ wires are at a voltage above 5V, then so, too, is the ‘output’ wire; otherwise, the output stays below 5V. What function does this system compute?

Well, there’s no unique answer to that question: we might take ‘high voltage’ to mean ‘1’ and ‘low voltage’ to mean ‘0’, in which case it computes the logical ‘and’ (and we can actually use it to compute that function). Or we might take ‘high’ to mean ‘0’ and ‘low’ to mean ‘1’, in which case it computes the logical ‘or’, and again, that’s how we could actually use it. So there’s no real fact of the matter regarding whether it computes one function or the other; it’s just a matter of interpretation. This generalizes readily to more complicated functions.
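
Here’s the same point as a few lines of Python (again, just my own illustration of the example above): one and the same physical behavior, read through two different encodings, yields two different truth tables.

```python
def device(in1_high: bool, in2_high: bool) -> bool:
    """Physical behavior: the output wire is above 5V iff both inputs are."""
    return in1_high and in2_high

# Encoding 1: high voltage = 1, low voltage = 0.
# Encoding 2: high voltage = 0, low voltage = 1.
encodings = {'high=1, low=0': {True: 1, False: 0},
             'high=0, low=1': {True: 0, False: 1}}

for name, enc in encodings.items():
    dec = {v: k for k, v in enc.items()}  # logical value -> physical state
    print(f"Encoding ({name}):")
    for a in (0, 1):
        for b in (0, 1):
            out = enc[device(dec[a], dec[b])]
            print(f"  {a} op {b} = {out}")
```

The printout gives the ‘and’ table under the first encoding and the ‘or’ table under the second; the physics never changes.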

But there’s in fact a further instance of subjective choice in the example: singling out that particular level of behavior of that particular physical artifact as our ‘computer’, or our substrate for computation. We might look more closely and see that there’s more detail to its behavior: the output could be between 0V and 2.5V or between 2.5V and 5V depending on the input, so that the ‘low’ state is actually two states we had lumped into one. Now the device can be seen as computing a function from three-valued inputs to three-valued outputs (and again, which one it computes will be open to choice).
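
To make that concrete in the same style (the specific fine-grained behavior below is invented for the sake of the example; all that matters is that the lumped ‘low’ state comes apart into two):

```python
def fine_device(in1, in2):
    """Hypothetical refined behavior, with states 'high' (>5V),
    'mid' (2.5-5V), and 'low' (0-2.5V)."""
    if in1 == 'high' and in2 == 'high':
        return 'high'
    if in1 == 'high' or in2 == 'high':
        return 'mid'   # assumed: exactly one high input yields 2.5-5V
    return 'low'       # assumed: no high input yields 0-2.5V

def coarse(state):
    """The lumping we had tacitly applied: everything below 5V is 'low'."""
    return 'high' if state == 'high' else 'low'

# Fine-grained view: a function on three values; coarse-grained: the old binary gate.
for a in ('low', 'mid', 'high'):
    for b in ('low', 'mid', 'high'):
        out = fine_device(a, b)
        print(f"fine: {a},{b} -> {out}   coarse: {coarse(a)},{coarse(b)} -> {coarse(out)}")
```

Both descriptions are true of the same artifact; which one we use as our ‘computer’ is itself a choice.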

So what the system computes is a matter of singling out and lumping together behaviors. But this singling out and lumping together is itself just a function of how we conceptually slice up the world.

Except, of course, I’ve given a detailed theory of how this arises without any ghost in the machine, albeit in a non-computable way, as it must be: to explain the emergence of computation in terms of computation would just be circular.

As is your prerogative, of course. But as yet, you haven’t really engaged with my argument; you’ve simply tried to find proxy reasons to discard it out of hand: ‘not falsifiable’, ‘ghost in the machine’, ‘no evidence’. It’s of course hard work to engage with unfamiliar topics on their own terms, but I find that whenever I think I have a simple answer to a complicated question, such as ‘only falsifiable notions deserve my attention’, what’s really at the root of it is me trying to shirk the cognitive burden of actually engaging with the ideas as presented.