Conscious experience is self-representational access to intrinsic properties

I will disagree. The context that pulls your response here is the following:

By claiming

To pull that discussion here and then claim this is the wrong place for it is, I believe, mistaken.

I will argue that Einstein was wrong, that he had it exactly backwards, and that he was the lesser for it.

His philosophical commitments kept him from accepting a perspective that was supported by the evidence but did not fit his philosophy, simplistically summarized as: "God does not play dice."

The "instrumentalists," not being anchored to any extant philosophy, were able to describe a reality beyond their ability to fully imagine, beyond even Einstein's imagination.

To bring this back to your model and putative AI awareness … yes, I am on the side that holds that making falsifiable predictions is important.

A model should at least be able to make falsifiable predictions about correlates of consciousness in humans.

It is also important to decide where the burden of proof for consciousness in an other (be it an AI, another species, or something completely alien) should lie, and what standard of proof should apply. I don't have a good answer for that, and I have not heard one either.