Those are all good points, thank you. They certainly can't be readily dismissed, and yet I wonder whether they're necessarily fatal obstacles to the emergence of artificial consciousness. I have several reasons for believing they may not be.
For instance, even if an ideal general reasoning agent like AIXI has been shown to be uncomputable, we can still posit sufficiently close computable approximations that do essentially the same thing. Likewise, even if quantum processes do play a role in cognition (and that is by no means established), it remains an open question whether they are essential or merely optimizations; and even if they turned out to be both real and essential, there is nothing to say they couldn't be approximated computationally. And finally, one could posit that some form of intentionality might emerge in future AI systems that seems strange in human terms but is "good enough" to drive novel emergent behaviours.
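To make that first point a little more concrete, here's a toy sketch of the general move from an uncomputable ideal to a computable stand-in. Solomonoff induction, the predictive core of AIXI, mixes the predictions of *all* programs weighted by a 2^-length prior, which no machine can actually compute; but one can enumerate a small, bounded set of predictors and mix those instead, in the spirit of resource-bounded variants like Hutter's AIXItl or MC-AIXI-CTW (Veness et al., 2011). Everything below – the five-program "language", the description lengths, the scoring – is invented for illustration and comes from no actual AIXI implementation:

```python
# Toy sketch: a computable, resource-bounded stand-in for the
# uncomputable Solomonoff mixture. We enumerate a tiny fixed set of
# next-bit predictors instead of all programs.

PROGRAMS = [
    # (description length in bits, predictor mapping a bit history to a next-bit guess)
    (1, lambda h: 0),                         # "always 0"
    (1, lambda h: 1),                         # "always 1"
    (2, lambda h: h[-1] if h else 0),         # "repeat the last bit"
    (2, lambda h: 1 - h[-1] if h else 0),     # "alternate"
    (3, lambda h: int(2 * sum(h) > len(h))),  # "majority vote"
]

def predict_next(history):
    """Mix all programs' predictions, weighting each by a 2^-length prior
    times how well it retrodicted the history (with Laplace smoothing)."""
    score = {0: 0.0, 1: 0.0}
    for length, prog in PROGRAMS:
        prior = 2.0 ** (-length)              # shorter programs get higher prior
        hits = sum(prog(history[:i]) == bit for i, bit in enumerate(history))
        likelihood = (hits + 1) / (len(history) + 2)
        score[prog(history)] += prior * likelihood
    return max(score, key=score.get)

print(predict_next([0, 1, 0, 1, 0, 1, 0]))    # the "alternate" program dominates -> 1
```

The real approximations scale the same idea enormously (Monte-Carlo search over a learned environment model rather than five hard-coded rules), but the principle is identical: you never need the uncomputable ideal itself, only something that converges toward its behaviour within your resource budget.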
On a bit of a side note – not just as a response to you, but as something of possible interest to all – there's an important paper reviewing the capabilities and potential of language models: Beyond the Imitation Game: Quantifying and Extrapolating the Capabilities of Language Models [PDF]. It introduces a large set of benchmarks (called BIG-bench, for "Beyond the Imitation Game") for assessing the capabilities of these rapidly evolving AI systems. The paper is from last June, which is practically an eternity given how fast these systems are improving, but it's still very informative.
I found this part particularly relevant on the topic of emergence:
Quantity has a quality all its own
Massive increases in quantity often imbue systems with qualitatively new behavior. In science, increases in scale often require or enable novel descriptions or even the creation of new fields (Anderson, 1972). Consider, for instance, the hierarchy from quantum field theory to atomic physics, to chemistry, to biology, and to ecology. Each level demonstrates new behavior and is the subject of a rich discipline, despite being reducible to a bulk system obeying the rules of the levels below.
Language models similarly demonstrate qualitatively new behavior as they increase in size (Zhang et al., 2020e). For instance, they demonstrate nascent abilities in writing computer code (Hendrycks et al., 2021a; Chen et al., 2021; Austin et al., 2021; Schuster et al., 2021b; Biderman & Raff, 2022), playing chess (Noever et al., 2020; Stöckl, 2021), diagnosing medical conditions (Rasmy et al., 2021), and translating between languages (Sutskever et al., 2014), though they are currently less capable at all of these things than human beings with modest domain knowledge. These breakthrough capabilities (Ganguli et al., 2022) have been observed empirically, but we are unable to reliably predict the scale at which new breakthroughs will happen. We may also be unaware of additional breakthroughs that have already occurred but not yet been noticed experimentally.