The next page in the book of AI evolution is here, powered by GPT-3.5, and I am very, nay, extremely impressed.

It does lead to differences, though. The set of uncomputable functions is vastly greater than the set of computable ones, and each uncomputable function can be written in terms of a computable one together with an infinite algorithmically random sequence. Thus, access to such a sequence allows a computer to answer questions (such as the halting problem, or the question of whether a quantum system has a gapped spectrum) that no unaugmented computer could answer. Such a system could also implement the AIXI agent, which I think has the best claim to being a general-purpose intelligence among the proposals I’m somewhat familiar with.
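
To illustrate the shape of that decomposition, here is a toy sketch of my own (the oracle bits are simply stipulated, and a real oracle would have to be an infinite, algorithmically random sequence rather than a hand-picked list): the computable part is a trivial lookup procedure, and all of the uncomputable work lives in the sequence itself.

```python
# Toy sketch: the "computable part" is an ordinary lookup, while everything
# uncomputable is packed into the oracle bit-sequence, here truncated to a few
# bits stipulated by hand. The full infinite sequence is what no unaugmented
# computer could ever produce on its own.

toy_programs = [
    "loop forever",                                    # does not halt
    "add two numbers",                                 # halts
    "search for a counterexample that never exists",   # does not halt
]
ORACLE_BITS = [0, 1, 0]  # 1 = halts, 0 = does not; decided by hand, not computed

def halts(i, oracle=ORACLE_BITS):
    """Perfectly computable query: given the oracle, halting becomes a lookup."""
    return bool(oracle[i])

print(halts(1))  # True -- answerable only because the oracle supplies the bit
```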

This has real consequences. Consider a cop trying to catch a robber who hides in one of two houses. In each round, each of them can choose either to switch houses or to stay. The robber is caught if both end up in the same house. Clearly, a random strategy for the cop will, with probability one, lead to success in the long term.

But now suppose that both the cop and robber are limited to computational means. Then, there always exists a strategy for the robber to evade the cop indefinitely: simply implement whatever algorithm the cop follows, and anticipate their moves.

So the difference randomness makes, in this case, is that the game transforms from a certain win for the cop in the long-term limit into a guaranteed win for a robber with the right strategy. The properties of the game change with the addition of randomness.
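
For illustration, here is a toy simulation of my own (the cop’s algorithm is a stand-in ‘switch every round’ rule; any other computable rule would do): against the deterministic cop, the simulating robber is never caught, while against a coin-flipping cop it is caught almost immediately.

```python
import random

def deterministic_cop(history):
    # Some fixed, computable rule: switch houses every round.
    return history[-1][0] ^ 1 if history else 0

def predictive_robber(history):
    # The robber runs the cop's own algorithm and goes to the other house.
    return deterministic_cop(history) ^ 1

def random_cop(history):
    # A cop with access to genuine (here: pseudo-) randomness.
    return random.randint(0, 1)

def play(cop, robber, rounds=1000):
    history = []
    for t in range(rounds):
        c, r = cop(history), robber(history)
        if c == r:
            return t          # caught in round t
        history.append((c, r))
    return None               # evaded every round

print(play(deterministic_cop, predictive_robber))  # None: never caught
print(play(random_cop, predictive_robber))         # small number: caught quickly
```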

Likewise, in a computable world, questions regarding quantum measurement outcomes, or regarding whether there is an energy gap in the spectrum of certain systems, would in principle be answerable. But the former stands in a certain tension with the various no-go theorems of quantum mechanics, which prompted Feynman to claim that no classical universal device could represent the results of quantum mechanics. This probably needs some qualification, because Feynman likely believed in von Neumann’s no-go theorem, which is now widely regarded as erroneous, but one can still prove that either the hidden-variable model is noncomputable, or it must lead to superluminal signaling. Consequently, the attempt to simulate it on a classical computer would indeed yield manifest deviations from what we believe holds in the real world, namely by allowing superluminal information transfer.
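
To make that tension concrete, here is a small numerical sketch of my own (not part of the original argument) of the CHSH quantity for a singlet pair: any local classical model is bounded by 2, while the quantum prediction reaches 2·√2.

```python
import numpy as np

def correlation(theta_a, theta_b):
    # E(a, b) = -cos(theta_a - theta_b) for spin measurements on the singlet state
    return -np.cos(theta_a - theta_b)

a1, a2 = 0.0, np.pi / 2            # Alice's two measurement angles
b1, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two measurement angles

S = (correlation(a1, b1) - correlation(a1, b2)
     + correlation(a2, b1) + correlation(a2, b2))

print(abs(S))  # ~2.828 > 2: violates the CHSH bound obeyed by local classical models
```

The value 2·√2 is the Tsirelson bound; the classical (CHSH) bound is 2, and reproducing the quantum value within a local classical model is exactly what Bell’s theorem rules out.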

That gets the burden of proof the wrong way around. You claim that consciousness can be instantiated computationally; to oppose this, I only need to show that it’s possible this isn’t the case, which leaves the question open. I don’t have to show that it can’t be computationally instantiated. If you want me to cross a bridge, just arguing that it’s possible it won’t fall down isn’t going to move me much.

Regardless, I actually have provided several arguments against the idea of computationalism. I have pointed to AIXI, which is an attempt at a general problem-solving agent that turns out not to be computable; I’ve pointed to the uncomputability of quantum mechanics and its possible relevance for cognition; and I have pointed to my own theory, which runs into undecidable questions. I have also argued against the possibility of consciousness via computation more generally.

So I’m not just coming up with this out of the blue. In fact, I started out as a fervent supporter of the computational thesis, and I still definitely see its attraction. It just turns out not to work.

How do you figure? It seems people are quick to throw around these supposedly ‘obvious’ truths, without really bothering with any argument I can discern. Well, it was obvious to people for a long time that something as complex as the eye or a cell could only come about due to ‘some kind of concept formation somewhere’, but that just turned out to be wrong.

On the other hand, I go through the trouble of formulating a proof that, no, there is no concept formation going on anywhere.

But quite apart from that, given what we know about how ChatGPT works, I just don’t see where one might believe concepts to come in. We know exactly what it knows about any given word: what words it typically occurs together with, what position in the text it occupies in the given case (which together yield its encoding), and what other words strongly influence it (the ‘attention’ mechanism). These are all data available about a text without paying its meaning any mind; I could harvest them from texts in Arabic without any idea of their meaning, without any concept of what the words refer to. All I would know is that these squiggles often occur next to those ones, that this particular squiggle is the third in a sequence, and that if I jiggle those squiggles, that squiggle is also likely to change.

How does that give me a concept of what the squiggle means?
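
To put that in code-shaped terms, here is a minimal numpy sketch of my own (a toy single attention head, not ChatGPT’s actual implementation and vastly smaller): the only ingredients are token vectors standing in for co-occurrence statistics, a positional signal, and mixing weights. Nothing in it refers to what the tokens mean.

```python
import numpy as np

np.random.seed(0)
vocab = ["these", "squiggles", "occur", "together"]
d = 8

# Random vectors stand in for embeddings that a real model learns from co-occurrence.
token_emb = {w: np.random.randn(d) for w in vocab}

def pos_enc(pos, d=d):
    # Sinusoidal signal encoding which slot in the sequence the token occupies.
    i = np.arange(d // 2)
    angles = pos / (10000 ** (2 * i / d))
    return np.concatenate([np.sin(angles), np.cos(angles)])

# Each token's representation: identity (co-occurrence vector) plus position.
X = np.stack([token_emb[w] + pos_enc(p) for p, w in enumerate(vocab)])

# One attention head: each token becomes a weighted mix of the others; the
# weights say only which tokens "jiggle" which.
Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(d)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
output = weights @ V

print(weights.round(2))  # influence of each token on each other token
```

The weight matrix printed at the end is the whole story of which squiggle influences which; the referents of the words never enter into it.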

Again, that we don’t know precisely what happens doesn’t entail that we can’t put any limits on what can happen. Emergence isn’t magic: the base either supports certain phenomena, or it doesn’t, and if it doesn’t, no amount of piling on more of it will make them pop into being from nothing.