The Miselucidation of Whack-a-Mole

Without getting into another extensive back-and-forth about what now seem to be largely differences of opinion about the future of AI, I just want to highlight this part where I believe you’re misrepresenting my statements, essentially doing what you’re accusing me of. As you say, this is indeed the Pit, so if I wanted to insult you, or to impugn your character as you suggest, I wouldn’t need to be coy about it. I have no wish to do so and I would think that would be perfectly clear.

I didn’t equate you with the misguided skepticism of Dreyfus or Searle; what I was trying to say was that some of your dismissive statements “effectively” align with their skepticism. I’m thinking of things like this, which followed on from your dismissal of GPT as nothing more than a stochastic parrot:

There are two questions here, so let me deal with the first one first. If you’re in any way suggesting a comparison between Eliza and the latest generation of LLMs, that’s just ridiculous. They’re not remotely comparable. It’s like comparing a 1930s-era hand-cranked calculator with a modern supercomputer that can play grandmaster-level chess or score better than the vast majority of humans on intelligence tests, on the grounds that both machines do computations so they must be qualitatively the same. The transformer model’s novel approach to neural networks is largely a matter of efficiency, allowing sequences of tokens to be processed in parallel rather than sequentially, and it is only one of a great many factors that make LLMs so powerful, not the least of which is the massive computing power behind them.
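To make that parallel-versus-sequential point concrete, here is a toy sketch in Python/NumPy (my own illustration, not anything from an actual transformer implementation; the dimensions and weights are arbitrary). The RNN-style loop must process tokens one after another because each step depends on the previous hidden state, while the attention-style computation updates every token in one batch of matrix multiplies:

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 4, 8                      # 4 tokens, 8-dim embeddings (arbitrary)
x = rng.normal(size=(seq_len, d))      # toy token embeddings

# Sequential (RNN-style): each step depends on the previous hidden state,
# so the tokens must be processed one at a time.
W = rng.normal(size=(d, d)) * 0.1
h = np.zeros(d)
for t in range(seq_len):
    h = np.tanh(W @ h + x[t])          # step t can't start until step t-1 is done

# Parallel (attention-style): every token attends to every other token via
# a few matrix multiplies, with no step-by-step dependency between tokens.
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv
scores = Q @ K.T / np.sqrt(d)          # all pairwise interactions at once
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
out = weights @ V                      # one output per token, computed together

print(out.shape)  # (4, 8): all four tokens updated in a single pass
```

The efficiency win is that the second computation is a handful of big matrix operations, which parallel hardware like GPUs handles extremely well, whereas the first is an inherently serial chain.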

I played with Eliza a bit when it was first released to the public in the late 60s, and I still have it (purely for its interest as a historical artifact) on the SimH PDP-10 simulator on my PC. In fact, I just fired up my virtual PDP-10, ran the original Eliza program, and posed the same question to it that I had earlier posed to ChatGPT 3.5. Here are the two conversations.

FYI, in the Eliza dialog, “WORKER” is the name of the LISP function that starts the program.

Eliza:

.r eliza

(WORKER)
HELLO.

HOW DO YOU DO. PLEASE STATE YOUR PROBLEM

IS LANGUAGE FUNDAMENTAL TO HUMAN COGNITION?

I AM NOT SURE I UNDERSTAND YOU FULLY

IT’S A PRETTY STRAIGHTFORWARD QUESTION

PLEASE GO ON

IS LANGUAGE INTRINSIC TO INTELLIGENCE?

WHAT DOES THAT SUGGEST TO YOU

IT SUGGEST TO ME THAT YOU’RE AN IDIOT.

WHAT MAKES YOU THINK I AM AN IDIOT
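For contrast, that last Eliza reply falls out of nothing more than keyword matching and pronoun swapping. Here is a minimal sketch of the technique in Python (my own illustration, not Weizenbaum’s actual script or rule set), enough to reproduce the “WHAT MAKES YOU THINK I AM AN IDIOT” exchange:

```python
import re

# Each rule: a keyword pattern plus a reassembly template. The captured text
# is echoed back after swapping first/second-person pronouns -- that's the
# whole trick. These rules are invented for illustration only.
RULES = [
    (re.compile(r"\byou are (.*)", re.I), "WHAT MAKES YOU THINK I AM {0}"),
    (re.compile(r"\byou're (.*)", re.I),  "WHAT MAKES YOU THINK I AM {0}"),
    (re.compile(r"\bsuggests? (.*)", re.I), "WHAT DOES THAT SUGGEST TO YOU"),
    (re.compile(r"\bis (.*)\?", re.I),    "I AM NOT SURE I UNDERSTAND YOU FULLY"),
]
DEFAULT = "PLEASE GO ON"

SWAPS = {"me": "you", "my": "your", "i": "you", "you": "me"}

def swap_pronouns(text: str) -> str:
    return " ".join(SWAPS.get(w.lower(), w) for w in text.split())

def reply(line: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            groups = [swap_pronouns(g).upper().rstrip(".?!") for g in m.groups()]
            return template.format(*groups)
    return DEFAULT

print(reply("It suggest to me that you're an idiot."))
# WHAT MAKES YOU THINK I AM AN IDIOT
```

The program has no model of what an “idiot” is; it simply found a keyword and turned the sentence fragment around. That is the qualitative gap I’m pointing at.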

ChatGPT:

Is language fundamental to human cognition?

Language is often considered fundamental to human cognition, and it plays a crucial role in human thought and communication. While it’s not the only cognitive tool humans possess, it is one of the most distinctive and powerful ones. Here are some reasons why language is viewed as fundamental to human cognition:

  1. Complex Communication: Language allows humans to convey and share information with great precision and complexity. It enables us to express abstract ideas, discuss the past and future, and convey emotions and desires. This level of communication is unparalleled in the animal kingdom.

  2. Thinking and Problem-Solving: Language is closely tied to thinking and problem-solving. It allows us to formulate and manipulate abstract concepts, plan for the future, and engage in logical reasoning. Without language, many complex cognitive tasks would be severely limited.

  3. Social Interaction: Language is a key tool for social interaction. It enables us to establish and maintain relationships, cooperate with others, and transmit cultural knowledge across generations. It plays a significant role in human culture and society.

  4. Learning and Education: Language is essential for learning. It is the medium through which we acquire knowledge, transmit information, and educate future generations. Without language, the process of teaching and learning would be drastically different.

  5. Memory and Thought Organization: Language aids memory and thought organization. We use language to label and categorize our experiences, making it easier to remember and make sense of the world around us. It helps us create mental models and narratives.

  6. Self-Awareness: Language contributes to self-awareness and introspection. It allows us to articulate our thoughts and feelings, reflecting on our own experiences and motivations. This capacity for introspection is closely tied to our ability to use language.

While language is fundamental to human cognition, it’s important to note that it is not the sole cognitive tool humans possess. Humans also engage in non-linguistic forms of cognition, such as visual and spatial reasoning, mathematical thinking, and emotional processing. Additionally, there are individuals with language impairments who still demonstrate sophisticated cognitive abilities, suggesting that other cognitive mechanisms can compensate to some extent for the absence or limitations of language. Nevertheless, language remains a central and distinctive feature of human cognition.

Moving on to your question about why I’m impressed by GPT, the above example is a small part of the answer, but I want to stress that I am not – as some might suggest – simply bamboozled by the authoritative-sounding language. Yes, GPTs can be hilariously wrong – sometimes about apparently simple matters – but they are right much more often, as evidenced by the extensive list of advanced tests of knowledge, intelligence, and problem-solving skills that they have aced. I listed some of those tests above (and it’s only a partial list). It’s not the architecture per se that impresses me; it’s the end result.

Yet another skill that I found impressive was in connection with a question designed to test a person’s financial numeracy that was discussed in another thread. The question itself was relatively simple: given a table of credit cards organized into two categories based on whether the cardholder pays off balances in full each month or carries a balance, describe the differences between the two categories. The impressive part was that there was no easy way to convert the table into a well-formatted question for ChatGPT; the table was an unholy mess, with everything misaligned. A human would have had a hard time with it. ChatGPT interpreted it perfectly, demonstrating real power at processing unstructured data, something computers have traditionally struggled with.

Another example is its ability to do language translation with an apparently deep understanding of real-world phenomena and of nuanced meanings in both the source and target languages. In the early days of AI, researchers despaired of ever conveying contextual understanding to machine translation; LLMs appear to do it effortlessly.