The next page in the book of AI evolution is here, powered by GPT-3.5, and I am very, nay, extremely impressed.

So what? As already noted in a previous post, ChatGPT understands exactly what this means. This part of its analysis is a matter of basic semantics. Without quibbling over what “understanding” means, the point is that if it were capable of producing images the way its generative LLM produces relevant text responses – by no means a stretch – it would produce the appropriate image. This gives no insight into human cognition, or into how human cognition does or does not differ from ChatGPT’s.

Except that a longstanding hypothesis has held that thoughts involve the manipulation of symbols, and that these symbols have an internal mental grammar and semantics. Jerry Fodor called this “the language of thought”, or Mentalese, and the hypothesis gained wide attention and acceptance following his 1975 book The Language of Thought. The concept is a cornerstone of the computational theory of mind, and it suggests that there may be more parallels than previously believed between human and machine cognition.