The next page in the book of AI evolution is here, powered by GPT-3.5, and I am very, nay, extremely impressed

This seems like an appropriate opportunity to quote the great Marvin Minsky: “When you explain, you explain away”. This was a theme he reiterated often in the context of AI: that when you explain the internal workings of some AI engine, it appears to be revealed as merely a “mechanistic trick” and not “real intelligence”.

As Alan Turing recognized long ago, this is completely the wrong evaluation criterion, and it’s exactly what you’re doing. The rote mechanistic functioning you describe makes ChatGPT sound like nothing more than a sophisticated version of ELIZA. For the purposes of this argument, the internals of ChatGPT must be regarded as completely irrelevant. The only pertinent question is whether its responses are consistent with the user’s inputs, in a manner sufficiently comprehensive and general across a wide range of subject matter, that it can be said to have “understood” those inputs. It is futile to philosophize about what “understanding” really means in some absolute or abstract sense; one must – and can only – judge by the results.

So when I asked it for recipe advice recently, it came up with some good, relevant suggestions; and when I asked about alternate ingredients, it offered various degrees of approval, disapproval, caution, and tips. It retains context, so I can ask it follow-up questions and it knows what I’m talking about. Is that “understanding”? It sure gets my vote. The real point here is how general it is – it’s not subject-limited.
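(An aside for the curious: the context retention isn’t magic. If you talk to the same class of model through OpenAI’s chat API rather than the ChatGPT web interface, you have to resend the conversation history yourself on every call – the web interface just does that bookkeeping for you. A minimal sketch, assuming the openai Python package and an API key in the environment:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The model is stateless between calls; its "memory" is just the resent history.
history = [{"role": "user", "content": "Suggest a simple weeknight chicken recipe."}]
reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
history.append({"role": "assistant", "content": reply.choices[0].message.content})

# The follow-up works because the prior turns ride along in `messages`.
history.append({"role": "user", "content": "Could I substitute tofu for the chicken?"})
reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
print(reply.choices[0].message.content)
```

Whether that mechanism counts as “understanding” is, of course, exactly the question at issue – and per Minsky, the mechanism shouldn’t be what settles it.)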

Now let’s go to @Dr.Strangelove’s conversation with ChatGPT about “a blue dog jumping in pink grass.” Just for fun, I followed up with it, and actually learned a few things. Although it’s unusual, some dogs can appear somewhat blue – the Weimaraner, the Kerry Blue Terrier, and others. And some grasses, like pink muhly grass, can be pink or pinkish.

We concluded thus:

Me: So if I asked you to imagine a blue dog jumping through pink grass, this could actually be a realistic picture, right?

ChatGPT: Yes, it is possible to imagine a blue dog jumping through pink grass, especially if you are imagining a specific breed that has a blue coat color, like a Weimaraner, and a type of grass that is pink, like pink muhly grass. While it may not be a common or natural occurrence, it is possible to create a picture or scene that includes these elements. Ultimately, it depends on your creativity and imagination, and whether you want to create a realistic or imaginative picture.

The above interaction was mostly just for fun, but the key question is: does it have sufficient “understanding” to be able to “imagine” that scene, and thereby create a picture of it, given a picture-creation subsystem? I don’t see how you can deny it – and denying it was the entirety of your argument. It certainly seems to know what a dog is, and what grass is, at least well enough to fetch the appropriate images from a clip-art library and assemble them in an appropriate setting. Where does “token-matching” end and “true understanding” begin?

The above evidence seems to belie that claim. Again, remember Minsky’s stricture about “explanations” versus observed performance.

I’ll just say this and get it off my chest. I’m not qualified to defend Fodor against his critics (nor is this the venue for it), but (per Wikipedia) “[Fodor’s] writings in these fields laid the groundwork for the modularity of mind and the language of thought hypotheses, and he is recognized as having had ‘an enormous influence on virtually every portion of the philosophy of mind literature since 1960’.”

He was a towering figure in cognitive science who, I daresay, greatly overshadowed his various critics.