As far as AI goes, is it possible that intelligence is actually much simpler than we originally thought?

Well, one way to make the case is that I remember ChatGPT 3.5 having this problem, and now, with no fanfare or indication of any major upgrade, it no longer does – at least, it can count the “r”s in “strawberry” correctly. IOW, it’s a trivial problem that stems from an implementation artifact (probably just the way the word was tokenized) and has absolutely no bearing on GPT’s impressive core capabilities, many of which exceed the problem-solving and analogizing capabilities of most humans.
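For anyone curious what that tokenization artifact actually looks like, here’s a minimal sketch using OpenAI’s open-source tiktoken library. I’m assuming the cl100k_base encoding used by models of that generation; the exact split isn’t the point. The point is that the model never sees ten individual letters, just a couple of multi-character chunks, so “how many r’s” is a question about units it doesn’t directly observe.

```python
import tiktoken  # OpenAI's open-source BPE tokenizer library

# cl100k_base is the encoding used by the GPT-3.5/GPT-4 era models (an assumption
# for illustration; other models use other encodings).
enc = tiktoken.get_encoding("cl100k_base")

token_ids = enc.encode("strawberry")
pieces = [enc.decode([tid]) for tid in token_ids]

# Typically prints a handful of multi-character subword chunks rather than
# the individual letters s-t-r-a-w-b-e-r-r-y.
print(token_ids)
print(pieces)
```

Whatever the exact chunks turn out to be, the model is reasoning over those opaque token IDs, not over characters, which is why a letter-counting question is a poor probe of its actual capabilities.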

Which brings us back to the argument we were having. I’m sorry you feel I’m “accusing” you unfairly, and maybe the problem is that we’re just talking past each other. I refer to this latest quote in particular:

We obviously have quite different interpretations of what “human level intelligence” means.

It’s pretty clear to me what the OP was asking about, but let’s not make this argument about micro-analyzing the nuances of the OP’s words; this is not a Papal proclamation. The general thrust of this thread, and the reasonable question here, is whether or not contemporary AI has achieved “human level intelligence”, which to me very obviously refers to a human level of intelligence in solving a specific class of problems, and has nothing at all to do with AGI.

Maybe I’m being overly cynical here, but you have in the past been exceptionally dismissive of AI, starting a whole thread about how ChatGPT doesn’t really “understand” anything, and IIRC being equally dismissive of the whole idea of computational intelligence. Computational intelligence is not just the foundation of AI, but an important part (though only a part) of the model of human cognition put forward over the past several decades by many highly respected researchers working at the intersection of cognitive science and AI, such as Jerry Fodor, Hilary Putnam, and David Marr, among many others.

So when I saw the claim that GPT can’t be considered intelligent because it fails at trivially simple tasks, even though it can solve problems in logic that would baffle most humans, I read it as more of the same dismissiveness.

My argument against it is the same basic one put forward by Marvin Minsky 60 years ago, and by Alan Turing long before that: “If it acts intelligent, then it is intelligent” – at least, in that particular domain. Minsky’s particular frustration was with those who “looked under the covers”, thought that they more or less grasped the general principle of how a particular AI worked, and declared that it was “just a trick”. As Minsky said, “when you explain, you explain away”. The same thing is happening now with GPT, the skeptics apparently undaunted by the fact that the researchers and developers themselves don’t really understand how novel emergent properties suddenly appear at certain levels of scale.