As far as AI goes, is it possible that intelligence is actually much simpler than we originally thought?

There’s really no ‘feeling’ about it; calling my words a ‘disingenuous distortion’ is an accusation. The question is whether it’s accurate, not whether it’s accusatory.

Still, it’s generally considered good form to keep a thread on topic. Even if you’re not willing to assent to the explicit formulation ‘human-level intelligence’, the OP also asked about AI’s IQ, which is a measure intended to quantify intelligence in the form of the g-factor, whose name derives precisely from the word ‘general’.

Furthermore, there seems to be little of interest in asking whether machines have achieved or surpassed human equivalence in specific circumscribed tasks: obviously they have, centuries ago (indeed, possibly millennia ago, if you’re willing to count the abacus or the Antikythera mechanism or what have you). That’s, after all, why we build them: because they can augment specifically circumscribed human capabilities, such as arithmetic, beyond what would be possible for the individual human. If current AI were perceived as just more of the same, there would hardly be any discussion. But its promise (and threat), as these discussions go, is the wholesale replacement of human labor, for which it would first have to be wholesale equivalent to human performance: i.e. equal to human-level intelligence. That, if any, is the interesting question at issue.

I’m not ‘dismissive’ of anything; in fact, in this very thread I’ve admitted my surprise at o3’s performance on the FrontierMath test. I don’t dismiss this in the slightest.

You also seem to be alleging that I oppose computationalist views out of some intrinsic distaste, perhaps caused by a misguided belief in human specialness, or ensouledness, or God’s favor, or whatever. But I’ve come to the position I now hold kicking and screaming; I started out believing computationalism to be obvious: here’s me arguing for conscious, creative computers. It’s just that I eventually realized there are huge problems with that position, problems I found I could not easily dismiss.

So, I am where I am because I have encountered arguments that didn’t seem to leave any different, honest opinion on the table. The thread you link, for instance, discusses a mathematical argument to the effect that the bare structure of language—words (or tokens) and their relations—fails to uniquely specify its intended model, i.e. a mapping of terms to things in the world. For any such model, it’s possible to construct another one such that the terms now refer to entirely different things. In other words, there’s no fact of the matter whether an LLM means mouse or house when it uses the word ‘mouse’. This argument isn’t wholly my own; I essentially just applied it to LLMs in my entry into the last essay competition by the Foundational Questions Institute (FQxI). The argument was first formulated by Putnam, and its original form goes back to Newman’s objection to Russell’s structural realism.
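To give a flavour of the permutation trick, here’s a deliberately tiny sketch of my own for this post (not the construction from the FQxI essay): take any interpretation of a small vocabulary, permute the objects, and push the relation through the same permutation; exactly the same sentences come out true, so the structure alone cannot distinguish the intended reference scheme from the deviant one.

```python
# Toy illustration of the permutation argument: same true sentences,
# different reference. The vocabulary, objects and relation are made up.

# 'Intended' interpretation: which object each word names, and which
# pairs of objects stand in the relation 'smaller than'.
ref = {"mouse": "the mouse", "house": "the house", "cat": "the cat"}
smaller = {("the mouse", "the cat"),
           ("the cat", "the house"),
           ("the mouse", "the house")}

def true_sentences(reference, relation):
    """All word-level sentences 'X is smaller than Y' made true by an interpretation."""
    return {(w1, w2) for w1 in reference for w2 in reference
            if (reference[w1], reference[w2]) in relation}

# A deviant interpretation: permute the objects and carry the relation along.
perm = {"the mouse": "the house", "the house": "the cat", "the cat": "the mouse"}
deviant_ref = {word: perm[obj] for word, obj in ref.items()}
deviant_rel = {(perm[a], perm[b]) for a, b in smaller}

# The very same sentences hold, although 'mouse' now refers to the house.
assert true_sentences(ref, smaller) == true_sentences(deviant_ref, deviant_rel)
print("deviant reference of 'mouse':", deviant_ref["mouse"])
```

Nothing in the words-and-relations structure picks out one of these two interpretations over the other; that choice has to come from outside the structure.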

However, that argument is really just a special case; what made me skeptical of computationalism in the first place is the famous class of triviality arguments, originated, again, by Hilary Putnam, with my own version giving an explicit construction showing that computation isn’t some inherent, objective aspect of a system, but a question of how the system is used (by the concrete example of using one and the same system, at the same time, to perform different computations in exactly the same manner). Thus, there is no fact of the matter that any system, in and of itself, specifically performs a particular computation, and hence, in particular, not the hypothetical computation producing a mind.
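For a concrete, if standard, illustration of how use fixes the computation (this is the familiar gate-relabelling example, not my own construction from the linked article): one and the same physical device counts as computing AND under one labelling of its voltages and OR under the inverted labelling, with the physics untouched.

```python
# Toy illustration: the same fixed physical behaviour, read through two
# labelling conventions, counts as two different computations.

def physical_gate(v1: str, v2: str) -> str:
    """Fixed physical behaviour: output is 'high' iff both inputs are 'high'."""
    return "high" if v1 == "high" and v2 == "high" else "low"

label_A = {"high": 1, "low": 0}   # convention A
label_B = {"high": 0, "low": 1}   # convention B (inverted)

def computed_function(label):
    """The Boolean function the device 'performs' relative to a labelling."""
    table = {}
    for v1 in ("low", "high"):
        for v2 in ("low", "high"):
            table[(label[v1], label[v2])] = label[physical_gate(v1, v2)]
    return table

# Under A the device computes AND; under B, the identical physics computes OR.
assert computed_function(label_A) == {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
assert computed_function(label_B) == {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}
```

Which computation is being performed depends on the labelling, i.e. on how the device is used, not on anything intrinsic to the device itself.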

Finally, I have tried to meet the explanatory challenges posed by human consciousness, and produced a model that explicitly depends on undecidable propositions and hence could not be implemented on any computer, even if the notion of computation were perfectly objective. So I’m brought to skepticism about computationalism not willy-nilly, but by the combination of (to me, at least) convincing a priori arguments against its possibility, and the a posteriori creation of a model that actually does incorporate non-computable elements. Of course, all of this is provisional: better arguments may yet come up to make this all moot. But until they do, simply dismissing these issues would be intellectually dishonest.

So why are you happily pointing to the authority of Hilary Putnam in defense of computationalism, while ignoring his turn away from the idea? If you’re happy to follow him in (after all, the whole thing was basically his idea), why not follow him out again?

Even if I agreed to that—and there are trivial objections to it, such as the famous lookup table, or seeming intelligent simply by pure chance—that just bolsters my point: for then, if it fails to act intelligently, it also isn’t intelligent.
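To spell out the lookup-table objection with a deliberately silly sketch (the table and replies below are entirely made up, purely for illustration): a system whose every answer is a canned entry can look behaviorally competent while doing nothing anyone would want to call thinking.

```python
# Toy sketch of the lookup-table ('Blockhead') objection: competent-looking
# answers produced by nothing but retrieval from a pre-written table.
CANNED = {
    "What is 7 * 8?": "56.",
    "Are there infinitely many primes?": "Yes; Euclid's argument shows there are.",
}

def blockhead(prompt: str) -> str:
    """Answer purely by table lookup, with no reasoning whatsoever."""
    return CANNED.get(prompt, "I'd have to think about that.")

print(blockhead("What is 7 * 8?"))  # looks intelligent, computes nothing
```

A big enough table passes any purely behavioral test, which is exactly why behavior alone seems like a shaky criterion for intelligence.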

And yet, Minsky also predicted, in 1970, that “in from three to eight years we will have a machine with the general intelligence of an average human being”. You’re very fond of trotting out Dreyfus’ misses, but for some reason always seem to elide that his opponents were often far more off-base.

That’s really not true. There are two kinds of explanations—debunking and non-debunking. A debunking explanation is something like what happened with the recent spate of unusual drone sightings over New Jersey, which really just turned out to be perfectly ordinary drones, misidentifications, and a few parents testing out the equipment they got their kids for Christmas. But there are also non-debunking explanations, where further knowledge just deepens our understanding of the original phenomenon, e.g. the explanation of thermodynamics in terms of statistical mechanics. The prior terms remain perfectly valid, but are grounded in more fundamental notions. The trouble is just that so far, all explanations of artificial ‘intelligence’ have been debunking ones.