I don’t see any “contempt” in anything I said. The counterargument to the belief that human intelligence is unique is the vast set of intelligence benchmarks, previously thought unreachable, that AI has already achieved. It wasn’t all that long ago that the philosopher Hubert Dreyfus confidently declared that no computer would ever be able to play better than a child’s level of chess. Now that a computer can play at a grandmaster level, it’s no longer considered a valid benchmark, and with the emergence of conversational LLMs, the vaunted Turing test is no longer considered interesting. The fact that the measure of intelligence has to be a constantly moving target or the argument against AI falls apart should be evidence enough of the argument’s basic fallacy.
I would kindly ask that you stop making ridiculous accusations like this that are needlessly provocative and completely untrue. If you think I’ve “attacked” anyone, report it.
In fact, that is exactly what you did (see the marked-up quote below), in addition to using pejorative language, and now, in this post, introducing an argument referencing Hubert Dreyfus that no one here has advocated, just so you can undercut it to ‘prove’ your point. It is a disingenuous way of ‘winning the argument’ instead of engaging with the discussion in the o.p. or providing a counterexample to @Alessan’s observation… which may or may not be true; as it was presented as a personal anecdote, it would seem easy to provide a list of AI proponents who do not express “bone-deep contempt toward human intelligence and humanity in general”, and in fact I could come up with three recognizable names offhand, leading figures in artificial intelligence research, who are definitely not contemptuous of humanity or human intelligence.
In terms of the OP, I agree in some ways and disagree in others; I think it’s nuanced.
I think it can be misleading to look at this purely in terms of time spent rather than effort.
We didn’t begin with the belief that deconstructing or replicating intelligence would be difficult; we came to that belief after hundreds of the world’s smartest people attempted to make machines that could do tasks we consider trivial. (And, as mentioned, with the implicit assumption that tasks most humans can do must be “easy”, while tasks that only a subset of humans can do, like chess, must be “hard”.)
The progress of deep learning has been fantastic, and I’m expecting the astonishing speed of progress to continue for the foreseeable future (I think there’s a temptation for some to be dismissive of it because, frankly, it’s already a bit scary what AI can do and the potential effect on society).
But I think it does still have a considerable way to go for AGI, and we can’t even reverse engineer the systems we’ve made up to now.
Human cognition has been estimated to operate at about 10 bits/s. Assuming 16 hours of wakefulness a day and eighty years of life, that’s about 2 gigabytes of data. Now, I’m not going to claim that that’s all the data we have access to, but it seems clear that the human brain does much more with much less (also energy-wise) than current AI does. Just reading all of the training data of GPT-3 would take the average human reader around 80,000 years.
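As a rough sanity check on those figures (a back-of-envelope sketch only: the 10 bits/s rate, the ~45 TB of raw GPT-3 training text, the ~6 bytes per word, and the 250 words-per-minute reading speed are all assumed round numbers, not measurements):

```python
# Back-of-envelope check of the figures above; every input is a rough assumption.

def lifetime_bits(rate_bps=10, hours_awake=16, years=80):
    """Total cognitive throughput at an assumed rate of `rate_bps` bits per second."""
    seconds = hours_awake * 3600 * 365 * years
    return rate_bps * seconds

print(f"~{lifetime_bits() / 8 / 1e9:.1f} GB over a lifetime")  # ~2.1 GB

# Reading time for GPT-3's raw training text, assuming ~45 TB of text,
# ~6 bytes per word, and a reading speed of 250 words per minute.
words = 45e12 / 6
hours = words / 250 / 60
print(f"~{hours / 16 / 365:,.0f} years of 16-hour reading days")  # ~86,000 years
```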
So perhaps what current AI is telling us is that the measure of intelligence shouldn’t be simple performance, but rather the meta-ability to acquire new skills—where it seems we humans still have a considerable edge.
Well, in a certain sense, it’s clear that AI can’t possibly create new ideas: as Ted Chiang points out, any AI is a deterministic function of its input, so it can’t create any novel information. The question then becomes whether—and in what sense—we humans can, of course.
The EpochAI site hasn’t been updated yet with the latest results. But they did a small announcement here:
OpenAI (the makers of the model) don’t have any direct association with EpochAI (the makers of the test set) and definitely don’t have access to the problems.
The performance on the FrontierMath test is definitely significant progress. Earlier this year, Terence Tao claimed that the problems on there would elude AI ‘for several years at least’.
I don’t want to belabor this tired point much further and a mod may soon cut off this nonsense, but did you report this alleged “attack” on another poster? If it’s not reportable, it wasn’t an “attack” and you’re just putting your own unwarranted interpretation on innocent words, notwithstanding your copious use of yellow highlighting. Saying that an argument is “bizarre” is an expression of strong disagreement, and saying that somebody is “probably misinterpreting” evidence is in no way confrontational. Saying that this position on human intelligence “may upset some humanists” is a perfectly neutral statement.
I thought the counterargument you were looking for was that it’s not a sacrilege to consider that human intelligence isn’t unique and can be replicated by sufficiently advanced AI. The point of the Dreyfus anecdote was to highlight what a moving target the concept of “intelligence” is. To the detractors of AI, intelligence is simply defined as anything that AI hasn’t achieved yet.
But it appears you were asking for counterexamples to the assertion that AI researchers are misanthropic anti-humanists. Which I can easily address with examples.
I’ve known many AI researchers over the years, and none of them fit that stereotype. They tend to be, like most of their fellow research scientists, liberal academics. The late Marvin Minsky, for instance, considered one of the founders of the field of AI, was practically the poster child for a humanist, though he could have strong views on the technical issues in AI. An accomplished pianist, Minsky believed that while humans were certainly machines, they were machines whose intelligence emerged in a fundamentally different way than in any AI systems known at the time, which is still mostly true today. Another pioneer of AI, the great John McCarthy (affectionately called “Uncle John” by his students), strongly believed in the use of technology to advance material progress for humanity.
I mean, isn’t that exactly what you’d expect, though? Whenever we see AI achieve some new capability, while still failing at many tasks that are easy for humans, we learn that this capability apparently doesn’t take human-level intelligence. So of course there is a sliding scale regarding what our best guess for an ‘AGI-complete’ problem is.
I don’t know whether the phrase “we learn” is supposed to be sarcastic, but that’s how I’d interpret it – that as soon as some new threshold of intelligence is reached, skeptics of AI label it as “obviously not real intelligence” and move the goalposts once again.
The fact that AI systems fail at some tasks that are trivially easy for humans is explained by the fact that the underpinnings of that intelligence – how it was achieved and the knowledge base that it draws from – are very different from human intelligence, but not necessarily inferior. When an AI solves a problem in logic that has me stumped, at least temporarily, then it’s obviously exhibiting traits that are useful and important. When it fails at something that even a small child would know, I don’t believe it’s due to anything intrinsic in the model, but only due to the incompleteness of its world view.
Chess is one of the intelligence challenges that the skeptics have had to give up on. Hubert Dreyfus’s claim that no computer could ever play better than a ten-year-old child was shattered to his everlasting embarrassment way back in 1967:
In 1967, several MIT students and professors (organized by Seymour Papert) challenged Dr. Hubert Dreyfus to play a game of chess against Mac Hack VI. Dreyfus, a professor of philosophy at MIT, wrote the book What Computers Can’t Do, questioning the computer’s ability to serve as a model for the human brain. He also asserted that no computer program could defeat even a 10-year-old child at chess. Dreyfus accepted the challenge. Herbert A. Simon, an artificial intelligence pioneer, watched the game. He said, “it was a wonderful game—a real cliffhanger between two woodpushers with bursts of insights and fiendish plans … great moments of drama and disaster that go in such games.” The computer was beating Dreyfus when he found a move which could have captured the enemy queen. The only way the computer could get out of this was to keep Dreyfus in checks with its own queen until it could fork the queen and king, and then exchange them. That is what the computer did. Soon, Dreyfus was losing. Finally, the computer checkmated Dreyfus in the middle of the board.
Today, with the emergence of impressive LLMs, the Turing test no longer seems interesting to the skeptics, and the ever-moving target for intelligent behaviour moves on, even though the Turing test seemed like an ingenious way to sweep aside all the technicalities and simply have humans accurately assess whether the entity they were conversing with was intelligent (i.e., a human) or not. More than half a dozen alternatives, like the Winograd schema challenge, have been proposed to replace it.
Interestingly, GPT-4 and earlier versions have been put to the Turing test, and all have been successful. Of particular interest is how their personality traits compared with humans’; earlier versions tended to be more hostile, and later versions more altruistic.
There was nothing sarcastic about my post. The point is simple: we expect certain tasks to require human-level intelligence, and are proven wrong when it’s performed by a system that is obviously far from human-level intelligence.
The word “obviously” is doing some really heavy lifting here. So your argument is that when the AI community is challenged to produce an AI that can perform a certain intellectual task, and it does it, the counterargument is that it fails to also do some other completely unrelated tasks. Yes, that can be used as an argument against AGI, but not against the original proposition.
To put it another way, when I make a bet, the terms of the bet are clearly established beforehand. If the terms of the bet are changed after the fact because the loser didn’t like the outcome, I would definitely question the sincerity (and integrity) of that individual.
You seem to be thinking that I aim to defend Dreyfus in some way. I’m not; the prevailing opinion has long been that Dreyfus was essentially right about many of his criticisms of the kind of AI that was prevalent at the time he made them, but was wrong about certain others.
My point is solely that when we talk about ‘real intelligence’ little besides human intelligence can be meant, as we have no other yardstick to go by; hence, whenever we think we have something that takes ‘real intelligence’ to accomplish, and then it’s accomplished by some system that is clearly not human-level intelligence, we learn something new and valuable: that the proposed task didn’t take that kind of intelligence after all.
If it’s so obvious, then why wasn’t the obvious bit part of the original criteria? The whole point of the moving goalposts is that it isn’t obvious that current AIs are far from human-level intelligence.
I mean, I think it’s uncontroversial that no publicly available AI is at human level yet. As for the proprietary models, I can’t offer anything beyond the verdict of those who have tested them, e.g. Francois Chollet, who originated the ARC-AGI test:
Passing ARC-AGI does not equate to achieving AGI, and, as a matter of fact, I don’t think o3 is AGI yet. o3 still fails on some very easy tasks, indicating fundamental differences with human intelligence.
Count me among those that would not consider it appropriate to define “real” intelligence as the set of things that humans can do that AI cannot.
Firstly because it’s a moving target, of course.
And secondly because one day we may ultimately get to the point where the set of abilities of an AI far outstrips that of “real intelligence”, yet is still not “real” because it can’t judge a breakdance competition or whatever.
I think it’s better to define intelligence in terms of some kind of concrete, useful goals. But I know it’s non-trivial to formulate such goals.
Ultimately, computers are tools, designed to do a job. If they do the job well, then they’re good computers; if you later find out that they can also do something else, even better. Whether or not they’re “intelligent” is hardly relevant.
Why not? What specific capability is it that humans have that AIs lack? And if AIs someday achieve that capability, will you then agree that they are at human level? If not, then there’s obviously some other capability that’s relevant: What is that other capability?
And if you can’t name those specific capabilities, then they aren’t so obvious, are they?
That’s a relatively small portion, though–more at the normal human scale, as compared to the petabytes upon petabytes of unsupervised learning that’s generally used as a way to differentiate ML from human learning. I suspect that not nearly as much data would be needed if corrections could be made as the system is learning as opposed to a separate step afterwards. It could be corrected as soon as a mistake is made as opposed to depending on the law of averages across the training set.
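As a toy illustration of that idea (just a sketch with a single linear unit on made-up data, nothing like how an actual LLM is trained), here’s a learner that gets corrected the moment it makes a mistake rather than averaging its errors over a whole dataset:

```python
import random

def online_train(examples, lr=0.1, passes=20):
    """Perceptron-style learner: weights are nudged immediately on each mistake."""
    w0 = w1 = b = 0.0
    for _ in range(passes):
        random.shuffle(examples)
        for (x0, x1), label in examples:
            pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            error = label - pred
            if error:  # correct as soon as the mistake is made
                w0 += lr * error * x0
                w1 += lr * error * x1
                b += lr * error
    return w0, w1, b

# Tiny synthetic task: points above the line x0 + x1 = 1 are labeled 1.
data = [((x0, x1), 1 if x0 + x1 > 1 else 0)
        for x0 in (0.0, 0.5, 1.0) for x1 in (0.0, 0.5, 1.0)]
print(online_train(data))
```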
I largely agree; my comment was mainly directed at those that think there is something critically wrong with the fact that current ML training systems need so much data. It’s true that humans can learn to read with much less training data than an LLM. But I don’t think that’s unexpected given the constraints. And the advantage of so much data, as you note, is that the LLM can end up with much more detailed knowledge than a human that didn’t go through so much training.