It’s just pointing out that they don’t fit the idea “AI” evokes when used by the general populace. They assume it means human-level intelligence.
It makes far more sense to use terms like LLM, which acknowledge the limitations of the current approach.
I can’t say that human intelligence isn’t pattern based. Much of it is for sure, and more may also be. But I am sure that our brains do not work like an LLM. You can show that we have biological impulses, and that we learn language from relatively small inputs, unlike LLMs, which need more text than any human could ever read.
Every industry has terms of art that aren’t understood by the general public. Their misunderstanding doesn’t make the terms of art wrong. “AI” has been used in video games since the 1950s, for instance. If the general public thinks that “AI” has to mean frikkin C-3PO, then that mistake is on them.
Put it another way: insisting that “artificial intelligence” is misnamed because it isn’t actually intelligence is a little like insisting that RAM is misnamed because it isn’t actually a male sheep.
You can also show that humans learn language using far more input than any computer has ever been given: a far smaller volume of raw language, but a far greater amount of individual feedback, as well as multisensory input. I don’t think anyone has ever tried giving a computer that level of feedback, because it would be as much work as raising a child.
Well, yeah. But that’s not the usage being discussed here. People were not using “AI” to mean what it does in the field of computer science. It was being used as a popular term for a large language model.
So we weren’t discussing the term of art, but the popular marketing term. And, as a marketing term, its usage has caused confusion. You are far from the only person I’ve seen propose that it might actually model how humans think. And don’t forget all the businesses that think they can just replace people with “AI.”
I myself try to be more specific when referring to LLMs, rather than using the generic term “AI.” If I do use the term in other contexts, I’m careful to use it as an adjective, e.g. AI image generation, AI text generation, AI image scaling.
I think the specificity helps the public understand it better. And, as a bonus, it’s not inconsistent with the term of art, either.
As far as I know, no one was mixing up RAM with male sheep. So that’s not a problem.
Oh, definitely. Not as much language input, but certainly other types of input that an LLM does not receive.
And I suspect the reason no one has given child-level input and effort to an AI is that they’re still trying out lesser levels of input to achieve lesser levels of intelligence. There’s no point in trying something they have no reason to think could work yet.
But when they do think it will work? I very much think there are people who would be willing to raise an AI like a child.
Yes, they were. LLMs fit squarely within the computer science definition of AI. Here it is straight from IBM, kind of an expert in computer science:
“LLMs have become a household name thanks to the role they have played in bringing generative AI to the forefront of the public interest, as well as the point on which organizations are focusing to adopt artificial intelligence across numerous business functions and use cases.”
Right. It annoys me when people treat AI as synonymous with LLMs, largely because LLMs have had a great deal of success lately. There are many different kinds of AI. But LLMs are definitely one kind of AI.