…the question you asked was “Is language fundamental to human cognition?”
I asked ChatGPT the identical question and got back almost an identical word-salad answer, just different bullet points saying the same thing. I then asked it to clarify:
It was a yes or no question. It didn’t need a lengthy, bullshit answer that didn’t actually answer the question.
Wow! You asked an AI the same question that I did, and got back basically the same answer, but using different words?
The thing is obviously a fake!
And if you believe its response here was “word salad”, I have to conclude that one of two things must be true: you are either (a) a brilliant theoretician in the field of linguistic intelligence who is able to expose basic flaws in the response, or (b) not bright enough to even understand it. If (a) is true, I look forward to your own comprehensive response to the question, complete with detailed annotations about the factual errors in the original.
Until then, I see all the evidence pointing to option (b).
Indeed it was. You got a “yes”. That you didn’t even understand this confirms my opinion about option (b).
They do all the time. It was things like whether an AI can play chess at better than the level of a typical 10-year-old. Then that goalpost moved to whether an AI can beat a chess grandmaster. Then the whole chess thing was abandoned as not a “real” indicator of intelligence after all, and the idea of real-world understanding was dragged in, culminating in challenges like the Winograd schemas and real-world human competency exams. GPT-4 recently scored 94.4% on the Winograd Schema challenge. And scored far better than most humans on many tests of general knowledge and problem-solving skills. This is why the skeptics had to mount the moving goalposts on motor-powered wheeled carts to keep them moving fast enough.
ChatGPT didn’t answer the question. I had to ask a follow-up question to get a simple yes or no. That was the entirety of my point. The answer it gave you was ridiculously verbose and not helpful to the very general audience you say it was aimed at.
If you don’t need the bickering, maybe try not being a prick to people who are just trying to have a discussion with you. As someone on the outside, it looks like you are treating any criticism of AI as an attack on you.
Bickering aside, I’ve found the discussion helpful, even if a lot of it involves protests over half-full/half-empty glasses.
We could probably turn it down a notch. This is a new technology with an uncertain future, and mammalian brains resist uncertainty. We much prefer inevitable good or even inevitable harm. (I prefer jokes, another coping mechanism.)
None of the participants here, including Whack-a-Mole, are generally considered problem posters. Yet quite a few seem to have triggered strong responses. Again, I think this reflects the underlying uncertainty of this technology.
I’m unaware of any subjects where he doesn’t act like a prick and a blowhard to people who disagree with him, but who knows? Maybe they exist.
That said, ChatGPT unambiguously answered the question in his sample, multiple times, including in the phrase, “While language is fundamental to human cognition.” As many criticisms as I have of his fanboiing of LLMs, I don’t think this is an accurate one.
I admit that I’ve been sensitized to criticisms of AI that I’ve considered unjustified – and over the years, many have provably been unjustified or sometimes entirely idiotic – and have probably overreacted.
And a blessed and happy new year to you, too.
Hark! A rational poster! What are you doing here, wallowing in the Pit?
As a general matter since I was a CS undergrad 45 years (!) ago “true artificial intelligence” is defined as “Whatever a computer can’t quite do. Yet.”
@wolfpup is historically accurate about moving goalposts. And the pace of movement has always been proportional to the pace of progress. Which had been glacial on and off for much of those 45 years, but has been at high speed recently, and threatens to accelerate continuously from here. Whether or not Kurzweil’s Singularity ever comes to pass, we’re in for quite an intellectually jarring ride.
Whether any particular poster here has been moving their goalposts during the discussion is not a topic I care to explore.
It’s like what Hemingway said when asked how people go bankrupt: “Very slowly for a long time, then all at once.”
He was describing a tipping point, or emergence. And that’s what happened. We’ve had the basic science of computer neural nets for a long time, but progress was agonizingly slow, with lots of blind alleys, because we were missing the key: scale. The enabling technologies of AI were high-bandwidth graphics cards and the internet enabling the collection of massive amounts of data. Once we had those, the ‘slowly, over time’ became ‘all at once’.
I think this is partly what’s holding back some people from accepting that there is something very different going on here now. People who spent their careers in the ‘very slowly’ stage are more likely to be in denial over what seems to be emerging, in my opinion. That includes not just computer scientists but philosophers, linguists, and others invested in the debate over intelligence.
This has been a really interesting thread. Stepping back to the question of consequences, though: when would you stop trusting stochastic answers from an LLM?
Like Stranger, I sometimes review technical papers. Sometimes they make claims I can cross-check with my existing independent tools or knowledge, and sometimes I have to dig to make an appropriate assessment. But rigorous review is almost never easy and I’ve seen plenty of crap just get passed along because it sounds authoritative. In some fields, maybe that’s not a big deal: a bullshit paper gets published; time and money is wasted but nobody dies. Sometimes rigor matters, though.
So much of what we do online is low-consequence stuff, and ChatGPT is probably as good as any human at parsing and even conversing on that kind of material. In fact, one of the big questions about AI, to my mind, is not whether it emulates human intelligence but whether some common forms of human response are essentially the same kind of stochastic gibberish. Poetry interpretation, essentially. It’s not that there’s nothing of value in that, but it’s idiosyncratic, almost limbic. I wouldn’t build my house on it.
When rigor matters, humans can step back from low-stakes banter and examine the framework of an argument or claim. Even, famously, to the point of determining that a given framework is incomplete or a given claim undecidable. Maybe only a few people have the drive or ability to do that, but it’s an essential feature of building a reliable foundation of knowledge.
LLMs are not AI, not really. But computing power has grown exponentially, roughly in line with Moore’s Law. That means progress in AI has been minuscule for years in an absolute sense, masking the massive progress in percentage terms. Kevin Drum:
So yeah, ChatGPT-4 is a joke. Joke, joke, joke. Then Kaboom. Then boom-boom-boom as computers surpass humans.
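That “joke, joke, joke, then Kaboom” pattern is just the arithmetic of steady doubling. Here’s a minimal sketch of the point: the two-year doubling period, the tiny starting value, and the arbitrary “human-level = 1.0” threshold are all illustrative assumptions, not measured quantities.

```python
# Steady exponential doubling looks like nothing for decades, then
# crosses any fixed threshold almost all at once.
capability = 1e-6   # tiny starting point, arbitrary units (assumption)
human_level = 1.0   # arbitrary "human-level" threshold (assumption)
years = 0

while capability < human_level:
    capability *= 2  # one doubling
    years += 2       # assume one doubling every two years

print(years)  # -> 40
```

With these made-up numbers, capability stays below 1% of the threshold for the first ~26 of those 40 years, then covers the remaining 99% in the last few doublings. That’s the mathematical shape of “gradually, then suddenly.”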
I don’t know whether we’re at the housecat stage yet: Kevin Drum is making a mathematical point, not an empirical one. But my spidersense is tingling, and methinks we need to get in a habit of prudently using this technology, rather than ruling it out artificially.
I know we’re both being hyperbolic, but if that was submitted as a dissertation, or even a paper, in grad school, it’d be thrown out.
But I was that kind of precocious SF-reading kid who would’ve produced that at about age 14-15, since I was so full of myself and my perceived intellectual prowess.
(And while I was genuinely good at bullshitting (see my user name), and that has served me well throughout my life, the rest is sorely lacking. I barely passed physics, biology and chemistry and got a B in math (from HS). I aced the social sciences and the third and fourth language.)
It was a small insult, barely noticeable, so a small excuse is perfectly cromulent and accepted!
Yeah, we may be getting very close to actual scary AI. But we aren’t there, yet. We are still at “faking it”. And to be clear, there have always been two pathways.
1. Stuff computers do better than people, in ways unlike how humans do it
2. Stuff that aims to fake looking human
When I was a kid, calculators (1) and Eliza (2) came out. Also natural language games (2) like Zork, which blew us all away.
Right now, the chatbots are (2), which means they can give plausible-sounding but grossly inaccurate replies. This is why I don’t think they should be used in FQ. There are ALSO programs that are really, really good at answering questions in a narrow range of expertise. They are (for example) probably already better than most human doctors at diagnosis, if I had to guess. But that’s not the product that’s available free right now.