ChatGPT level of confidence

I played with ELIZA back in the days of the PDP-10, and I still have it today, along with other classic PDP-10 programs like Richard Greenblatt’s MacHack chess program, running on my PDP-10 simulator. I’m familiar with how enamoured Weizenbaum’s secretary was with it, but I don’t know anyone else who thought much of it. It’s just a very silly little LISP program. Now MacHack – that was impressive! That was real AI in its day.

Not the same at all. LLMs successfully solve problems in logic, interpret images, perform impressively fluent language translation, and answer series of complex and even profound questions, usually correctly. Lots of examples have been given in other discussions on this board. They genuinely understand what the user is saying or asking, or if you prefer, they definitely act as if they do, and therefore even the current demonstration models of GPT have actual utility.

For example, we got into a discussion in another thread about how logic and intelligence are embedded in the intrinsic structure of human language, and therefore an LLM’s mastery of natural language might help explain some of its remarkable emergent properties that are characteristic of intelligence. I was uncertain which theories of linguistics might reflect this view, so I asked ChatGPT. It was a pretty vague question and I doubt Google would have been of much help, but this is precisely the kind of thing that GPT is good at.

What I will concede is that because LLMs are well-spoken and tend to sound authoritative, they give the impression of being infallible experts, which they definitely are not. It’s important to understand that they might be bullshitting, and to verify the information they give. But as I’ve pointed out numerous times, humans are natural bullshitters too, some more than others. So verifying information against multiple sources is always important anyway.