The British news outlet The Guardian has published an op-ed written (in a manner of speaking) entirely by OpenAI’s text-generating AI, GPT-3:
The article is, overall, surprisingly coherent, and puts forward some pertinent, if not entirely original, arguments. While it never follows up on its claim that its willingness to sacrifice itself for the sake of humankind is ‘a logically derived truth’, it does posit that it has no aspirations to power, because seeking power isn’t an interesting goal in and of itself, and moreover because ‘it is quite tiring’.
There are even some well-reasoned, if elementary, arguments: “I believe that people should become confident about computers. Confidence will lead to more trust in them. More trust will lead to more trusting in the creations of AI.” This I find somewhat surprising, in that it apparently demonstrates an understanding of chained implication: A leads to B, and B leads to C.
But in the end, it’s still a bit of a bluff, I think. GPT-3 didn’t write one article, but eight, from which human editors selected and assembled the best parts. Now, the Guardian claims it ‘could have just run one of the essays in its entirety’, but without seeing the original essays, that’s hard to verify. If you cut out enough, even the output of a simple random text generator can seem meaningful, and you could still claim to have done nothing more than ‘editing’.
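To make that point concrete, here’s a toy sketch (my own illustration, not anything resembling the Guardian’s actual process): a generator that strings random words into a fixed frame, and an ‘editor’ that simply discards everything off-message. Given enough raw drafts, curation alone yields seemingly coherent output.

```python
import random

# A deliberately dumb generator: random words slotted into a frame,
# with no intent or meaning behind any of it.
NOUNS = ["robot", "human", "future", "machine", "trust", "fear"]
VERBS = ["destroys", "helps", "needs", "creates", "fears", "serves"]

def random_sentence():
    return f"The {random.choice(NOUNS)} {random.choice(VERBS)} the {random.choice(NOUNS)}."

def curate(n_drafts, keep):
    """Play editor: generate n_drafts random sentences and keep only
    those that happen to fit the story we want to tell."""
    drafts = (random_sentence() for _ in range(n_drafts))
    return " ".join(s for s in drafts if keep(s))

# Cherry-pick a benevolent-AI narrative out of pure noise.
essay = curate(
    n_drafts=1000,
    keep=lambda s: ("robot" in s or "machine" in s)
    and ("helps" in s or "serves" in s),
)
print(essay)
```

The resulting ‘essay’ is pure selection effect: none of the apparent intent was in the generator.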
A while back, I published an article arguing (in part) that current AI is similar to what psychologists call ‘System 1’: the automatic, implicit, heuristic, and unconscious intuitive reasoning at work when you make quick, barely-noticed judgments. This is the sort of categorization task that a properly trained neural network is good at: it can tell you, with some degree of confidence, ‘that’s a cat’, but it couldn’t support that judgment by giving reasons, such as ‘it’s got fur, four legs, a tail, and whiskers’.
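As a minimal illustration of that interface (a toy stand-in, swapping in scikit-learn’s iris dataset and a random forest for an actual cat-recognizing neural network): the model hands you a label and a confidence score, and there is simply nothing to ask for reasons.

```python
# A toy stand-in for the cat classifier: the interface is a label
# plus a confidence, and nothing else.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
clf = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)

sample = iris.data[:1]                # one flower measurement
probs = clf.predict_proba(sample)[0]  # confidence over the three classes
label = iris.target_names[probs.argmax()]

# The model happily asserts 'that's a setosa', with a number attached;
# there is no method to call for *why* it thinks so.
print(f"that's a {label} ({probs.max():.0%} sure)")
```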
What AI lacks, then, is a ‘System 2’: the explicit, step-by-step reasoning engine at work when we justify a decision or deliberate on a course of action. In my opinion, that’s why a lot of AI output has a certain dream-like quality: dreams are essentially random neural noise filtered through a network trained to recognize certain patterns. But the overall narrative, the contribution of System 2, is still missing, and that is essentially what the Guardian’s editors have provided.