AI-written Guardian Op-Ed

The British news outlet The Guardian has published an op-ed written (in some manner of speaking) entirely by OpenAI’s text-generation model GPT-3.

The article is, overall, surprisingly coherent, and puts forward some pertinent, if not entirely original, arguments. While it doesn’t follow up on its claim that its willingness to sacrifice itself for the sake of humankind is ‘a logically derived truth’, it does posit that it has no aspirations to power, because seeking power isn’t, in and of itself, an interesting goal, and moreover because ‘it is quite tiring’.

There are even some well-reasoned, if elementary, arguments: “I believe that people should become confident about computers. Confidence will lead to more trust in them. More trust will lead to more trusting in the creations of AI.” This I find somewhat surprising, in that it apparently demonstrates an understanding of the flow of implication: A leads to B, B leads to C.

But in the end, it’s still a bit of a bluff, I think. GPT-3 didn’t write one article, but eight, from which the best parts were selected and stitched together by human editors. Now, they claim that they ‘could have just run one of the essays in its entirety’, but without seeing the original essays, that’s hard to verify. If you cut out enough, even the output of some simple random text generator can seem meaningful, and you could still claim to have done nothing more than ‘editing’.

A while back, I published an article arguing (in part) that current AI is similar to what psychologists call ‘System 1’—the automatic, implicit, heuristic and unconscious intuitive reasoning at work when you make quick, barely-noticed judgments. This is the sort of categorization task that a properly trained neural network is good at: it can tell you, with some degree of confidence, ‘that’s a cat’, but it couldn’t support this by giving reasons for its judgment—like, say, ‘it’s got fur, four legs, a tail, and whiskers’, or something.
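To make the point concrete, the entire output of such a classifier is just a label and a confidence score; there is no trace of reasons anywhere in it. A minimal sketch, with made-up labels and logit values purely for illustration (a real network would compute the logits from pixel data):

```python
import math

# Hypothetical class labels for a toy image classifier.
LABELS = ["cat", "dog", "car"]

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    shifted = [x - max(logits) for x in logits]  # for numerical stability
    exps = [math.exp(x) for x in shifted]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Return the top label and its confidence, and nothing else.

    Note what is *not* here: no reasons ('fur, four legs, a tail'),
    just a single judgment with a degree of confidence attached.
    """
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return LABELS[best], probs[best]

# Made-up logits standing in for a network's final-layer output.
label, confidence = classify([3.1, 1.2, 0.3])
print(f"that's a {label} ({confidence:.0%} confident)")
```

The point of the sketch is the shape of the interface: everything upstream of `classify` is opaque weights, so ‘that’s a cat, 83% confident’ is all the system can ever say for itself.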

AI thus lacks a ‘System 2’: the explicit, step-by-step reasoning engine at work when we justify a decision or deliberate on a course of action. In my opinion, that’s why a lot of AI output has a certain dream-like quality: dreams are basically random neuron noise filtered through a neural network tuned to recognize certain circumstances. But the overall narrative, the contribution of System 2, is still missing, and that’s essentially what the Guardian’s editors have provided.

You’re surely right, but a model like GPT-3 is not designed for, nor is it architecturally capable of, what you call ‘System 2 reasoning’, or basic arithmetic for that matter. To get something actually meaningful, rather than merely seemingly meaningful, out of it requires a rough understanding of how it works and experience in steering it through careful editing, prompting, and tweaking. The editor’s note mentions this, but it would have been more obvious had they made the raw output available.

ETA: it would have been fun to have GPT-3 reply to your post, but I don’t believe the model is available for download, nor do I have anything it would run on even if it were.

Oh, of course not. Although I have to say I’m impressed at how far you can get it to fake such abilities, using it to play chess, for instance.

I also don’t think I’m saying anything new here; I think the need to eventually bridge the gap between symbolic and sub-symbolic approaches to AI is well understood. It’s what DARPA has called ‘Third-Wave AI’, with the first wave being explicitly hard-coded knowledge in expert systems and the second wave (broadly) being deep neural networks. It’s probably not going to be as simple as just adding one to the other, since then you’re reimporting problems of first-wave systems, like the frame problem, into modern approaches.

But it’s sure to be an interesting field of study; with all the hype the field is getting right now, it’s worth being mindful, every now and then, of the long road still ahead.

It reminds me of a clever but uninformed student writing a paper, although this might be my bias talking. It’s all arguments copied and pasted from somewhere, without an understanding of what they mean, just an ability to reshape them into something nominally new.

That’s one thing that came to mind: it’s maybe not ready for prime time, but writing a paper for class that’ll earn a passing grade? Give it a few tries, edit together the highlights, and that doesn’t seem like too much of a stretch.

Agreed, as far as the intelligence part goes. It was the editors’ cutting and pasting that made it make sense.

However, GPT-3 is an impressive technology that may have commercial applications. The company needs money to survive, which appears to be the motivation behind the article. This is the kind of hype you see ahead of an IPO.

The whole time I was reading that op-ed, I couldn’t get this song by Flight of the Conchords out of my head, so I put it on in the background, which only made it worse. So of course I will share it with you. (With a really nice animation!)

A good PDF describing the software is available for download here.

This reminds me of an app/game called AI Dungeon on the App Store.

It works, but there are serious issues …