Writer for the Chicago Sun-Times turns out a list of books to read this summer that includes books that don’t exist.
He claims that he used AI to generate the list, then didn’t check it for content.
REALLY? You used AI to generate a reading list? That’s grounds for dismissal right there, in my book. The crime isn’t that you didn’t check it for accuracy, or whether the books were real or not. Apparently, if he had simply tossed out the clinkers, it would have been a perfectly acceptable way to proceed.
AI isn’t Skynet. AI isn’t gravity. It’s not up to AI whether we accept it. It’s up to us as a society whether we accept it.
How we respond to people who use it, and especially to companies who use it, will determine its effect on our lives.
My vote is to trash his ChatGPT outputs as pernicious garbage, and to name and shame people like Marco Buscaglia (the creator of this list) who profit off a predictive-text scam like this. We need to make it clear that if we want to read AI slop, we’ll type the prompt into ChatGPT ourselves; we aren’t going to pay some idiot to generate it for us.
You want to be paid for your writing, fucking write.
The text of that review list is also pernicious slop. From the first sentence it’s clearly in that jovial, soulless AI style.
This makes me realize how little I know about AI and how it works. It’s certainly disturbing that this list went to print without any fact-checking, but I understand how that physically happens. But if I were to ask AI to give me a list of summer reading material, I would assume that it would generate a list by pulling content from sources on the Internet. I had thought that if AI gave me some factually incorrect information, it was because that factually incorrect information was out on the web somewhere. But it sure looks like AI was straight up inventing this information. Maybe if the “author” had told AI to provide sources, the list would have been more accurate.
“AI” is not really AI, but a Large Language Model, something that spits out not “true” or even “reasoned” facts, but “plausible” ones. If the list looks like something you would read in an article recommending books, that is enough for the LLM.
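To make that point concrete, here is a minimal sketch (assuming Python with the Hugging Face transformers library, and GPT-2 standing in for a bigger model) of what “plausible, not true” means: the model just continues the prompt with likely-looking text, and nothing in the process checks whether the books it names actually exist.

```python
# Minimal illustration: a language model continues a prompt with
# statistically plausible text; nothing checks the output against reality.
# Assumes: pip install transformers torch  (gpt2 is a stand-in for ChatGPT)
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Five must-read novels for this summer:\n1."
result = generator(prompt, max_new_tokens=60, do_sample=True)

# The continuation will look like a book list -- titles, authors, blurbs --
# but the titles are whatever tokens score as likely, real or invented.
print(result[0]["generated_text"])
```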
I often think about how the worst 1% of people working in a particular field are working in that particular field. Kind of like the old joke: “What do you call a med student who graduates at the bottom of their class? Doctor.”
It helps me feel better about the average person in that field.
Oh dear. I don’t want to link to it because I don’t want to be part of an Internet horde chasing him down, but I Googled him. He’s got a garrulous, wry website where he talks about himself a lot. One of the pages is “recent stories,” which hasn’t been updated in a few years, except for one small part: an automated widget that links to stories about him.
He might want to consider taking that widget off his page.
Huh. I also checked out his Facebook profile, where he makes a lengthy statement taking complete responsibility for this screwup and acknowledging that it might be career-ending for him. That acknowledgment actually makes me want it not to end his career. If he can learn from it and stay the hell away from generative AI, he might be a perfectly fine journalist.
If he insists on continuing to use generative AI for background, though? I’d rather he find something else to do.
The Gryphon Riders of Ruunsparch by R. Trinon-smythe
The New New Me Therapy by someone called “Eliza”. She’s really easy to understand, if a bit predictive.
The Currants of Space by I. Sacasimoff, a catalog of all the 100K known jams, jellies and preserves of known space