I thought this recent Nature News article might help put the discussion in context. The subject is science, not journalism, but I think the discussion points still apply.
What ChatGPT and generative AI mean for science (nature.com)
Some quotes from the article:
“I use LLMs every day now,” says Hafsteinn Einarsson, a computer scientist at the University of Iceland in Reykjavik. He started with GPT-3, but has since switched to ChatGPT, which helps him to write presentation slides, student exams and coursework problems, and to convert student theses into papers. “Many people are using it as a digital secretary or assistant,” he says.
But researchers emphasize that LLMs are fundamentally unreliable at answering questions, sometimes generating false responses. “We need to be wary when we use these systems to produce knowledge,” says Osmanovic Thunström.
It doesn’t look like this latest wave of tools has made any earthshaking advances on the known problem areas of AI, such as bias, brittleness, and training (the real elephant in the room, IMHO).