Well, one can also use language models to massage non-native or poor English. I tested it and it seems to basically work. Not that I would blindly trust the results without proofreading.
That would be an appropriate use of LLMs… provided that you could trust it to make accurate improvements while preserving the essential information in the paper. I'm dubious that any extant LLM could do that beyond just massaging the grammar, and even that would require thorough proofreading.
Stranger
My Facebook feed has been getting flooded with AI-generated images, mainly birds. Can’t get much more obvious than this:
Several perpetrators list a street address on Archer Ave in San Luis Obispo that either does not exist or, rarely, does exist but is occupied by something completely different. I've been reporting their pages to Facebook as fake, but I'm guessing they won't do anything about it.
After ChatGPT debuted in late 2022 and wowed users with its humanlike fluency, many academic journals rolled out policies requiring authors to disclose whether they had used artificial intelligence (AI) to help write their papers. But new evidence from one publisher suggests four times as many authors use AI as admit to it—and that peer reviewers are using it, too, even though they are asked not to.
The new study, run by the American Association for Cancer Research (AACR), investigated the 10 journals the society publishes. AACR launched it after some authors questioned whether the peer-review reports on papers they had submitted were AI-generated, says Daniel Evanko, who oversees AACR’s editorial systems. It made use of a recently developed AI detector the AACR team and others say appears to be highly accurate.
…
After ChatGPT arrived, the detector showed, AI-generated text steadily became more common in AACR papers’ abstracts, methods sections, and peer-review reports. (Evanko’s study only covered those kinds of texts because AACR’s database includes them in a format that is readily analyzable.) In addition to the high proportion of abstracts with AI-generated text, Evanko’s team found it in nearly 15% of the methods sections and 7% of reviewer reports in the last quarter of 2024.
He speculates authors are not disclosing AI use because they fear journals will reject their manuscripts, even though using AI for editing manuscripts and other purposes can be valid. The International Association of Scientific, Technical & Medical Publishers reported in April that many authors are confused about when they should report AI use; the group, known as STM, has proposed guidance updating a version it offered in 2023 and expects to finalize it next week.
Stranger