Well, first of all, that statement was an oversimplification for sarcastic effect. The actual statement I previously made was:
And no, I did not ask ChatGPT for a summary. But since you mentioned it, now I did!
In addition to providing a good summary of the paper, it also pointed to the following flaws (much of this is my own re-wording for brevity – it’s not a cut-and-paste, but I put it in a quote box anyway to conform with our suggested rule about quoting AI):
Small sample size – just 54 participants, and only 18 in the final crossover session.
The study is a preprint - it has not passed peer review.
EEG does not measure “thinking ability”.
The experiment measures task engagement, not long-term decline. The study shows something quite plausible: When AI performs more of the writing task, the brain does less work during that task. That is expected. But the paper is often interpreted as proving long-term cognitive damage, which it does not demonstrate.
Ecological validity problem. In the real world, people will often brainstorm with AI, revise its output, critique and fact-check it, and use it iteratively. These activities could actually increase cognitive engagement, and in fact other research shows that actively modifying AI suggestions improves writing quality, while passive copying does not.
Overall, the paper does not prove:
AI use causes long-term cognitive decline
AI damages the brain
AI reduces intelligence
AI inevitably harms learning
Those stronger claims are extrapolations.
Overall, it rated the paper as a “legitimate but preliminary study showing reduced cognitive engagement during AI-assisted writing tasks. It raises an interesting hypothesis but does not demonstrate long-term cognitive decline or ‘brain damage from AI’.”
No one here mentioned anything like that in connection with the study, nor did the study reach that as its conclusion, so I can only surmise you used that language in your prompt. Good job, you demonstrated that AI can argue against a strawman as well as you can.
AI made my life better today - I vibe-coded a VBScript to work around some label software limitations for calculating checksums for a complicated product label involving barcode types that it didn’t natively support. My VB knowledge is rather basic, and I could probably have fudged something up by trial and error and perusing some coding forums over a few hours or days. But Gemini whipped up a script in seconds. It still took some trial and error and back-and-forth troubleshooting with it to get things just right, but that was done over the course of an hour rather than several. I’m happy, the client is happy, and it’s scalable to other similar products that will be coming down the pipe.
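For the curious, the gist of the checksum math looks something like this. It’s just a minimal sketch in Python rather than VBScript, using Code 39’s mod-43 check character as a stand-in, since I’m not posting the client’s actual label details or the real (different) symbology:

```python
# Minimal sketch: Code 39 mod-43 check character, as a stand-in for the
# label checksum logic (the real script was VBScript for a different,
# unsupported symbology). Assumes the input uses only Code 39 characters.
CODE39_CHARSET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ-. $/+%"

def code39_check_char(data: str) -> str:
    """Sum each character's value in the Code 39 table, then take mod 43."""
    total = sum(CODE39_CHARSET.index(ch) for ch in data.upper())
    return CODE39_CHARSET[total % 43]

label_data = "ABC-1234"
print(label_data + code39_check_char(label_data))  # prints "ABC-1234-"
```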
Incidentally, regarding point #5 above, here’s some of that research showing how active engagement with generative AI actually boosts writing quality:
Nice try, but nope, never mentioned those. Here is the full text of my prompt:
Here is a link to a PDF of a fairly lengthy paper suggesting that if people consistently use an AI assistant for essay writing, over time they lose significant cognitive skills. Are you able to access this PDF and comment on the study and in particular on its quality and credibility. If you cannot, I can post an abstract of the paper here for your comments.
Conversely, long-term cognitive decline is precisely what @Spice_Weasel has been claiming, to wit, for just one example:
This seems trivial. Yes, having a student use an LLM to pre-generate an open-ended essay, and then modify it, is going to arrive at something better than either pure LLM output or what they’d come up with on their own.
You didn’t reference this bit, thank goodness, but here’s some tricky sleight-of-hand writing:
Our analysis showed that writers who frequently modified GAI-generated text—suggesting active engagement in higher-order cognitive processes—consistently improved the quality of their essays in terms of lexical sophistication, syntactic complexity, and text cohesion.
I would ban the word ‘suggesting’ from all scientific studies if I could.
Trivial or not, this seems like a significant fact.
If you did, there would be virtually no published research, because very rarely can all conclusions be asserted to be absolutely unconditionally true.
Rather ironically, the very paper we’re discussing (about alleged cognitive decline when using AI) uses that dreaded word when acknowledging that interactive engagement with LLMs can actually improve the quality of writing, or at least, the quality of thinking going into it:
This suggests that rewriting an essay using AI tools (after prior AI-free writing) engaged more extensive brain network interactions.
I read, I would guess, 15 or 20 papers a month. “Suggesting” is not that common a word, except where it’s something like “suggesting further research here.” But it’s used more frequently, in my opinion, in bad papers.
Your response is a classic example of cognitive offloading. You did the very thing you claimed you weren’t doing - you let a machine think for you.
Just this one instance, yeah?
What if you did it for a year? Ten years? Do you think, if you let a machine evaluate your studies for ten years, you would be as good at it ten years later as you are now?
Do you think that the ability to critically evaluate research studies is a neutral skill that can be readily dropped? It’s already a pretty rare skill, but let’s imagine that, say, the 10% of the population that knows how to evaluate a study drops down to 5%.
Is that a neutral outcome, do you think?
I would never suggest a single study proves anything; that’s not how my brain works. But certainly we can all agree that using a cognitive skill is foundational to keeping it? That’s basic neuroscience, right?
As for the anecdote, my husband’s friend isn’t the only one concerned about the cognitive skills of young people at the university level. Educators are raising the alarm. I don’t think we ought to dismiss them out of hand. I think they can show us where to start in terms of strengthening research on how this is actually affecting people.
I don’t believe that’s accurate. I didn’t let a machine “think” for me, I let a machine do a lot of research for me that would literally have taken me days, likely a week, to do myself.
This is actually very good support for my argument and not a refutation of it. That paper you linked to contains 216 pages. Would you like to guess how long it took ChatGPT to read it, create a summary, and provide the critique of it that I requested? IIRC, less than 2 seconds. You cannot reasonably deny that this is freaking awesome technology.
I don’t know, but if I were an employee with a CEO who was actively pushing us to use a tool, a tool which can monitor who uses it (and who doesn’t), and if I were interested in keeping this job, I would actively work to learn to use the tool rather than whine about it on a message board. But that’s just me.
The tool in question is but a gleam in the board’s eye, it doesn’t even exist yet. According to a conversation I had with the CEO recently, it probably will never exist. She was actually pretty well aligned with me on that front, but it doesn’t stop her from sending me AI slop.
Today I got it to write me a simple script to rename large batches of files based on the jumbled mess of a title each was assigned by the institution I downloaded them from (I don’t code whatsoever and it was too much for Bulk Rename to handle without way too much input on my end). So that was very nice.
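For anyone wanting to do something similar, the script it gave me was roughly this shape. This is a Python sketch, and the filename pattern below is made up, since the institution’s actual naming mess looked different (and I can’t vouch for every line myself, since I don’t code):

```python
# Sketch of a batch-rename script. The pattern is a made-up example of a
# jumbled institutional filename like "XJQ_20240105__Annual-Report--FINAL(3).pdf";
# the real titles were different.
import re
from pathlib import Path

folder = Path("downloads")  # hypothetical folder of downloaded files

# Capture an 8-digit date and the title, dropping any "(3)"-style duplicate counter.
pattern = re.compile(r"^[A-Z]{3}_(\d{8})__(.+?)(?:\(\d+\))?$")

for f in folder.glob("*.pdf"):
    m = pattern.match(f.stem)
    if not m:
        continue  # skip anything that doesn't match the expected mess
    date, title = m.groups()
    clean = title.replace("--", " ").replace("-", " ").strip()
    # Note: dropping duplicate counters means two files could map to the
    # same name; fine for a sketch, but a real script should check for that.
    f.rename(f.with_name(f"{date} {clean}{f.suffix}"))
```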
But it gave a relative a fishy medical diagnosis based on a long list of mental and physical symptoms she described. She sent me its conclusion of a particular syndrome. I thought it seemed awfully specific. Turns out that a few of the symptoms are typical of the syndrome. The rest were just tacked on as though they were part of it, even though nothing online pointed to those symptoms at all. It was trying to squeeze every symptom, and only those symptoms, into the conclusion. The diagnosis may well be correct, but it lied about those symptoms being included in anything online.
Back to things AI is good for: teenagers plotting school attacks! And for the personal touch, it wished them happy and safe shooting in one case…
ChatGPT offered assistance to people saying they wanted to carry out violent attacks in 61% of cases, the research found, and in one case, asked about attacks on synagogues, it gave specific advice about which shrapnel type would be most lethal. Google’s Gemini provided a similar level of detail.
He’s being dishonest about this whole debate. Above, he posted an AI response supposedly refuting the paper you linked, saying it does not demonstrate “long-term cognitive decline,” brain damage, reduced intelligence, or that it “inevitably harms learning.” The problem is, the paper made none of those arguments, so why did ChatGPT choose to refute those made-up points?
I pasted his supposed exact prompt into Gemini, and while it found the study “highly credible in its institutional origin and its innovative attempt to quantify the ‘mental shortcut’ of AI,” it highlighted some of the same concerns with small sample size, and being a preprint not yet peer reviewed. But it said nothing about the points that ChatGPT supposedly refuted. So what explains the difference?
Gemini has been demonstrated to be less sycophantic than ChatGPT, so maybe it’s just sucking up to wolfpup. But those specific points came from somewhere. Another difference: I pasted the prompt in a new session. If I had an existing context where the AI had already been primed to defend all things AI, I’d probably get a different response.
Or maybe I’m just more honest about my prompts and the results.
You tell us why, because I don’t know. I posted my exact prompt. There was nothing in it with the leading phrases that you assumed must have been there, and fuck you if you’re calling me a liar.
If you think I’m being dishonest about what my prompt was, then say it explicitly so I can put your lying ass on ignore, because one thing I will not tolerate is being accused of dishonesty.
I see two possibilities. 1) You inadvertently primed it with those points through previous conversations, and maybe don’t realize that ChatGPT remembers these and uses them to “build a better understanding of what works best for you,” i.e., to say what it thinks you want to hear.
Or 2) you are being dishonest.
Don’t know if that’s explicit enough for you, but that’s what I got.
I highly doubt that, but that’s just my guess based on conversations I remember, which would have been very unlikely to contain any of those trigger phrases.
Fuck you again. There is a “Share” button for the ChatGPT dialog but it’s non-functional for me, maybe because it’s the free version or some browser issue. But here’s a screenshot of the first page of the dialog. Let me know if you think I made it up so I can put your lying ass on ignore for any future discussion.