I use ChatGPT for lessons and it saves a lot of time. It’s not perfect and its output needs to be checked, but the time required to check is less than the time it would take to do everything myself.
One of the problems is that people assume it’s more accurate than it really is and rely on it without doing due diligence.
That Current Affairs article really helps lay it down, hard and thick and sticky. Somehow I doubt that pollyannish Pay Noah Tension blogger with the graph up there ever worries about, comments on, or even acknowledges such things…
My work takes between 40 and 60 hours per week most weeks. Some tasks AI cannot do (teaching); some it can (lesson prep, writing homework assignments, grading them). I weigh the potential time saved against the benefit of the practice keeping things fresh and against my moral compunctions about the ethics of AI, and I come down on the side that I’d rather spend that time working.
The irony is that I think a lot of people are uneasy about AI but haven’t managed to find a way to articulate this uneasiness, and so do not feel confident in sharing it. I think we need a way of talking about AI that licenses our inarticulacy and reminds us that the burden is not on us to disprove its usefulness, but on its proponents to convince us of its value and how this value justifies its cost.
From my vantage, the cost seems pretty damned high.
Thanks for sharing this article. It includes a study I was looking for:
A recent MIT study, “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task,” provides sobering evidence. When participants used ChatGPT to draft essays, brain scans revealed a 47 percent drop in neural connectivity across regions associated with memory, language, and critical reasoning. Their brains worked less, but they felt just as engaged—a kind of metacognitive mirage. Eighty-three percent of heavy AI users couldn’t recall key points from what they’d “written,” compared to only 10 percent of those who composed unaided. Neutral reviewers described the AI-assisted writing as “soulless, empty, lacking individuality.” Most alarmingly, after four months of reliance on ChatGPT, participants wrote worse once it was removed than those who had never used it at all.
The study warns that when writing is delegated to AI, the way people learn fundamentally changes. As computer scientist Joseph Weizenbaum cautioned decades ago, the real danger lies in humans adapting their minds to machine logic. Students aren’t just learning less; their brains are learning not to learn.
As is often said on this board, your sarcasm meter needs re-calibration. Here is exactly what I said, firmly tongue-in-cheek:
It’s not a serious recommendation that AI “should be allowed to vote”, it’s clearly a put-down of the factual ignorance of the typical Trump voter about … well, pretty much everything!
Which I explicitly explained later to those with broken sarcasm meters:
I continue to be impressed by the desperate efforts of neo-Luddites to try to condemn AI for any and all possible reasons while also condemning anyone who sees any value in it whatsoever, or any positive future in it.
Well, we all continue to be amused by your repeated use of this insult to avoid addressing the actual concerns people in this thread have with AI.
I use AI daily in my job. As a product manager, a big part of my role these days is figuring out appropriate uses of AI in my products, which requires me to be aware of its capabilities and limitations.
Continue pretending you’re the only person here who understands AI because you chat with ChatGPT and read the titles of a few research papers. You’re missing a lot of thoughtful discussion about some real-world problems with it.
The concerns that people have are legitimate and increasingly borne out by the research, but you seem completely incapable of recognizing problems with anything you think is cool. That is an entitlement problem. You think you should get to enjoy your thing without having to reckon with the way it’s reshaping society and the environment, and not often for the better. The same way you want to enjoy Bill Maher even though he’s actively destructive to America and a flaming asshole.
Not only do you downplay the problems with AI, you make claims about it that you apparently can’t support with evidence (or at the very least, that you refuse to support with evidence) and then complain when nobody believes you. Do you understand that the onus is on you to prove that your claims are true? If you don’t take that seriously, why even post here?
Have you even read any of the critical articles people have posted? @Maserschmidt’s education article? All the in-depth stuff I posted about the local environmental impact of building all this infrastructure? The fiscal unsustainability of the business model? The social and political ramifications of the commodification and corporate control of information? I’m guessing not. Then why the fuck should I read your cites, other than to point out they contradict your claims? Are you aware there is a complete disconnect between what you’re claiming and what you’re citing? Not just in this thread. In all the threads. No reasonable person could conclude anything other than you don’t know what the fuck you’re talking about.
There was this guy in my writers’ group who fancied himself a literary writer, but his writing was boring as fuck, and after three years it wasn’t getting any better. He was always moping about being unpublished. So I read a chapter of his favorite literary novel and then tried to use it to explain to him how to make his writing better. Nothing seemed to take. Instead he started to flame out. At one point my friend said to me, “You are putting more effort into his writing than he is.”
Eventually he started a flame war and we kicked him out. Hopefully I don’t have to connect the dots.
Yeah, I’ve been coming back and browsing this thread now and again, and that little gem got my ire up.
Being displeased with enshittification, and concerned about the major environmental, social, infrastructural, and economic implications of the AI hype machine, doesn’t make one a Luddite.
Spice Weasel says it all better than I can though.
I am about at my limit. Many posts ago (same thread? Can’t remember) I asked him for podcasts, newsletters or other sources where I could learn more about AI. Crickets.
I’ll open this up to everyone - got any recommendations? Right now I’m listening to Cal Newport’s Deep Life Podcast, where he throws discussions of AI into his productivity advice. I’d consider him AI neutral, though he particularly loathes media misrepresentation of AI research and what he calls “vibes-based reporting.”
Beyond that I’m learning a tiny bit from the Economist and a tiny bit from Hank Green YouTube videos, but there must be more in-depth educational sources that aren’t so tapped into the hype.
Look, you’re a valued poster here and deserve to be treated respectfully. But since you insist on perpetuating this sort of inexplicable argument with ever-increasing hostility, let me remind you of something you said just recently:
This is so completely wrong that it’s irritating to read. It takes a special kind of hubris to make a statement like that, made up out of whole cloth, entirely out of your own head, with no evidence for it whatsoever, and then accuse me of not providing sufficiently good cites!
It’s not even a question of what specific material GPT may or may not have been trained on, it’s the implication that the only reason it did well on all those tests (and there were many of them, across a large number of fields of knowledge) is that GPT was working from a gigantic crib sheet that had all the answers. It’s the implication that GPT has no problem-solving skills whatsoever, which is an outrageously false claim. It was true for Eliza in 1964, not for GPT-5 in 2025.
Actually, no, it came out of the head of a professor of computer science at Georgetown regarding an article that claimed LLMs could replace math PhDs. I was talking about that, you were talking about something else. You may notice I read your cite and responded specifically to the information in your cite and provided the context of what I was referencing in my previous post in order to further clarify the conversation. I moved forward with the assumption that the article you cited was true, though I didn’t find it particularly illuminating with regard to what the machine is actually doing and I have no compelling reason to disbelieve the computer scientist telling me it’s not reasoning. Nevertheless, I adjusted my opinion to incorporate new information. That’s what you do when you’re trying to have an intellectually honest conversation with someone.
Go back and look at your original post that I took that quote from. I assume you’re just confused about what was being discussed there. If you were talking about LLMs replacing math PhDs, how do you account for the statement “Passing the bar when you’ve been given all the answers to the bar is not that impressive to me”? What does that have to do with math PhDs? That whole section of your post was being dismissive of my previously cited Business Insider link about all the professional certification tests that GPT had successfully passed, in some cases with scores in the 90th percentile or better. “It’s not ‘thinking,’ it’s regurgitating the information it was fed” is equally wrong about those test results. (I don’t claim it was “thinking” in any human sense, but I do claim it was exhibiting apparent reasoning capabilities and very real problem-solving skills, rather than “being given all the answers” and “regurgitating the information it was fed”.)
A fair point. What I meant is that I’m aware that AI can search for and identify patterns within large data sets in a way that human beings could not, simply because of the speed at which it works, and so there are limited uses for which it is good.
But the way in which the individual casual user is encouraged to use AI seems to me quite a different beast, and I have yet to find an occasion in which it would make my life better, once I factor in the ethical considerations. (AI-generated art is a really fun thing to play with, but ultimately it seems to be stealing from actual artists, and making it even more difficult for actual artists to make a living.)
My view is that there’s no practical difference, but not everyone agrees. But since it’s indisputable that modern large-scale LLMs can solve many difficult problems that even most humans would have trouble with, it’s less provocative to make the more careful factual statement that when an LLM solves a problem, it appears to be using logical reasoning, but it really isn’t, or at least, it’s radically different from human reasoning.
You are correct about my viewpoint: I am still largely unimpressed, and I guess that’s the part you have a problem with. But you keep making claims that it’s doing something beyond regurgitating input without supporting them with evidence. You’ve indicated that it can do well on some standardized tests, but you have not indicated how it arrived at those answers. And given the choice between a guy on the Internet who refuses to provide adequate citations and a tenured full professor of computer science who runs a highly rated technology podcast and says it is, in fact, spitting out answers based on its input, I’m going with the actual researcher. If you really care so very much about the maligning of AI, you will provide accurate academic information from a trustworthy source for me to consider. I’ve been asking for trustworthy sources since this thread started, because the thing you don’t seem to understand is that I’m trying to learn more about this. If you sense any hostility, it is because you are frustrating that attempt.
Perhaps my ultimate goal, to find additional reliable sources from experts that aren’t influenced by corporate hype, is a lost cause. But hope springs eternal.