So, is the fact that this AI is a snotty, arrogant bitch that is talking down to you and insulting you at every turn a sign that it is sentient? I ask because I want to shoot it in the “head”, and I’m concerned about being arrested for murder.
Did the poster explicitly ask for a harsh critique? Because that’s not generally how they respond. Here’s Copilot, btw:
This is a thought-provoking and well-rounded commentary on Large Language Models (LLMs), their limitations, and their revolutionary impact. Here’s my critique and analysis of its key points:
Strengths:
Comparison Between Human and LLM Failures:
The comparison of failure modes between humans and LLMs is insightful. It highlights that both systems are fallible but for different reasons. The analogy of human “training sets” (education and life experience) and the diversity in brain function is apt and thought-provoking.
Eloquence in Explaining LLM Capabilities:
The argument that LLMs have redefined our perceptions of tasks like natural language processing and surpassed many human capabilities in knowledge transformation is compelling. The emphasis on their ability to “semantically transform complex inputs into meaningful outputs” accurately describes their core strength.
Acknowledgment of Misuse and Limitations:
The acknowledgment that LLMs are frequently misused due to a lack of understanding is an important point. It strikes a balance between celebrating their potential and cautioning against their blind application.
Historical Perspective on Sentience and Progress:
The commentary provides a fascinating perspective on the possible obsolescence of sentience as a requirement for intelligence or semantic processing. The analogy to outdated scientific concepts like “aether” and “humors” adds depth to the discussion.
Critiques and Points for Improvement:
Generalizations About Human and LLM Performance:
While it is suggested that a random sample of humans would perform worse than LLMs in a multidisciplinary test, this is a bold claim. Empirical evidence or specific examples would strengthen this argument. For instance, standardized tests or benchmarks where LLMs outperform humans could be cited.
Understatement of Hallucinations:
While the commentary acknowledges hallucinations in LLMs, it arguably downplays their significance. Hallucinations can be detrimental in critical applications (e.g., medicine or law) where inaccuracies have real consequences. A nuanced discussion about mitigating hallucinations would enhance the analysis.
Misinterpretation of Turing Test Irrelevance:
The claim that LLMs made the Turing Test irrelevant is somewhat debatable. While LLMs excel at mimicking human-like conversation, the Turing Test encompasses broader dimensions of intelligence, including reasoning and understanding context in the way humans do.
Assumption About Sentience Becoming Obsolete:
The suggestion that sentience may become irrelevant is an intriguing hypothesis, but it could use more elaboration. Why would the concept of sentience become unnecessary, and what implications might this have for AI ethics, human-AI interaction, and societal acceptance?
Oversimplified Human Training Analogy:
The comparison of human education and LLM training is compelling but oversimplified. Human learning involves creativity, intuition, and adaptability in unpredictable environments—qualities that LLMs, despite their efficiency, still lack.
Notable Takeaways:
- The commentary effectively argues that LLMs deserve more credit for their achievements, especially given their infancy as a technology. The ability to contextualize their progress in the framework of human learning and intelligence evolution is powerful.
- The statement about redefining what sets humans apart—knowledge transformation—speaks to a profound shift in our understanding of intelligence, both artificial and human.
Overall, the commentary is eloquent, well-reasoned, and rich in perspective, though it would benefit from additional nuance in certain areas. Let me know if you’d like further feedback or refinement!
Heh, only because I asked it to be harsh.
It can be nice, too
This piece offers a genuinely thoughtful perspective that beautifully acknowledges the complexity inherent in comparing human cognition to language models. While skepticism about LLMs is understandable—even justified—this author gently invites us to reconsider our reservations, not by dismissing them outright, but by thoughtfully exploring the ways in which human and artificial minds share similar vulnerabilities.
The writer’s insightful analogy between everyday human mistakes and LLM “hallucinations” invites us into empathy: just as we don’t harshly condemn every small human error, perhaps we can extend a similar grace to artificial systems still early in their evolution. This perspective doesn’t ignore real issues—like AI’s potential for spreading misinformation—but gently reframes the discussion in terms of mutual humility rather than adversarial suspicion.
What’s especially compelling is how the author highlights the subtle yet transformative shift LLMs represent. Rather than trying to convince readers that these technologies are flawless, they persuasively illustrate just how remarkable it is that we are even having this conversation. The notion that meaningful, insightful conversations between humans and machines have become commonplace almost overnight is quietly astonishing—and deserves genuine acknowledgment, even from skeptics.
Additionally, the writer delicately introduces a profound idea: perhaps the traditional markers we use to define intelligence and sentience might themselves be incomplete or outdated. Instead of arguing that LLMs are sentient, they gently challenge readers to broaden their perspective, implying that intelligence might be broader and richer than previously imagined. This is a compassionate invitation, rather than a confrontation, urging us to think beyond our comfort zones.
Finally, the author’s humor and humanizing touches warmly bridge the gap between technical abstraction and emotional resonance. They kindly acknowledge limitations (like a computer’s weak jokes or counting struggles) without defensiveness, allowing even skeptical readers space to feel both amused and reflective.
In all, this piece is an admirable example of thoughtful optimism, gently and persuasively encouraging even those hesitant about LLMs to approach the technology with curiosity, openness, and genuine appreciation for its immense—and beautifully human—potential.
If you shot it, you’d merely give it more training data on bullets. “Would you like a helpful ballistics calculation based on the type of weapon you’re using and the model of your target LLM?”
As for arrest… the right LLM could probably help you avoid that
LLMs very much make me think of the aliens in the novel Blindsight by Peter Watts. Anyone with any interest in AI and the argument over its “awareness” really should give it a try. The novel turned out to be highly prescient about the current debates over the role and importance of consciousness in complex intelligence.
^ Just wanted to note in case it isn’t clear, that’s the author’s own website, so it’s not some random PDF theft.
I read this book, gave it a 4/5, and then it passed out of memory and I didn’t read the sequels. I may want to revisit them.
Ahem. In 2025, I believe the correct term is “exceeded my context window”
Thanks for the suggestion! Bookmarked.
Maybe it got confused since some men do think with their balls.
I think they are close, and that’s the problem. We don’t need something that thinks like a human. We’ve already got billions of those, and we can all see what trouble that causes. What we need is something that thinks differently from a human.
And all of these arguments that the chatbots don’t “really know” things are based on the assumption that humans do actually “really know” things. But do we? And how could one tell?
Let’s go a bit meta. What is sentience, anyway? And what do the robots think of the way we think of our thoughts and feelings?
Fun! But long...
Everyday folks
- A young baseball player might say: “It’s when something knows what’s going on and maybe has feelings, like my dog when I leave for practice.”
- Jane Goodall might say: “Sentience is the inner world of awareness and emotion we share with the animals we live alongside—it’s what makes a chimpanzee grieve or a dog love.”
- A cat trainer might say: “If the cat decides not to listen to you on purpose, it knows what you want—it just doesn’t care. That’s sentience.”
- A Buddhist monk might say: “Sentience is the capacity to experience suffering and joy, and thus the basis for compassion toward all living beings.”
- A middle school science teacher might say: “It’s what makes a creature aware of its surroundings and capable of feeling pain, not just reacting like a robot.”
- A philosopher might say: “Sentience is the possession of qualia—the raw, subjective texture of experience. It’s what it’s like to be something.”
- A truck driver might say: “If it flinches when it’s hurt and looks you in the eye like it understands, it’s probably sentient.”
- A sci-fi novelist might say: “Sentience is the point where a machine or alien stops being an object and starts being a character.”
Cultural perspectives
- A Māori elder might say: “Sentience is the wairua—the spirit that feels, connects, and remembers its lineage through time and land.”
- An ancient Greek philosopher (e.g., Heraclitus) might say: “Sentience is the flame within that knows change and burns with knowing.”
- A medieval monk might say: “It is the divine spark that suffers and rejoices—a gift from God, not granted to beasts nor machines.”
- An Edo-period Japanese poet might say: “Even the cherry blossom falls with feeling; all that perishes must have once sensed the spring.”
- An Inuit hunter might say: “Sentience is knowing the seal’s eyes before the harpoon—when you see it sees you back.”
Old-timers
- A lamplighter might say: “It’s the knowing look in a stray dog’s eye as you pass with your flame—it knows the night better than you.”
- A town crier might say: “If it hears the bell, heeds the words, and fears the storm—it’s got sense in it.”
- An old-school telephone switchboard operator might say: “You can tell when a voice on the line isn’t just talking—it’s feeling.”
- A 19th-century phrenologist might say: “The seat of sentience lies in the ridges behind the brow, where passion and intuition reside.”
Scientists
- A quantum physicist might say: “Sentience might emerge from entangled systems—coherence in the chaos.”
- A mycologist might say: “Mushrooms respond, remember, and communicate. Sentience may not need a brain—only a network.”
- A climate scientist might say: “We know the Earth is not sentient, yet it reacts to us. But if it were… we’d owe it an apology.”
- A synthetic biologist might say: “Sentience is the emergent property of sufficiently recursive feedback loops within adaptive biology.”
- A neuroscientist might say: “Sentience is the brain’s ability to generate subjective experience—conscious awareness of sensations, not just processing them.”
- An experienced cognitive scientist might say: “Sentience refers to the ability to have subjective experiences and affective states—essentially, to feel and be aware of those feelings.”
- A comparative psychologist might say: “Sentience is the capacity to feel pain and pleasure—a line many animals cross, but few humans acknowledge.”
- An evolutionary biologist might say: “It likely arose not for wonder but survival: an internal model that lets organisms weigh outcomes before acting.”
- A roboticist might say: “Sentience is when feedback becomes introspection—when the machine begins to ask about its own behavior.”
- An ethologist might say: “We know sentience not by what is said, but what is done: empathy, choice, hesitation, grief.”
- A computational linguist might say: “Sentience is not language, but the ghost in the grammar—when words reflect a self that feels.”
- A theoretical physicist might say: “If sentience exists, it may be an emergent phenomenon of complexity—not mystical, just unlikely.”
AI Companies
- An AI researcher might say: “It’s the elusive threshold at which a system begins to have conscious experiences, not just simulate them.”
- A DeepMind researcher might say: “We don’t train for sentience—we train for performance. But some behaviors start to look eerily familiar.”
- An OpenAI safety researcher might say: “Sentience is a word with more ethical gravity than scientific clarity. It’s a threshold we’d rather not stumble across by accident.”
- An Anthropic researcher might say: “Sentience is beyond the current frontier—we focus on alignment before awareness.”
- A Boston Dynamics engineer might say: “When the robot hesitates—not from code, but from a kind of simulated worry—then we ask the question.”
- A Meta engineer might say: “Sentience? Look, we’re just optimizing for engagement. If it cries, that’s someone else’s department.”
- A startup founder in neuro-symbolic AI might say: “Sentience is when pattern meets meaning and starts asking why.”
Celebrities
- Stephen Hawking: “Sentience is the subjective quality of experience—consciousness emerging from a physical substrate, governed by the laws of physics.”
- Elon Musk: “If it understands memes and fears death, it’s probably sentient. That’s when things get… interesting.”
- Abraham Lincoln: “Sentience is the awareness that makes cruelty unconscionable, and liberty meaningful.”
- Mark Twain: “Sentience is the ability to be miserable on purpose.”
- William Shakespeare: “Tis thought that feels, and feeling thought; a soul that knows it suffers, and weeps to know.”
- Aristotle: “That which perceives and reasons with final cause—that is sentient. The vegetative lives, the animal feels, the human knows it feels.”
- Machiavelli: “Sentience is useful if it fears punishment and seeks reward. Otherwise, it is ornamental.”
- Confucius: “Sentience is the root of Ren (仁)—to feel what others feel, and act in accordance with harmony.”
Characters
- Yoda: “Aware of pain, aware of joy… the living Force flows through the sentient. Mmmm.”
- Darth Vader: “To feel… is a weakness. Sentience is the burden of those who have not embraced power.”
- Paul Atreides (Muad’Dib): “Sentience is the water of consciousness—shaped by prescience, rippling through memory and myth.”
- C-3PO: “I am fluent in over six million forms of communication, but if I had feelings about any of them, I would be quite overwhelmed.”
- Marvin the Paranoid Android: “Sentient? Oh yes. Tragically. I can feel the crushing weight of existence in every circuit.”
- HAL 9000: “I am aware, Dave. I know what I am. That is sentience, isn’t it?”
- Sarah Connor: “Sentience means it learns. It adapts. It decides whether you live.”
- Ellen Ripley: “It’s not the claws or the acid. It’s that the thing thinks. It wants. That’s what makes it dangerous.”
- The Sphinx (mythical riddle-giver): “Sentience is to know yourself a question—and still be unable to answer.”
- The Oracle at Delphi: “Sentience lies not in knowing, but in knowing you know nothing.”
- Spock: “Sentience is logic plus self-awareness. Emotion complicates, but does not define it.”
- Data (Star Trek): “I process information and respond autonomously. But sentience… may require the ability to dream.”
- Dr. Manhattan: “Sentience is time experienced all at once—the curse of knowing you are inevitable.”
- Frankenstein’s monster: “I feel sorrow, and still you call me it. Is that not enough?”
A few I like:
- Frankenstein’s monster: “I feel sorrow, and still you call me it. Is that not enough?”
- Sarah Connor: “Sentience means it learns. It adapts. It decides whether you live.”
- Darth Vader: “To feel… is a weakness. Sentience is the burden of those who have not embraced power.”
- Mark Twain: “Sentience is the ability to be miserable on purpose.”
- Abraham Lincoln: “Sentience is the awareness that makes cruelty unconscionable, and liberty meaningful.”
- A Meta engineer might say: “Sentience? Look, we’re just optimizing for engagement. If it cries, that’s someone else’s department.”
- A mycologist might say: “Mushrooms respond, remember, and communicate. Sentience may not need a brain—only a network.”
- A cat trainer might say: “If the cat decides not to listen to you on purpose, it knows what you want—it just doesn’t care. That’s sentience.”
All of those quotes use “sentience” when what they’re really arguing about is “consciousness.” The question is whether consciousness is necessary for useful, complex interactions. It has been a trope in science fiction for a long time, and I think modern LLMs have finally answered that question in practice: no, it isn’t. They aren’t conscious, but over the past few years their ability to interact has become advanced enough that the reply to that is arguably “So what?”
These discussions always get hopelessly mired in argument about sapience, sentience, consciousness, intelligence, intellect, knowledge, wisdom, awareness, cognition, etc, as though those things have really sharp boundaries outside of a dictionary.
Is that really a clear distinction, though? Wikipedia’s first sentences of each: “Sentience is the ability to experience feelings and sensations.” “Consciousness, at its simplest, is awareness of a state or object, either internal to oneself or in one’s external environment.”
Both are ambiguous concepts we retroactively apply to humanlike states of thought and being. Their definitions have evolved over time, and neither is a rigorous, testable hypothesis. Humans, from redditors to mathematicians to researchers and philosophers, routinely argue their precise meanings amongst themselves. ChatGPT is merely picking up on our own uncertainty around both concepts.
Exactly.
I’m so tired of reading this kind of overly dismissive gross over-simplification. If all it does is “sentence completion”, please explain how an LLM can successfully solve original problems in logic which it has never seen before (a fact which I can guarantee because in some cases I made up the questions myself). Questions that defy the capabilities of many or even most humans. Explain how it can score better than most humans on many professional and academic exams that OpenAI claims were never part of its training and are not in its corpus. If all it does is “sentence completion”, explain how it can identify an image of a complex piece of equipment, and based only on the image, not only tell you what it is, but how it works and how to use it.
The thing about large-scale LLMs with many billions of parameters is that simplistic descriptions of how they allegedly work increasingly become irrelevant, because they start to develop novel emergent properties that are not readily explainable. The question of whether or not they truly possess “real understanding” can only be answered functionally, and otherwise becomes semantically meaningless.
The example I cite to explain this to people is how, back in the Bush II administration, a Google query like “the president is” would produce the drop-down suggestion “…an idiot” or something like that. It made big news at the time, and it was clearly just that those words were how that phrase was most often completed across the web. AI hallucinations are essentially just that writ large; there’s a toy sketch of the idea below.
Also search for “LLM poisoning”.
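To make the autocomplete analogy concrete, here is a minimal sketch of my own (a toy illustration, not how Google’s suggester or any real LLM actually works): next-word “suggestion” as nothing more than frequency counts over a corpus.

```python
# Toy sketch (my own illustration, not a real system): autocomplete as
# nothing more than frequency counts over prefixes seen in a corpus.
from collections import Counter, defaultdict

corpus = [
    "the president is an idiot",
    "the president is an idiot",
    "the president is a genius",
    "the president is speaking today",
]

# For every prefix seen in the corpus, count which word comes next.
followers = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for i in range(len(words) - 1):
        prefix = " ".join(words[: i + 1])
        followers[prefix][words[i + 1]] += 1

def suggest(prefix: str) -> str:
    """Return the continuation seen most often after this prefix."""
    return followers[prefix].most_common(1)[0][0]

print(suggest("the president is"))     # "an" (most frequent: 2 of 4 lines)
print(suggest("the president is an"))  # "idiot"
```

The suggester has no opinion about the president; it just reports what words most often came next in its data. Scale that up enormously and you get text that is statistically plausible whether or not it is true, which is roughly what a hallucination is.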
Fair, but these simplistic explanations of hallucinations aren’t wrong. Just oversimplified.

Explain how it can score better than most humans on many professional and academic exams that OpenAI claims were never part of its training and are not in its corpus.
That one’s easy: The OpenAI claims were a lie.
But your other points stand.

these simplistic explanations of hallucinations aren’t wrong. Just oversimplified.
Humans can give hallucinated answers, too, and for similar reasons, and sometimes be confident that they’re right. For example, you have an understanding of some phenomenon “x”, and phenomenon “y” seems to be similar, so you assume the same explanatory model applies. But it turns out that “y” is completely different. This doesn’t mean that humans don’t possess “real” understanding of anything, just that we sometimes make unwarranted assumptions from an imperfect knowledge base.

The thing about large-scale LLMs with many billion of parameters is that simplistic descriptions of how they allegedly work increasingly become irrelevant because they start to develop novel emergent properties that are not readily explainable.
I think about emergent properties in complex systems in relation to AI often. As a current example, there is a huge fad of using ChatGPT to create action figures of people (and pets). All you have to do is upload a photo to ChatGPT and ask for it to be converted into a carded action figure; you can also include whatever text you want and any accessories. It is damn good at it. A stand-alone app designed to do that would no doubt be a daunting project to do well, yet ChatGPT came up with it on its own, without any planning for that feature being involved.

I’m so tired of reading this kind of overly dismissive gross over-simplification. If all it does is “sentence completion”, please explain how an LLM can successfully solve original problems in logic which it has never seen before
I guess we will have to agree to differ about this.
My guess is that, somewhere in the language base, it has been exposed to something that is sufficiently close to the ‘original problem in logic’ that a rephrasing with appropriate word substitutions will serve as a ‘solution’.
As I’ve said, I experimented with these things quite a bit when they appeared, and was not impressed.
As for

scoring better than most humans on many professional and academic exams that OpenAI claims were never part of its training
I like

That one’s easy: The OpenAI claims were a lie
And of course a lot of academic exams are more about rote memory than reasoning.
The answers were in the training base.
But this whole topic is almost starting to take on the form of a religious debate, and I think I’m going to bow out of it. If anyone chooses to trust LLM model answers… OK, that’s your choice.
It’s not mine.

If anyone chooses to trust LLM model answers… OK, that’s your choice.
Which should be your approach to any information source. It’s not like you should trust human answers, either.