I’m not sure who “we” is, but please don’t put me in that group.
Whether you want to be there or not has little bearing.
Magic isn’t real. So, in a non-magical world, we have to assume a mechanical underpinning to intelligence and sentience.
LLMs might not have some structural system that links neurons together in just the right way to allow for sentience. But I’m not aware of any reason to think that there is such a structure, nor any reason to think that an LLM couldn’t have it.
So while it may be that such an experiment wouldn’t succeed, until the point that the experiment is done, hypothesizing a failure would be contrary to what we know.
Like, in the science of the 1500s, you give people food and they survive. From everything you know, then, if you could keep a supply of food unspoiled indefinitely then people should be able to survive indefinitely at sea.
You run that test and, perhaps, you discover scurvy. But, up until that moment, the prevailing theory would and should be that food = survival. It’s not guaranteed to prove out, but unless you know what to point to as the flaw, presuming a flaw is faith not fact.
Of course it does.
You said “we’d expect…”.
I’m saying “I would not expect”.
Pretty simple, actually.
An individual might expect to lift off in a rocket ship and see the Earth below as a flat disk on the back of a turtle, floating through space. That has no bearing on what we’d expect him to see.
What?
The level of evidence we have that food is critical to survival is several orders of magnitude greater than the evidence that some non-biological network could gain sentience (as far as I know, this has never happened?). How could we rationally assume that’s the most likely outcome?
This isn’t true. I am focusing on LLMs because that is the example that @wolfpup was using, but even chatbots using multi-modal models are still built on top of LLMs and share the same capabilities and fundamental limitations, particularly the inability to distinguish between factual and non-factual responses and the tendency to ‘hallucinate’ erroneous responses in lieu of acknowledging a limitation of their knowledge domain.
I don’t think anybody would seriously question the ‘emergence’ of unexpected and in some ways novel capabilities as the complexity of LLMs has increased with the size of the training data and the scaling of parameters, although the true enabler of these systems’ ability to process and generate natural language at all is really the transformer architecture and its feedforward/backpropagation mechanisms. But the question remains where these ‘emergent’ abilities come from; is it just some mysterious natural order achieved by meeting a threshold of parameters and the sophisticated way an LLM can tokenize data and make associations? If it were, then we would expect coherence and quasi-deterministic behavior to emerge spontaneously from the system, but in fact they don’t; without an enormous amount of data selection and reinforcement training, as well as the imposition of attention mechanisms, these systems almost immediately start going off the rails and produce gibberish, and even with those methods they still ‘hallucinate’ and confidently produce factually incorrect information even when the training data is curated to ensure that it isn’t being presented with false and misleading information.
In fact, all of the ‘reasoning’ capability (yes, including being able to solve math problems) comes from the very sophisticated statistical word association from which ‘rules’ emerge about not only the grammar and syntax of language but also the ways in which it is used in the training data. This really isn’t a novel concept for computational linguists, nor is the use of language a requirement for ‘intelligence’ (at least, as defined by an ability to adapt behavior patterns to environmental or other stimuli). And as any cognitive neuroscientist will explain, the human brain (and that of other animals) does not use backpropagation to learn information or require discrete attentional mechanisms to contextualize information. If LLMs were becoming sapient they would not be doing so in a manner similar to how animals do, but in fact there is no real evidence of actual sapience at all; just a simulacrum of it based upon competently generating natural language responses.
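To make the ‘statistical word association’ point concrete, here is a deliberately tiny sketch (a toy bigram model with a made-up corpus, nowhere near a real transformer in scale or mechanism, and purely for illustration). The only thing it ‘learns’ is how often words follow other words, and yet it still produces language-shaped continuations:

```python
import random
from collections import defaultdict, Counter

# A made-up miniature corpus; real models train on trillions of tokens.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count bigram frequencies: how often each word follows another.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    followers = counts[word]
    return random.choices(list(followers), weights=list(followers.values()))[0]

# Generate a 'sentence' by repeatedly applying the association table.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the rug . the dog"
```

A real LLM conditions on the whole preceding context through attention rather than just the previous word, but the training objective is the same kind of conditional next-token statistics, just at vastly greater scale.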
I’m not going to go into the entire history of Sam Altman and OpenAI here, but suffice it to say that while he’s very, very good at selling OpenAI as a benevolent cultivator of artificial intelligence and himself as an altruistic patron, he has a very sketchy history of deception, self-dealing, and flagrantly lying and talking out of both sides of his mouth to tell people what they want to hear, all while shielding actual development from public view and dismantling the existing OpenAI safety board so it could be replaced by a more compliant and profit-focused one. OpenAI as a ‘non-profit’ entity overseeing the actual development of LLMs and AI tools will essentially be a sock puppet when and if the SoftBank and Microsoft deals go through; OpenAI is very much focused on becoming the most powerful and highest-valued company in the AI marketplace, with Sam Altman as its titular king.
If you want to look for an ethical AI developer, I guess you can probably look to Anthropic, which is at least structured as an actual public benefit corporation and develops tools to actually evaluate LLMs instead of just hyping them, although I have my skepticism about their ability or willingness to put constraints on Claude and other AI systems when questions of profitability are at stake. But Anthropic is very much a minority player, destined to be marginalized in the race to commercialize these systems whether or not there is even a way to validate their safety and reliability.
Getting back to the original topic, although ChatGPT and other current chatbot systems are vastly more sophisticated than ELIZA, which was basically just a text parsing system with no kind of emergent capacity, the perception of their sapience and ability to make emotional connections rests on the same human proclivity to ‘find’ patterns and see intentionality where none exists that caused people to form one-sided relationships with their PDP-11 terminals. Chatbots are more convincing because of their sophistication in the use of language and also because of their training data, which doubtless contains a vast array of examples of manipulative language, and responses with a pseudo-romantic tenor or false empathy are among the best ways to maximize user engagement, which is the real objective of any chatbot by design.
That this would impact real-world relationships in an era where loneliness is found to be so pervasive as to be described as a mental health epidemic, and where social media has already conditioned people to view online ‘relationships’ as more open and genuine than those with real human beings, is hardly surprising, and indeed is essentially a logical progression. This doesn’t mean that those in-person relationships don’t already have problems (perhaps fundamental ones) that these systems are just exacerbating, but as an outlet that is a supplicant and that can mirror the user’s ideas and worldviews without imposing any judgement or contrary notions of its own, chatbots are kind of a perfect enabler of the most narcissistic behaviors and emotional discord possible, and because most users are not sophisticated in their understanding of how these systems work, they are easily fooled into believing in the magical friend always available to them whenever they pick up their phone.
Stranger
The theory was that any particular supply of “food” that doesn’t spoil will allow you to survive indefinitely. While that’s true in an abstract sense, in a more specific sense it was disproven when we started sailing. “Food” isn’t just some arbitrary collection that can be packaged up in a single word. You need to separate it out into subgroups and make a purposeful selection if you want to have a set that you can survive off of indefinitely.
Likewise, we currently have no reason to think that sentience is anything but “lots of neurons + exposure to a rich environment”. Any attempt to actually test that may prove that there’s something more to it. At the moment, though, I’m not aware of anything but an emotional sense that there must be some “soul” or “magic” that sits on top of that to get to sentience.
As said, that would be a matter of faith, not science.
But again, saying that the science isn’t in doesn’t advance any outcome as being more likely. I can say that I have skepticism that you’ll survive on an infinite supply of air-tight jars of olive oil, in 1500. But, without the knowledge of vitamins, to some extent that’s just recognizing the limits of our knowledge, not of knowing that there really is some true flaw in the attempt.
I would not be, in the slightest, surprised to discover that LLMs can’t become sentient. But, until we have a reason to think that they’re anything but a silicon recreation of a human brain, we have no reason to think they can’t become sentient, given the right combination of quantity, structure, and exposure to a real and complex world.
You just described some of my human (as far as I know) in-laws.
I can’t parse what you mean by “natural order” in this context, so I’m fairly certain that I’ve never argued that AI behavior does stem from a “mysterious natural order”.
Did Wolfpup say anything about sapience? I certainly didn’t. To me, what is interesting about the Chinese Room thought experiment and about ChatGPT is the fact that it displays reasoning and problem solving capabilities that one would traditionally associate with a sapient, thinking mind without actually possessing one of those.
Note that I should probably specify something more like convolutional neural networks, but close enough.
Hmmm. Why do you argue that that should be our null hypothesis, which we should automatically default to because we “have no reason” to think otherwise? It certainly doesn’t seem to align well with what we know about the only situation where actual sentience did develop.
Namely, the development of sentience in some living organisms—humans at least, with varying opinions on which other complex-brained animals qualify for that category—took a lot more than just neurons and a vaguely defined “rich” environment. There was a tremendously long span of tremendously complicated evolutionary pressures upon a combination of very diverse organic systems interacting in lots of chemically diverse ways.
Why should we assume by default that the far more restricted developmental pathways of LLM software running on silicon chips should be expected to produce a similar result in terms of sentience? You don’t have to have any kind of supernatural belief in a “soul” or “spirit” or noncorporeal “mind” in order to find that assumption rather unconvincing.
I think you might be having a bit of “brain-in-a-vat” mindset going there, where we focus so much on the fact that thinking occurs in the electrical impulses of the networks of the brain that we jump to the conclusion that having a brain-like network of electrical impulses is all that we’ll need, in terms of a “recreation of a human brain”, to produce thinking.
In short: While I incline to agree with you that the emergence of true sentience in some kind of AI at some point is conceptually possible, I disagree with you that nobody could have any reason to think otherwise unless they were entertaining magical beliefs about the supercorporeal nature of thought or consciousness. ISTM that there are plenty of potential reasons for even the most hardcore rationalist materialist to think otherwise about that hypothesis.
Seriously?
Do you understand that we know for a fact the shape of the Earth? I read books, too, but I understand that they are fiction.
On the other hand, we (and quite frankly this particular we includes you) don’t know for a fact that LLMs (with all of the sensors your heart desires) will become something like us, minus hormones. I highly doubt it, but they might. It’s absolutely not a given no matter how many times you want to pretend it is and I’ve yet to see an actual LLM expert make such a claim.
Either I’m misunderstanding you (certainly possible, but you’re not helping) or your posts are so far out there that the TimeCube guy is starting to sound rational.
Let’s say that 10% (made-up number) of people will become alcoholics at some point in their life. At age 12, basically everyone will say, “Not me. I won’t drink. I won’t become an alcoholic.” But the fact is, 10% will. The vast majority - 90% - at the moment of their death, will prove to have been right about the alcoholism thing and wrong on the not drinking thing.
When they said that they couldn’t become an alcoholic, they were wrong. They had a strong probability of not becoming one, but to say that it won’t happen is wrong, since it genuinely could have happened (even if the chance was low).
Likewise, if you were to say that George Clooney can’t be behind that there door, then you are wrong. He probably isn’t, and if we go over and test it, we shouldn’t be surprised to not find him. But you should expect that he could be there because, factually, we have no reason to think that he couldn’t be.
More importantly, when we open the door and don’t find Clooney, that’s not evidence that he can’t have been there. Clooney is a moving target. He’s not an immovable stone. So, sure, he wasn’t there right now and he’ll probably never be there behind the door, but until the moment that we have a way to demonstrate that he can’t be there, you’re just blowing hot air every time that you announce the impossibility.
Similarly, if you show me an ant and say, “This has neurons and exposure to a rich environment and didn’t become sentient.” Well, yes, that’s true. But no one said that it always happens nor that it happens easily. But in the specific case of humans we do know that it did happen and it’s just being foolhardy to point at ants and say, “So it couldn’t happen with this other thing either.”
It probably won’t. But if you decide that that’s true, and suddenly it happens, then you’re not going to be prepared.
So in particular, if we know that it could perfectly model an ant, and it’s already spookily good at accomplishing tasks that, previously, only human intellect could accomplish, despite living in a vacuum and only having been fed a diet that is 99% second-hand writings on life, and we don’t have any true proof that it couldn’t happen, we really should be prepared for the eventuality.
That’s precisely the definition of “weak emergence” as described by David Chalmers. He contrasts this with “strong emergence” where “the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are not deducible even in principle from truths in the low-level domain”. In weak emergence, “the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are unexpected given the principles governing the low-level domain”. That is, nascent signs of the emergent phenomenon might, in hindsight, be found in the low-level domain, but would not normally be expected to lead to a prediction of the full-blown emergent capability.
Chalmers also notes, correctly, that strong emergence is the type most frequently discussed in philosophy (and, I would say, usually dismissively) while weak emergence is the type most often invoked in scientific discussions of complex systems theory (where it’s usually just called “emergence”).

If your hypothesis is that LLMs improve gradually as the corpus grows in size, I don’t have any disagreement, but I’m not going to start calling it emergent behavior, nor are the authors of the paper that claims LLMs have emergent abilities.
It’s not just the size of the corpus, but the scale of the whole thing, most notably the size of the neural net and the number of parameters, which in GPT-5 is rumoured to be in the range of about 1.5 trillion. And your use of the word “gradually” here is inappropriate as the question of whether certain capabilities that arise with scale are gradual or discontinuous is at the heart of the entire controversy.
And I don’t know how you can claim that the authors of the paper believe that all LLM capabilities always arise gradually (if I’m parsing your sentence correctly) when the entire point of the paper is that sometimes they don’t, and the emergence is discontinuous and appears to arise suddenly, and in fact they engage in speculation about why this might be so.
Furthermore, the authors are not ignorant of the hypothesis that emergent capabilities are all a mirage caused by evaluation metrics, which was the basis of the subsequent rebuttal to the paper. They explicitly mention it, but don’t believe it’s an adequate explanation.

I should note that your ChatGPT response with the nuanced view is on the same page as me.
How is this possible when it’s been claimed, right here in this thread and also in many others, that ChatGPT knows nothing, understands nothing, and is usually wrong about everything?
Seriously, I think the GPT response I put in that “Summary” box is a really excellent summation of the whole issue, and I fully agree with it. I’m glad you do, too.
Just to be absolutely clear, since you spent a lot of words telling me how LLMs improve (again, I’m pretty familiar with them) and then veering off into Chalmers-land, is the below your stance on LLMs’ abilities at this point in time?
- If by “emergent properties” you mean literal discontinuities in capability: the evidence suggests no, that’s mostly a measurement mirage.
- If you mean qualitatively new behaviors that appear when quantitative changes cross certain thresholds: yes, those are real in a functional sense, but they arise from continuous internal improvement.
Yes or no, please.
If yes, then we’re in agreement. I’ll think you’re prone to sensationalism, but otherwise, sure. That’s not how “emergent properties” is used in the Google Research paper on emergent abilities, but sure.
I already said I agree with the GPT assessment of the issue, so, yes. I also said, in the immediately preceding post:

… nascent signs of the emergent phenomenon might, in hindsight, be found in the low-level domain, but would not normally be expected to lead to a prediction of the full-blown emergent capability.
So, to fully qualify my view on this, I’ve long been a strong believer in the principle that a sufficiently large quantitative change in the scale of a complex system can lead to qualitative changes in behaviour. This is at the core of what we call emergence. To me this is so obvious that it needs no further argument, but there are millions of examples, from computational technologies to biological brains.
The question of whether these improvements are continuous or not is relatively unimportant, particularly because they may remain latent and unobserved until a suitable level of scale is achieved, so the continuous/discontinuous distinction is something of a red herring.
Too late to edit, but for added clarity, I just want to comment on this:
If you mean qualitatively new behaviors that appear when quantitative changes cross certain thresholds: yes, those are real in a functional sense, but they arise from continuous internal improvement.
Superficially, one might sense a contradiction here. If “qualitatively new behaviors … appear when quantitative changes cross certain thresholds” (a statement which we all seem to agree with) then how can this be consistent with “continuous internal improvement”?
The answer is that these “continuous internal improvements” may be nascent and unobserved. An LLM at some level may fail to solve a certain class of logic test. At a higher level of scale it may solve a few but mostly gets them wrong. At a still higher level it solves a lot of them, but still gets many wrong, leading to the inevitable mockery from skeptics that it may superficially appear to know what it’s doing, but really doesn’t. And then, at a still greater level of scale, suddenly, bang! It gets them all right, and aces IQ tests! That “bang!” moment is what’s important.
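To put toy numbers on that ‘bang!’ moment (the figures below are entirely made up, not measurements from any real model): suppose per-step accuracy on a multi-step problem improves smoothly with scale, but the benchmark only gives credit when every step is right. The underlying improvement is continuous, yet the scored capability appears to switch on all at once:

```python
# Hypothetical numbers for illustration only.
per_step_accuracy = [0.50, 0.60, 0.70, 0.80, 0.90, 0.95, 0.99]  # improves smoothly with scale
steps_required = 10  # the task only counts as solved if all ten steps are right

for p in per_step_accuracy:
    whole_task_success = p ** steps_required  # chance that every step is correct
    print(f"per-step accuracy {p:.2f} -> whole-task success {whole_task_success:.3f}")

# Output: 0.001, 0.006, 0.028, 0.107, 0.349, 0.599, 0.904 - the underlying curve
# is smooth, but the exact-match score looks like it suddenly switches on.
```

This is essentially the arithmetic behind the ‘mirage’ debate: whether you see a discontinuity depends a great deal on whether you measure the latent, continuously improving competence or only the all-or-nothing final answer.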
Here’s emergence described a different way. The cerebral cortex of a cat has approximately 250 million neurons (0.25 billion). The human cerebral cortex has roughly two orders of magnitude more, maybe around 25 billion. The basic biology is the same. The intelligence level (usually) is quite different!

Here’s emergence described a different way. The cerebral cortex of a cat has approximately 250 million neurons (0.25 billion). The human cerebral cortex has roughly two orders of magnitude more, maybe around 25 billion. The basic biology is the same. The intelligence level (usually) is quite different!
This seems like a less good illustration of emergence. Yes, intelligence/sentience in the human brain is an emergent property. But no, comparing it to the feline brain doesn’t demonstrate that. Human and feline brains are not at all the same system. Unless you’re presenting them merely as stages in some very loosely identified “system” of mammalian brain evolution in general, which I don’t think is very persuasive.

On the other hand, we (and quite frankly this particular we includes you) don’t know for a fact that LLMs (with all of the sensors your heart desires) will become something like us, minus hormones.
To add to this… My understanding is that human hormones are pretty critical to brain function, too. I wouldn’t be so sure that removing them is a trivial difference. Something without hormones would arguably be nothing like us.

And then, at a still greater level of scale, suddenly, bang! It gets them all right, and aces IQ tests! That “bang!” moment is what’s important.
I’m still learning about all this.
How do you know that’s a continuous internal improvement and not an emergent property? Has this ever happened before?

Something without hormones would arguably be nothing like us.
Certainly possible. But 1) “sentient” and “like us” are different metrics, and 2) since we are its only peers, far outnumber it, are its only source of information about how to do things, and get to choose the training protocol, there’s not a bad chance that it would act in a way that’s derived from us, inheriting behaviors that were influenced by hormones. Granted, a half-wolf dog could still go feral and hurt someone. I’m not saying that we can know what the impact would be of a creature lacking hormones, I’m just saying that it’s not a given that it would behave any differently than our current LLMs, just more “sentient”.
But, unless we believe that hormones are an element of sentience, then we don’t currently believe that a lack of hormones is a blocker for sentience.

To add to this… My understanding is that human hormones are pretty critical to brain function, too. I wouldn’t be so sure that removing them is a trivial difference. Something without hormones would arguably be nothing like us.
Take a step back and think about what hormones do, when it comes to brain function.
Neurons interact with one another based on the structure of the neurons in the brain - the neural pathways, where each node connects to, how long the distances between connections are, etc. - and these are all physical properties of the neuron that impact how it behaves and thus the emergent mind that forms from those interactions.
Hormones impact how the neurons behave. They might make some neurons more excitable and others less, and a host of other changes that I doubt I am equipped to communicate properly or that I have a remotely full understanding of. For example, fight-or-flight is associated with a hormone called norepinephrine; without getting into technicalities I barely understand, I think it’s clear that norepinephrine and hormones like it change the way that neurons behave in ways that directly give rise to the altered state of consciousness that occurs in fight-or-flight mode.
So in order for a non-biological intelligence to act similarly to humans, one would expect that they possess a mechanism that’s functionally similar if not mechanically so, by which I mean: a network of virtual neurons could achieve a similar effect with virtual hormones - changing the way that virtual neurons interact with one another.
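As a purely illustrative sketch of that last point (everything below is invented for the example and is not a description of how any real system implements this), a ‘virtual hormone’ could be modelled as a global gain term that scales how strongly a layer of virtual neurons responds to its inputs, roughly the way a neuromodulator broadly shifts excitability:

```python
import numpy as np

def layer_response(inputs, weights, hormone_gain=1.0):
    """A toy layer of 'virtual neurons' whose overall excitability is scaled
    by a global 'virtual hormone' level (a hypothetical construct)."""
    drive = weights @ inputs                 # each neuron sums its weighted inputs
    return np.tanh(hormone_gain * drive)     # the hormone scales how strongly it responds

rng = np.random.default_rng(0)
inputs = rng.normal(size=4)        # some arbitrary input signal
weights = rng.normal(size=(3, 4))  # fixed 'wiring' from 4 inputs to 3 neurons

calm = layer_response(inputs, weights, hormone_gain=0.5)   # low arousal
alert = layer_response(inputs, weights, hormone_gain=2.0)  # high arousal, fight-or-flight-ish
print(calm)
print(alert)  # same wiring, same input, noticeably different responses
```

Same neurons and same connections, but the global modulatory signal changes what the network does, which is the functional role hormones play that a purely structural copy of the wiring would miss.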

But, unless we believe that hormones are an element of sentience, then we don’t currently believe that a lack of hormones is a blocker for sentience.
Of course hormones are an element of (our) sentience; hormones are integral to the way that our brain functions.