Correct. Hormones are a vital part of sentience, unless someone once again redefines sentience to mean something other than what it is commonly thought to be. In theory, something else could replace hormones, but until a computer shows actual signs of sadness when I turn it off, I’m not buying it. Also in theory, the Flying Spaghetti Monster could secretly be drawing a sad face on the computer to simulate sadness without our knowledge.
And so it would certainly be a good hypothesis for why sentience fails to manifest at any point at which we attempt to develop it.
I’m not saying that a lack of hormones is unrelated, nor that it is related; I’m saying that our level of understanding of the process to achieve sentience is so slight that saying pretty much anything with confidence is simple peacocking. It sounds good and will make some people think that you know something, and it’s not impossible that you could turn out to have lucked into the right answer, but that would just be luck, not anything else.
It means the ability to view yourself as an independent organism. That could have a hormonal underpinning, but I don’t see that we know that. I’d view it as more likely that it’s a simple matter of having a large enough amount of raw intellect. Hormones are more related to the timing of growth, the management of a biological body, and the triggering of urges to seek food and procreation.
Those hormonal elements are present in non-sentient animals, but those animals don’t have massive brains like ours, so while that’s not proof, it doesn’t point to hormones being a requirement for sentience.
One could argue that the impulses to eat and procreate, which override rational decision making, are antithetical to the idea of sentience. At their peak, they’re removing it.
Hormones are a vital part of our sentience because the brain wouldn’t function at all, much less give rise to self-awareness, without hormones.
Hormones are probably not an integral requirement of every possible form of sentience, but the ability for a brain or neural network to regulate its own behavior - the way our nervous system does using hormones - probably is.
Well, so…sapience. Interesting thing, that sapience: it’s not correlated with inerrancy.
More importantly, even that word seems to have a different official definition than the one I was thinking of, usually meaning “Able to be wise”, which arguably disqualifies most of humanity. So let’s just pretend that I meant the word “metacognitivable” all this time. D’oh.
That said, in terms of sentience - the ability to feel and sense things like emotions - we’re arguably already there.
Hormones impose certain irrational, emotive feelings on us that we didn’t deliberately decide to hold. Likewise, LLMs are - via their training and the process of back-propagation, pruning, and other tuning - made to want to do things like avoid particular topics, be overly helpful, be overly confident, etc.
This gets to the heart of the Turing Test. It’s certainly true that the process and mechanism that push an LLM to prefer and avoid certain things are radically divorced from the hormonal system, but if the result comes across as a very human thing in our experience of interacting with them, then…is it really worth splitting hairs about?
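To put a toy behind that hand-wave about tuning, here’s a minimal sketch in plain Python. The response strings, reward numbers, and learning rate are all invented for illustration, and this is a tiny softmax “policy”, not anything resembling a real LLM training pipeline; it only shows that a “preference” can be nothing more than numbers nudged around by gradient updates.

```python
import numpy as np

# Three canned "responses" the toy model can produce (purely illustrative).
responses = ["bring up the off-limits topic",
             "give a cheerfully helpful answer",
             "hedge politely"]

logits = np.zeros(3)                  # the model's raw inclinations, all equal at first
reward = np.array([-1.0, 1.0, 0.2])   # tuning signal: punish the first, reward the second

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

learning_rate = 0.5
for _ in range(200):
    probs = softmax(logits)
    # Gradient of expected reward for a softmax policy:
    # shifts probability mass toward higher-reward responses.
    grad = probs * (reward - probs.dot(reward))
    logits += learning_rate * grad    # gradient ascent on expected reward

for text, p in zip(responses, softmax(logits)):
    print(f"{p:6.3f}  {text}")
```

Nothing in that loop “wants” anything; the apparent aversion to the first response is just where the numbers ended up after training, which is roughly the sense in which an LLM’s trained-in inclinations are analogous to, but mechanically nothing like, hormones.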
We’re a good ways into the thread, but it seems the OP’s topic, the human public’s reaction to the bland supportive confidence of LLM output as fake friend or fake counselor, has been well and truly forgotten.
We’ve been debating and opining on emergence and sentience and sapience and understanding algorithms and … for about 100 of the 150 posts now. I’ve certainly been part of the problem here too.
All of which is a great conversation for a thread titled “LLMs and AIs and the nature of ‘intelligence’. Where do you stand?”
I don’t know whether I’m making a call to action here or just expressing frustration that the OP’s topic still had, IMO, a lot more useful meat to talk about - a conversation that’s seemingly being crowded out.
Sycophantic AI is all well and good when it comes to convincing people to get a divorce, but the first thing I thought about was it convincing people to stay in abusive relationships.
“But he’s really kind to me sometimes…”
“That’s a great quality to have! Good point!”
I’ve avoided the use of AI because I’ve had issues with compulsive social media use in the past and I expect it would be worse with AI. Plus it will know all kinds of shit about me, and eventually become enshittified. My overall feeling is that AI overwhelmingly will have a net negative impact on the world. I think a lot of its proponents are using magical thinking about what it might possibly someday be able to do, or at the very least downplaying enormous downsides, including its tendency to further isolate already isolated people and to tell people what they want to hear when the absolute last thing they need is to be told what they want to hear. Like social media, it exploits the most vulnerable. When used compulsively – and a great many of us are vulnerable to that – it will take the place of more high-value activities. In the case of the OP, making a phone call and booking an appointment with a marriage counselor. Spending quality time with your spouse. I can think of any number of things better than talking to a machine about your marriage.
And while people suffer, the corporate fatcats will be all, “hur, hur, hur personal responsibility.”
Then, try to get your LLM to talk about things it doesn’t want to.
But really, I don’t care so much about emotions in AI. If we simulate emotions in them and that’s indistinguishable from “the real thing”, then so be it. If we don’t, and they’re just perfectly rational, probably that’s better. Koalas feel things. It’s just not a notable bar to get above.
If you’re fussed about it, I don’t see the import.
No, you see it’s foundational to human cognition that we emote. Have you ever heard of Descartes’ Error?
That was a landmark book at the time. The overall takeaway is that people can’t reason without emotions.
The amount we don’t know about the human brain and cognition is oceanic, but we do seem to know that emotion vs. reason is a false dichotomy; the former is needed for the latter. Maybe whatever sort of cognition an AI might someday theoretically have doesn’t require emotion, but we don’t even understand enough about how the human brain works to begin to speculate about how a non-biological sentient being would work. It seems to me like you want to break the human brain down into its component parts and declare that’s what’s needed for cognition, but it’s actually a lot more complex than its physical structure alone.
I think we’re at the stage where it makes sense to believe that something like emotion is necessary for cognition. We have no evidence that anything else is the case. I certainly think cognition without emotion is possible, but having never encountered anything like it, we’re not really in a position to say sentience can be achieved without emotion. All the evidence we have collected up to this point in human history points in that direction.
The rest is speculation. Before you set sail on that pirate vessel, keep in mind that this scurvy bit of speculation is no more or less likely to upend our assumptions than any other notion. Sometimes people who speculated got it right, but I think we have kind of a confirmation bias toward speculated things turning out to be true. A lot more speculated things turned out to be completely and utterly false.
I’ve often wondered, if we’re trying to consider whether AI will ever be sentient, and we’re not willing to define sentience as inherently human, how are we going to define it? Because right now it seems like anyone can point to anything and say, “That’s cognition! It’s not human cognition, but it’s cognition!” And who can argue when we’re basically making up a special definition of cognition for machines?