Would it be ethical to create an AI capable of feeling suffering?

Perhaps humanity’s only alternative to self-destruction is to create a superior replacement. If so, the last thing we want to do is give that creation emotions; otherwise we’ll be in the same boat we’re in now.

That thought leads me to wonder if the only super civilizations out there are actually machine races.

It’s entirely possible to emulate emotions without feeling them - humans do it all the time, with the intent to deceive and manipulate. A machine that does that is problematic and dangerous, and falling for its lies is problematic and dangerous. This remains true whether or not the machine has true emotions beyond the ones it’s faking, and it remains true whether or not we’re willing to accept that machines have true emotions.

That’s the only danger of “anthropomorphizing” machines - when the machines use this ‘weakness’ of humans against us (probably to sell us stuff). Recognizing when a machine is actually experiencing things isn’t dangerous at all - or rather it’s only dangerous in one possible sense: there are ethical issues with using emotional machines as tools. People like using machines as tools, so they will deny the possibility for as long as possible.

Of course not - which supports my position, not yours.

People ignore doctors all the time. I certainly ignore mine. If the hardware alerts intended to keep robots from destroying themselves are easily ignored, then robots will be destroying themselves left and right, because they won’t prioritize reacting to the alert signal. Instead they’ll just keep chasing the shiny thing and destroy themselves.

Self-learning will inevitably increase the intensity of the alert to ‘pain’ levels, in direct proportion to the importance of the alert to the survival of the AI. Things that will kill it? Yeah, that shit’s gonna hurt.

Let’s talk about the hardware of pain for a second. Something happens at the damage site, for example a burn - heat sufficient to damage tissue. This triggers the delivery of electrochemical signals to the brain, via wires called “nerves”. The burn itself is not delivered to the brain; a trail of fire doesn’t blast its way up the body, bursting it into flame. It’s just an encoded signal, and quite literally nothing more.

Upon arriving at the brain, this signal is decoded and triggers a sensation of pain indicative of the location, type and intensity of the damage. Somewhat. In actual fact, the location the signal indicates is sometimes misinterpreted slightly - pain might be felt in the wrong tooth. Or the type of pain - people feel hot when they’re actually cold, or vice versa. Or sometimes the pain signal isn’t decoded at all - once I had an awful, intense, untreated toothache, and at one point the pain just…stopped for a few days. And then it suddenly started up again later. And trust me, the tooth didn’t just get better for the time in the middle.

Pain is just a signal that the brain interprets. Calling it a biological thing is bullshit; it’s way too much data far too cleanly encoded and transported to be a bunch of pain acid flowing through to burn the mind directly. It’s just encoded data - an input signal.

So why does it hurt so much?

Well, we evolved that way. Hurting this much aided in our survival. Which seems odd; humans can be incapacitated by pain. How is that helpful?

Pure speculation: Pain level 5 hurts as much as it does because if it didn’t we’d let our bodies go to rot. Diabetics who lose their feet? It’s not that diabetes makes your feet fall off. It’s that diabetes numbs your extremities, so when small things happen to your feet you don’t notice, and infection sets in, and bye-bye foot. People feel small pains sharply because if they didn’t they’d ignore them and keep pursuing the shiny thing. Pain level 10, on the other hand - well, if pain level 5 has us hopping, pain level 10 has us screaming. It could just be a literal overloading of the system - it’s not designed to ‘cap’ off pain, so when it comes flowing in it overrides everything.

AIs may or may not have error alerts that can scale from 5 to 10, and they may or may not evolve ‘caps’ that make “My robot arm is creaking and needs repair” seem no more impactful than “my robot arm was torn off and needs repair”. But if they have escalating alert inputs and don’t evolve that sort of ‘impactfulness cap’, then I see pain in their future.
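
To make that concrete, here’s a toy sketch in Python - entirely my own invention; the class, field names and numbers are illustrative, not anything from real robotics - of an ‘encoded damage signal’ whose priority escalates with severity, with and without an ‘impactfulness cap’:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    """A nociception-like message: just encoded data about the damage."""
    location: str      # e.g. "left_arm_joint_3"
    kind: str          # e.g. "overheat", "actuator_torn_off"
    severity: float    # 0.0 (cosmetic) .. 1.0 (survival-threatening)

def priority(alert: Alert, capped: bool = False) -> float:
    """How strongly the alert pre-empts whatever the robot was doing.

    Uncapped, priority escalates steeply with severity, so the worst
    alerts override everything (the 'screaming' case). Capped, a creaking
    arm and a torn-off arm end up seeming about equally important.
    """
    raw = 100.0 * alert.severity ** 3   # steep escalation with severity
    return min(raw, 1.0) if capped else raw

creak = Alert("left_arm_joint_3", "overheat", severity=0.2)
torn = Alert("left_arm_joint_3", "actuator_torn_off", severity=0.95)

for capped in (False, True):
    print(f"capped={capped}: creak={priority(creak, capped):.1f}, "
          f"torn={priority(torn, capped):.1f}")
```

Without the cap, the severe alert swamps everything else - which is roughly what I mean by the signal escalating to ‘pain’ levels.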

Hopefully my use of the term “computation” isn’t causing confusion. I’m not familiar with all of the terms philosophers use for topics about brain/mind, so I might be using terms I think are synonymous that really aren’t. In my mind, “functional” and “computational” are synonymous, but googling a bit on CTM tells me that I’m probably misusing one or both terms.

So, here’s how it all seems to me; you can apply whichever terms are appropriate:
1 - Our brain biology has a network of computing units that are universal function approximators

2 - We continuously perform functions/transformations/mappings on input (external and internal) to produce output that increases chances of survival

3 - Due to the nature of our infrastructure, a very common type of function that we are very effective at performing and that can be applied to many types of problems is pattern matching

4 - Some problems benefit from tools that aren’t just pattern matching (e.g. logic). We seem to have the ability to solve these other types of problems also, and my assumption is that these capabilities are built on top of the same underlying infrastructure that is essentially universal function approximators

So, to me, it seems that things like object recognition, emotions and higher level logical analysis are all built on the same foundation. Meaning that emotions shouldn’t be singled out as something special and uncomputable any more than object recognition or any other function our system performs.

Caveat:
The above really refers to the functional calculation of the emotional state - not the conscious interpretation/feeling of that resulting state.
Obviously, the conscious feeling of the state is the problem nobody has any clue about (as Mijin was pointing out)

Thank you for answering.
But no, it does not support your position.

You were saying that physical pain is just an input signal and the unpleasantness of it comes from the interpretation that it represents a serious threat.

This is incorrect. It’s a subjective experience, like “redness” or “salty”.

If you tell someone who is experiencing a cluster headache that the condition itself is harmless, that doesn’t make the agony any less.
And, conversely, if I learn a painless condition is actually life-threatening then sure, I’ll certainly be in a bad emotional state, but not one I’d associate with the experience of physical pain.

Again, nociception is the detection and transmission of signals along the nervous system.
Pain is the subjective experience part.

Imagine you drop a bowling ball on your foot. We can describe 3 phenomena:

  1. The detection of damage to your foot and relaying that information to your brain (nociception)
  2. The unpleasant subjective experience (pain)
  3. The behavioural response; hopping around vowing never to go bowling again (response)

We understand (1) very well indeed.
We understand a great deal about (3), though with plenty more to learn.
But we don’t understand the mechanism behind (2) at all yet.

We can trivially make machines that detect sensations and respond to them in some way. But right now, no-one is claiming to have done (2) at any level.

The question of why pain exists and why some things hurt more than others is actually pretty trivial to explain.

It’s like how we can understand why our vision peaks in the green part of the spectrum and why, to us, the tiny sliver of the EM spectrum that we see appears to be very high-contrast, with light of wavelength 400nm looking nothing like light at wavelength 450nm.
That doesn’t mean we understand the physical correlates behind the conscious experience of color.

You assert quite a lot about something we don’t understand at all.

How would the behavior of a machine that could feel pain differ from one that couldn’t?

A not-so-small nitpick: the computational theory of mind is not just a term that philosophers use, it’s a term that many empirical practitioners of mainstream cognitive science use as well. And it does have a precise meaning that I’ll try to clarify below.

The problem here is that you’re using terms like “computing units” and “universal function approximators” that are so vague and ill-defined that they’re essentially meaningless, and so one can get led down a rosy but misleading path by making arbitrary assumptions about what those things are supposed to mean. Whereas “computational” in CTM has a rigorously defined meaning. It means that our mental models are representational in exactly the same sense that a computer stores its information in symbols, and that the mind executes syntactic operations on those symbols through procedures that endow them with semantic meaning. The appropriate model to think of here is the Turing machine. To take an example from the still somewhat controversial area of how we process mental images, it’s the question of whether the “mind’s eye” that sees a mental image involves the visual cortex or operates at an abstract representational level, as a computer would. There is substantial experimental evidence for the latter.
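
To make the contrast concrete, here is a bare-bones sketch - my own toy example in Python, not anything from the CTM literature - of what “computation” means in the Turing sense: a finite rule table operating purely syntactically on symbols written on a tape. The particular machine just appends a ‘1’ to a unary number; the point is the architecture (rules, symbols, tape), not the task:

```python
def run_turing_machine(tape, rules, state="start", head=0, blank="_", max_steps=1000):
    """Apply rules of the form (state, symbol) -> (new_state, symbol_to_write, move)."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A toy machine that appends one '1' to a unary number: 111 -> 1111.
rules = {
    ("start", "1"): ("start", "1", "R"),   # scan right across the existing 1s
    ("start", "_"): ("halt", "1", "R"),    # write a 1 in the first blank cell, halt
}
print(run_turing_machine("111", rules))    # prints 1111
```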

One of the most important implications of CTM in a context like this is that the cognitive systems it describes, by virtue of being computational, are subject to multiple realizability. The “multiple realizability” concept is the powerful implication that any such cognitive processes can be emulated by a Turing-equivalent machine, and thus a suitable digital computer.

But it’s far from obvious that this applies to all aspects of the mind, and indeed, as per the Fodor quote above, it probably does NOT apply to a great many of our mental processes.
It makes perfect sense for you to say “it seems that things like object recognition … and higher level logical analysis are all built on the same foundation”, as indeed they most probably are. But it’s inexplicable that you threw “emotions” into the middle of that. This is where I think your wishy-washy definition of “computing units” failed you. There is real empirical evidence for object recognition and “higher level logical analysis” being computational in the well-defined Turing sense, but none whatsoever to support the idea of arbitrarily throwing “emotions” in there, which is a hugely different and entirely unrelated set of mental phenomena. “No evidence whatsoever” is not the same as “impossible”, but I’m surprised at the certitude with which some are endorsing the alleged straightforward inevitability of something that we know so little about.

“Computing units”:
I was making the assumption that I could use that kind of term to avoid going on a tangent about the computational and signalling capabilities of the various components of our system (e.g. neurons, synapses, dendrites, glia, microglia, etc.).
“universal function approximators”:
Multi-layer neural networks are proven to be “universal function approximators”, and the term is commonly used. My assumption is that the network of neurons in our brain has, at minimum, the same capabilities as an artificial neural network.
Summary:
The basic point (which I thought wasn’t really something you would debate) was that it seems pretty safe to assume our brains have at least the capabilities of the artificial neural networks that mathematical proofs have shown can approximate any continuous function.
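
If it helps, here’s the kind of thing I have in mind, as a minimal numpy-only sketch (illustrative toy code, not a claim about how the brain learns): a single hidden layer of units fit to an arbitrary target function, in this case sin(x):

```python
import numpy as np

# One hidden layer of tanh units fit to y = sin(x) by plain gradient descent.
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

H = 30                                # hidden units
W1 = rng.normal(0, 1.0, (1, H))
b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1))
b2 = np.zeros(1)

lr = 0.1
for step in range(5000):
    h = np.tanh(x @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # network output
    err = pred - y
    # Backpropagate the mean-squared-error gradient.
    dW2 = h.T @ err / len(x)
    db2 = err.mean(0)
    dh = err @ W2.T * (1 - h**2)
    dW1 = x.T @ dh / len(x)
    db1 = dh.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# The error shrinks as the net fits the curve.
print("mean abs error:", np.abs(np.tanh(x @ W1 + b1) @ W2 + b2 - y).mean())
```

The universal-approximation result just says a wide enough hidden layer can drive that error arbitrarily low for any continuous target; my point is only that our neurons plausibly have at least that much capability.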

Do you disagree with this?

When one neuron inhibits its neighbor during visual edge detection, would you consider that activity to be related to “representational” and “symbolic” processing?

To me it’s just a function (input is X so output is Y) and any portion of our thinking that is “representational” or “symbolic” is built on top of that type of functional capability.
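
As a concrete example of the kind of “input is X so output is Y” function I mean (toy numbers, my own simplification of lateral inhibition, not a model of any real retinal circuit): each unit subtracts a fraction of its neighbours’ activity, and edges pop out with no symbols or representations anywhere in sight:

```python
import numpy as np

def lateral_inhibition(activity, inhibition=0.4):
    """Each unit's output = its own input minus a fraction of its neighbours' input."""
    padded = np.pad(activity, 1, mode="edge")           # repeat the end values
    return activity - inhibition * (padded[:-2] + padded[2:])

# A step in brightness: a dark region next to a bright region.
stimulus = np.array([1, 1, 1, 1, 5, 5, 5, 5], dtype=float)
print(lateral_inhibition(stimulus))
# [ 0.2  0.2  0.2 -1.4  2.6  1.   1.   1. ]  <- dip and peak right at the edge
```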

You seem hung up on “computing units” - I could have just said neurons, but that ignores some additional capabilities and complexity (e.g. dendrite-localized computation and non-linear transformations prior to forwarding the signal, etc.). I am hopeful I’m in a conversation where shorthand isn’t considered the same as being incorrect; it’s just efficient and allows for a quicker good-faith exchange of ideas.

You use the term computational “in the well-defined Turing sense”, but my usage of it is more in agreement with neuroscientist articles discussing things like the “computational” capabilities of neurons, dendrites, etc.

So maybe our primary disagreement is just that term. I thought my explanation with my 4 numbered points might clarify my position but it seems like it didn’t.
Regarding including emotions in that list:
1 - The amygdala (and other structures/circuits) is a key part of computing emotional states

2 - When the amygdala is disrupted (damage, tumor, pressure, etc.) the computation of emotional states is altered or stopped completely

3 - When neurons in the amygdala are activated via chemicals or light, emotional states appear to be generated, judging by the observed behavior

Based on the research and evidence, it seems safe to think of emotions as the result of a functional process in the same way that object recognition is the result of a functional process.

No, I summarized the current situation, mentioning that there are some parts we don’t understand very well.
Anyway, your response is very short and I’m wondering whether you understand and agree about the other things I wrote. For example, on the distinction between nociception and pain?

That’s a good question. It’s often phrased in the opposite way: is it possible for an entity to behave the same as a human without having that “middle layer” of subjective experience (a “p-zombie”)?
The answer is (you guessed it…): we don’t know.

In fact, arguably the main difference between the various hypotheses of consciousness (behaviourism, functionalism, emergentism, etc.) is how they answer this central question. And the fact that these are still competing hypotheses shows the question is not settled.

No, I don’t disagree. But as will hopefully be made clear in the rest of my response, to say that we can build brain-like structures that function in brain-like ways to some very limited degree is a kind of truism, and does not address the fundamental question about the computational paradigm and therefore does nothing to advance your argument. The premise that cognition is computational has a great deal of merit, but the important meaning of this concept has absolutely nothing to do with neural switching or synaptic signaling.

I guess I wasn’t clear in my attempt to explain what we mean by “computational” in the context of CTM. The question misses the point for two reasons. One, because you’re describing a piece of hardware, whereas the most basic premise of computational theory is that it’s an abstraction with multiple realizability – that is, it’s completely independent of implementation. Two, because that functionality is far below the level of the CTM abstraction of computation. Furthermore, neural nets can be built with all kinds of different characteristics – they can be deterministic or operate on probabilities, they can be analog or digital, or they can vary in all kinds of other fundamental ways.

Such a question makes equally little sense even when applied to digital switches which can be used to build logic gates and real digital computers. Does a relay – or a single switching node in silicon – perform computation? It’s a silly question, because you haven’t achieved computation in the sense of CTM until you have the basic functionality of procedures (stored-program algorithms) operating on abstract representations of the world stored in memory and producing output – until you have, IOW, a limited Turing machine (“limited” in the sense of having finite memory). Even an analog “computer” such as existed in the 50s and early 60s doesn’t qualify, because it solves problems by approximating phenomena on analog voltages and not by symbol-processing algorithms.

You’re right that we’re using “computational” in quite different ways. It’s not that one is “right” and the other “wrong”, it’s that the Turing sense in which it’s used in CTM (I invite you to explore the link I provided above on Turing machines) has critical bearing on our understanding of how higher-level cognition really works, and on its instantiation in digital computers.

Neural networks of certain types are indeed often described as “computational”, but one has to be careful with this kind of terminology as it has a much looser meaning than it does in CTM, which is thus sometimes also referred to as the classical computational theory of mind, or CCTM. And while it’s theoretically possible to build a Turing-equivalent machine with a neural net of the right kind, that’s not a useful observation when it in fact is not possible with most kinds – perhaps the kinds responsible for many of our cognitive functions. And this means that achieving multiple realizability (specifically the realizability of human cognition in digital computers) may be an elusive goal for those capabilities and behaviors that don’t conform to classical computational models, especially those, like emotions, with deep instinctive and physiological connections.

I find it really odd that you mention the amygdala as support for your argument when I would have thought it was just the opposite. The amygdalae are in fact a perfect example of what I was referring to as a unique neurophysiological basis of emotion – specialized processing centers for (among other things) certain kinds of emotions, notably fear, anxiety, and aggression, including the fight-or-flight response. There have been cases of patients with certain kinds of lesion damage in the amygdala who have normal cognitive function but dramatically altered emotional responses, such as complete lack of fear; others may exhibit mood disorders or irrational anxiety. These people may be perfectly normal in every respect but their emotions are completely out of whack (or non-existent). What does that tell you about the credibility of the claim that emotions are just a natural consequence of intelligence, and that all intelligent agents – biological or AI – must eventually have them?

That’s not the only example, either. Another unique connection between emotions and brain physiology is a specialized type of brain cell called a spindle neuron, whose function appears to be to provide deep, high-bandwidth interconnections between distant areas of the brain. Spindle neurons are present only in humans and a few other highly evolved intelligent mammals, and are directly associated with a variety of intense emotions and complex emotional processing. A few quotes on the subject from Wikipedia:
In 1999, Professor John Allman, a neuroscientist, and colleagues at the California Institute of Technology first published a report on spindle neurons found in the anterior cingulate cortex (ACC) of hominids, but not in any other species … Allman and his colleagues have delved beyond the level of brain infrastructure to investigate how spindle neurons function at the superstructural level, focusing on their role as ‘air traffic controllers’ for emotions. Allman’s team proposes that spindle neurons help channel neural signals from deep within the cortex to relatively distant parts of the brain.

In humans, intense emotion activates the anterior cingulate cortex, as it relays neural signals transmitted from the amygdala (a primary processing center for emotions) to the frontal cortex, perhaps by functioning as a sort of lens to focus the complex texture of neural signal interference patterns … During difficult tasks, or when experiencing intense love, anger, or lust, activation of the ACC increases. In brain imaging studies, the ACC has specifically been found to be active when mothers hear infants cry, underscoring its role in affording a heightened degree of social sensitivity.

These kinds of specialized, discrete neurophysiological mechanisms to support emotions seem to me to pose great difficulties to attempts to argue that emotions are nothing more than emergent properties of computational intelligence that are subject to multiple realizability on different computational platforms.

I understand the distinction and don’t really consider it to be in dispute - nociception analogizes to the electrical signal the robot arm sends when it’s not operating properly/at all, and pain analogizes to how the robot reacts to the information within its decision-making process and how the nociception signals compel the robot to stop whatever else it was doing and deal with them.

AIs that are not compelled to give attention to such signals within their decision-making processes will, of course, die. (Presuming there are perils in the AI’s environment at all, which is of course not true for all AIs.)

I’ve been wondering when philosophical zombies would be explicitly brought up. If you hadn’t just done so I was planning to do so this morning.

On the subject of ‘sufficiently complex/capable’ AIs (for example, ones that can pass the Turing test), the position that these entities don’t have emotions, or things that directly analogize to emotions, amounts to claiming that the AIs are philosophical zombies.

This discussion is probably not helped by the fact I don’t believe in philosophical zombies. I believe they are an incoherent concept, for the same reason that I think that anything exhibiting intelligence and analytical skills must have some sort of mechanism driving these behaviors.

I think you were clear, but please remember, you introduced the concept of CTM, and you seem focused on that definition and on Turing machines. And each of your responses seems to head back in that direction.
My position is that emotions, like object recognition, are functional/computational states. Our collection of neurons computed them.

My impression of your previous argument was that emotions could not be “computed” (using the neuroscientist’s version of the word), and in addition, you seem to make a distinction between emotions and object recognition, even though object recognition has been shown to happen prior to higher-level thought.
Questions:
Do you agree that the amygdala and supporting/related structures “compute” (using the neuroscientist’s version of the word) emotional states by taking input, transforming it and producing output?

Did I read you correctly when you said emotions are different and could not be “computed”? Maybe what you really meant was that emotions are not generally determined based on symbol processing?
If I am misunderstanding your position, maybe you could restate it to help me understand?

If a neuroscientist wants to use the word “computation” to mean “a thing that the brain’s neural networks do”, then they are obviously and trivially correct but not saying anything genuinely useful. It’s never been the position of CTM theory in cognitive science that the brain is literally a computer, but rather that some important cognitive processes can be abstracted as Turing-like operations on symbolic mental representations and therefore instantiated on Turing-equivalent platforms. This is an incredibly important and insightful concept because it argues that nothing about human intelligence is dependent on its biological substrate. No such theory has ever been advanced about emotions (at least, AFAIK), and as with the amygdala and spindle neurons, it’s plausible that emotions are intrinsic to the architecture of the biological substrate.

Saying “it’s all the same because neurons!” isn’t a useful functional model, and seems completely dismissive of the very valuable analytical distinctions that have been developed in the CTM. Imagine, as a trivial example, that you have a computer running an AI exhibiting general human-like intelligence, which happens to be equipped with visual sensors and voice synthesis driven by an algorithm that causes it to yell at you if you get too close to it. Should one conclude from this that yelling at people who get close is an intrinsic survival trait of any intelligent machine, or is it in fact an adjunct that is independent of its intelligence and has little or nothing to do with it? That the brain has specialized and apparently independent mechanisms for processing emotions seems to me to be in some sense comparable to that sort of adjunct.

These divisions seem to exist throughout the brain. There is, for example, a fuzzy but definite delineation between cognitive and affective functionality in the anterior cingulate cortex (the ACC, mentioned previously). The affective area that deals with emotional tasks has a high concentration of spindle cells, while the cognitive area has very few.

Yes, that’s what I meant, but what you’ve called “symbol processing” is a profoundly important and foundational concept in AI, as implied above.

A side interjection: RaftPeople, wolfpup pitted me for many posts over this issue. Like you, I don’t make any distinction between symbols and meanings; it’s all just electrical signalling spikes and cellular machinery obeying fixed rules - rules that can be modeled accurately enough (well above the brain’s noise floor) with a digital computer, which happens to be a Turing machine.

I think the current science supports my position overwhelmingly, but he disagrees.

You know that a GPU and a CPU are both digital computers that are technically Turing machines. Yet if you look at their internal layouts, they are radically different. Even common elements (such as the ALU) are specialized for each chip’s different function.

But it’s all silicon and copper in the end. And you could build an apparatus to mimic either one. With the brain, the fact that different functional regions use different architectures means what to you?

I almost think you’re stuck on the fact that a modern computer has a large memory, and a small functional processing unit that does all the work. The processing unit can take many roles as it reads instructions and data.

But that’s not the only way to build a computer. You absolutely can grab a soldering iron and a drawer full of discrete logic components and build a single-function computer. It might take in data from sensors on one side and output actions on another. It can do nothing else; the architecture you used - and all the solder holding the chips down - makes it capable of just one task. “Symbols” and “representations” are meaningless here. You have a voltage coming in and a voltage coming out. Internally, this digital system might represent those voltages as unlabeled numbers that then get crunched through some equation to produce the output numbers.

The human brain seems to be primarily just a bunch of single-function computers crammed into a small space, with the additional feature of being able to make small adjustments to their actual circuit layout as needed.

A “Turing machine” can emulate this circuit board full of discrete logic parts exactly. Even though internally it’s just an infinitely long paper tape and a crude symbol reader.

The analog nature of the brain’s computers makes them trickier to emulate, but various signal-processing theories suggest it is in fact feasible, because the brain’s high noise levels relative to its low-voltage signals mean that the effective resolution of each calculation is finite and thus can be mimicked with a discrete digital equivalent.
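
To put a rough number on “effective resolution is finite” (my own back-of-the-envelope figures, purely illustrative): if the useful signal swing is S and the noise floor is N, anything finer than N is meaningless, so you only need about log2(S/N) bits per sample for a digital copy to be as good as the analog original:

```python
import math

def effective_bits(signal_range, noise_floor):
    """Bits needed to represent all the distinguishable levels above the noise."""
    return math.log2(signal_range / noise_floor)

# Illustrative numbers only: ~100 mV of membrane-potential swing, ~1 mV of noise.
print(f"{effective_bits(100e-3, 1e-3):.1f} bits per sample")   # ~6.6 bits
```

That’s just the standard sampling-and-quantization argument for why a digital emulation can sit comfortably below the noise floor.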

The context of that particular argument was how the mind works, and your claim that signaling between neurons = “computation”. My point was that this was absurd in the context of the actual science that seeks to understand how the mind works.

I’m not sure what debating points you’re looking to score here. CPUs and modern GPUs are both limited Turing machines, so from the broad standpoint of computation they are equivalent, differing only in performance optimizations: technical details like the degree of parallelism, vastly different number of processing cores, and interface bandwidth. There are in fact general-purpose APIs to support GPUs in compute-intensive applications. In the architecture of the mind, the point is that many of the functions may not be Turing-computational at all.

You’ve said this many times. How. In order for a function to not be Turing computable, the underlying hardware must perform an operation that a Turing machine cannot mimic to sufficient precision.

We’ve had this debate and the argument I use is reductionist. Ignoring what a single neuron does initially, if you can prove that a Turing machine can produce signals to the same precision (relative to noise) as the spikes neurons use to communicate, you have some indication that, well, you’re wrong.

Then the next thing would be to ask: do these “non-Turing-computable” functions need more than one neuron to compute? If the answer is yes, game over. Your argument’s done.

The next thing would be a careful examination of the possible inputs and outputs of a neuron and to determine if you can emulate it accurately or not. That’s somewhat bleeding edge, leaving a tiny amount of room for your argument.

Another argument would be to look at the physics and the cellular parts: which components are mediating this “non-Turing-computable” feature?

Right - his argument seems to be hinging on the theory that the brain does something that cannot possibly be simulated by any calculation method, even a theoretical one that maps out behavior at the level of particles and physics.

Almost makes a guy wonder if he’s arguing for magic.

Well. Quantum calculations aren’t magic, per se, but they can eat more computing time than a digital computer has available to emulate them. At least in theory - nobody has gotten a quantum computer to work well enough to be totally sure this is feasible. And that’s in extreme environments - near absolute zero, using very carefully made apparatus that minimizes outside noise. It seems kind of unlikely that some neuron running in an electrically noisy environment can do the same thing.

But even if it can, this is why the “signals” argument is persuasive. If all a neuron does is send and receive spikes with relatively low timing precision, I bet you can emulate the algorithm well enough with a digital computer to be indistinguishable. If the “quantum algorithm” has a few microseconds of random jitter from various small flaws in brain wiring, then as long as your digital equivalent is close enough to be within the bounds of that jitter, it’s calculating the same functional result. Even if it got the answer a totally different way.
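
Stated as toy code (the thresholds and spike times are invented for illustration): if every emulated spike lands inside the jitter window the real wiring already has, the two spike trains are functionally the same as far as anything downstream can tell:

```python
def functionally_same(true_spikes_ms, emulated_spikes_ms, jitter_ms=0.005):
    """True if every emulated spike lands within the intrinsic jitter window (~5 us here)."""
    if len(true_spikes_ms) != len(emulated_spikes_ms):
        return False
    return all(abs(a - b) <= jitter_ms
               for a, b in zip(true_spikes_ms, emulated_spikes_ms))

# Emulation errors of a microsecond or three, inside ~5 us of intrinsic jitter:
print(functionally_same([10.000, 24.500, 31.200], [10.002, 24.499, 31.203]))   # True
```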

You misunderstand. You completely and totally misunderstand. First of all, please read the link I posted on the Computational Theory of Mind in the Stanford Encyclopedia of Philosophy which, despite its name, is an excellent practical resource on a wide range of topics in AI, cognition, linguistics, and many other areas. Try to understand why the concept of Turing computation is so important. You still seem to think that signaling = computation, leading me to believe you’ve never read any of the cites.

Second, think again about the part of the quote from Jerry Fodor that I’ve mentioned several times, one of the modern pioneers of cognitive science, and think about what it means:
… it hadn’t occurred to me that anyone could suppose that [the computational theory of mind is] a very large part of the truth; still less that it’s within miles of being the whole story about how the mind works … I certainly don’t suppose that it could comprise more than a fragment of a full and satisfactory cognitive psychology …

Lastly, and perhaps most significantly here, is the inexplicable mutation of the discussion into the imaginary allegation that no mental process can be mimicked on a digital computer. I’ve never made such a ludicrous claim here or anywhere. I just recently described (above) how an analog computer is in no way a Turing machine, yet surely any digital computer can emulate the problem-solving capabilities of an analog computer to any necessary degree of precision. This is a trivial point that no one disputes. AI systems have already been built that can discern human emotions, and systems could certainly be built that can mimic them.

The distinction between those brain functions responsible for cognition and intelligence and those responsible for emotions is important in addressing the questions of (1) whether emotions are necessarily an emergent property of intelligence, (2) whether AI systems must therefore be presumed to develop them, and (3) whether such anthropomorphizing is even meaningful in the context of non-biological systems, in the sense that we need to feel empathy for them. My position on these is (1) no, (2) almost certainly no, and (3) probably no, in decreasing order of certainty. I’ve never seen any evidence to the contrary.

Whether emotions are necessarily an emergent property of intelligence sounds like it would be heavily, if not entirely, dependent on your definitions of the words “intelligence” and “emotions”.

It’s quite easy to develop a coherent, consistent, and dictionary-compatible definition of “emotion” that makes it ludicrously easy to accurately attribute emotion to damn near anything that responds to input. Some people here object to using such liberal definitions, of course, but they haven’t done a very good job of making a coherent, consistent, and dictionary-compatible definition of emotion that excludes such mechanisms.

Absent agreed-upon definitions, of course, this discussion is going to (continue to) chase its tail indefinitely.