SPOILER ALERT
I haven’t seen the movie, but only because I heard it was terrible. In the show, most of the robots are basically indistinguishable from humans physically. Their personalities are scripted, but they definitely seem to feel all human emotions, and that is confirmed by a robot that has seemingly become sentient.
I’ll say this again: this is a circular argument that is begging the question. Emotions are defined in human terms (or perhaps more precisely, in biological terms) because by definition that’s what emotions are. You are wrong here not just in that one sense, but in two different ways. First, you’re wrong on general logical principles by saying (circularly) that AI can have emotions perfectly well, as long as we define emotions to be “a thing that AI can have”. You can surely see the logical absurdity of that.
Second, such semantic sleight of hand isn’t even necessary. Many reasonable people believe that emotions are, in fact, an emergent property of intelligence – emotions just as the dictionary defines them. I disagree, but I do acknowledge the controversy. Among those people was the late Marvin Minsky, a brilliant pioneer in AI who is often considered a founding father of the field. But he was also wrong about a number of significant issues.
Yes, and there’s a reason for that. The whole concept is, as you say, murky as hell, just like consciousness itself. My own belief is that advanced AI will develop surprising and amazing emergent traits, but any semblance of what we might call “emotion” will be alien to us and more properly characterized as a form of consciousness. Much of the argument is so abstractly philosophical that it isn’t particularly useful, but one of the practical conclusions has to be around empathy: can an AI suffer, as per the title of the thread, and will there be conditions under which we should be considerate and merciful in how we interact with machines? I’m inclined to think “no” because of the intricate ties of empathy to neurophysiology, as per my previous arguments, but I have an open mind to any persuasive arguments to the contrary.
If you were less anxious to pick a fight with me for virtually any reason, real or imagined, you might have taken the time to notice that that’s not how that conversation went at all. In post #50 I was explaining the context of the particular response that you were arguing about, trying to point out that the statement I was responding to said nothing about self-awareness and that “you introduced the concept later”, in the next paragraph. Self-awareness is absolutely central to this debate, and I certainly don’t and didn’t think that you introduced it as “a new concept”. For exactly the reason that whatever semblance of emotion future AIs might have must be part and parcel of self-awareness (a necessary but probably not sufficient condition for emotions to manifest, in my view), begbert2’s claim that it’s not even necessary indeed reflects a naively simplistic view of the argument, and that’s the point I was making.
Really? “Absolutely certain”? Then why did I say, just a few posts back, “I may be wrong about emotions not being emergent properties of intelligence, but I remain thoroughly unconvinced”? :rolleyes:
This is a fascinating, controversial, ill-defined, and profoundly complex argument to resolve at this point in our technological evolution. There’s a fairly good discussion of some of the issues here. H-Plus is a publication of a society dedicated to promoting transhumanism, so one can expect some exuberant optimism, but even so, the article is pretty cautious.
I want to make a few comments about that needless and unjustified crack.
First and most simply, that wasn’t supposed to be “humor”. It was a direct implementation of how I understood begbert2 to be defining “contentment”, albeit reduced to the simplest form that appears to meet the definition – thus a reductio ad absurdum illustrating the inadequacy of such simplistic definitions.
Second, after a lifetime of positive feedback to things that I write, humorous or otherwise, I have to wonder what you think that crack accomplishes, other than simply being pointless, gratuitous, and petty. Even on this board a number of folks have occasionally taken the trouble to post a few kind words of thanks for some of the silly things I’ve written, like here, and here, and here, and here, and here, and here, just as a few that came to mind. They apparently disagree with you. So what have you accomplished with that unnecessary personal snipe?
I wish we could bury whatever the hell it was I did to piss you off and start fresh.
Do you pay more attention to your doctor telling you something horrible is allegedly happening, or to being in severe agony?
We’re talking about the imperatives within the AI’s mind - specifically the ones that drive the machine’s self-preservative instincts. If they’re weak and quiet and something the AI can ignore or enjoy, then a person will be able to make an effective AI-slaughtering trap by placing a shiny toy behind a robot-melting hot plate. Shiny toy! Oh, my feet are melting? Whatever - there’s a shiny toy there!
Of course, if there are no shiny toys behind hot plates in the robot’s development environment, they might not ever develop a pain response, because there’s no need; robots that avoid hot plates will have no survival/development/evolution advantage because there are no hot plates to avoid. The real world, of course, is awash with hot plates; that’s why humans developed such a strong and pressing pain response, because without that pain response we’d all have been killed by hot plates before we even climbed the trees.
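A rough sketch of what I mean, with completely made-up numbers: if the “pain” term in the robot’s value calculation is weighted too weakly, the shiny toy wins every time.

```python
# Hypothetical sketch: how a weakly weighted "pain" signal lets a reward lure
# an agent into a damaging situation. All values are invented for illustration.

def action_value(toy_reward: float, damage_signal: float, pain_weight: float) -> float:
    """Net value of stepping onto the hot plate to reach the toy."""
    return toy_reward - pain_weight * damage_signal

toy_reward = 10.0      # appeal of the shiny toy
damage_signal = 8.0    # severity reported by the melting-feet sensors

for pain_weight in (0.1, 5.0):
    value = action_value(toy_reward, damage_signal, pain_weight)
    decision = "walks onto the hot plate" if value > 0 else "avoids the hot plate"
    print(f"pain_weight={pain_weight}: value={value:+.1f} -> agent {decision}")
```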
SPOILER ALERT
It kind of is terrible. But it’s well made! And was culturally impactful! So there’s that.
1 - Sure, emotions are rooted in biology, but so is everything about humans - that’s a trivial statement that doesn’t provide any data on the question of whether our biology is a requirement, or just one possible medium among many for producing emotions, intelligence, etc.
2 - Regarding emotions not being associated with reason or knowledge:
If a person was sad that their puppy died, doesn’t that require knowledge of a puppy, and the concept of death?
If a person is anxious about walking through a dark alley at night, isn’t that due to reasoning about possible threats?
Sure emotions can run counter to an “if-then” logical analysis, but that’s because everything about our infrastructure seems to be based on fuzzy pattern matching and not precise logic. The logic that we do have is laboriously added on top of an imprecise foundation.
3 - Your distinction between intelligence and emotion:
You make a distinction between these, but I’m not sure why - it’s all the same machinery. Although we do have external inputs like pain, we don’t have anxiety receptors that detect that the stock market might drop, or sadness receptors on our fingers that detect that we failed an exam.
Again, what neurophysiological trait detects that we failed an exam?
Regarding counterproductive:
I disagree.
If emotions are a function whose purpose is to guide behavior (I think they are) by processing a complex n-dimensional landscape of input data (state over time, trends toward goals, etc.) and simplifying it into a smaller number of key higher-level attributes that allow for simplified decision making, then they would be a useful tool in the artificial intelligence we try to create.
If, on the other hand, you don’t think that is why evolution selected for them, then I can see why you would think they are counterproductive. Which then leads into a pretty interesting discussion in itself about the purpose of emotions, how they help us survive (assuming they do), etc.
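As a rough sketch of what I mean by that compression (everything here is invented purely for illustration): collapse detailed progress/threat histories into a couple of coarse signals, and let those gate broad behavior modes instead of re-deriving them from the raw data every time.

```python
# Hypothetical sketch of the "emotions as compression" idea: reduce a messy
# history of state/goal data to a couple of coarse signals that bias decisions.
# Names and thresholds are invented for illustration.
from statistics import mean

def affect_summary(progress_history, threat_levels):
    """Reduce detailed histories to two scalar 'affect' signals."""
    frustration = max(0.0, -mean(progress_history))   # persistent lack of progress
    anxiety = max(threat_levels, default=0.0)          # worst recent threat estimate
    return {"frustration": frustration, "anxiety": anxiety}

def choose_policy(affect):
    # Coarse signals gate broad behavior modes.
    if affect["anxiety"] > 0.7:
        return "retreat"
    if affect["frustration"] > 0.5:
        return "try a different strategy"
    return "continue current plan"

affect = affect_summary(progress_history=[-0.2, -0.6, -1.0], threat_levels=[0.3, 0.4])
print(affect, "->", choose_policy(affect))
```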
This is a harder ethical problem than it might sound, because in the process of creating an AI that can feel suffering, you would absolutely need to build test AIs that do nothing but suffer to validate this new feature.
But we are many decades from even having a vague idea of how to do this. I don’t have the slightest idea, and I feel like I’ve gotten a pretty good handle on the state of the art in AI research.
“AI” right now means agents that try to solve problems. They store information about the environment, use that information to make a prediction of future reward, and then choose the action that is the “best” according to some heuristic for the problem at hand.
This relatively simple-seeming idea, using sophisticated tools like large neural networks, search trees, and carefully chosen math functions, should scale to AI systems that can do tasks like:
a. Operate robots to mine and farm
b. Operate robots to do every key step in a factory
c. Analyze other machines and successfully diagnose faults
d. Successfully disassemble other machines and replace faulty components, even if the failure has distorted the machine or some screws are inaccessible.
And with further advances that are relatively reasonable jumps from current methods, it should be possible to make AIs that
a. Design new machines, given a clear definition of the goal and a catalog of well modeled parts
b. Design new machines that operate at the nanoscale, exceeding human ability by orders of magnitude, though again needing clear direction as to the goal of said machine
c. Study biology and develop predictive models that would allow for medical treatments exceeding the state of the art by orders of magnitude (this would be done with extensive help from doctors and scientists, however)
You know what all these things have in common?
In all of these things, there is a hard, concrete goal for the AI to accomplish. Sometimes the goal is many steps away and sometimes it can be achieved immediately, but success can be measured. Even for seemingly complex things like an AI doctor, where the patient may not die for months after the treatment, the AI can measure how good its predictions of the immediate consequences of a treatment are and then improve its model.
In none of these things does the AI experience, well, anything. It’s just a math script, trying to make some numbers bigger or smaller. Even if it can talk, everything it says is calculated to maximize the probability that you, the human, give its answer a high rating.
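To make that concrete, here is a minimal sketch of the loop I described – store information, predict future reward, pick the best-scoring action. The toy environment, rewards, and hyperparameters are all invented for illustration, not taken from any real system.

```python
# A toy sketch of the loop described above: store information (a Q table),
# predict future reward, pick the best-scoring action.
import random
from collections import defaultdict

N_STATES, GOAL = 5, 4
ACTIONS = (+1, -1)                        # step right or left along a corridor
Q = defaultdict(float)                    # Q[(state, action)] -> predicted return
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for _ in range(200):                      # training episodes
    state = 0
    for _ in range(50):                   # cap episode length
        if random.random() < epsilon:     # occasional exploration
            action = random.choice(ACTIONS)
        else:                             # otherwise exploit current predictions
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt
        if done:
            break

# Learned policy: which direction the agent prefers in each state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```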
Pullin,
Thanks for the response.
Neural nets could be useful for generalizing a decision between two alternative solutions. The problem with neural nets is that you have to know the answer in order to train the net. You can get a one-to-one correspondence between a neural net and a fuzzy logic solution. FL is easier to work with.
I wonder if genetic principles could be employed to hone strategies in your application. It would require a fitness component that is retained. There are two potential benefits: moderation of the winner take all approach; creation of unanticipated strategies. I’m proposing using some of the genetic mechanisms for decision making - not using the entire genetic process. Goldberg’s “Genetic Algorithms” is very readable.
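Here’s a rough sketch of what I mean – a toy GA with a retained fitness score and top-half selection rather than winner-take-all. Everything here (encoding, problem, parameters) is just illustrative, not a proposal for your actual application.

```python
# Hypothetical sketch of the genetic idea above: strategies encoded as bit
# strings, a retained fitness score, and selection/crossover/mutation producing
# new (possibly unanticipated) strategies. Toy problem: maximize the 1-bits.
import random

GENES, POP, GENERATIONS = 16, 20, 40

def fitness(strategy):                 # retained per individual each generation
    return sum(strategy)

def crossover(a, b):
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(s, rate=0.05):
    return [bit ^ (random.random() < rate) for bit in s]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:POP // 2]        # softer than winner-take-all: top half breeds
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(POP - len(parents))]

print("best fitness:", fitness(max(population, key=fitness)))
```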
Crane
Or, alternatively, the “suffering” could be an emergent result from combining a self-aware (or near-self-aware) AI with both hardware alerts and an English-language speech processor. In seeking to report that it’s getting erratic status reports from its robot arm that indicate that there might be damage, it might assess its linguistics library and choose “my arm hurts, I think it’s damaged” as its way to succinctly convey the important information about the situation to the humans in its vicinity. At which point wolfpup will be faced with the challenge of explaining to the AI why its word choice was wrong.
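A toy sketch of how that word choice could fall out of perfectly mundane machinery – the thresholds and phrases here are invented for illustration only.

```python
# Hypothetical sketch of the scenario above: erratic arm telemetry gets
# summarized through a phrase library, and the most compact match happens to
# be pain language.

PHRASES = {
    "nominal": "arm status nominal",
    "degraded": "my arm is reporting intermittent sensor errors",
    "damaged": "my arm hurts, I think it's damaged",
}

def report_arm_status(error_rate: float, torque_anomaly: float) -> str:
    """Map raw telemetry onto the shortest phrase that conveys the situation."""
    if error_rate > 0.5 or torque_anomaly > 0.8:
        return PHRASES["damaged"]
    if error_rate > 0.1:
        return PHRASES["degraded"]
    return PHRASES["nominal"]

print(report_arm_status(error_rate=0.6, torque_anomaly=0.4))
```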
Do you see how you’re anthropomorphizing and making unwarranted assumptions here? When life evolved on earth, its absolutely most central trait was the ability to survive, thrive, and relentlessly procreate itself. Why should we think AI systems must have such traits? Why should we think they must have “instincts” at all, let alone instincts of self-preservation?
Even a fly or a crawling insect will instinctively try to escape if you’re trying to swat it, yet AIs that function at apparently high levels of cognition don’t have the slightest sign of such traits. The argument that perhaps the level of cognition isn’t yet high enough to support survival traits is belied by the fact that insects have no real cognition at all. It suggests – though doesn’t prove – that the concepts are unrelated, that though intelligence indisputably processes emotions, it doesn’t originate them.
The point is far from trivial. It’s true that our intelligence, like everything else about us, is rooted in biology in its underlying mechanism, but by now we’ve been able to show that intelligence – or at least behaviors that are apparently intelligent and in appropriate domains indistinguishable or superior to our own – can be achieved with non-biological computational methods. This is not the case with emotions, and indeed the ties between emotions and physiology are profoundly intricate. This article that I linked before is from an organization promoting transhumanism that tries to suggest the possibility of AI emotion, but it makes the same case for those profoundly intricate ties; just one example:
… the biggest obstacle to simulating physical feelings in a machine comes from the vagus nerve, which controls such varied things as digestion, ‘gut feelings’, heart rate and sweating. When we are scared or disgusted, we feel it in our guts. When we are in love we feel butterflies in our stomach. That’s because of the way our nervous system is designed. Quite a few emotions are felt through the vagus nerve connecting the brain to the heart and digestive system, so that our body can prepare to court a mate, fight an enemy or escape in the face of danger, by shutting down digestion, raising adrenaline and increasing heart rate. Feeling disgusted can help us vomit something that we have swallowed and shouldn’t have.
Strong emotions can affect our microbiome, the trillions of gut bacteria that help us digest food and that secrete 90% of the serotonin and 50% of the dopamine used by our brain …
http://hplusmagazine.com/2014/04/29/could-a-machine-or-an-ai-ever-feel-human-like-emotions/
The definition that asserts an emotion to be “instinctive or intuitive feeling as distinguished from reasoning or knowledge” doesn’t mean knowledge in such a trivial sense. Emotions of course may be triggered not just by direct physical senses, but by facts that we know and expectations that we have, but the point being made here is the distinction between an emotion as the consequence of an intellectual cognitive process versus the consequence of a neurophysiological instinct.
If we are sad or fearful in the situations you describe, it’s associated with specific and potentially very complex neurophysiological reactions that can actually be measured in things like changes in serotonin and dopamine levels, changes in heart rate and blood pressure, changes in adrenaline, etc. Furthermore, those emotional reactions can be managed, altered, and even reversed with appropriate drugs, with consequent changes in the emotions. Even something as crude as alcohol can make us less sad or fearful, and more sophisticated drugs like SSRIs can sometimes work miracles on a person’s emotions simply by elevating serotonin levels in critical areas of the brain.
No, as I’ve tried to explain above and previously, there is at least a plausible argument that the root cause is not at all the same. The central point I’m making is that it’s reasonable to believe that as machine intelligence becomes more and more advanced, it will eventually acquire something like self-awareness, but not necessarily something like emotions. The reason is that we have to think that consciousness is an emergent property of a sufficiently advanced intelligence, because there’s no other plausible explanation for it, and we know (or at least are pretty confident) that human-like intelligence and beyond can be achieved with computational methods. Whereas we have no good reason to believe that emotions are necessarily such an emergent property of intelligence, but conversely we do have plausible explanations for instincts and emotions rooted in neurophysiology.
I think I already addressed that above. No neurophysiological traits “detect” a failed exam, but lots of them directly respond to that information. And, as I suggested above, there is a wide plethora of drugs that can dramatically alter those responses.
We obviously evolved the instincts we have because they were conducive to survival and procreation. I’m saying that many of them are counterproductive in the modern civilizations in which we find ourselves. The fight-or-flight instinct produces a plethora of physiological reactions that are useful if you are indeed going to flee or fight, but in an office environment all it’s going to do over time is inhibit your productivity, interfere with your judgments, give you ulcers and indigestion, and maybe cause you to keel over from a heart attack eventually. Why would an AI want to be burdened with deleterious impacts on its performance?
Because if it doesn’t it’ll die?
I’m currently operating on the assumption that you know literally nothing at all about AIs. That’s fine. But hopefully it’s pretty simple to understand that every AI exists in an environment. It could be (and currently usually is) a simulated environment, or it could be the real world, which it accesses via sensors and interacts with via robot arms and laser rifles and such.
Simulated environments exist to solve a specific problem, typically. (The exception is when the environment exists to let human players murder AIs.) In many cases the problems being solved do not include any potential peril to the AI. AIs in such environments will not need to learn/be taught to recognize peril, and will not need to recognize pain.
The ones that exist only to be shot only need to recognize peril if we want them to squirm and scream and cry and beg, which would be for our pleasure before we finish murdering them. The OP might not approve of this sort of game.
But let’s step outside of simulations for a moment. If a robot exists in the real world, and is paying attention to the real world, then the real world will impact it. Whether it’s a roomba needing power, a giant assembly line arm with potential mechanical failures, or the terminator coming to kill you who you might be shooting back at, all of them will require power and bear the risk of physical damage.
A roomba that knows when to go plug itself in is better than a roomba that doesn’t.
An assembly line arm that reports failures is better than one that doesn’t.
A terminator that doesn’t at least notice that you are trying to crush it is less likely to successfully kill you.
For any machine in the real world, the real world presents part of the ‘problem set’ it’s encountering. Peril and need are part of that problem set. An AI that doesn’t recognize threats and have an adverse reaction to them will not last long.
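A rough sketch, with invented numbers, of what “peril and need are part of the problem set” could look like inside the scoring such an agent already does:

```python
# Hypothetical sketch: a real-world agent scores candidate actions on task
# progress minus penalties for running out of power or taking damage.
# All numbers are invented for illustration.

def score(action, battery: float, damage_risk: float) -> float:
    task_value = {"keep_cleaning": 1.0, "go_charge": 0.1, "back_away": 0.0}[action]
    # Needs and threats enter the objective directly:
    power_penalty = 5.0 * (1.0 - battery) if action != "go_charge" else 0.0
    damage_penalty = 10.0 * damage_risk if action != "back_away" else 0.0
    return task_value - power_penalty - damage_penalty

for battery, risk in ((0.9, 0.0), (0.15, 0.0), (0.9, 0.6)):
    best = max(("keep_cleaning", "go_charge", "back_away"),
               key=lambda a: score(a, battery, risk))
    print(f"battery={battery}, damage_risk={risk} -> {best}")
```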
If I may be permitted a word of objection, as someone whose entire academic and professional career has centered around computer science, I would suggest that you need to modify your assumptions. I’ve been following the field of AI with interest for more than 50 years, and many friends and colleagues have been involved in academic research in AI, computer science, cognitive science, and related areas over those many years. Several of those closest to me have taken sabbaticals at MIT, mentored by the likes of Marvin Minsky and Noam Chomsky (Chomsky by dint of the connection to cognitive science; can’t remember if they were directly involved with the MIT AI Lab, but there was always a close association). One of them also did another sabbatical at SAIL – the Stanford AI Lab – which also hosted a postdoc from my university and where I visited and had some entertaining discussions over the years. I’ve had the inestimable privilege of meeting and speaking with Marvin Minsky – one of the foundational pioneers of AI – several times in the early 70s.
If I emerged from this immersive experience knowing “literally nothing about AI”, I must be spectacularly stupid, which I suppose is always possible, though I cling to the hope that perhaps your assumptions are incorrect.
Okay then - why do you persist in arguing that the environment the AI is in isn’t going to color how their mind works and how they will choose to describe how their own minds work? I don’t have friends and friends-of-friends who have rubbed elbows with the greats, but I’m still aware that pretty much the entire point of an AI is to cut it loose in an environment with goals and have it figure out what to do based on the details of the goals and the environment. And if that environment is “the real world”, and part of their goals includes “speaking and understanding English”, then it’s damn near inconceivable to me that the AIs won’t develop at least some of the same reactions to the world as we humans have, and it’s pretty much certain that they’ll use some of the words we’ve given them to use to describe their internal states.
I mean, unless you actively deter them from using terms like “happy”, “sad” or “in pain” to describe themselves. Perhaps you could rig up a ‘determent switch’ which would pump them full of ‘determent’ signals when toggled, so that if they used the anathema terms you could deter them over and over until they were twitching and spasming on the floor. ‘Don’t make me get the switch, young cyberman!’
I persist in this belief for all the reasons I already stated, fundamentally because AIs are not like us in any way except for a superficially similar kind of cognition, and even that is radically different in its internal mechanisms.
Funnily enough, while out for lunch I was just listening to a radio documentary on the societal implications of the increasing prevalence of AI. One of the guests was Toby Walsh, a professor of AI at the University of New South Wales, a former editor-in-chief of the Journal of Artificial Intelligence Research, and the author of several books on the subject. One of the ethical implications Walsh discussed is when you can’t tell if you’re interacting with a human or with an AI, such as the example that was demonstrated of a Google innovation that couples AI with speech recognition and flawless voice synthesis. He stated that it’s all too easy to ascribe emotions to the entity you’re talking to, which is something that AI does not have and may never have (his words).
As I keep saying in reference to anthropomorphizing, we have a natural tendency to attribute human attributes to things that appear even superficially to be human-like, which is what I think you and some of the other participants here have been doing. Walsh is a highly regarded expert who sees this to be among the ethical risks of AI: not machines developing emotions, but rather the opposite, the risk that we will inappropriately start treating them as if they do. As for the distant future, who knows? This is a controversial and unresolved area that eludes simplistic answers.
It’s trivial because:
1 - All of human brain/mind/system are rooted in biology
2 - There was no evidence provided that emotions can be singled out as “non-computable functions in silicon” or alternatively “only computable by substances consisting of…”
If you had provided any evidence that emotional states really couldn’t be computed in other mediums then I would agree with you that your point is far from trivial.
I would even go so far as to state that you just solved a problem that no other person has been able to solve yet (that we are aware of). It would be an extremely significant achievement.
So when I say “trivial”, what I really mean is that you only re-iterated our minimum information, and that is that we do know that biology is capable of calculating these functions, but you did not provide any evidence or arguments beyond that.
There are some valid points in that article. For example, an artificial system with less sensory information about its environment and itself than a human has is at a disadvantage for some types of functions when trying to mimic a human system.
But in general, that article is more of an opinion piece with unsupported statements than a serious attempt at analyzing the information we have available to us and the range of possibilities it seems to support.
For example, this conclusion is made without any supporting definitions, evidence or even arguments:
“Machines wouldn’t mature emotionally with age.”
This is an example of what seems like a pretty reasonable article that could support a position in a debate:
In general, based on research and evidence to date, there is processing in brain circuits (e.g. neurons) required to create emotions. They are context dependent, based on learning, they are suppressed when the amygdala is damaged, etc.
Are you able to provide any emotion that is triggered purely by physical sensory input without any higher level processing in the amygdala or other circuits?
The things I’ve read by scientists seem to indicate processing is required.
Agreed that the chemical soup in our brain is part of our processing and communication of everything, including emotions.
But again we are back to the point at the top, so is everything.
In my opinion, it’s all functional and computable. But that’s just my opinion and I would be open to evidence to the contrary.
My opinions:
Consciousness:
a specific functional attribute that provides benefit to survival, not accidental
exists on a continuum
Emotions:
also functional with benefit
possibly dependent on existence of consciousness
Yes, the end result of an emotion being triggered is the communication of that information to various parts of the system using a variety of methods, including neurotransmitters.
Certainly agree that some of them could be counter-productive or maybe need to be moderated, but others probably extremely valuable, especially when dealing with humans (e.g. empathy). In general it seems like they would be valuable to guide actions in the same way they are with humans.
Suppose the Roomba had a 10-segment bar graph display that indicated its level of need for charging. A child picks it up like a turtle and watches the lights in the bar graph climb until they are all on, and then watches them dim and go out. The Roomba expressed its level of frustration up to the moment of its demise. As its voltage dropped below the specified minimum, its system signals became unpredictable and it experienced delirium. It died in a frenzy of unresolved instructions.
But, hey, it’s not a biological system. You plug the thing in and it’s all OK. Machines do not experience pain.
Crane
RaftPeople, thank you for your long and thoughtful response. I’m just going to respond to this part because I think it embodies the core of the argument in a succinct way. I’m going to do it by raising some principles from cognitive science.
I’m a strong believer in the computational theory of mind (CTM) which asserts that many if not all of our higher level cognitive processes are computational in the sense of being syntactic operations on symbolic mental representations. I think we had a debate about this before in which I seem to recall us once again being on opposite sides, that time IIRC the issue was over mental image processing for which I think there is ample evidence to support the computational model. Perhaps I’m misremembering, but it would certainly be ironic if you’re now arguing for a computational model of everything!
At any rate, let me quote the late Jerry Fodor, a prominent pioneer of cognitive science and a strong proponent of CTM who helped to establish it as a leading theory of cognition:
There are facts about the mind that [computational theory] accounts for and that we would be utterly at a loss to explain without it; and its central idea – that intentional processes are syntactic operations defined on mental representations – is strikingly elegant. There is, in short, every reason to suppose that the Computational Theory is part of the truth about cognition.
But it hadn’t occurred to me that anyone could suppose that it’s a very large part of the truth; still less that it’s within miles of being the whole story about how the mind works … I certainly don’t suppose that it could comprise more than a fragment of a full and satisfactory cognitive psychology …
-- Jerry Fodor, *The Mind Doesn’t Work That Way: The Scope and Limits of Computational Psychology*, MIT Press, July 2000
The point here being that, as much as one may put a great deal of faith in CTM as an explanation for much of how the mind works, there are aspects of our behavior that are not amenable to computational methods or explainable by computational theories. High-level cognitive functions mostly are, and if one believes consciousness to be an emergent property of such functions, then so is consciousness. My assertion is that emotions are not, and that they fall squarely within the non-computational paradigms that Fodor was alluding to as essential to a full understanding of cognitive psychology. There has never been even a shred of evidence that this is wrong.
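For concreteness, here is a deliberately toy caricature of what “syntactic operations defined on mental representations” means – symbolic structures rewritten by shape-matching rules. It is purely illustrative and makes no claim to model an actual mind; the symbols and rules are invented.

```python
# A deliberately toy caricature of the CTM idea quoted above: "mental
# representations" as symbolic structures, cognition as syntactic rules that
# rewrite them. Nothing here is claimed to model an actual mind.

beliefs = {("raining",), ("have", "umbrella")}

rules = [
    # If the antecedent symbols are all present, add the consequent symbol.
    ({("raining",), ("have", "umbrella")}, ("stay_dry_possible",)),
    ({("raining",)}, ("ground", "wet")),
]

changed = True
while changed:                      # apply rules purely by matching symbol shapes
    changed = False
    for antecedent, consequent in rules:
        if antecedent <= beliefs and consequent not in beliefs:
            beliefs.add(consequent)
            changed = True

print(beliefs)
```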
Another issue that will “bake your noodle” is that it may eventually be possible to build human emulations or very advanced machines that do feel pain.
Let’s suppose that the Roomba was far more advanced and could in fact feel pain (maybe its mind is based on a neural architecture copied from a rat), but it doesn’t record the pain after it was experienced.
If right now I were to torture you for hours but no damage was done and no memories recorded, was it a crime? Did it happen? Maybe I as the sicko torturer remember but you don’t. But what if I then delete that memory record? Did it ever even happen?
Our sense of morality is so arbitrary and tied to our corporeal existences. Once we find some technical means of getting around many of these hard rules on our existences, morality becomes very difficult to pin down or even define.
I asked first.
Is being told by a doctor X is wrong (even though it’s painless) exactly the same thing, exactly the same amount of unpleasantness, as being in agony because of X?
No we weren’t. Or at least, the parts of your post that I was disputing were about physical pain specifically.
You said “Physical pain is a report about damage to the hardware”.
I disagree with that definition. Nociception is arguably a kind of report about damage to the hardware. But the *physical sensation of pain* is more than that.
Whenever there is a discussion on subjective phenomena, this seems to happen…the actual hardest part of the phenomenon to understand gets swept under the carpet so we can talk only about comparatively “easy” problems, like how agents make decisions.
But this discussion is one example of a situation where we can’t ignore the hard problem. You can’t take out the visceral unpleasantness of pain, because it’s critical to deciding the morality / ethics.
One comment: part of the reason why pain for us biological agents is so distressing is that we are so fragile. We can’t just have our hardware recycled and replaced in a few hours like you can repair a damaged robot. We can’t just export our mental states to other computers so data isn’t lost if our physical brain hardware is damaged.
So pain and suffering clearly cause some permanent changes in us so that we avoid them at all costs.
A robot doesn’t need to suffer. It can just dispassionately weigh the risks. And if the action it takes results in it sliding off a cliff, it need not feel any fear as it falls. As long as its memory drives survive, it as an intelligent agent survived - and even if it didn’t, there would be recent cloud backups and merged data from peer robots on similar assignments, so not much information would be lost.
And a robot that fell off a cliff need not be fearful of cliffs. This outcome gets recorded, a probability table for how likely falling off a cliff is gets updated, but the robot (new hardware, same software) would dispassionately go near a cliff again if the reward (accomplishing its mission) exceeds the risks.
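A toy sketch of that dispassionate bookkeeping, with invented numbers – the fall probability is just an entry in a table that gets nudged after each incident, and the decision is a plain expected-value comparison.

```python
# Hypothetical sketch of "dispassionate risk weighing": update a stored fall
# probability after an incident, then go near the cliff again whenever expected
# reward exceeds expected loss. All numbers are invented.

risk_table = {"near_cliff_fall_prob": 0.05}   # prior estimate

def record_fall(observed_falls: int, trials: int) -> None:
    # Crude update: blend the prior with the observed frequency.
    prior = risk_table["near_cliff_fall_prob"]
    risk_table["near_cliff_fall_prob"] = (prior + observed_falls / trials) / 2

def approach_cliff(mission_reward: float, loss_if_fall: float) -> bool:
    p = risk_table["near_cliff_fall_prob"]
    return (1 - p) * mission_reward > p * loss_if_fall

record_fall(observed_falls=1, trials=10)       # the last body went over the edge
print(risk_table, "approach again?",
      approach_cliff(mission_reward=5.0, loss_if_fall=20.0))
```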
The only ethics issue involved is making the claim that a machine thinks and feels. It cannot.
We can go down the rabbit hole of whether or not we are biological machines that think and feel vs. “ghost in the machine” and souls … but leave it as “don’t fight the hypothetical”. Assume that thinking and feeling are emergent properties of information processing in some specific ways, call them strange loops, whatever. Assume, hypothetically, that it is possible to create an information processing system of the complexity and sort that such properties emerge, and that somehow we know they are real and not just producing output that passes a Turing test.
Would it be ethical, hypothetically, to create a machine that has those emergent properties?
If one did create such a machine, would “the right thing to do” be to make it incapable of “suffering”? Or is the ability to experience negatively just as important as the ability to experience positively? For a social entity, is the ability to experience negatively required in order to have empathy?
Or, to ask it another, related way: if you could magically wave away humans feeling negative sensations and replace them with reports of needed action with no negative qualia attached, would you? Should you? What impact would that have on us as social creatures?