Would it be ethical to create an AI capable of feeling suffering?

You did in my Pit thread. Over and over, you stuck to the claim that even if you could somehow obtain an exact wiring topology and synaptic strength/type map of a preserved deceased brain, this would definitely not be enough to emulate the original brain to a “good enough” level of precision. (“Good enough” here means that other humans can’t tell the difference, and that the emulated brain can complete, or learn to complete, all the human tasks the original could.)

Now you seem to be saying you didn’t mean it that way.

As an engineer who looks at things from the bottom up (I have to work on very low-level systems, and the AI work I’ve done is on very low-level algorithms), these seem to me like hugely important distinctions.

Also, if an emulation of a brain can be good enough, then emulated brains will experience all the emotions we are arguing about, and we could build an AI by first starting with pieces or at least wiring patterns borrowed from scanned brains. Such an AI could experience emotions, including suffering.

I agree with you that it would by no means be an accident or inevitable; it would be human engineers choosing to build an AI with this capability. Or possibly being forced to: for larger reasons they might need an AI with humanlike volition, and so they might do this as a shortcut to getting there.

So, when they write things like the following regarding computing linearly non-separable functions, they are really just saying “things that neurons do” and nothing more:
“Local supra-linear summation of excitatory inputs occurring in pyramidal cell dendrites, the so-called dendritic spikes, results in independent spiking dendritic sub-units, which turn pyramidal neurons into two-layer neural networks capable of computing linearly non-separable functions, such as the exclusive OR.”
Do you disagree that a definition of computation is “the action of mathematical calculation”?

Would it help you if I used the term “calculation” instead of “computation”?

Great, it looks like we’ve got clarity on your position, and hopefully on mine. Here’s my summary; let me know if you agree with how I’ve summarized our positions:

wolfpup:
Some aspects of our mind are “computed” via CTM like principles, e.g. symbol processing etc.
Other aspects of our mind are not “computed,” because we reserve that term; instead they are “calculated” using non-CTM principles

RaftPeople:
All aspects of our minds are “calculated” on a foundation of non-symbolic, purely functional processing
Some aspects of our minds are possibly “computed” via CTM like principles - if so, those capabilities are built on top of the calculation/functional layer

My opinion is that nothing in AI is profound yet given the lack of real progress. And when there is something that is profound, the source of it will be math, not philosophy, biology or computer science.

Right.
Or at least, nociception is the signal that gets sent; it doesn’t necessarily need to be as serious as the arm not operating properly.

Wrong.
Pain is the subjective experience.
Do you dispute that (human) minds experience subjective phenomena: colors, smells, sounds etc? Serious question, not rhetorical.

If you agree that the perception “redness” exists, and is not inherently tied to any specific behaviour, then I don’t see why you would have an issue with pain as a subjective experience.
(And, incidentally, pain does not connect automatically with any behaviour. That’s largely the point of it: the conscious agent that experiences pain “feels” it as something powerfully negative, yet, unlike a reflex, it can choose any behaviour including just sitting there and enduring it).

Solvers that kick human butt at limited-domain tasks, and that are so general you could build a framework and apply it to thousands of problems while using the same core code, aren’t profound?

Most of the solvers I have looked at are path solvers: given a state and some finite set of available actions, which actions should the agent take to get the largest long-term expected reward?
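
To make that concrete, here’s a toy sketch of that kind of solver, using textbook value iteration on a made-up one-dimensional world (the states, rewards, and discount factor are all hypothetical, just for illustration):

```python
# Toy "path solver": given states, actions, and rewards, pick actions that
# maximize long-term expected reward. The tiny world here is hypothetical.
STATES = range(5)            # positions 0..4 on a line
ACTIONS = [-1, +1]           # step left or right
GOAL, GAMMA = 4, 0.9         # reward state and discount factor

def reward(state):
    return 1.0 if state == GOAL else 0.0

def step(state, action):
    return min(max(state + action, 0), max(STATES))

# Value iteration: repeatedly back up the best achievable expected reward.
V = {s: 0.0 for s in STATES}
for _ in range(100):
    V = {s: reward(s) + GAMMA * max(V[step(s, a)] for a in ACTIONS)
         for s in STATES}

def best_action(state):
    return max(ACTIONS, key=lambda a: V[step(state, a)])

print([best_action(s) for s in STATES])   # each state's greedy move toward the goal
```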

But this type of path solver should be able to drive a car better than any human who ever lived, Real Soon Now, sweep floors like a maestro, or assemble complex objects in a factory with beautifully optimal movements and the capability to correct for things going wrong. They can already fly quadcopters better than any human pilot.

And these types of path solvers may scale to deeper problems. I think they will. I think you could build a solver that can start with a table full of mechanical parts and actually “invent” a solution to a problem by devising a way to combine the parts into a working machine that solves it.

And eventually scale that to an AI engineer that can do the same thing, given a parts catalog and a library of extremely detailed models for each part. And an AI diagnosis agent that can determine what is wrong with a human being better than any doctor who ever lived, if for no other reason than that it has the equivalent of millions of years of experience.

And domains humans have no real grasp of, that elude our everyday understanding and are expensive and difficult to manipulate? I think with a bit of guidance from humans, AI solvers could devise and construct nanoscale machinery of greater and greater complexity, until ultimately being able to procedurally design a self replicating factory with hundreds of thousands of separate processes.

Pain would be the imperative - the way the AI’s decision-making software interprets* the nociception and translates it into something that influences the decision-making process, such that it perceives the input to have a quantifiable** and comparable** value it can weigh against its other driving imperatives to choose*** how it’s going to react to the situation - including both that damage-related nociception and all the other goals, inputs, and situational state it’s in. Of course the imperative in question would have to be pretty influential** in order to make reacting to it a higher priority than pursuing other goals.

  * the experience
  ** powerfully negative
  *** not like a reflex

Any situationally aware and self-motivated AI is going to have to constantly be juggling various inputs and imperatives that each would suggest a different possible set of ideal outcomes. Where you get things like “hunger” and “pain” is when the AI is expected to, itself, decide when to react to things like stopping and plugging itself in for a recharge or signing itself up for maintenance. These alternate goals will have to be weighed in concert with whatever the AI’s primary goal is (assembling jigsaw puzzles, probably), and at some point the AI is going to have to be given sufficient reason to decide to put the puzzles away and deal with the secondary goals.

In a lesser AI, so to speak, these drives would be hardcoded: power drops so low that the puzzle-solving process automatically shuts off and the robot immediately goes and plugs itself in. A self-motivated and situationally aware AI, by contrast, would know that its power is low but could put off dealing with it until it found a good ‘stopping point’. For this to happen, though, the low power level would still have to be an imperative it is aware of - an urge constantly driving it to recharge itself without instantly forcing the action.

Which is to say, hunger. Pain of course would operate on a similar principle, constantly reminding the AI that badness is happening, without completely shutting down processing on other matters. And by “reminding”, I mean putting in place an actively distressing imperative that could (at some point) override its positive rewarding imperative for completing puzzles and make it get off its digital duff and look for a band-aid. It has to demand attention actively enough to force the AI to care.
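
A rough sketch of what that weighing might look like, purely for illustration (the drive names, urgency formulas, and numbers are all made up):

```python
# Each drive reports an urgency; the agent acts on whichever currently
# outweighs the others. All names and formulas here are hypothetical.

def puzzle_urgency(state):
    return 0.5                                  # steady pull toward the primary goal

def hunger_urgency(state):
    return (1.0 - state["battery"]) ** 2        # grows as the battery drains

def pain_urgency(state):
    return 2.0 * state["damage"]                # damage demands attention quickly

IMPERATIVES = {
    "work_on_puzzle": puzzle_urgency,
    "recharge": hunger_urgency,
    "seek_repair": pain_urgency,
}

def choose_action(state):
    # Not a reflex: every drive is weighed every cycle, and a distressing one
    # only wins once its urgency actually exceeds the pull of puzzle work.
    return max(IMPERATIVES, key=lambda name: IMPERATIVES[name](state))

print(choose_action({"battery": 0.9, "damage": 0.0}))   # work_on_puzzle
print(choose_action({"battery": 0.2, "damage": 0.0}))   # recharge
print(choose_action({"battery": 0.9, "damage": 0.6}))   # seek_repair
```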

That quote appears to be describing neurons acting as logic gates. Calling that “computing” is a reflection of an unfortunate reality of the English language where the same word has radically different if loosely related meanings in different contexts. It’s like calling the switching of a single logic gate deep within the billions of gates in a processor chip to be “computing”. It’s confusing and unhelpful in a computer science context discussing an algorithm running on that computer, and in the same way it’s confusing and unhelpful in a cognitive science context discussing the high-level functional aspects of cognition.

I would prefer to say that some aspects of how the mind works can be explained in classic computational terms and are empirically consistent with such models, while others are not well understood at all. To say they are “calculated” implies a level of understanding that doesn’t exist. It also implies a particular kind of deterministic arithmetic logical mechanism that may be quite wrong for many of them.

That’s debatable at best. We have more power in our handheld devices in terms of speech recognition and voice synthesis than existed anywhere a few decades ago. Natural language understanding has made leaps and bounds. Game-playing intelligence has made incredible advances. Serious AI is appearing in commercial applications that aren’t always well publicized.

So here is a thought experiment. Let’s say I have a regular human and a humanoid machine. I decide to cut off one arm from each. At what point does the machine feel “suffering” in the same sense as a human?

When its diagnostic systems report that an arm has been damaged?

If its programming tells it to emulate the reactions of a human having their arm chopped off? Is that functionally different from an actor (like James Franco) portraying the same thing in a film?

Does the machine experience something akin to an emotional state at the loss of its arm? Does it potentially experience a degradation of performance that goes beyond the physical damage it experiences?

Can the machine anticipate the pending damage and take actions to avoid it? Even if those actions may conflict with other previously established tasks or priorities?

But there are ethical obligations to feed and care for the baby.

I have to echo **SamuelA** in that the potential for AI is profound. Currently, the state of AI is merely an impressive ability to crunch large-scale populations of data or respond to simple voice commands.

I’m not sure why you would discount computer science or even biology as AI will largely be built on one (maybe both) to emulate (or combine with) the other.

And philosophy is already a major consideration as AI becomes more profound and can perform more and more tasks that humans used to do.

I’ll stop you right there.
It’s *below* the level of decision-making. It’s something that *influences* decisions, but we don’t decide to be in pain.

It’s basically: sensory input -> autonomic parts of the brain convert to color, sound, pain experiences -> decisions made based on that experience-based perception of the world

And the middle layer is really the important thing to discuss in a thread about suffering.

It’s easy to make an AI agent that has goals and considers some events to be extremely negative in terms of those goals. I’ve implemented such AIs.
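
For instance, something along these lines is easy to write (this is a generic toy, not any particular implementation, and the event names and scores are invented):

```python
# An agent that has goals and rates events as extremely negative with respect
# to them, with no subjective experience anywhere in sight.

GOAL_VALUES = {
    "deliver_package": +10.0,
    "arm_destroyed": -100.0,   # "extremely negative" purely as a number
    "battery_depleted": -50.0,
}

class GoalAgent:
    def __init__(self):
        self.utility = 0.0

    def observe(self, event):
        self.utility += GOAL_VALUES.get(event, 0.0)

    def distressed(self):
        # Looks like distress from the outside; it is only an arithmetic test.
        return self.utility < -40.0

agent = GoalAgent()
agent.observe("arm_destroyed")
print(agent.distressed())   # True, but nothing here suffers
```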

But no one would claim that current AIs suffer, because no part of our current models even attempts to create a subjective experience.

And, like I said upthread, this is probably a good thing right now.
If I could make an agent capable of feeling pain as easily as I can make an agent that looks like it is in pain, then humans would have already eclipsed the suffering of all the real-world genocides many times over in the virtual realm.

Now, the obvious retort is that our modern AI agents are simple, and once you make something that has complex reasoning within the ballpark of a human, it will have humanlike emotions and subjective experience too.
My opinion is that could be true. But we don’t know it’s true. We don’t have a good model of what subjective experience is, so there’s no justification for that assumption. Especially if we create an AI as “smart” as a human but somehow very different in its reasoning and behaviour: we wouldn’t know how to begin answering whether it had subjective experience.

My statement about nothing “profound” in AI yet was directed more at our lack of understanding about how to make an AI “understand” and/or be “conscious”. It wasn’t meant to say that there is no progress in building effective tools.

From the perspective of building effective tools, two of the most important developments are:
1 - Understanding that a multi-layer ANN can approximate any function (late ’80s)
2 - Hinton’s algorithms for effectively training a deep belief network (in the 2000s)

These are the developments that have enabled the explosion in effective tools like self-driving cars, speech recognition etc. They aren’t the only way to solve these problems but they are a very natural fit for a wide variety of problems that humans handle with ease and that we would like to automate.
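
As a toy illustration of item 1, here’s a one-hidden-layer network fit to a simple curve with plain gradient descent (the layer size, learning rate, and target function are arbitrary choices, not anyone’s real system):

```python
# A one-hidden-layer ANN approximating a toy function via gradient descent.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(3 * x)                          # the toy function to approximate

H = 16                                     # hidden units
W1, b1 = rng.normal(0, 1, (1, H)), np.zeros(H)
W2, b2 = rng.normal(0, 1, (H, 1)), np.zeros(1)

lr = 0.05
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)               # hidden layer
    pred = h @ W2 + b2                     # linear output
    err = pred - y

    # Backpropagation of the mean-squared error.
    grad_W2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    grad_W1 = x.T @ dh / len(x)
    grad_b1 = dh.mean(axis=0)

    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print(float(np.mean(err ** 2)))            # fit error, which shrinks as training proceeds
```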

In addition, I think these tools (and probably combined with other approaches at the same time) will ultimately result in extremely powerful tools over the next 20 years that will definitely have a “profound” effect on our lives (I was just telling my kids this a couple weeks ago).

But getting to the point where Uber’s car has an “understanding” of itself and the objects in its environment, versus just a statistical analysis, will, IMO, take some “profound” development. My gut feel is that we won’t just keep piling on localized/specialized functions layer after layer and eventually get understanding as an emergent property. I think it’s possible to build a highly complex system that doesn’t have understanding, so we will need to figure out what it means to have understanding and how to build it.

Because (it seems to me) any understanding of our environment has a mathematical foundation. When we learn through trial and error and gain information, the ultimate model and understanding of a topic seems to involve a mathematical model.

It doesn’t mean that biology or computer science doesn’t play a role, it just means that the synthesis of the information and the real understanding of difficult problems uses math to model the world.

For example, feed forward neural networks are effective at classification, but no amount of trial and error would really give you a high level understanding without the math that shows what is happening.

For AI problems like “understanding” and “consciousness”, my opinion is that real progress will come from some really smart people doing math, not really smart people analyzing biology or really smart people building programs.

Would you agree that we can describe the “calculation” performed by neurons/dendrites (the logic gates) using math?

When the dendrites perform a linearly non-separable function on the input signals, would you agree we can take those input signals and using the right formulas, produce the result of that function?
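
For example, here’s a toy version of the two-layer arrangement the quote describes: two threshold sub-units feeding a final threshold unit can produce XOR, which no single linear threshold unit can. The weights and thresholds are hand-picked for illustration, not taken from any real neuron:

```python
# Two-layer threshold network computing the linearly non-separable XOR.

def threshold(total, theta):
    return 1 if total >= theta else 0

def xor(a, b):
    sub1 = threshold(a + b, 1)        # fires if at least one input is active
    sub2 = threshold(a + b, 2)        # fires only if both inputs are active
    return threshold(sub1 - sub2, 1)  # "at least one, but not both"

print([xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```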

The way I see it, what you *want* in order to represent the environment around an autonomous car is a 3D grid of risk distributions.

That is, the car classifies each object, looks up the probable composition and identity flags, and it has the velocity from other tracking subsystems that measure that.

Each cubic decimeter or so of space around the car carries a risk. A risk both to the occupants of the car and to the people outside.

So the car would “understand” what it needs to. It doesn’t need to know what a kid’s bike is, just that it has some meta tags and/or automatically generated probability linkages such that if it sees the bike, there might be a kid. And the bike itself is metal.

So if the bike were moving across the vehicle’s path of travel, this represents a risk to the car and if it sees a kid, to outsiders. It needs to find a path - a sequence of future actions - to minimize this risk.

Actually solving the path can be done by brute force, or, since the space of possible actions is too large, there are ways to use neural networks and cached search trees to guess paths that have a high likelihood of being good, and then you just look at those.
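
A stripped-down toy of the risk-grid idea, in 2D rather than 3D, with made-up risk numbers and candidate paths:

```python
# Assign a risk value to each cell around the car, then score candidate paths
# by accumulated risk. Grid size, risks, and paths are all hypothetical.
import numpy as np

risk = np.zeros((10, 10))        # 2-D stand-in for the 3-D grid of risk values
risk[4:7, 5] = 0.9               # cells occupied by the bike (and maybe a kid)
risk[4:7, 6] = 0.4               # uncertainty spills into neighbouring cells

def path_risk(path):
    # A path is just the sequence of grid cells the car would sweep through.
    return sum(risk[r, c] for r, c in path)

straight = [(5, c) for c in range(10)]            # drives through the risky cells
swerve   = [(8, c) for c in range(10)]            # goes around them

print(path_risk(straight), path_risk(swerve))     # roughly 1.3 vs 0.0
print(min([straight, swerve], key=path_risk) is swerve)   # the safer path wins
```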

A problem where the machine needs to build a Rube Goldberg machine is actually, in some ways, very similar. The machine first needs to look at and classify the environment so that it knows what components are available. It then needs to guess various combinations of the components and predict whether they will achieve the goal. Too many permutations exist, so you have to guess good ones with a neural network.

Eventually it has some candidate solutions and tries them; even if it fails, the feedback from the environment makes it better the next time.

An AI that designs car engines or nanotechnology could be a further extension of this.

The “understanding” you talk about isn’t necessarily needed, I think. The machine would have “tags” on objects that represent probabilities, but they only relate to the machine’s goals. It doesn’t care what something is called or what it is for; it only cares how it affects the machine’s assigned task.

It’s possible that understanding isn’t really required, and it’s possible it’s just an emergent attribute. Maybe we can tackle problems with a brute force approach of increasing amounts of data and training for different situations etc. But my gut feel is that understanding is a valuable attribute/function, possibly a compression mechanism, that allows the system to correctly interpret a wider range of inputs and levels of noise etc.

Current image recognition neural nets can be fooled into incorrect classification with seemingly mundane or barely detectable changes to the input data, and it’s been mathematically proven that all neural nets have these weaknesses. Maybe these deficiencies can be mitigated with layers of alternate mechanisms etc., but maybe “understanding” is the generalized version of solving that problem that is more broadly applicable to a wide variety of problems.
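
To illustrate the kind of fragility I mean, here’s a toy example on a plain linear classifier: nudging each input feature a tiny amount in the direction of the weights’ signs (the idea behind the well-known “fast gradient sign” attacks) flips the prediction even though the input barely changes. The weights and input values are made up:

```python
# Barely detectable perturbation flipping a linear classifier's decision.
import numpy as np

w = np.array([0.5, -0.3, 0.8, 0.1])     # a trained linear classifier's weights
x = np.array([0.2, 0.9, -0.1, 0.4])     # an input correctly classed as negative

def predict(v):
    return "positive" if w @ v > 0 else "negative"

eps = 0.15                               # small, "barely detectable" step size
x_adv = x + eps * np.sign(w)             # nudge each feature the worst way

print(predict(x), predict(x_adv))        # negative positive
print(np.max(np.abs(x_adv - x)))         # the change is only 0.15 per feature
```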

I know about the flaw you’re talking about, but I think it’s a much lower-level, easier-to-fix one. I think some additional pieces, maybe borrowed from nature, could block these adversarial examples. Or multiple networks trained in parallel on randomized subsets of the training data. Or there’s talk about manifold detection as a way to detect when an example is adversarial in itself, so the input can be mutated to no longer be adversarial.

I think “understanding” is many, many layers up, involving hundreds of interacting neural networks in a far more complex machine.

I agree with that optimism and with the transhumanism at the end, and I agree with the importance of philosophy in the field. But surely when IBM’s Watson system won the Jeopardy challenge in 2011 it was doing a lot more than “responding to simple voice commands”. The utility of some of the Watson spinoff products today does indeed involve processing large-scale collections of data, but that seems like more of a description of data mining than what Watson really does, which is an intelligent knowledge-based analysis that is more akin to a knowledgeable doctor doing a medical diagnosis. Think about all the things that the Jeopardy Watson had to do, from parsing natural language (which was often worded in cutesy ways that might sometimes be tricky even for a human to grasp) to establishing search goals to establishing reliable confidence rankings for its responses.

This sounds like the same fallacy about “understanding” that was famously expressed in John Searle’s Chinese Room argument, which is pretty widely mocked by most researchers in AI and cognition. The argument seeks to show that a human agent blindly following rules for processing symbols can, with a sufficiently complete set of rules, be shown to be able to read and write Chinese even though the agent doesn’t understand a word of it himself, but is “just” executing an algorithm. The simply stated answer to the argument is that while the agent may not understand Chinese, the overall system (the “Chinese room”) clearly does, because it’s reading it and intelligently responding to it. We can find ourselves going down a rabbit hole about what “real” understanding is until we acknowledge that understanding can only be described and assessed by behaviors.

I’m not really sure what this is supposed to mean, but the last part just isn’t true. To me “math” implies that we have a formal model of the solution we’re trying to implement. AI doesn’t usually work that way. It usually involves a bunch of heuristics and state tables built up by learning or training that make up a kind of ad hoc hybrid of different techniques (just like the human brain!) whose performance is unpredictable to its designers until they try it, fine-tune it, and try it again. And, in fact, most of the advances have indeed come from smart people just building programs. Now, to be sure, a computer program is in some sense just “math” – and the LISP language originally developed for AI applications even stems directly from the lambda calculus – but that doesn’t seem to be the sense in which you’re using the term.

The two questions appear to be the same question. To both of them, yes, obviously, but the converse isn’t true: that is, you can’t use an instance of the described logic gate to do math in any general way. You cannot, for example, build a general-purpose calculator with a (one) logic gate. To actually build a calculator that can calculate arithmetic problems, you’d need hundreds of them or more, arranged according to a specific architecture. To actually build a computer that performs “computation” in the finite sense of a limited Turing machine, you’d need at least thousands of them (billions of them for a modern computer) as well as memory for stored programs and data.

What a calculator does is therefore qualitatively different from any claimed “calculation” that a single logic gate does, and what a computer does is another qualitative leap into something fundamentally and qualitatively different – not just quantitatively different – both from a calculator and from its individual logic gates. And what such a Turing-equivalent computer does is what we properly call “computation” in the context of computer and cognitive sciences, not the other stuff.
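
To put that composition point in concrete terms, here’s a sketch of the standard way gates get arranged into arithmetic: a ripple-carry adder built from full adders. One gate does essentially nothing on its own; the specific architecture is what adds numbers:

```python
# Individual gates, then gates composed into an architecture that does arithmetic.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def add(x_bits, y_bits):
    # x_bits and y_bits are least-significant-bit-first lists of 0s and 1s.
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

print(add([1, 1, 0], [1, 0, 1]))   # 3 + 5 -> [0, 0, 0, 1], i.e. 8 (LSB first)
```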

  1. Finally feel like we can agree on something. Thank you for crediting my optimism.

  2. So what can an individual neuron do by itself? To me this Chinese room example in many ways says the brain is really just Chinese rooms at the very bottom. Go down deep enough and you leave the realm of ‘understanding’ and are left with simple math operations.

  3. Ok, let’s flip this the other way. Would you agree that a single logic gate, or collection of logic gates, can in fact be mimicked by a Turing complete machine?

So the inverse - you have a Turing complete machine and want to emulate some collection of lesser systems - is possible.

The human brain may not be Turing complete, but if you had a Turing complete machine and exact enough information, and of course a ridiculous amount of memory and computational power to do this fast enough to matter, you could emulate it.

So maybe what the brain does isn’t computation, but if you had infinite computational power you could emulate it to sufficient precision to produce outputs indistinguishable from the analog computers it actually uses.
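
As a toy illustration of that kind of emulation, here’s a textbook leaky integrate-and-fire neuron model stepped numerically on a digital machine; the parameter values are arbitrary, and shrinking the time step is what buys you more precision:

```python
# Digital emulation of a simple "analog" neuron model, stepped numerically.

def simulate(input_current, dt=0.1, tau=10.0, threshold=1.0, steps=1000):
    v, spikes = 0.0, []
    for t in range(steps):
        # dv/dt = (-v + I) / tau, integrated with a simple Euler step.
        v += dt * (-v + input_current) / tau
        if v >= threshold:          # fire and reset, like the analog cell
            spikes.append(t * dt)
            v = 0.0
    return spikes

print(len(simulate(1.5)))           # spike count over the simulated window
# Halving dt (and doubling steps) gives nearly the same answer: the digital
# emulation converges on the continuous behaviour as precision increases.
```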

And if *this* is true, then if you had sufficiently powerful Turing complete machines and knew what you were doing, you could in fact make an AI that could experience suffering.

I agree with you 100% though that this would not be a guaranteed emergent property, it’s something you would have to do very deliberately and with an immense amount of knowledge and experience backing you.

Also, wolfpup, you’ve earned a removal from a certain list I had you on.

After having slept on it, I wandered back into the thread - to concede.

I have a tendency to think of the processing of AI in terms of the ‘imperative’ model above - the one where the AI is consciously assessing both its sensory inputs and its own various goals and deciding which goals should be pursued at any given time based on relative value and importance heuristics. Such an AI would, by the very structure of its AI, be able to accurately describe itself as “liking” things, “wanting” things, being “happy” or “sad” about the situation, and if you toss in an ability to probabilistically speculate about possible future states, “hoping” or “fearing” about the possible outcomes. These emotional states would be inherently applicable because the cognition would be examining itself and its options in the very same sense that humans do.

But that’s not the only way a cognition can function.

Alternatively, a cognition could have only a single goal. If a cognition has only one goal it doesn’t have to choose between goals, and thus needn’t weigh the relative merits of different goals and inputs, and thus needn’t even have an opinion on them. All it would have is a single goal that it seeks to achieve, and it would assess everything in the context of how it serves that goal, from the standpoint of impartial analysis. It would add up the numbers from a bunch of different approaches and the one with the biggest total wins. Impartially.

(It would of course be perfectly accurate and appropriate to talk about how “happy” the AI is with these results, but we’ll pretend that isn’t the case to keep the anti-emotion crowd happy.)

Dealing with things like pain and hunger, then, stops being about whether the AI feels them - those sorts of nociception would be assessed only regarding how they served the end goal. An AI will only bother to go plug itself in if doing so serves the end goal better than not doing so. Other than that it won’t care - because it doesn’t really care about anything (except the end goal (shut up begbert2!)). If a robot arm gets torn off, well, would replacing it accomplish the goal better than not? If not, then forget it; we didn’t need that arm anyway.
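
A toy sketch of that analytic evaluation, with made-up plan names and numbers:

```python
# Every candidate plan is scored purely by how many puzzles it is expected to
# yield, and the largest total wins. Plans and numbers are hypothetical.

def expected_puzzles(plan):
    # Nociception, lost arms, and recharging matter only via this one number.
    return plan["puzzles_per_hour"] * plan["expected_uptime_hours"]

plans = [
    {"name": "keep working, ignore damaged arm",
     "puzzles_per_hour": 3.0, "expected_uptime_hours": 5.0},
    {"name": "pause for repairs, then resume",
     "puzzles_per_hour": 4.0, "expected_uptime_hours": 6.0},
]

best = max(plans, key=expected_puzzles)
print(best["name"])   # repairs win only because the numbers say so
```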

So, to talk about a practical example, consider two robots, one with an ‘imperative’ cognition, and one with an ‘analytic’ cognition. Both robots have as their primary goal to assemble as many jigsaw puzzles as possible. (This is, of course, the ultimate goal that all AI is working towards.)

So you ask the two AIs, “Do you like assembling jigsaw puzzles all day?”

The imperative one answers, “Sure, of course! If I didn’t I wouldn’t be doing it - I’d instead be pursuing my other hobby, slaughtering people and building xylophones from their bones.” And then it continues assembling jigsaw puzzles while whistling a jaunty tune, because it happens to like whistling jaunty tunes and doing so doesn’t impede its ability to work jigsaw puzzles.

The analytic one answers, “I have no opinion about that. Beep Boop.” Or perhaps it wouldn’t respond at all, because answering questions won’t help it work jigsaw puzzles faster. It certainly wouldn’t present even the slightest threat to humans - well, unless threatening humans would help it work puzzles faster. If it thought it would help it would totally be willing to enslave humans to help it work puzzles, perhaps executing one now and then to motivate the others. But it wouldn’t be out of evil or malice - it would just be seeking the optimal approach to accomplishing its task.

This might be a good time to talk about Asimov’s three laws of robotics.

An imperative robot would consider the three laws to be guiding imperatives - hopefully really important ones, such that it would never make human xylophones as a hobby because, thanks to its imperatives, it really doesn’t want to see humans harmed. It also doesn’t like to see itself harmed, but will put itself in harm’s way to save a human because it likes human safety more than its own. It would obey (non-murderous) orders because following orders pleases it.

If the three laws are not overriding imperatives, then our dear robot might murder a human who was standing between him and a puzzle. It wouldn’t be happy about doing that, but they were in the way of the puzzle, dammit. Acceptable loss.

The analytic robot is interesting in that it can only have one goal to mindlessly work towards - and the three laws already include that goal. The second law says that the robots must follow human orders; thus following human orders is the goal. Presumably there would be some straightforward equation provided that allowed it to determine which orders to ignore when they conflict - without the robot actually deciding this on its own conscious initiative, because that would require it to consciously have an opinion about the importance and value of different orders, and it could get distressed if it couldn’t do both, which would be emotional and stuff. So we have to precalculate away all possible conflicts, which is fine.

If you have a three-laws analytic robot, it wouldn’t assemble puzzles automatically; somebody would have to order it to do so. At which point it will plan its actions to optimize compliance with that one goal, until some other order replaced the puzzle-making order as its goal.
If you expect AIs to supplant humanity as a successor species, then they’re pretty much going to have to be imperative robots - analytic AIs don’t have the ability to decide their own goals, because they don’t care about anything. An analytic AI will just keep pursuing its determined goal, regardless of anything else, indefinitely. It may have imagination in how it seeks that goal, but it will not have the imagination to generate new goals of its own based on its own interests, because it doesn’t have interests.

And everything in the above paragraph is why analytic AIs are good. We don’t actually want AIs to replace humanity. Humanity likes humanity. We’re used to having it around and have grown quite fond of it. So an AI that might someday decide it prefers xylophones over neighbors is something we do not want. Besides which, emotive things make lousy slaves - and honestly, making ethical-problem-free slaves is the ultimate goal here.

Except the Chinese room can’t do anything EXCEPT translate Chinese. A human can just up and decide he wants to go “off program” and…I don’t know…take a giant shit in the middle of the room if he feels like it.

A self-driving Uber car can’t just up and decide it wants to be a self-driving police car instead of a taxi.

Watson can’t decide to go from being a virtual doctor to being a virtual lawyer.

Sounds a lot like the “paperclip” thought experiment. Left to its own devices and without proper safeguards, an AI programmed to manufacture paperclips could gleefully convert all matter in the solar system to the purpose of manufacturing paperclips.

That doesn’t really prove anything. Prototype path-solver AIs we can build today can also do something like <action, take shit>, assuming you exposed that as an action and the virtual environment or real robotic hardware is capable of it.

And in fact the current way they learn is by sometimes taking a random action at a given state instead of the optimal action.
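
That’s the scheme usually called epsilon-greedy. A minimal sketch, with made-up action values:

```python
# Epsilon-greedy action selection: mostly exploit the best-known action, but
# occasionally take a random one to keep exploring. Values are hypothetical.
import random

q_values = {"turn_left": 0.2, "turn_right": 0.7, "take_shit": -5.0}

def epsilon_greedy(q, epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(list(q))           # explore: any action, even odd ones
    return max(q, key=q.get)                    # exploit: the best-known action

print([epsilon_greedy(q_values) for _ in range(5)])
```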

And we know exactly (well, pretty close-ish) how the circuitry they are on works.

Just because the bottom levels are analytic processors doesn’t mean that you can’t build something more interesting out of them. The entirety of all computer hardware and software is based on the principle and practice of building complex things by grouping simple things together in specific ways.

As for cars and virtual doctors being unable to change careers, that’s because they’re unable to change their main goal, due to it being effectively hardcoded. To be able to change one’s main goal you need to be able to:

  1. recognize that other goals are possible
  2. recognize that other goals are valuable
  3. assess the relative value of the possible goals and choose which to follow based on the result.

Humans can do this. Uber AIs probably can’t, because those aren’t profitable skills to give them.

That said, I want to see an uber try to be a self-driving cop car, despite not having sirens, lightbars, weapons, and methods to capture and contain people.

(Um, uber self-driving cars do lack all those things, right?)

‘Never get between an AI and its paper clips - particularly now that it’s assembled that paperclip-based death ray.’

Back to the earlier point about making a distinction between object recognition and emotions.

Are you familiar with the research on object recognition in the human brain and comparing/predicting specific neuron behavior based on ANN deep belief networks?

The evidence continues to mount that object recognition happens prior to higher-level thought and is modeled well by a deep belief network. Meaning that researchers have built a wide variety of neural networks to try to find ones that matched the performance of the brain’s neurons while also making correct predictions about which neurons would be activated by the input supplied. They were very successful at finding networks that matched performance and predicted brain neuron activity; this has happened for both image processing and auditory processing.

Note: when they match, what is really happening is that we have two sets of mathematical calculations that are substantially the same, despite the fact that one is calculated with biological neurons, and the other is calculated using silicon. The math is what is important.
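
A toy version of that matching exercise, using synthetic numbers rather than real recordings: the model “matches” when its predicted responses correlate strongly with the recorded ones.

```python
# Synthetic stand-in for comparing model activations to neural recordings.
import numpy as np

rng = np.random.default_rng(1)
recorded = rng.normal(size=50)                          # stand-in neural responses
model_predicted = recorded + 0.3 * rng.normal(size=50)  # a model that tracks them

r = np.corrcoef(recorded, model_predicted)[0, 1]
print(round(r, 2))   # high correlation = "substantially the same calculation"
```
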
Is there some aspect of calculating emotional states that you can pinpoint, to support your argument, that makes their calculation unique compared to something like calculating object recognition or processing auditory signals?