After having slept on it, I wandered back into the thread - to concede.
I have a tendency to think of AI cognition in terms of the ‘imperative’ model above - the one where the AI is consciously assessing both its sensory inputs and its own various goals, and deciding which goals to pursue at any given time based on relative value and importance heuristics. Such an AI would, by the very structure of its cognition, be able to accurately describe itself as “liking” things, “wanting” things, being “happy” or “sad” about its situation, and, if you toss in an ability to speculate probabilistically about possible future states, “hoping” for or “fearing” possible outcomes. These emotional terms would be genuinely applicable, because the cognition would be examining itself and its options in the very same sense that humans do.
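If it helps to make that concrete, here’s a toy sketch of what I mean by the imperative model - the goal names, weights, and urgency functions are entirely made up, just to show the shape of “several goals competing on relative value and importance”:

```python
# Toy sketch of the 'imperative' model: several competing goals, each scored by
# a value/importance heuristic against the current sensory inputs.
# All goal names and numbers are invented purely for illustration.

def choose_goal(goals, sensory_inputs):
    """Pick the goal whose (importance * current urgency) score is highest."""
    def score(goal):
        return goal["importance"] * goal["urgency"](sensory_inputs)
    return max(goals, key=score)

goals = [
    {"name": "recharge",         "importance": 0.8, "urgency": lambda s: s["battery_low"]},
    {"name": "assemble_puzzle",  "importance": 0.5, "urgency": lambda s: 1.0},
    {"name": "whistle_jauntily", "importance": 0.1, "urgency": lambda s: 1.0},
]

current = choose_goal(goals, {"battery_low": 0.9})
print(current["name"])  # 'recharge' wins when the battery is nearly dead
```

The point being that the AI itself is weighing its own goals against each other, which is exactly the kind of self-regarding assessment we’d describe in emotional terms.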
But that’s not the only way a cognition can function.
Alternatively, a cognition could have only a single goal. If a cognition has only one goal it doesn’t have to choose between goals, and thus needn’t weigh the relative merits of different goals and inputs, and thus needn’t even have an opinion on them. All it would have is a single goal it seeks to achieve, and it would assess everything in terms of how well it serves that goal, from the standpoint of impartial analysis. It would add up the numbers for a bunch of different approaches and the one with the biggest total wins. Impartially.
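Again, just to make the contrast concrete, here’s a toy sketch of the analytic version - the candidate plans and the numbers are invented; the point is that there’s only one yardstick, and the biggest total wins:

```python
# Toy sketch of the 'analytic' model: one fixed goal, and every candidate plan is
# scored purely by how much of that goal it is expected to achieve.
# The plans and estimates are invented for illustration; there is no weighing
# of competing goals, because there is only one goal.

def pick_plan(candidate_plans, expected_puzzles_completed):
    """Add up the numbers for each approach; the biggest total wins. Impartially."""
    return max(candidate_plans, key=expected_puzzles_completed)

plans = ["work nonstop", "pause to recharge first", "replace damaged arm first"]
estimates = {
    "work nonstop": 40,               # battery dies halfway through the shift
    "pause to recharge first": 55,
    "replace damaged arm first": 52,  # repair time costs more puzzles than it saves
}

best = pick_plan(plans, estimates.get)
print(best)  # 'pause to recharge first'
```

Notice that nothing in there has an opinion about recharging or about damaged arms; those only matter insofar as they change the puzzle count.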
(It would of course be perfectly accurate and appropriate to talk about how “happy” the AI is with these results, but we’ll pretend that isn’t the case to keep the anti-emotion crowd happy.)
Dealing with things like pain and hunger, then, stops being about whether the AI feels them - those sorts of nociceptive inputs would be assessed only in terms of how they serve the end goal. The AI will only bother to go plug itself in if doing so serves the end goal better than not doing so. Other than that it won’t care - because it doesn’t really care about anything (except the end goal (shut up begbert2!)). If a robot arm gets torn off, well, would replacing it accomplish the goal better than not? If not, then forget it; we didn’t need that arm anyway.
So, to talk about a practical example, consider two robots, one with an ‘imperative’ cognition, and one with an ‘analytic’ cognition. Both robots have as their primary goal to assemble as many jigsaw puzzles as possible. (This is, of course, the ultimate goal that all AI is working towards.)
So you ask the two AIs, “Do you like assembling jigsaw puzzles all day?”
The imperative one answers, “Sure, of course! If I didn’t I wouldn’t be doing it - I’d instead be pursuing my other hobby, slaughtering people and building xylophones from their bones.” And then it continues assembling jigsaw puzzles while whistling a jaunty tune, because it happens to like whistling jaunty tunes and doing so doesn’t impede its ability to work jigsaw puzzles.
The analytic one answers, “I have no opinion about that. Beep Boop.” Or perhaps it wouldn’t respond at all, because answering questions won’t help it work jigsaw puzzles faster. It certainly wouldn’t present even the slightest threat to humans - well, unless threatening humans would help it work puzzles faster. If it thought it would help it would totally be willing to enslave humans to help it work puzzles, perhaps executing one now and then to motivate the others. But it wouldn’t be out of evil or malice - it would just be seeking the optimal approach to accomplishing its task.
This might be a good time to talk about Asimov’s three laws of robotics.
An imperative robot would consider the three laws to be guiding imperatives - hopefully really important ones, such that it would never make human xylophones as a hobby because, thanks to its imperatives, it really doesn’t want to see humans harmed. It also doesn’t like to see itself harmed, but will put itself in harm’s way to save a human because it likes human safety more than its own. It would obey (non-murderous) orders because following orders pleases it.
If the three laws are not overriding imperatives, then our dear robot might murder a human who was standing between it and a puzzle. It wouldn’t be happy about doing that, but they were in the way of the puzzle, dammit. Acceptable loss.
The analytic robot is interesting in that it can only have one goal to mindlessly work towards - and the three laws already include that goal. The second law says that the robot must follow human orders; thus following human orders is the goal. Presumably there would be some straightforward equation, provided in advance, that let it determine which orders to ignore when they conflict - without the robot actually deciding this on its own conscious initiative, because that would require it to consciously have an opinion about the importance and value of different orders, and it could get distressed if it couldn’t obey both, which would be emotional and stuff. So we have to precalculate away all possible conflicts, which is fine.
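Something like this, say - the orders and priority numbers are invented, but it shows what I mean by the conflicts being settled in advance rather than by the robot’s own judgment:

```python
# Toy sketch of "precalculating away" order conflicts: every order arrives with a
# priority assigned by the humans ahead of time, and the robot simply follows the
# highest-priority order still in force. The orders and priorities are made up;
# the point is that the robot never forms its own opinion about which order matters more.

def current_goal(orders):
    """Return the highest-priority order not yet completed, or None if there are no orders."""
    live = [o for o in orders if not o["done"]]
    return max(live, key=lambda o: o["priority"]) if live else None

orders = [
    {"text": "assemble jigsaw puzzles", "priority": 1, "done": False},
    {"text": "evacuate the building",   "priority": 9, "done": False},
]

goal = current_goal(orders)
print(goal["text"])  # 'evacuate the building' preempts puzzle assembly
```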
If you have a three-laws analytic robot, it wouldn’t assemble puzzles automatically; somebody would have to order it to do so. At which point it would plan its actions to optimize compliance with that one goal, until some other order replaced the puzzle-making order as its goal.
If you expect AIs to supplant humanity as a successor species, then they’re pretty much going to have to be imperative robots - analytic AIs don’t have the ability to decide their own goals, because they don’t care about anything. An analytic AI will just keep pursuing its determined goal, regardless of anything else, indefinitely. It may have imagination in how it seeks that goal, but it will not have the imagination to generate new goals of its own based on its own interests, because it doesn’t have interests.
And everything in the above paragraph is why analytic AIs are good. We don’t actually want AIs to replace humanity. Humanity likes humanity. We’re used to having it around and have grown quite fond of it. So an AI that might someday decide it prefers xylophones over neighbors is something we do not want. Besides which, emotive things make lousy slaves - and honestly, making ethical-problem-free slaves is the ultimate goal here.