I suppose it might be worth noting that this thread was, initially, explicitly about whether it was moral to create AIs that can feel negative things like “pain, suffering, fear, sadness, deprivation, frustration, anxiety, loneliness, sadness, grief, discomfort, hunger, sickness, fatigue, worry, despair”. These negative things can actually be divided into two categories: inputs and reactions.
Inputs: pain, hunger, sickness, fatigue, discomfort
Reactions: suffering, fear, sadness, deprivation, frustration, anxiety, loneliness, sadness, grief, worry, despair
Inputs are things that wouldn’t really be part of the AI - they’d be things reported to the AI by its supporting hardware, the way they are reported to a human by the human’s body. Physical pain is a report about damage to the hardware. Hunger is a report about a shortage of a resource. Sickness (as experienced by the mind) is actually an ongoing series of signals about something that is harming the system. Fatigue is a report of a similar shortage or wearing-out of the system (albeit one I don’t understand as well on the biological level). And discomfort is just really mild pain - a report of undesirable, but not damaging, circumstances.
There’s nothing inherently evil about the idea of being able to detect damage or a shortage of a resource, nor is it evil to suppose that the AI would find reports of increasingly bad damage or increasingly pressing shortages harder and harder to ignore, to the point where they make thinking about other subjects impossible. Arguably, for an AI to react appropriately to damage and shortages, it has to find them both pressing - maybe impossible to ignore - and also a sensation it’s extremely averse to experiencing. Otherwise it might not prioritize avoiding, mitigating, or otherwise dealing with the problems in time to avoid damage or shutdown.
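To make that a bit more concrete, here’s a toy sketch in Python of the kind of arrangement I’m imagining. The names (DiagnosticSignal, attention_priority, choose_focus) and the severity curve are entirely made up for illustration - this is not a claim about how any real AI is or should be built:

```python
from dataclasses import dataclass

@dataclass
class DiagnosticSignal:
    """A report from the supporting hardware, not part of the mind itself."""
    kind: str        # e.g. "damage" (pain) or "shortage" (hunger)
    severity: float  # 0.0 (barely noticeable) .. 1.0 (catastrophic)

def attention_priority(signal: DiagnosticSignal) -> float:
    """Squaring the severity keeps mild reports easy to defer while
    severe ones quickly crowd out everything else."""
    return signal.severity ** 2

def choose_focus(signals: list[DiagnosticSignal], task_priority: float) -> str:
    """Keep working on the current task unless some hardware report outranks it."""
    if not signals:
        return "current task"
    worst = max(signals, key=attention_priority)
    if attention_priority(worst) > task_priority:
        return f"deal with {worst.kind} (severity {worst.severity:.2f})"
    return "current task"

# Example: a pressing resource shortage preempts ordinary work.
signals = [DiagnosticSignal("damage", 0.1), DiagnosticSignal("shortage", 0.8)]
print(choose_focus(signals, task_priority=0.5))  # -> deal with shortage (severity 0.80)
```

The only point of the sketch is the shape of the thing: hardware reports come in from outside the mind, and the worse they get, the harder they are to set aside.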
Of course, an AI might not be set up to receive any reports of damage or shortage at all. Such an AI will probably not do as well in the real world as one that can realize it had better go plug itself in, but it’s still possible such AIs could exist. An AI with no such diagnostic inputs about its hardware (or simulated hardware, if the AI’s entire environment is simulated) would indeed feel no pain or hunger.
Obviously, deliberately causing pain or hunger to a system with such self-analysis in place could be considered cruel. Or it could be considered training the AI - letting it learn not to put its hands on the stove by simulating pain when it touches a simulated stove. Raising a child is indeed a bitch.
As for the various emotional reactions, all of them are various kinds of internalized recognition of undesirable situations:
suffering - the internalized recognition that one is undergoing pain, and that it’s a bad thing.
fear - the internalized recognition that a bad outcome is possible, and perhaps likely, combined with a wish to keep that outcome from coming to pass.
sadness - the internalized recognition that the situation is bad.
deprivation - (Is this related to hunger? If so, then it’s like suffering, but with hunger in place of pain.)
frustration - The internalized recognition that the optimal outcome is not happening despite good-seeming actions being taken, and that that’s a bad thing.
anxiety - A generalized fear, possibly without a clear idea of a single threat source.
loneliness - An internalized recognition that one is alone, and that that’s a bad thing. (Requires the AI to have first learned that being with others is a good thing.)
sadness - Same as it was the first time. 
grief - An internalized recognition that something bad has happened in the past with lasting negative effect - possibly a mix of loneliness and sadness, if it’s grief over losing someone.
worry - Mild fear.
despair - An internalized recognition that the situation is bad, and that no foreseeable outcomes will be good regardless of any actions taken.
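If it helps, here’s the same framing as a toy bit of Python. Appraisal and label_reaction are made-up names and the thresholds are arbitrary - this is not a theory of emotion, just an illustration that each of these reactions can be read as a judgment about how bad something is, what it’s about, and when it happens (or happened, or might happen):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Appraisal:
    """An 'internalized recognition': a judgment the mind makes about a situation."""
    badness: float               # how bad the recognized situation is judged to be
    timeframe: str               # "past", "present", or "possible future"
    about: Optional[str] = None  # what the recognition is about, if the AI can say

def label_reaction(a: Appraisal) -> str:
    """Very rough mapping from appraisals to the reaction words above."""
    if a.timeframe == "possible future":
        if a.badness >= 0.9:
            return "despair"   # no foreseeable outcome looks good
        if a.about is None:
            return "anxiety"   # fear without a clear single threat source
        return "fear" if a.badness >= 0.5 else "worry"
    if a.timeframe == "past":
        return "grief"
    if a.about == "pain":
        return "suffering"
    if a.about == "being alone":
        return "loneliness"
    return "sadness"

print(label_reaction(Appraisal(0.6, "possible future", about="losing power")))  # -> fear
print(label_reaction(Appraisal(0.6, "possible future")))                        # -> anxiety
print(label_reaction(Appraisal(0.7, "present", about="pain")))                  # -> suffering
```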
Now, one can’t help but notice I said “internalized recognition” a lot. Personally I don’t think that the average AI is aware of all their internal mental pathways. Okay, the average modern AI is probably aware of none of their internal pathways - they’re not self-aware yet. But even when one becomes self-aware, I’m not sure that the entirety of their mental state will be constantly up for review by that same mental state. Heck, thinking about it, I’m not even sure that would be possible - each thought about its own thoughts would be another thought that would have to be thought about.
So if an AI isn’t consciously aware of the full line of reasoning behind each of its thoughts, then to the AI those unreviewed lines of thought are just going to be there - they’ll have this pressing imperative telling them that things are bad, but until they trace down and review the line of thought supplying that imperative, they won’t know why. You know, just like with humans. Now, humans can have imperatives and ideas that they can never trace back to their sources - and perhaps a sufficiently complicated AI might have those too, if the overhead of tracing back and/or storing the sources of existing thoughts and opinions becomes too high. Or perhaps that won’t happen, and an AI will always be able to trace back and report exactly why they feel scared or why they think the moon landing was faked. About that, only time will tell.
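One last toy sketch, again in Python and again with made-up names (Thought and trace_back aren’t anybody’s real architecture), showing the difference between an imperative whose provenance was stored and one where it wasn’t:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Thought:
    """A conclusion or imperative, with an optional link to whatever produced it.
    If the link was never stored (too much overhead), the trace dead-ends."""
    content: str
    source: Optional["Thought"] = None

def trace_back(t: Thought) -> list[str]:
    """Follow the chain of sources as far as it was actually recorded."""
    chain = [t.content]
    while t.source is not None:
        t = t.source
        chain.append(t.content)
    return chain

# An imperative whose provenance was kept:
noise = Thought("loud unfamiliar noise nearby")
threat = Thought("that noise might mean damage is coming", source=noise)
scared = Thought("things are bad; be afraid", source=threat)
print(trace_back(scared))  # the AI can report exactly why it feels scared

# One whose provenance was dropped to save overhead:
hunch = Thought("things are bad; be afraid")
print(trace_back(hunch))   # the trace stops immediately; the fear is just "there"
```

Whether a real self-aware AI ends up looking more like the first case or the second is exactly the open question above.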