Let us approach the subject as a dualist; that is, as someone who separates things into mind and matter. On the matter side, we have sensation: indications of the physical environment. On the mind side, we have emotions: indications of the mental environment. As we begin to disregard dualism, we begin to see that the mind is not really more than matter (or is it that matter is not really more than mind?) and that the requirement of emotion might arguably be linked to self-awareness. Without emotion, there is nothing to be aware of. No extension of mind, so to speak.
That’s how I think of it, and why I think that emotions are not only indispensable, but able to be understood by various methodologies involving reason.
At the very least, we might consider epiphenomenalism, where consciousness is simply a phenomenon that rides on top of our physical beings, and is caused by the physical things, but in no way affects the physical things (it is the end of causation). Here we see the necessity of emotion, as well: it is the mental effect of physical actions as much as any other part of consciousness. I’m not a big fan of epiphenomenalism, but certainly there are many reasons why emotions would be a necessary component of thought.
From a strictly behavioral standpoint, emotions don’t even “really” exist, only public behavior that we’ve named. In such a case, emotion-words are necessary since they name behavior we use in various ways/activities. But even though we’ve properly qualified what we mean by “emotion”, the statement “emotions are necessary” is still quite true, no?
In fact, I’d rather like to understand how consciousness could exist without emotion, and yes, I’ve seen Star Trek.
so it seems, AHunter3, that what you meant to say wasn’t so much that emotions are necessary for thoughts (as in cognitive “atoms”, if you will), but that emotions are necessary for human thought. very well, then, i can see that.
incidentally, how would you go about programming emotions if you were to take on some monumental task of creating a machine that might be said to think, or at least one that models thought? would they be your value functions, the ones that assess a given state and return a value that represents the, say, utility of that state?
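To make the value-function idea concrete, here is a minimal sketch in the spirit of that question. Everything in it is hypothetical illustration: the state fields (`food`, `danger`, `novelty`) and the weights are invented, and the function simply maps a state to a scalar utility that biases the machine’s choice, playing the role an emotion might.

```python
# Minimal sketch of an emotion-like value function: it maps a world
# state to a scalar "utility" that biases the agent's choice of action.
# The state fields and weights here are hypothetical illustrations.

def value(state):
    """Return a scalar utility for a given state."""
    # Each term plays the role of a crude "emotion": food attracts,
    # danger repels, novelty mildly rewards curiosity.
    return (2.0 * state.get("food", 0)
            - 5.0 * state.get("danger", 0)
            + 0.5 * state.get("novelty", 0))

def choose(states):
    """Pick the candidate state with the highest utility."""
    return max(states, key=value)

options = [
    {"food": 1, "danger": 0, "novelty": 0},
    {"food": 0, "danger": 1, "novelty": 1},
]
best = choose(options)  # the machine "prefers" the safe, fed state
```

On this reading, the “emotion” is nothing more than the assessment itself: the machine never deliberates about danger, it just feels the low score.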
also, would you refrain from calling someone self-aware if brain lesions caused them to lack emotions?
I don’t have a cite, but you could perhaps investigate this. I’ve read twice about cases of people with brain damage that severely reduced their ability to feel emotions. As a result, they were unable to behave rationally. They would sever ties with family or friends, wouldn’t work, etc… They were perfectly aware of the likely results of their behavior (not showing up to work would get them fired, leave them without money, and ultimately cost them their home, etc…) but just couldn’t care less. Their ability to think was intact, but of no use as long as they didn’t feel fear, shame, etc…
However, from what I remember, they still had some goals and desires, so they obviously still had emotions, just not enough to keep their behavior from being deeply flawed.
It sounds to me from reading the quote that the idea is that the emotional processing centers in the brain serve an integrative purpose. They take in vast amounts of data and run some complex algorithms on them to produce simple, actionable perceptions: “I think I’ll go to the Chinese restaurant,” “I hate Bill,” “I like Mary.” Without these centers we are forced to “think through” all this processing deliberately. Think of it as trying to play the piano without benefit of “motor memory”: hunting and pecking for each key using the conscious mind. Whether such integrative centers will necessarily produce the internal qualia of hatred, love, depression, etc… is still a question, though. One could imagine an AI with a different kind of integrating center that didn’t have “feelings” but that still had some unconscious integrating center that allowed it to make decisions and set goals.
I’m not sure that there is a prevailing theory. Different people will give you different answers. Piaget, for example, claimed that self-awareness came when a child understood the concept of “not-me” (thus becoming able to distinguish between himself and the outside world).
“The brain, that pound and a half of chicken-colored goo so highly regarded (by the brain itself), that slimy organ to which is attributed such intricate and mysterious powers (it is the self-same brain that does the attributing), the brain is so weak that, without its protective casing to support it, it simply collapses of its own weight.”
That is a quote from the Tom Robbins book Even Cowgirls Get The Blues. I don’t suppose it answers your questions, but it seemed fitting to post here.