I of course meant internal rewards, that is, “good” emotions.
I’m well aware of your position.
Didn’t find any cites, eh? All the sites on cognitive neurology said that emotion influences behaviour but does not dictate it, didn’t they?
Well, sure, I can provide a cite on neurochemistry. However, I obviously don’t agree with the absurd straw man you just invented: that anything neurochemical has nothing to do with neural signalling. :rolleyes:
In fact, I’m going to add that paragraph to the list of begbert2 classic quotes.
:smack::smack::smack:
You still haven’t actually responded to my argument here, so let me try again.
You won’t allow for the fact that we “just are” motivated by things like curiosity and aggression. You keep asking why we’re motivated in that way, insisting that there must be some underlying motivation (e.g. sometimes we act out our curiosity because it will make us happy).
But that’s a turtles-all-the-way-down explanation, because it requires as a given that we “just are” always motivated by happiness / reducing misery.
I can turn your argument around and ask why are we motivated by happiness / reducing misery?
It was a slip.
But I don’t know why you have such a problem with the word “pleasure”; it doesn’t imply euphoria (you can have a “pleasant commute”, for example), and it actually seems less loaded to me than “happiness”. But fine.
Well, it’s complicated. Current thinking is that there are at least three decision-making mechanisms in the brain: the Visceral, the Behavioural, and the Reflective. And these are not abstract; they have separate loci in the brain.
The Behavioural layer is the only one affected at all by emotions, and even it is not dictated by them. And it is not the highest level: if any layer can be considered to be “in charge”, it would be the Reflective.
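To illustrate what I mean by layers, here’s a toy sketch in Python (entirely my own invention; a cartoon of the idea, not a model of actual neural circuitry). The point is that an emotion signal can bias the Behavioural layer’s scoring without dictating the final decision, because the Reflective layer can veto it:

# Toy three-layer decision sketch. All names (visceral, behavioural,
# reflective, emotion_bias) are illustrative inventions, not claims
# about how the brain actually implements this.

def visceral(stimulus):
    """Fast, hard-wired reflexes; fires before anything else."""
    if stimulus == "loud_bang":
        return "startle"
    return None  # no reflex triggered

def behavioural(options, emotion_bias):
    """Learned routines. Emotion *influences* the scoring here as one
    weighted term among others; it does not dictate the outcome."""
    def score(option):
        return option["habit"] + 0.3 * emotion_bias.get(option["name"], 0.0)
    return max(options, key=score)

def reflective(candidate, goals):
    """Deliberate reasoning; can veto the lower layers outright."""
    if candidate["name"] in goals["forbidden"]:
        return goals["fallback"]
    return candidate["name"]

def decide(stimulus, options, emotion_bias, goals):
    reflex = visceral(stimulus)
    if reflex is not None:
        return reflex  # reflexes pre-empt deliberation
    candidate = behavioural(options, emotion_bias)
    return reflective(candidate, goals)

options = [
    {"name": "punch", "habit": 0.2},
    {"name": "walk_away", "habit": 0.5},
]
emotion_bias = {"punch": 2.0}  # a strong anger signal
goals = {"forbidden": {"punch"}, "fallback": "walk_away"}

# Anger makes "punch" the Behavioural favourite (0.2 + 0.3*2.0 = 0.8
# beats 0.5), yet the decision that comes out is the Reflective one.
print(decide(None, options, emotion_bias, goals))  # -> walk_away

Note that the anger signal does win at the Behavioural level; it just doesn’t get the last word.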
What data?
This is the typical kind of flip-flopping that creeps into the “there’s no such thing as a selfless act” debate.
One minute you’re using words like “happiness”. The next minute you’re saying that you just mean the “best” option, and that any decision-making machine, even, say, a chess computer, can be said to be seeking happiness.
Well, those are two different things. Of course any decision-making system must ultimately make decisions, so it must consider some options better than others. I’ve already said that many times.
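To make that concrete, here’s the chess-computer sense of “better” as a toy sketch (the names and toy data are mine; a real engine searches ahead, but the principle is the same): a numeric criterion, with nothing psychological anywhere in the loop.

# Minimal sketch of a decision-making machine in the chess-computer
# sense. Standard material values; everything else is illustrative.

PIECE_VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(position):
    """Score a position as (my material) minus (opponent material)."""
    return (sum(PIECE_VALUE[p] for p in position["mine"])
            - sum(PIECE_VALUE[p] for p in position["theirs"]))

def choose_move(moves):
    """Pick the move whose resulting position scores highest."""
    return max(moves, key=lambda m: evaluate(m["resulting_position"]))

moves = [
    {"name": "Qxb7", "resulting_position": {"mine": ["Q", "R"], "theirs": ["R"]}},
    {"name": "Rd1", "resulting_position": {"mine": ["Q", "R"], "theirs": ["R", "P"]}},
]
print(choose_move(moves)["name"])  # -> Qxb7

The machine “prefers” Qxb7 only in the sense that 9 > 8. Calling that “seeking happiness” is exactly the conflation I’m objecting to.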
But as for the hypothesis that that criterion is about inducing or preventing a feeling, a psychological state, whether now or in the future: that is simply not the case.
If I punch some guy, it’s not to feel good: I’ll probably feel “shaken up” for quite some time afterwards. But yeah, for whatever conscious reason at the time, I obviously consider it a good idea.
I know, it’s odd, isn’t it? And I’m not just an AI geek; I’m now pursuing a career in neuroimaging, studying the brain and cognition.
So…why can’t I agree with your theory? It’s a mystery…
Well, that’s your basic assertion, and it’s incorrect. Again, where’s that cite?
So again, you’ve refused to answer the question. Probably because the prediction of your own theory seems absurd even to you.
But fine, I can answer it. Even with an arbitrarily large amount of pleasure as a reward for harming babies, a person will not necessarily become a killer. Such a decision would be made at a conscious level and involve conscious reasoning. Such reasoning is (obviously) not entirely motivated by happiness, so even if it were clear that the pleasure of killing would far outweigh any feelings of regret, there is no reason to assume the person would kill.