Happiness is a release of endorphins as a metaphorical carrot for having done something that evolution smiles upon. Really, no emotion is an “emotion” as such; it’s a release of neurochemicals that is either pleasant or not. So long as the logic switch didn’t operate by blocking the receptors for these chemicals (only their release), one could still be “happy” by injecting morphine, for example. But otherwise, no.
I suppose that, without emotions, there would be no satisfaction in doing what is good for yourself. So you might actually want to turn them on when you’ve done good, so that you can “dope up” as a self-reward.
Similarly, you might flip the switch back to emotionality if you’re in danger, so that your body raises its adrenaline levels, etc., which may help you survive.
There is a difference. One of those gets routed through our emotion-processing centers (or however it really happens, but you get the idea). If the emotions are turned off, then you are effectively blocking that drive because, in our system, it is dependent on emotions. Maybe. Although it’s possible our emotions are really just a modifier to a signal that would still get through to some degree even if the emotions were blocked.
I completely agree. Although it’s possible that feedback from our emotions alters our own system in a way that can change our underlying drives, maybe.
Assuming you have the memories allowing some sort of comparison of before and after, is there a logical reason why someone would forgo the possibility of happiness? Seems like they wouldn’t.
It seems like there are only a small number of basic things that would be considered “good”. Unless, I guess, the drive to survive and procreate is not suppressed by the lack of emotions; in that case there could be all kinds of things that count as “good”, but many of them would be considered “bad” by current societal standards.
Most likely you’d simply sit wherever you were when you flipped the switch and starve to death, since you wouldn’t care about anything, including survival. You wouldn’t flip the switch back for the same reason: you wouldn’t care, so you wouldn’t make the effort. If there’s some sort of mental inertia involved, so that you keep on with whatever tasks you were doing when you flipped the switch, then you’d be a sociopath. You’d go on with whatever you were doing along the simplest, straightest path, regardless of the results to you or others.
Thing is, people pretty close to this already exist. They lack emotions as such (often due to brain damage), but they still feel physical pleasure (unlike in the pure-logic scenario), so they still have drives to do things. Their judgment is terrible; they can’t so much as win a card game because they don’t care whether they do, so they toss out cards randomly or wander off. They tend to get caught in loops between two alternatives because they don’t get frustrated like a normal person and say “screw it, I’ll just go do X”. And they do things that they know will lead to ultimate disaster, because those things lead to immediate physical pleasure and they are incapable of fearing future consequences.
But if this is a sci-fi hypothetical anyway, I think that if we had the technology to isolate emotions so perfectly at a neurological level, surely remapping sensory input and decision-making to some other part of the brain would be a possibility as well.
“She cheated on me” = “Anger” could become “She cheated on me” = “Statistically, the likelihood of further unfaithfulness in an unmodded female increases by 67% after her first incident. A modded human like me, with my facial arrangement pattern and muscle structure, living in a community like mine, has an 82.9% chance of finding a replacement sexual mate within 5 years. I will end this relationship and seek another.” (Yes, yes, perhaps reproduction won’t be a drive at all any longer; it was just an example.)
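Just to make that concrete, here is a minimal sketch, in Python, of the sort of decision rule the modded brain in that example might run. The function name and the cost comparison are my own illustration, and the probabilities are just the made-up figures from the example above, not real statistics.

```python
# Hypothetical sketch of the "pure logic" reaction described above.
# All numbers are the invented figures from the example, not real data.

def decide_after_infidelity(p_repeat=0.67, p_replace_within_5y=0.829):
    """Pick a course of action based only on estimated probabilities."""
    cost_of_staying = p_repeat                    # estimated chance of a repeat incident
    cost_of_leaving = 1 - p_replace_within_5y     # estimated chance of finding no replacement
    if cost_of_leaving < cost_of_staying:
        return "end this relationship and seek another"
    return "stay in the relationship"

print(decide_after_infidelity())  # -> "end this relationship and seek another"
```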
I’ve always believed emotions to be a sort of fast-tracked judgment of people and situations that is rooted in logic beneficial to us; where it fails is in being able to identify differing nuances in a novel situation – in other words, it wants to treat new situations like the ones from our past because that results in the quickest, most efficient judgment, but that judgment won’t necessarily be as well-informed as a cold, rational analysis that takes into account new facts and changes.
I think the most basic form of this phenomenon is acquired fear of somebody or something – a bitten child might forever be afraid of even the friendliest puppy – but that same process could perhaps be extrapolated to many of our drives and the means by which we seek their satisfaction. That is to say perhaps all humans want love, but how any individual human satiates that want is highly variable and dependent on individual genetics and environments. This is why culture, parenting, media, religion, etc. all play roles in dictating what makes us happy or sad – they’re rewiring us emotionally. (Purely IMO. No data or research backing this.)
That’s not the point. You wouldn’t be upset about the lack of happiness even if you had the memories because you’d be incapable of being upset in the first place.
It’s like how Data routinely sees humans laughing and crying; he can observe these things, but without his emotion chip he’s incapable of actually feeling them. He is intrigued by them because he’s programmed to learn, but he is not upset over his inability to be happy because he can’t be upset.
The OP, as presented, would likely result in this. But what if you could give those humans a goal? (Or if they could choose one before flipping the switch?)
It’s interesting you mention Data, because that’s kind of where the idea came from. In one of the Star Trek movies, I think he’s given an emotion chip or something. I can’t remember anything about what happened after he had it implanted, but I wondered what the motivation was for him to “want” it implanted in the first place, if he was a creature of pure logic.
I don’t think I could be any less emotional and still enjoy the benefit of happiness. Overall though, it appears to me that I am more generally happy than people who feel their emotions more intensely.
It also seems as if it’s a natural outgrowth of balancing multiple inputs in a way that cannot be linearly calculated.
I didn’t think you would be upset at all. Two points, really:
1) Sage Rat seemed to be implying that while in his/her current emotional state, he/she would choose logic only, which seems an unlikely choice for anyone who is not suicidally depressed.
2) If you were in the logical state and remembered that the emotional state produced system feedback that was more positive than the current neutral state, then switching back seems like a reasonable conclusion, because there is nothing particularly attractive about the logical state from the logical state’s own point of view; it just is.
I can’t separate my rational brain from my emotional brain; they are the same brain. If I’m crossing a street and there’s a car coming, I need my emotions to shove me out of its way. There’s no time for my brain to rationally evaluate the situation and come to the conclusion that my life is in danger. It’s my rational knowledge, acquired ***before*** the incident, that created the emotion in the first place. If I didn’t know anything about cars or the damage they can cause, or injury or pain or death . . . I’d have no reason to get out of the way. So we need both: our reasoning and our emotions. Either one is pretty useless without the other.
And without emotions, you’d have no way to verify whether your thinking is valid. If nothing caused you pleasure or pain, you’d never have good or bad consequences from your actions. Everything would be pointless.
Data, his brother Lore, and android emotions in general are a long-running subplot of Trek – and perhaps sci-fi in general. It’s mostly anthropomorphizing, unfortunately, because people like drama (even android drama) more than AI algorithmic analyses (strange, innit?).
ETA: I’m not sure what Data’s supposed motivations are. Perhaps “learn everything”?
That we know of. So far.
I disagree. I’m not suicidally depressed – very happy, actually – but I think a cold, calculating efficiency would allow me to be more, well, efficient. Happiness doesn’t have to be one’s end goal in life and can be a distraction if the quest for it is keeping you from doing what you’d rather be doing.
That entirely depends on how you define “more positive” – i.e., what is happiness to an emotionless being? It’s whatever you program it to be, IMO, which may or may not be the human emotional state of joy. Lacking any sort of programming, however, I would agree that the being would just stand there doing nothing – like a computer without instructions. But that’s a boring assumption.
Since this is a hypothetical, I don’t think it’s a stretch to say that you’d only toggle off the emotions with some other purpose in mind, and that other purpose can be your goal. The being could seek to further that goal instead of seeking to maximize internal positive vibes.
They’d be a walking disaster, a danger to themselves and others. Our brains are built to make judgments using emotions; without emotions, our ability to make good judgments falls apart.
To use a more physical analogy: imagine a mad scientist who wants to improve you by removing your inefficient organic arms and legs. This switch is like the mad scientist slicing off those inefficient fleshy limbs but not bothering to attach any cool cybernetic replacements. However inefficient they may be, our organic limbs are better than nothing, just as our inefficient emotion-based judgment is better than nothing.
I’ve got a bit of a switch. Of course, I can never go full-on one way or the other, but I can feel the difference when I choose to switch my evaluation of a situation one way or the other. (I doubt anyone remembers: my background is emotionally-stifled to schizoid to relatively normal*.)
I think an important question is, does the person have experience being an emotional being before turning into a logical one? Because emotions do have a sort of logic to them, in that forming attachments to people is one of the most rewarding and worthwhile things about being alive. I can imagine (and actually do experience this to a certain degree) that knowing this and having experienced it might make you choose it again. I actively choose emotions for this very reason, but of course I also experience loneliness and emotional needs. I don’t know if you’d need the emotional impetus to conclude that there was something worth experiencing there.
Actually, I think I have to take that back. When I was actively squelching emotions in favor of logic, there wasn’t any amount of logic that could convince me to switch back. Objectively speaking, emotions are completely illogical and seem to just gum up the works. What I don’t know is whether that would still be the case if I’d gotten to that point just by flicking a switch instead of it being, in a strange way, a very emotionally-driven pursuit the way it was in my case.
The question is, is curiosity an emotion? Can it ever be logical? What about wants and needs? Or are we talking emotional as in connections to other things outside of yourself (living creatures, ideals, goals, etc.)? I don’t think you could survive having no emotions about yourself and your continued survival. Is fighting for survival logical, given that you’ll die anyway? Unless you were an unthinking being (which would make you not human, right?), I think you either have to care about some things or be controlled by an authority outside of yourself to make the effort to accomplish things, including basics needed for survival like eating and drinking.
One of the things about TNG and Data was the subtle hints that Data might be more human than he realizes (the anthropomorphizing again). For example, there was the episode where Data was sending a log of his daily thoughts and activities to some cyberneticists. There was a scene where, IIRC, Data was escorting a Vulcan ambassador and seemed to have a hunch she was a Romulan spy (which she was). Data dismissed it because “androids lack gut feelings” or something.
There was also the episode with Data, Lore, and Dr. Noonien Soong, when Soong revealed that Data was not inferior to Lore. To which Data, in stunned, almost excited surprise, stood there repeating it, as if a major inferiority complex had just been lifted.
Also, Data seems very protective of and obsessed with his cat, more than is purely logical.
Further, he gets curious and seems to find things unacceptable on principle. That whole “becoming human” thing was just Data not wanting to accept the limits of his programming. He described being limited to his programming as a very dire existence.
He also tends to keep things that have sentimental value, like a hologram of the first chick he nailed, and his medals.
In short, Data does have some motivations that aren’t based on logic.
Which, IMO, is somewhat cheesy. I think his yearning is a sign of Trek writers pandering to human audiences. Kinda like the end of Terminator 2 when Arnold suddenly understands why humans cry. :rolleyes: Why ruin a perfectly good cyborg with an inexplicable desire to be sentimental and annoying?
Emotion certainly has a place. I love emotions! Holy crap, I would never give up emotions.
But like the great Sphinx of Egypt, one has to remember to let logic rule as the head, and the great beast of emotion (and libido) to flourish beneath. Makes for a great existence, if you ask me.
Well, Data as a character was meant as a vehicle for reflecting on human nature, which he did quite well. In the first Trek, Spock also served that function, but he was more antagonistic about it. Data, unlike the Terminator, was never built to be a soulless killing machine, however, so I agree with your second point.
I’m happy to see someone understands what emotions are.
I don’t think such a switch would work. It would be logical for humans to have emotions, so you would just flip the switch back right after going into logic mode. It would probably end up as something like this: video here, starting about 4:30 in.