Would it be justified to remove the negative aspects of human nature entirely through fantastical means?

In my ideal post-scarcity utopia, mental and physical disease wouldn’t exist, and people would be prevented from harming others or breaking whatever laws still exist by superhuman robots that can intervene anywhere within the span of nanoseconds. Think The Culture by Iain Banks.

The issue is whether it would be justified to go even further by simply removing, through fantastical means, all the negative aspects of humanity that lead people to engage in bad behavior, whether it’s as extreme as violence or as mild as rudeness, bigotry, selfishness, and the other ugly parts of human nature, while still leaving the person’s personality fundamentally intact (insofar as that’s possible). Would that be justified, or would it be far too free-will-violating and too much like brainwashing?

Why should we be fine with removing a part of someone’s mind that’s objectively detrimental to themselves and the people around them when it’s a clinical illness (pedophilia, depression, anxiety, etc.), but not fine with doing the same merely because something isn’t a clinical illness, even though it’s nonetheless detrimental and something most people would prefer to live in a world without?

Depends somewhat on how you define “negative”. The capacity for things like violence is there not just for offense but also for defense. You don’t want people so very nice that the harshness of nature will eliminate them.

I’d be OK with tilting the balance so that people are less selfish, more polite, more considerate, more empathic, and more inclined towards the common and overall good than they are at present.

You haven’t defined your own terms yet. You say “justified”, and your OP describes “your ideal” utopia. So: justified to you? Obviously. Justified pragmatically? Justified morally?

And your solution invites other problems. First, we’re going to, for the purposes of the debate, grant that you’re a more-or-less moral being. But if the people programming the super robots and the re-education machines have a different morality, then the results are going to be quite dystopian. And who’s maintaining the robots? What if, after you die, the next generation decides that anyone with a non-clinical illness (depression or anxiety, say) is still too negative and purges them?

And given the power of your super robots, they’re going to have to have AMAZING computational powers to identify and interact with rule-breakers at those speeds. That’s a great scenario for evolving an AI. And with the standards set, and all negative aspects removed, well, prepare to lie down and die.

So, we may well end up with a utopia of AIs, but not likely for humans. :slight_smile:

Yeah, this is the Alignment Problem - made popular in the context of AI, but related to ages-old philosophical questions about ethics.

We cannot precisely and completely describe the utopia we think we want.

Sounds like The Humanoids by Jack Williamson.
Accurately described as a dystopia.

Are we fine with that? I assume most people would consent to removing the listed things, since they are generally detrimental to the individual. But I know there are people with autism, and likely other conditions too, who don’t want a cure. Would you do it even if they don’t consent?

AFAIK what counts as clinical illness is pretty arbitrary anyway - it’s usually defined as something that significantly impacts your life, but people have different ideas of that. So yeah, it wouldn’t make much sense to remove diagnosed illnesses but refuse to treat anything that wasn’t quite serious enough to be diagnosed.

Psychologists often use a five-factor model to define/measure personality, where the factors are Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Your ‘negative aspects of humanity’ can be mapped to this: rudeness and selfishness = disagreeableness, bigotry = lack of openness, anxiety and depression = neuroticism, laziness = lack of conscientiousness. I don’t think you can change these without changing people’s personality. If you choose to give them the optimum combination of traits for their pampered-by-robots life, everyone is going to have the same personality.

You’ve also got the more basic problem every utopia has: the human reward system doesn’t work that way. What brings people the most satisfaction is achieving something, and that both implies that things weren’t already perfect and requires the chance of failure. So before even getting to giving everyone the same personality, you’re going to need a radical change in what makes us happy.

In reference to rudeness, bigotry, etc.: is this necessarily a bad thing?

If a bad trait like rudeness (which negatively affects other people by definition) is an integral part of someone’s personality, then that seems like all the more reason to get rid of it. It isn’t as if you have an intrinsic right to be rude to people, and there are traits, like rudeness, such that you shouldn’t want to be the kind of person who has them or acts on them.

Accidentally posted too soon.

Some people deserve to be treated rudely.

I don’t necessarily consider rudeness a negative characteristic. It can be an aspect of defiance, of rebellion, of creativity, of honesty, and of a desire to change society for the better. I don’t want to live in a world where rudeness is forbidden, and anyone who thinks otherwise can go fuck themselves.

I think it’s inevitable. I think that in the next few centuries we will understand the human genome well enough, including the secondary effects of changing a gene, that in the first instance we’ll routinely correct obvious congenital diseases.

But it won’t stop there.
Because I don’t think there’ll be a clear dividing line between correcting disease and improving humans.

Now, on psychology specifically, I think that we will tread most carefully here. But still, if there’s a gene that is held by only 1% of the population, and gives a 90% chance of being arrested for a violent crime at some point in the person’s life, and we find, I dunno, that the reason is that this gene means the person’s brain lacks a particular kind of receptor in the amygdala… won’t we see that as a pathology we can fix? But then where do we draw the line?

I’m not claiming to have the answers myself. I think it’s a complex thing for future humans to wrestle with, and there are a lot of different ways it can be written in sci-fi.

I think that the final destination down that path is the effective lobotomization of humanity.

Nonetheless, I’m pretty sure it’s just a matter of time.

Also, on the lobotomization thing, we could see it as addition; e.g. in the example I gave, we could be giving someone who lacks empathy the ability to feel empathy. Or giving someone who can never concentrate on anything, because violent thoughts are all-consuming, the ability to have a diverse set of feelings.

Indeed, in the hypothetical world where we understand the brain and related genes much better than we do now, we may arrive at ways of making the human experience much richer for all; not just better vision, say, but richer emotionally.
…It’s a pretty hard thing for us to conceive of, so I don’t think I’ve ever seen the latter attempted in sci-fi. The closest you get is a depiction of someone with too strong a sense of humor, say, laughing ecstatically at everything. But how about completely new psychological states?
Anyway, I’ll stop there, as I’m definitely in hijack territory.

We evolved those traits for a reason. If by bigotry you mean things like racism and homophobia, it’s a manifestation of fear of strangers/anyone different. Bigotry is the result of those fears being irrational/not well founded, but I don’t think it’s even theoretically possible to program people to only have rational fears. So maybe you remove this fear altogether, but if your utopia ever encounters hostile aliens, you’re going to really regret taking away the ‘fear of strangers’ trait. Perhaps your AI robots can handle the aliens, but that means they have to retain some of these ‘bad’ personality traits themselves. How much can you trust their judgement of what is best for humans, assuming they even care?

Concern about AI alignment is itself a manifestation of this trait, and I don’t believe that’s irrational in principle, even if some of the fears seem pretty far-fetched.

As for rudeness, other people have already said a lot of what I was going to. Rudeness is in the eye of the beholder, and you can’t pre-program social norms, so how can you ever completely eliminate it? You can give people a strong drive not to hurt others’ feelings, but sometimes it’s necessary to do so, whether because you need to stand up for yourself or to tell a truth that will make another person uncomfortable. As @Alessan said, without this trait you will never change society for the better.

I think we’re going to run into exactly the problem of eliminating ‘bad’ traits, and then realising that we actually needed a certain amount of them. I can just imagine doctors discarding embryos with too many autism genes, and then 20 years later there’s a massive shortage of students wanting to study engineering. :sweat_smile: Or they get rid of genes for aggression, and then another country declares war on you. Or eliminate genes for excessive risk taking, and now no one is starting new companies or taking a chance investing in new technologies.

We’ve evolved this way for a reason: these traits have helped us survive and succeed. If we remove them, we’ll evolve them again, or evolve new ones we cannot yet comprehend, or we’ll die off. We are not machines.

And if we were, the OP would be suggesting a hardware solution for a software problem.

I should be clear that my prognosticating is based on massive scientific developments that would make everything we’ve learned about neuropsychology to date look like a child’s finger painting.
It’s true that today if we were to change a gene that was related to psychology in any way, then we’d probably cause several unanticipated effects for the individual, and society, perhaps some catastrophic.
However I also believe it’s true that human psychology is significantly flawed, again both for the individual and society. I’ve been unhappy for much more of my life than I’ve been happy, and I’m far from alone. It’s easy for me to see we aren’t optimal, and this life isn’t optimal. Not even close.

At some future time perhaps humans will possess the knowledge to make changes that work for the better.

Iain Banks’ The Culture seems like a dystopian future where humans are essentially kept as pets by their powerful, god-like AIs. A gilded cage is still a cage.

Star Trek dealt with this in “The Enemy Within”. Kirk is split into two beings, one an intemperate bastard and the other a milquetoast wastrel. Heck, Dr. Jekyll worked very hard to remove all the negative traits he had, and Mr. Hyde was born. Usually in these kinds of stories the lesson we learn is that we need all our traits to be a complete person. It might sound good to eliminate aggression, but do you think that might have any negative effect at all down the road?

Define “rudeness”.

Some people think it’s rude to wear your shoes in the house. Other people think it’s rude to take them off.

Some people think it’s rude to say “piss” or “damn” in public. Other people think it’s rude to tell them they can’t say what they want.

Some people think it’s rude for young people to ever challenge their elders. And/or for workers to ever challenge their boss. And/or for anybody to ever challenge their preacher.

Agreed. And I think it’s well worth wrestling with in fiction and in theory before we’re able (if we ever really are) to do so in practice.

Not sure it’ll make much difference, though. All the science fiction I read about virtual reality before we started getting close to being able to produce it was horror/warning stories. Advances toward the real thing seem only to have changed that attitude.

What I would like to do, if we could, would be to eliminate the ability to enjoy causing pain or fear in another human or sensate creature. Not the ability to cause it; doctors and veterinarians, among lots of others, need to be able to do so. The child or the cat doesn’t want a shot but may need one, even though it hurts and scares them, so anybody in charge of either one needs to be able to cause that pain and fear. And not the ability to take pleasure in doing one’s job well, even if that sometimes involves inflicting pain or fear, because otherwise we won’t get any doctors or veterinarians. Only the ability to take pleasure in the pain or fear itself.

Some people already don’t have this, and many of them are still able to be rude when it’s useful, or aggressive when it’s necessary. So I don’t think it’s essential for being an effective human, even when defense is necessary.

But it would end bullying. It would end rape, which would drastically upend human society, for the better. It would end some forms of animal abuse, though not the ones that occur through ignorance. It might even end voting for [fill-in-the-blank] specifically because it’ll hurt other people, even when that’s only a subconscious part of the reason.

Whether we’ll ever be able to understand the inside of our heads well enough to pull that off, I don’t know. Whether, even if we become able to, the people in charge at the time will want to do that is yet another question.

I wonder how many people have this? I guess many if not most could enjoy the suffering of someone they felt deserved it, e.g. a paedophile or murderer. Still, it’s not required to ensure justice is done.

It is reasonable to think humanity would be better off without this trait, but what about, uh, kink?

Would it? Seems to me these only require the ability to not care about causing pain or fear in some other person; it’s not necessary to actively enjoy it.

… in a physical and social environment that bears darn near zero resemblance to where we now all live. With the exception of the Earth still being a Class M planet, darn near nothing relevant to proto-humans living in small bands is still adaptive today, and much of what did deliver us out of the trees and into walking upright is downright maladaptive in the 21st Century.

And that will happen whether we remove anything or not. We’re evolving now. The issue, of course, is that our social environment can change yugely in a couple of decades, while general human nature in the bulk can change only on the scale of tens of millennia. That speed mismatch is part of the problem.