I’ve read Thomas More’s Utopia, and I’ve always pondered the notion of a utopian society. It seems to me that every political ideology, value system, and philosophical worldview is trying to achieve some level of utopia, or a higher existence for those in a mortal state.
What would a perfect society look like? What country would be the closest thing to a utopia?
Yet despite our attempts to reach utopia, it seems we can never achieve it, because of factors ranging from human nature (an inherent evil, or at least an inclination toward it) to political structures (democracy, authoritarianism, etc.). At the same time, on a higher level utopia is entirely subjective: your version of utopia could be utterly different from the next person’s, and given the world’s population, such differences make it statistically impossible to reach a utopia agreeable to all people, because people will never agree on what utopia even is.
Why do you think the goal of every value system is utopia? Lots of political philosophies are explicitly anti-utopian. Democracy, for instance, isn’t utopian; the people in their wisdom don’t always make the right choices. It’s just that democracy sometimes prevents the abuses and indignities of autocracy or oligarchy. Same thing with constitutional government. Instead of a blueprint for utopia, things like the Bill of Rights are just lists of shit we know from history leads to bad outcomes, so let’s try not to do so much of shit like that.
Because every value system and every philosophy is trying to attain perfection at some level. It goes completely against human nature to want to fail, and value systems, in their own way, regardless of how they interpret utopia, want to attain perfection somehow. Whether that perfection is found in wealth or poverty, democracy or authoritarianism, is entirely subjective.
Utopia is inherently perfection at some level; the premise set forward by Thomas More in his work is designing the “perfect society.” Taking perfection as the measure, every system that has ever been in place has attempted to create some level of it, if not on a wide scale then for the individual pushing that value system, philosophical ideology, or political change, all with the aim of improving the human condition on either a group or individual level.
So with that said is it possible to attain Utopia or will we always fail because we all want different things?
We all want different things, and none of us can agree on the definition of “a perfect society”. It would be easier to design the perfect shoes that both pleased and fit all people.
There’s a difference between not wanting to fail and trying to attain perfection. If I run a 10k race next week, my goal is finishing it, not attaining perfection. I would only count it as a failure if I didn’t finish; I would not count it as a failure if I didn’t win.
So you’re just wrong that every value system is trying to attain perfection, unless you redefine perfection to mean “not failing utterly”.
“Maximum freedom, minimum harm” is a tough balancing act just using my own definitions. Balancing everyone’s possibly contradictory standards is a “wicked problem”. I think most people understand that, and have lowered their goals to dynamically managing ongoing problems in a fair way rather than solving societal problems for good. In short, I think very few people believe utopia is even possible, let alone something they’re actively striving for.
I agree with the robots in The Matrix: I don’t think utopia is possible, because we humans define our reality through negativity. Our brains have a negativity bias (bad things affect us more than good things), and we are always going to be competing for status, mating potential, etc., and have to deal with all the negatives that come along with that.
True utopianism will require rewiring the human brain, which is probably 100 years off. Rewire it to remove neuroticism and greatly amplify the reward mechanisms while getting rid of the negative feedback mechanisms in the reward centers, etc. Our brains are wired to give us a survival advantage in hunter gatherer societies, they need to be rewired for quality of life and living in a technocracy.
The closest things to utopian societies seem to be the Scandinavian ones. They try to be as egalitarian, humane, and just as possible, but the people there aren’t in utopia by any means.
Skinner’s Walden II is a slightly more modernized utopia. Fascinating book. It’s a fun challenge to read with an eye toward rebutting. (Much like Atlas Shrugged: you know there are flaws, but it’s an exercise in critical thinking to zero in on them.)
By many standards, modern western civilization is utopia. We really have solved a hell of a lot of problems. I’m currently reading a book about the Thirty Years War. Not all that long ago, and the German States were involved in total scorched-earth war with each other. Things are a lot better today.
I’m a liberal. I want to make things better, to improve people’s lives and to make progress towards a more free and more just society. But I’m very, very skeptical of utopianism of any kind, or of any proposal to achieve perfection. Once you start seriously talking about making things perfect, there’s a very solid chance that large numbers of people are going to die.
That’s simply not true. Just to name two: utilitarianism seeks the greatest happiness for the greatest number, not perfect everything for everyone; and Protestant Christianity teaches that humans are inherently flawed and sinful, capable of redemption only through God.
The only way around this issue would be changing us, i.e. transhumanism.
Hypothetical time: humans are hooked up to the Matrix, which constructs for each an artificial world tailored precisely to their psychology and provides them with challenges of a type they want. We’ll assume that they do not know their reality is an artificial construct.
In one hypothetical, the computer evaluates their ability and performance and tailors each challenge so that success is guaranteed, failure is impossible, and death cannot occur. To the participant, it seems that they are succeeding on their own merits, but this is an illusion.
In the second scenario, the computer provides challenges that the person should be able to meet, but does not guarantee success; failure (and the physical and emotional pain that results) is possible, and death may occur.
A representative of Renaissance humanism (a sort of avant-la-lettre Enlightenment), Thomas More was familiar with ancient Greek and Roman philosophy.
The first known utopia is Plato’s Republic, which aims to show NOT the “most perfect/ideal” society that the human mind can conceive, but the most effective/successful social system that can actually be put into practice (an efficacy that has been proven by certain religious societies). Thomas More knew Plato’s work, and his approach was a practical one too.
Thomas More wrote his Utopia to stir a public discussion through which people would contribute to social change and improve the political, social, and economic environment, so that the world would become a “good place” (not a perfect one).
Even by today’s standards, a utopia does not claim to devise or depict a perfect world; instead, a utopian society is characterized by highly desirable attributes and near perfect qualities.
The role of utopianism in politics stems from people’s beliefs that the world is an improvable place and people can better the society they live in – in this respect, a utopia is like a limit to infinity: it functions as a distant beacon that gives people a sense of progress and direction.
Lovely question! The functional answer is that both are utopia – a world where everything is right. There aren’t any wars between humans: they’re all in pods. There are no shortages, no ideological strife, no conflict of any sort. Absolute utopia, the both of 'em.
Which is subjectively better? Something in between. Death should not be possible, but short-term failure and frustration of goals is good, as it forces the mind to grow and adapt. Our minds need challenges, and the only way to calibrate a challenge is to fail once in a while. Now, this fool’s paradise will probably be less frustrating than our real world. For it to be idyllic, we don’t want to induce psychoneuroses by repeated exposure to stress. We don’t want heart attacks and addictive behavior and despair. Just a few setbacks to force the mind to work.
That reminds me of the Twilight Zone episode “A Nice Place to Visit.” A deceased gambler thinks he’s in heaven because he keeps winning every game he plays. He grows bored and asks to visit Hell. Then he’s informed he is in Hell.
The science of happiness is pretty interesting. People in poor societies often score better on happiness metrics than those in rich ones. They have to band together and appreciate the little things in life.
I think a real utopia would require serious bio-engineering. Make us something like eusocial insects, happy to work and conform for the greater good. Maybe something like a Brave New World.
I think the guy in the second scenario is probably happier. In order to be happy, you need a frame of reference. You need to know what bad feels like and you need to know what normal feels like before you can be happy.
If the programmers of the Matrix wrote in a “no failure” code for the first guy, “succeeding” will be no more of a rush to him than your average adult experiences successfully climbing a flight of stairs.
OTOH, the guy who knows failure is going to be ecstatic when he succeeds because he knows how bad it sucks to NOT succeed.
That’s what I thought of too, the Twilight Zone episode, but he knew he couldn’t lose.
Has anyone done legit research into neuroengineering to create utopian (or at least more emotionally healthy) humans? I know we don’t have the tech now but has anyone gotten an idea of what it would take?