Problems are easy; solutions are hard.

I’m trying to take a look at politics from a different direction than usual, identifying a particular problem and trying to figure out what political actions would have the best chances of solving it. But I’m not nearly as smart as I like to think I am, so I’m posting here in hopes of garnering at least one idea that I wouldn’t have come up with myself.
Basic ethical standard: A universe in which sapience exists is qualitatively and quantitatively better than a universe without.

Assumption for this analysis: As best we can test, there’s no supernatural or alternate dimensions, no afterlife, and no life off Earth, let alone intelligence. This planet, and the people on it, are /IT/.

Problem: An unpredictable extinction event could happen on Earth at any time.

Basic solution: Spread our population offworld.
Sub-solution: Since an extinction event could happen at /any/ time, the sooner we can create self-supporting colonies elsewhere, the more likely sapience will survive.
Sub-problem: We don’t currently know /how/ to build self-supporting off-Earth colonies.

Solution: Get more people doing more science in more fields faster.

Sub-problem: Some countries’ political systems seem to produce less research than others.

Analysis: The countries that produce the most research share certain overall trends: liberal democracies that respect their citizens’ rights, and allow individuals the freedom to seek out and exploit new economic niches, do better than oligarchic or totalitarian regimes that do not respect rights and attempt centralized economic control.

Sub-problem: There are a great number of oligarchs and would-be oligarchs who are more interested in exerting power over others than in allowing the establishment of a pro-science political system that would reduce their power.
Sub-problem: There are also a large number of countries whose political system (if any) is so unstable, with one group of warlords fighting another, that the citizens have very few economic opportunities to become wealthy enough to support scientists who are studying things that offer no short-term benefit.

And so, that’s the basic set of problems I’m looking at.
Given that most politicians who get elected put great value on both getting into power and staying in power, my first draft of a framework of solutions goes something like this:

To preserve sapience in the universe

  • by spreading it off Earth
    – by learning how to live in self-sufficient off-Earth colonies (as quickly as possible)
      – by maximizing the amount of basic scientific research that’s being done, by as many people in as many fields as possible
        – by ensuring that as many people as possible have the freedom to explore new ideas, and that there is sufficient economic surplus to support basic research that offers no short-term benefit
          – by convincing the politicians in power that it’s in their own best interests to enact policies that are as pro-science as possible (including policies that help science’s “fellow traveller” ideas, such as freedom of thought, of speech, etc)
            – by convincing them that a failure to enact such policies will lead to others being elected to replace them
              a) by making a credible threat that candidates who /do/ promise to enact such policies will /be/ elected
                – by tying the basic pro-science platform to ideas that as many voters as possible want to vote for
              b) by ensuring that candidates who make pro-science promises but fail to follow through will have that failure widely reported on
                – by ensuring that it’s possible for whistleblowers to report on politicians’ failures without significant harm to themselves
                  i) by widely reporting on politicians who renege, and convincing voters to vote out such people
                  ii) and by promoting people’s right to say things anonymously, even when anti-anonymity advocates have good reasons to restrict anonymity (or just excuses such as “to reduce crime” or “to protect the children”)
              c) by ensuring that whatever single party, or duopoly of parties, is in power has to face at least one serious third-party contender that is willing and able to take power for itself
                I) by supporting any pro-science third parties, and working against any anti-science parties in power
                II) by arranging the local election system to minimize the systemic tendency for a single party or duopoly in power to become entrenched and remain in power
                  – initially by using legal means to, eg, replace “first past the post” voting systems with “preferential vote” systems that allow for a more robust multi-party system
                    – by maintaining a credible threat that a political system which fails to respect the will of the voters will be overturned in a revolution by one which does
                      – by ensuring that the people as a whole have the ability /to/ overthrow the local political power structure
                        A) by ensuring that the citizenry is armed at least as well as the government-controlled armed forces
                        B) by ensuring that the citizenry is sufficiently aware of the revolutionary option to seriously consider it in cases of entrenched political power
                        C) by making plans so that a post-revolutionary government will have as many of the foundations for a pro-science democracy as possible (eg, maximizing citizens’ literacy, rationality, knowledge of civics, access to information, and willingness to punish errant politicians)
                        D) and by doing various other preparations for potentially engaging in revolution which are best not discussed in public fora.
    … so, what suggestions can you make to improve any of that?

It seems to me you’re working backwards here – starting with a solution (spreading our population offworld) and then trying to develop a governmental system to bring about that solution.

However, this is the hallmark of a paternalistic society. I suggest you familiarize yourself with the works of John Locke, who maintained that a just society derives its legitimacy from the consent of the governed.

The idea of fomenting revolution to overthrow “entrenched political power” simply substitutes one oligarchy (people with guns who agree with you) for another (people with guns who don’t agree with you.)

And what happens if the foundations of your pro-science democracy – “(eg, maximizing citizens’ literacy, rationality, knowledge of civics, access to information, and willingness to punish errant politicians)” – result in a society that decides that investing vast amounts of resources to colonize space, to defend against a threat which could (or could not) happen at any time, is an irrational use of time and money which might better be spent combating the all-too-immediate problems of famine, disease, poverty, etc.?

Will you attempt to overthrow governments until you finally kill or intimidate enough of those who disagree with you that you get your space colonization program? And is that the kind of philosophy you want to export to colonies on other planets?

Not quite; I’ve identified a particular solution for one large set of extinction-level events (ELEs), and am trying to figure out what political situation would bring about that solution, along with any other solutions that can be found for other ELEs. Just about any such solution will require a great deal of scientific research across a variety of fields; so I’m looking for the general principles of which political systems allow the most scientific research to be done, and which tactics help encourage such systems to come into existence.

From what I’ve been able to see, the less that a government interferes in ordinary citizens’ lives - the fewer rights that are infringed, and the more individuals are allowed to exploit whatever economic niches they can find - the more scientific research gets done there. While correlation isn’t causation, I can construct a plausible theory in which the lack of governmental interference is what allows for more research, and so I’m trying to figure out what it would take to reduce governmental interference on individuals.

Rationally, there are only so many resources that can be thrown at Earthbound problems before they’re solved, or before it becomes rationally inescapable that other methods are required to solve them. Such as, say, building an orbital construction infrastructure for solar-power satellites, which becomes even cheaper once a lunar or captured-asteroid mining operation is in place, which in turn becomes cheaper still as ways are found to reduce the amount of supplies that have to be shipped up from Earth…

Looked at another way, if everyone /else/ wants to stay on Earth, that’s fine - as long as I, and the people who think similarly, aren’t prevented from exercising our own right to walk away from the existing systems to create our own. And to do our own research to figure out just how to do that.

Huh. I thought Hugo Drax was a fictional character, but here he is on our message board.

Assuming you mean the film adaptation - I don’t plan on killing off the rest of humanity, nor would I expect to be the leader of any off-Earth group of survivors of an ELE. I’m more the lone hermit type - I put together the writeup at Orion's Arm - Encyclopedia Galactica - PackRat Spores, AKA '____' Spores, while exploring one possible future lifestyle, depending on certain technological prerequisites.

So, now that we’re reading each other’s posts - do you have any suggestions about what methods might help ensure humanity’s survival, or whether that’s an ethical goal in the first place, or anything else that would provide a useful idea that hasn’t already been mentioned in this thread?

Well, I suppose we could direct all space funding towards creating off-planet habitats without ruining anything else.

Alright; if that’s the new sub-goal, then how might we accomplish that?

I don’t see anything in your posts that takes into account the likelihood of an ELE. Couldn’t people reasonably conclude that the chance of an ELE occurring within some reasonable timeframe is so remote that the best course of action right now, with respect to that possibility, is to do nothing?

Also, it seems like people could reasonably conclude that technology will be so much better in the future that any effort we expend now would be wasted. Imagine a country that adopted these goals in the 19th century: their steam-powered spaceships (or whatever) would have needlessly diverted resources from other goals they actually could have accomplished.

Well, not necessarily. If something bad might happen, but the odds of it happening are remote, we generally take the chance that it won’t happen.

If something truly terrible might happen, we generally insure against it, even if the odds of it happening are still remote.

I suspect most people can’t think of anything worse than extinction happening to the human race.

For your previous question - RNATB gave much the same answer I would have. :slight_smile:

As for this question - we don’t know /when/ an ELE will happen. It could happen, completely unexpectedly, on, say, Dec 23 2048 AD. If our first self-sufficient colony is scheduled to launch on, say, Dec 24 2048 AD, then, well, there’s no silver medal in the survival race. But if we make choices that manage to speed up our scientific research so that we can manage that launch on Dec 22… then that would make all the difference between a future universe containing sapience and a future universe lacking it.

And yet the two nations that have made the most progress in space exploration have been the Soviet Union, an authoritarian regime, and the United States, which has a duopoly of parties in power that don’t have “to face at least one serious third-party contender that is willing and able to take power for themselves.”

Your words say “lack of interference,” but your OP screams not just “official policy” but “massive government program.” Take a look at some of your proposals:
– by maximizing the amount of basic scientific research that’s being done, by as many people in as many fields as possible
– by ensuring that as many people as possible have the freedom to explore new ideas, and that there is sufficient economic surplus to support basic research that offers no short-term benefit (emphasis mine)
– ii) and by promoting people’s right to say things anonymously, even when anti-anonymity advocates have good reasons to restrict anonymity (or just excuses such as “to reduce crime” or “to protect the children”) (emphasis mine)

And the rest of your list, which has less to do with scientific advancement than with some utopian idea of what society should be.

Have we reached the limit of resources that can be productively “thrown” at Earth’s problems? Who is qualified to judge when we have? And rationally, who determines when lunar or asteroidal mining projects justify their development costs, and who determines whether (or even if) the problems involved in establishing off-Earth colonies can be solved?

Good news! You have that very right today, as evidenced by the private space programs that currently exist. There’s no need to overthrow existing governments at all.

You’ve at least made a major step toward reality by acknowledging that the only way to create a future where space habitats are possible is to create a future society so wealthy that we can create space habitats for fun, like people today climb Mt Everest for fun. And the best way to create that future wealthy society is through liberal democracy, the rule of law, and capitalism.

So sure we could spend a trillion dollars this decade to send a few guys to Mars and back. But spending that amount on what would amount to a stunt will retard our future economic growth, which is what will ultimately sustain a real space habitation program.

The problem with this answer is that it proves too much. It doesn’t provide any guidance or upper bound for the amount of resources we should throw against the possible occurrence of a remote cataclysmic event.

For example, do you think we should force everyone’s standard of living to be extremely low so that we can spend all of our resources on solving the human extinction problem?

So if you were alive during the Roman Empire, you would have advocated the exact same thing you are advocating now? You would have spent tremendous resources on trying to build chariots to carry people off of the earth?

There is a difference between a government spending whatever it takes to accomplish a status-increasing project, such as “being the first to do X in space”, and investing in space-based infrastructure. The Soviet Union was, to put it simplistically, outspent by the US, and ended up unable to keep up with the expenditures needed to maintain parity as a superpower; once the US no longer had a competitor, it basically stopped throwing money at new space status symbols, leaving itself stuck with a badly specced space transport system for thirty-odd years, until China started pursuing its own competing status symbol of a manned space program.

I’m not interested in utopia - my aim, with this idea-session, is to identify what it would actually /take/ to ensure, as quickly as possible, the survival of sapience in the face of fundamentally unpredictable extinction-level events, primarily by establishing off-Earth colonies. If tyranny gives the best odds of creating a future universe containing sapience rather than one lacking it, then I’ll be in favour of tyranny. If anarchy gives the best odds, then I’ll be in favour of anarchy. To tell /which/ social and political systems actually /will/ give the best odds, all I can do is examine present and historical examples and look for correlations between systems and scientific output; my analysis so far suggests that rights-respecting liberal democracies provide the best results, and so that sort of system is what should be encouraged.

Possibly we have. In which case, wouldn’t it be a nifty thing to do to encourage as much scientific research as possible in as many fields as possible, in order to try to learn currently unknown ways to solve those problems; and to encourage expansion of the economy, so that we have even /more/ resources to try to solve those problems with?

Assuming those aren’t simply rhetorical questions: whoever has the ability /to/ direct sufficient resources one way or the other. Which, at present and for the foreseeable future, generally means the people who vote for government-scale budgets.

That’s fine, so far as it goes; but simply putting together a non-governmental launch system is only one step towards having a permanent off-Earth presence, and we still have much to learn before we become able to accomplish that - so the overall problem/goal remains the same.

As I said in my previous post, by applying the basic value-measuring standard that a future universe containing sapience is effectively infinitely more valuable than a future universe lacking it, /whatever/ methods provide the best odds of the former occurring rather than the latter are worth pursuing. If “forcing everyone’s standard of living to be extremely low” provided the best odds, then that’s what I’d recommend. However, as I /also/ said in my previous post, analyzing which social systems actually /do/ correlate closely with maximizing the amount of science done suggests that forcing low living standards isn’t the best way to accomplish that goal.

There are a variety of extinction events that can be prepared for. With the technology and resources available during the Roman Empire, if somebody wanted to ensure that at least some of their descendants would survive should a volcano destroy their city, then a natural solution would be to encourage some members of the family to move to other cities. In the present day, we have the technology to handle a much vaster array of disasters… and /now/, at least, we’re within shooting distance of being able to handle a planet-wide extinction-level disaster. We’re somewhat further from being able to handle a solar-system-wide disaster, and even further from a galactic-level one. But the same general principle remains: do what you can with what you have, and try to get whatever is necessary so you can do even more.

You have a really odd way of thinking about the world. Most people think about the impact of a possible event as a function of both (i) the harm that would be caused by the event and (ii) the likelihood of it happening. You seem to only take the first aspect into account.
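
To spell out that framing, here is a toy sketch in Python (the probabilities and dollar harms are invented purely for illustration) of how “probability times harm” ranks risks, and why treating the harm of extinction as infinite breaks the ranking:

```python
# Toy expected-value comparison: impact = probability x harm.
# All numbers below are invented purely for illustration.

events = {
    "house fire this year":         (0.003, 300_000),  # (probability, harm in dollars)
    "asteroid-scale ELE this year": (1e-7,  1e15),     # enormous harm, tiny probability
}

for name, (p, harm) in events.items():
    print(f"{name}: expected loss ~ ${p * harm:,.0f}")

# If the harm of extinction is instead treated as infinite, then p * harm is
# infinite for any nonzero p, and this ranking can no longer compare options;
# that is the disagreement running through this thread.
```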

However, I doubt you really live your life in strict accordance with the philosophy you are espousing in this thread. For example, I assume that on a personal level you would assign an infinitely negative harm to you of your own death. Wherever you are right this second, an airplane or a meteor could fall out of the sky and kill you. Why aren’t you living in a secure underground bunker (away from areas of volcanic or earthquake activity, with adequate ventilation and drainage, etc.)?

The reason people don’t, in real life, assign an infinite value to their own life is that every individual human life must end sooner or later, no matter what steps you take to preserve it.

So you can lock yourself in a vault, but eventually you’ll die anyway. So people make decisions that shorten their lives or carry a risk of death, all day, every day. I’ve even made this decision for my precious children - like driving them in my car when their car seats were in my wife’s car. The odds that that day would be the day I’d get in a wreck and they’d be killed because they were just wearing seat belts and not in car seats are pretty small, but the risk was present.

Anyway, if you assign a large value to future space habitats, it’s a mistake to focus on our current space program. Focus on basic science and economic development instead. You’d be better off helping little girls in Nigeria learn to read than working on a manned mission to Mars.

Because at our current economic and technological level a mission to Mars would be a one-off stunt that couldn’t be repeated for decades, just like the Moon landings of the late ’60s and early ’70s. If we wanted to go back to the Moon we’d be starting over from scratch.

Have you ever heard of ‘micromorts’? On average, any given person has about 33 micromorts per day - that is, a 33-in-one-million chance of dying from /something/. It’s possible to get rid of some of those micromorts fairly easily - good diet, exercise, avoiding drugs and alcohol, always wearing your seatbelt, and so on. There are even ways to potentially reduce the odds that your ‘death’ will be permanent, such as never issuing a DNR order or buying cryonics insurance (which is a whole other thread’s worth of topic). However, nobody has infinite resources, so each of us has only a finite set of choices about how to shave off those last few micromorts - eg, do I spend my money on reducing potential cause of death X by 60%, or potential cause of death Y by 90%? Choices have to be made - and some standard has to be used to guide those choices.
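
To make that kind of choice concrete, here’s a minimal back-of-envelope sketch; the ~33-micromort figure is the one above, but the two options, their costs, and their risk shares are entirely made up. The point is only that “micromorts averted per dollar” gives a common yardstick for comparing otherwise dissimilar options.

```python
# Back-of-envelope comparison of two hypothetical risk-reduction options.
# DAILY_MICROMORTS comes from the ~33/day figure above; everything else is invented.
DAILY_MICROMORTS = 33  # ~33-in-a-million all-cause chance of dying on any given day

def micromorts_averted_per_dollar(cause_share, reduction, annual_cost):
    """cause_share: fraction of daily micromorts attributable to this cause.
    reduction: fraction of that cause's risk the intervention removes.
    annual_cost: dollars per year spent on the intervention."""
    averted_per_year = DAILY_MICROMORTS * cause_share * reduction * 365
    return averted_per_year / annual_cost

# Hypothetical option X: a big slice of risk, cut by 60%, at a high price.
option_x = micromorts_averted_per_dollar(cause_share=0.30, reduction=0.60, annual_cost=2000)
# Hypothetical option Y: a small slice of risk, cut by 90%, but cheap.
option_y = micromorts_averted_per_dollar(cause_share=0.05, reduction=0.90, annual_cost=200)

print(f"Option X: {option_x:.2f} micromorts averted per dollar")
print(f"Option Y: {option_y:.2f} micromorts averted per dollar")
```

On these made-up numbers the cheaper, narrower option comes out ahead - which is exactly the sort of result the chosen standard has to adjudicate.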

(In case you’re wondering - I’m /already/ living in an area that’s essentially free of earthquakes, volcanos, hurricanes, wildfires, floods, tornadoes, riots, and wars, and is within a 911 call of an ambulance and hospital. :slight_smile: )

Now, if, by some magic, I had a wand I could wave and end up with two copies of my mind in two different bodies… then the odds that at least /one/ of me would survive a given day are /vastly/ greater.
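
For what it’s worth, a rough sketch of that arithmetic, assuming (unrealistically) that both copies face independent risks at the ~33-micromort-per-day baseline mentioned above:

```python
# Probability that at least one of two independent copies survives a given day,
# assuming each copy faces the same ~33-in-a-million daily risk of death.
p_death = 33e-6

p_one_copy = 1 - p_death         # a single copy survives the day
p_two_copies = 1 - p_death ** 2  # at least one of two independent copies survives

print(f"One copy:   {p_one_copy:.9f}")
print(f"Two copies: {p_two_copies:.12f}")

# The daily chance of losing *both* copies drops from 3.3e-5 to about 1.1e-9,
# roughly a thirty-thousand-fold improvement, if the risks really were independent.
```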