Please don’t throw the baby out with the bathwater here just because important issues like effective altruism and existential risk were subverted and tainted by crypto bros.
The most prominent part of the early movement involved the inestimable Peter Singer presenting moral arguments for why people in wealthy nations should (a) give a lot more and (b) give in a manner that saves the most lives per dollar. This does not mean calculating discount rates for trillions of humans who have spread across the galaxy in the distant future; it means things like prioritizing basic medical care and mosquito nets for impoverished children in your charitable giving.
And it’s sensible to be concerned and prudent about the potential consequences of superintelligent AI. It’s not as though prudence in such matters entails setting up Ponzi schemes or redirecting billions of dollars that would otherwise reach deserving causes; there is no connection whatsoever.
Please listen to serious thinkers like Peter Singer and Nick Bostrom on these issues, not crypto bros and their critics.
Uh, when I looked for articles about why some do see the WEF and Davos as stupid, I encountered conservative opinion pieces about how some people accuse them of being a cabal of pedophiles and woke rich guys… well, the writers claim they are only talking about “what people are saying” regarding the pedophile thing, but, like this guy, they still accuse them of being woke, as if that were not a slur now. And there is little substance to the criticisms from the right wing when groups like Davos merely dare to take the less well-to-do into account.
The problem with ‘existential risk’ as a motivator is that there are a LOT of existential risks, and you can justify all kinds of behaviour by just selecting the risks you want to ‘mitigate’. Hitler was a longtermist, seeking a 1,000-year Reich free from the ‘existential risk’ of being taken over by ‘mongrel people’. Mao was a longtermist who justified the slaughter of millions to pave the way for a better future China.
One of the biggest societal risks we probably face is a Carrington-level solar storm. Estimates put the odds of one at about 12% per century - pretty high in terms of such risks. The result could be a cratering of the entire global energy and computing systems, which would wreck the global economy and food chains and could kill billions. How much money is being spent to prevent that, as opposed to climate change or other currently popular issues?
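For scale, here is a minimal sketch of what that figure implies over shorter and longer windows, assuming (my assumption, not anything from the storm literature) that the quoted 12%-per-century odds behave like a constant Poisson rate:

```python
# My own back-of-envelope, treating the quoted ~12%-per-century odds as a
# constant-rate (Poisson) process; none of these numbers are from the post.
import math

p_per_century = 0.12                 # quoted chance of >=1 storm per century
rate = -math.log(1 - p_per_century)  # implied events per century (~0.128)

for years in (10, 50, 100, 200):
    p = 1 - math.exp(-rate * years / 100)  # P(>=1 storm within the window)
    print(f"{years:>3} years: {p:.1%}")
# -> roughly 1.3%, 6.2%, 12.0%, 22.6%
```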
They also don’t seem to care about the existential risk of centralization. Putting power in the hands of a global government or powerful global agencies, or getting everyone to move in lockstep to global rules inserts a huge amount of risk of global disaster the first time something goes wrong. Complex natural systems require diversity and autonomy to survive, as do human systems. Countries evolved for a reason. It’s better to have 100 countries each experimenting with their own ways to thrive than to impose a solution on everyone, which ‘longtermists’ seem to love.
We are also busy converting a very diverse energy mix into a narrow range of interconnected sources, which is not good from an existential-risk standpoint. In the future, a power failure could mean no electricity, no heat, no emergency services, and no way to travel out of a disaster area.
So it really depends on the risks you want to focus on, which generally intersects with the personal, financial and political advantages that accrue to the people hyping a particular risk and pushing their solutions.
Well, this article in Astronomy suggests the risks are not as huge as some report, and that money has already been spent to deal with things like that, ever since 1989.
A large coronal mass ejection most recently struck Earth in March 1989, and the resulting geomagnetic storm caused serious havoc. The flare knocked out the power grids in Quebec and parts of New England, and the utility company Hydro-Quebec was down for nine hours. Power transformers even melted due to an overloading of electricity in the grid.
Safety measures
That 1989 event finally got the attention of infrastructure planners. “Those are the kinds of things that we have really learned our lesson from,” Halford says. Power companies began building safety measures, such as tripwires, into the electricity grid to stop cascading failure. If power increases too quickly, these tripwires are programmed to switch off so that damage is limited and transformers don’t burn out as they did in 1989.
So, dealing with climate change is still popular, and there is a bigger need to deal with it, because next to nothing has been done about it since the ’80s, when the issue was already noted.
To be fair Asimov’s and Herbert’s empires were set far enough in the future that the setting presumed that technological innovation had come nearly to an end simply because everything had already been invented long ago. So The Mule was a more likely Black Swan than somebody suddenly inventing a new superweapon after tens of thousands of years of stasis.
I think we keep, and likely always will keep, circling back to the human factor. Nearly any belief system (to use Cecil’s language) is prone to being used as an excuse for a human being to act in accordance with their own beliefs/desires/advantage. It can be deist, economic, philosophical, secular, or what have you, but, with few if any exceptions, all are vulnerable to the squishy flesh bags espousing them.
Leaving the possible future machine superintellect for another thread.
That said, longtermism is likely no better and no worse than any other such current belief system. It has the advantage of at least looking to a possibly better future, and an acknowledgement that with wealth comes responsibility, so it probably has more merit than the absolute worst moral philosophies out there, lending credence to Cecil’s dismissal of the “most dangerous” claims.
It may well be in the running for “most arrogant,” though, for the reasons other posters and I have mentioned. Many other belief systems, generally older and more developed, have guidelines and dogma that serve to mitigate their overreach (imperfectly, but still) - but with longtermism’s inherent dependency on intent and self-guidance, it is endlessly easy to fall into the old Star Trek trope of “the needs of the many outweigh the needs of the few,” or more specifically “the needs of the future outweigh the needs of the now or near future.”
I hadn’t heard of longtermism before today; so thanks, Cecil, for bringing it to our attention.
Seems to me that one potential danger of it is having people confuse this type of nonsense with reasonable long-term thinking, and writing both of them off as useless, when we badly need a lot more of the latter.
‘Risking the destruction of the planet right now is a good idea if it increases the hope of having more humans in the galaxy as a whole a billion years from now’ is nonsense in multiple fashions. ‘We can’t afford to spend $X on decreasing global warming even if not spending it drastically increases our chances of having to spend 200 × $X relocating all the coastal cities and settlements on the planet, plus a lot of people and cultures will die anyway’ is nonsense in a different fashion. Encouraging people, by similar naming, to confuse the long-term thinking that’s needed with the “longtermism” version – I don’t know whether that’s deliberate, but it’s not good.
There’s a good deal to be said for the idea of checking whether a given attempt to provide benefit is actually providing benefits; what those benefits are; and what the side effects of those benefits are. And there’s a good deal to be said for checking whether the side effects are doing more damage than the good, or are likely to do so in the readily foreseeable future.
The key to that, however, is in actually checking: checking what’s actually happening, now, to and with actual people. And re-checking, in case the situation has changed. There’s no way to check what, if anything, the human species will be like in a million years. There’s no way to check what any particular civilization will be like in a thousand years; or even a hundred.
Again, the name appears to be applied to something that’s intended to interfere with anything that could reasonably be described by that name.
We are also busy converting a very diverse energy mix into a narrow range of interconnected sources, which is not good from an existential-risk standpoint. In the future, a power failure could mean no electricity, no heat, no emergency services, and no way to travel out of a disaster area.
Are we though? We are phasing in solar and wind, and phasing out coal and oil. Plus, solar and wind can be more distributed. You can’t install a coal plant in your back yard, but you can install solar panels and a windmill. Grid storage is the big bottleneck, but for the price of a nice car, I can make my house energy independent for the next 15-20 years.
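For what it’s worth, here is a minimal back-of-envelope sketch of that claim, where every figure (daily consumption, sun hours, installed costs) is my own rough assumption rather than anything from this thread:

```python
# Rough sanity check of the "price of a nice car" claim. Every figure below
# is my own ballpark assumption, not a number from this thread or a vendor.
daily_use_kwh = 30          # assumed household consumption per day
sun_hours = 4.5             # assumed average peak-sun hours per day
panel_cost_per_watt = 2.5   # assumed installed PV cost, $/W
battery_cost_per_kwh = 600  # assumed installed storage cost, $/kWh
days_of_storage = 2         # assumed autonomy target for cloudy stretches

array_watts = daily_use_kwh * 1000 / sun_hours  # ~6.7 kW array
pv_cost = array_watts * panel_cost_per_watt
storage_kwh = daily_use_kwh * days_of_storage
storage_cost = storage_kwh * battery_cost_per_kwh

print(f"array:   {array_watts / 1000:.1f} kW -> ${pv_cost:,.0f}")
print(f"storage: {storage_kwh:.0f} kWh -> ${storage_cost:,.0f}")
print(f"total:   ${pv_cost + storage_cost:,.0f}")  # ~$53k: nice-car territory
```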
Weirdly, longtermism and apocalypticism can result in similar behaviors. Both basically give the believer no reason to help anyone else in the present or near future. Suffering of others today is either meaningless or necessary, and robbing, cheating, and stealing are justified or irrelevant.
OK, I’m here. (Of late, I’ve had a light presence in “Café Society”.)
I certainly don’t have any special expertise, and most of what I might say about Foundation has already been said. I’d point out, though, that the tragedy of Bel Riose, the most obvious analogue Foundation has to real history, applies obviously to Belisarius, but also to Stilicho. It seems almost as though there was an inexorable Law of History acting on late Western Rome. (On the other hand, outside of Asimov, I have a natural revulsion to Inexorable Laws of History in general.)
I pretty much invented utilitarianism on my own in high school. I also rejected it after a few weeks as ungrounded. (I’ve long wondered since then whether it was altogether a coincidence that utilitarianism turned up at about the same time that integral calculus stuck its nose out into the general noösphere.)
I’ve not been mucking about with popular philosophy for a good long time, and never even heard of “longtermism”, but is it really very distant from the old end/means quarrel? And has that ever allowed of a general solution?
Oh, and don’t call the “prosperity gospel” cult “Christian”. They wouldn’t recognize Jesus if He were to kick them every step of the way from Eastport to Coronado.
The most succinct description I’ve seen of “prosperity gospel” is that it treats Satan’s attempt to tempt Jesus in the desert as a model to be emulated.
It’s hard to be opposed to utilitarianism, until you start asking difficult questions about who decides what the greatest good is.
Certainly almost all the Campbell writers were quite certain that they knew what the greatest good was, and all the events of what Luce called “the American Century” backed them up for a surprisingly long time. (Unless they read the black press, which they never did.)
I don’t believe in cycles of history, unlike Toynbee, whom Asimov doted on and who, according to the Britannica, “…examined the rise and fall of 26 civilisations in the course of human history, and he concluded that they rose by responding successfully to challenges under the leadership of creative minorities composed of elite leaders.” Sound familiar?
I do see that certain topics, to use a broad word, rise to prominence over and over, dominating thought but in markedly different ways. Call it the spiral theory of history. Bookending WWII, the intellectual world thought deeply about what directions the future of humanity would take, what political ideologies would prevail, and what technology would do to human nature.
Those questions have surfaced again as a major topic, albeit in a world without what used to be called public intellectuals. I view them with a jaundiced eye, so I don’t miss their disappearance. Without the broad public and journalistic adoration, it is much easier to look at longtermists and dismiss them as chuckleheads.
When I was a social work student, the teacher described a group of caring concerned people coming upon a river, and seeing a drowning toddler float by, gasping. Someone jumps in, swims hard, manages to snag the kid and begin pulling the child back to the safety of the shore. But then another young child comes into view in the same predicament, so another person jumps in to rescue that one. As the third kid appears from upriver, choking and flailing, one more from the group moves to dive in, but meanwhile another person from the original group begins striding along the bank, walking fast upstream, determined to find whoever-the-fuck is throwing little kids into the river to drown and put a stop to it.
Neither approach is “wrong”. By the time the latter person finds the culprit, dozens of kids may have drowned if no one jumps in to save them. But if nobody thinks ahead about how to actually stop the problem, we get stuck in a perpetual-rescue loop, and some kids may drown before they get carried down this far. Or we may miss some.
I appreciate the discussion, and cannot say I knew much about the topic. It’s a well written column which interested me.
However, the vocabulary used and philosophical basis of the article already presume a fair level of educational curiosity. I think many already at the SDMB will love the article and appreciate the insights. That said, I am unsure if the Teeming Millions can be distracted from their blinky phones long enough to read long words.
This reminds me of a recent discussion elsewhere on the board, where someone asked if pain that was experienced a century ago matters today. If one were to accept that it does not, it stands to reason that pain and suffering experienced today won’t matter in a century. From there, you consider that you may be able to trace the pleasure you currently feel to the hard work and suffering of people a century ago, toiling away to build the technology and infrastructure that led to the world we live in now. To me, this is the fundamental truth of longtermism, and the shortcomings of this pattern of thought have already been pretty thoroughly addressed. Long story short, if your moral philosophy allows you to discount any suffering and injustice because in the end it will be dead and forgotten, then you’re just a monster.
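To make the discounting point concrete, here is a minimal illustrative sketch; the 3% annual rate is an arbitrary assumption of mine, not a figure anyone in this thread has used:

```python
# Illustrative only: how a nonzero annual discount rate makes distant
# suffering fade to nothing. The 3% rate is an arbitrary assumption.
rate = 0.03
for years in (0, 25, 50, 100, 200):
    weight = (1 + rate) ** -years  # present-day moral weight of that suffering
    print(f"{years:>3} years away: weight {weight:.3f}")
# At 3%, suffering a century away keeps ~5% of its weight; set the rate to
# zero (the longtermist move) and the far future carries full weight forever.
```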
The funny thing is, all these rich assholes who cling to Effective Altruism built themselves a clever loophole. Ideally, they should be dedicating themselves to a life of monastic suffering along with every other living human, because ultimately our experience will not matter, only the impact we leave on the future. Sure, by all means, “Earn to give”, but that doesn’t mean “earn a billion dollars and give ten million of it away to a nice charity.” It means every cent you spend on yourself beyond the barest necessary to keep you upright and breathing is a waste. I very much doubt many of the biggest and richest proponents of longtermism are ready to accept this fact, and they have no end of excuses for why their personal pleasure today is more important than the pleasure of the quadrillions of future humans that dance in their minds.
I see a number of problems with this philosophy, at least as presented here. First, OK, so there are some potential threats that really could wipe out the entire population. I’ll grant that, and also grant that it’s worth making plans to stop those things. So, what’s your plan? If you’re arguing that we should be stopping those things instead of stopping global warming, then tell us what those things are. And if you tell us that your plan is for all of us to give you lots and lots of money so you can stop those things, then we still need to know what you’re going to do to stop them.
Second, if one of the things you’re worried about is robots replacing humans, well, what’s so bad about that? You would deny untold trillions of future robots blissful lives? You monster! If robots do replace humans, then it’s the good of robots we should be worrying about, not the good of humans.
Third, I do have a plan for dealing with all of those potential population-ending catastrophes. All of them, including the ones that neither you nor I nor anyone else has yet anticipated. My plan is to give everyone currently on the planet the opportunity for education and the standard of living to allow for time to think, so that we’ll have the maximum possible number of well-equipped thinkers to try to anticipate the catastrophes, and to come up with ways to solve them once they become apparent. And the very first step of this plan is that we need to save all of those people from global warming and all of the other current threats.
Meaningless philosophical bullshit so uber-wealthy dipshits can pontificate on how they think they are making the future world better in the abstract, when in reality all they are doing is creating Ponzi schemes and tulip manias so they can drive McLarens and sail on mega-yachts now.
If these people were really interested in “altruism”, they would be looking to invent something that solves real-world problems for regular people.