Straight Dope 1/13/2023: Is longtermism the world's most dangerous belief system?

Is longtermism really the most dangerous secular belief system in the world today?

Most dangerous? Piffle. Longtermism – in its extreme form, the idea that threats like evil robots destroying humanity should get priority over hiccups like climate change – is a blessing in disguise.

I acknowledge not everybody thinks this. Philosopher Emile Torres, who has written extensively about longtermism, calls it “quite possibly the most dangerous secular belief system in the world today,” one that can be used to justify the priorities of rich white people while ignoring the problems of everybody else.

OK, so longtermism has its problematic aspects. But come on, Emile. Must we dwell on the negatives?

Before getting into the nuances of this miracle philosophy, I’d better address what, for many, is likely to be a more pressing question: “Why the F should I care about longtermism? I’ve never heard of longtermism.”

Until a couple months ago, neither had I. Then it appeared in a tech mailing list I’m on, with no indication what it was, although from the context I deduced it was definitely bad. I saw it again the next day, and in seemingly no time thereafter longtermism was sprouting like zits all over the Internet. For this we can thank Samuel Bankman-Fried, a/k/a SBF, the notorious ex-CEO of the FTX financial exchange, who in addition to having built a vast cryptocurrency empire was a proponent of the philosophical movement known as effective altruism, one of the weirder offshoots of which is longtermism. By virtue of his spectacular flameout, SBF has managed to nuke all three, or at any rate materially retard their progress, which is a considerable accomplishment. OK, allegedly vaporizing at least $7 billion in other people’s money was a lot of collateral damage. But you can’t say the man hasn’t done the world any good.

You see where I’m headed with this. But first we need to explore what longtermism is all about.

The starting point is effective altruism, referred to hereinafter as EA. EA, we’re told, “aims to find the best ways to help others, and put them into practice.” This sounds noble, if sappy. Never fear. The innocuous words conceal unsuspected depths.

The central problem of EA is figuring out the best way to help others. Some important EA concepts:

  • Earning to give means you strive to make a big pile of money with the intention of donating, say, 10% to some cost-effective charity. You can see where this would have a better chance of budging the needle, human improvementwise, than working in a soup kitchen. In fact, the idea inspired SBF to go into the cryptocurrency business.

  • Expected value (EV) is the net benefit of an act times the odds of it happening. This gets into some math, but the upshot is, you want the greatest good for the greatest number. In other words, the more people that benefit, not just now but in the future, the higher the EV and the greater the collective good. If humanity were to survive and multiply until the heat death of the universe and colonize the whole of the cosmos – grandiose scenarios like this often figure in EA theorizing – total EV potentially runs into the gigajillions of “blissful lives,” another common EA term.

  • An existential risk is anything that threatens the extinction of humanity. This is the worst thing that could possibly happen, in the EA view of things, since, after an existential catastrophe, EV drops from gigajillions to zip. A lot of EA people think there’s a good chance superintelligent machines will exterminate humankind, whereas climate change will wipe out the Bangladeshes of the world but spare affluent folk like us who can take precautions, and eventually the population will recover. Ergo, evil robots are worse than climate change and are what we should really be worrying about.

Given all the above, longtermists can easily demonstrate mathematically that even the tiniest non-zero reduction in existential risk, however farfetched, outweighs all short-term do-gooder efforts, even those with a decent chance of success. In other words, never mind trying to eradicate malaria or global poverty, let’s focus on threats thousands or even millions of years down the road. That’s longtermism.
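
To see how lopsided the arithmetic gets, here’s a back-of-the-envelope sketch in Python. The numbers are invented purely for illustration (no longtermist paper endorses these particular figures), but the shape of the argument is the point:

```python
# Toy expected-value comparison (all figures are made-up illustrations).
future_lives = 1e45       # the "gigajillions" of hypothetical blissful future lives
risk_reduction = 1e-20    # a vanishingly small, highly speculative cut in existential risk
ev_far_future = future_lives * risk_reduction      # expected future lives saved

current_beneficiaries = 1e9   # order of a billion people helped by, say, malaria or poverty work
success_chance = 0.5          # decent odds the present-day intervention actually works
ev_near_term = current_beneficiaries * success_chance

print(f"far-future EV: {ev_far_future:.1e}")   # 1.0e+25
print(f"near-term EV:  {ev_near_term:.1e}")    # 5.0e+08
# By this math the wildly speculative bet "wins" by sixteen orders of magnitude.
```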

You may say: that’s nuts.

Of course it’s nuts. Longtermism is predicated on so many crackbrained assumptions it’s hard to believe it was dreamed up by adults. To cite a few:

  • Whatever Classical Utilitarians may think, there’s no good reason to think 20 billion blissful people is necessarily better than 10 billion, and given finite resources, a lesser number seems more sustainable than a larger one. The idea that there’s a moral imperative to cram the cosmos full of happy beings … get out.

  • People who think the future is knowable are kidding themselves – on the contrary, some claim, history is a long string of unpredictable black swan events. The belief that we can make meaningful predictions about the impact of current actions on the distant future isn’t science, it’s mysticism. That’s not to say we should ignore reasonably well-established threats to our descendants such as climate change or resource exhaustion. But let’s not get ridiculous. If you start out with bizarro assumptions, don’t be surprised if you get bizarro results.

  • Your existential risk may be my off-the-wall speculation. People have been fretting about superintelligent AI or its granddaddy, Frankenstein’s monster, for more than 200 years. OK, autonomous machines might present dangers that require precautions. But how hard is that? Nobody ever heard of Asimov’s Three Laws of Robotics?

Lest you think this is a purely academic debate, longtermist thinking has had real-world consequences. In a much-cited interview prior to FTX’s meltdown, economist Tyler Cowen asked SBF:

Let’s say there’s a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing?

SBF, after some weaseling, tacitly acknowledged that he might: “Maybe you [win] an enormously valuable existence.”

No, you don’t. Double or nothing, in Cowen’s formulation, doesn’t mean you play until you win. It means you play until you lose. (This is a variant of a confounding type of game called the St. Petersburg paradox.) As even some EA proponents ruefully acknowledge, this was the strategy that led to SBF’s downfall – he kept doubling down on risky bets until he lost everything. That was bad news for some venture capitalists, cryptocurrency speculators, and other such folk. But – and here we get to my point – it had no impact on anyone else.
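
For the record, here’s a quick simulation of Cowen’s 51/49 game showing why “double or nothing” means you play until you lose. This is just a toy sketch of the game as posed in the interview, not anything SBF actually computed:

```python
import random

def still_standing(rounds, p_win=0.51):
    """Play double-or-nothing `rounds` times; True only if you never hit the 49% wipeout."""
    return all(random.random() < p_win for _ in range(rounds))

trials = 100_000
for rounds in (1, 10, 50, 100):
    survival = sum(still_standing(rounds) for _ in range(trials)) / trials
    print(f"after {rounds:3d} rounds: estimated chance the Earth still exists = {survival:.4f}")

# Analytically the survival chance is 0.51**rounds: about 0.51, 0.0012, 2e-15, 6e-30.
# The longer runs will simply print 0.0000 above; keep playing and ruin is a near-certainty.
```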

That’s the beauty of longtermism. It’s so manifestly crazy, and has so clearly resulted in disaster, that it’s inherently self-limiting – a danger chiefly to those who buy into it. Maybe it didn’t single-handedly tank cryptocurrency, but it has surely hastened the day, and meanwhile it has thinned the herd of billionaire tech bros, in the best Darwinian tradition. What could be bad about that?

– CECIL ADAMS

After some time off to recharge, Cecil Adams is back! The Master can answer any question. Post questions or topics for investigation in the Cecil’s Columns forum on the Straight Dope Message Board, boards.straightdope.com/.

nods

I’ve always posited that the opposite problem (which is widespread) should be characterized as short attention span. I should not be surprised that there are ~~idiots~~ people taking issue with a tendency to look farther down the road than today’s memes.

Right – both are really forms of flawed priority-setting and/or risk assessment, rather than “belief systems”.

I think the fact that one incompetent person got in over their head using this as a guiding philosophy does not mean that is what is always going to happen. From everything I have read, this serves the same purpose as objectivism: a way for those with a lot of wealth and power to justify, if just to themselves, that not only is it okay that they have all this wealth and power, but that it is actually good, moral, and just for them to have it and to use it to acquire more. And further, that anything that prevents them from getting more wealth or power is immoral. Cheating on your taxes and scamming retirement funds is more justified than lunch counter sit-ins.

Another point, not explicitly brought up (and I know 10k words would just be a start to deal with a philosophical belief as far-reaching as this one), is WHO defines the reality of the long-term risks? As a nation, much less a world, we have a hard time agreeing on whether climate change is a thing (which I personally think it is), or even if it is, whether it’s within human control (which I also think it is, albeit with plenty of other factors), and why we should worry about it (the wealth gap and Bangladesh issue mentioned in the OP).

So even if longtermism has benefits in looking to the future and not being lost in short-term goals, the very establishment of such long-term goals is going to be problematic. Each individual in the philosophy is going to point to their own personal future boogey-person, and flail around at that issue, resulting in a possible loss of concentrated investment in short-to-medium-term improvements which themselves improve long-term potential.

Not to say that we couldn’t all use some long-term thinking as well. The recent fiasco with the US House of Representatives and George Santos is part of an inevitable problem with a constant two-year cycle of never stopping the campaign. Other political and social issues are similar, in that we are so busy chasing the latest emergency or fad that we either settle for doing anything (effective or not) to be able to say we have, or wash our hands of it because nothing can be done in the short term and we want to move on to the next big thing.

Two things:

  • The time frames that longtermism addresses are far past anything predictable, or reasonable. Some of them think Leto Atreides was too short-sighted. For them, 10 generations of misery and poverty for 99% of humanity is a perfectly justified trade-off if it results in some grand future with trillions of happy humans.

  • There is a huge amount of ego involved. To be a practicing longtermist you need to believe that you are unique and special. That you are justified in causing untold misery today, because you or your descendants will bring about this great future. Killing off any percent of humanity is OK, as long as you and your descendants survive and thrive, because obviously that is what is important to the future of humanity. I have seen speculation that this is one of the reasons Elon Musk keeps having kids he doesn’t really care about.

Very key points, for those who don’t get the reference:

And @Strassia is making a good point. In the above example, you literally have a quasi-immortal who has the memories and wisdom of countless ancestors, near-unlimited power and wealth as an autocrat, and the literal ability to see the future, and who -still- has a hard time making painful decisions over short-, medium-, and long-term happiness for themselves as well as the greatest good. Per longtermism, they would generally be ‘correct’ in their choices, but it literally takes that inhuman a being to be able to even partially succeed.

Another fictional example, and one which ties back to our OP, is the whole Hari Seldon and psychohistory business of the Foundation and other related books:

Psychohistory depends on the idea that, while one cannot foresee the actions of a particular individual, the laws of statistics as applied to large groups of people could predict the general flow of future events. Asimov used the analogy of a gas: An observer has great difficulty in predicting the motion of a single molecule in a gas, but with the kinetic theory can predict the mass action of the gas to a high level of accuracy. Asimov applied this concept to the population of his fictional Galactic Empire, which numbered one quintillion. The character responsible for the science’s creation, Hari Seldon, established two axioms:

Which sounds at least reasonable in the aggregate, but within the context of the same stories it was repeatedly derailed by the fact that, while some things could be predicted in terms of general trends, a single, unexpected outlier event could upend even the most careful calculations.

Which brings us back to the second point I quoted from the above poster: if you think you are one of the special people who have access to the only truth, that says a lot more about you than about the philosophy.

I’m with Strassia here, to put it in the simplest terms. It seems like everyone who is a proponent just uses it as a justification along the same lines as eugenics, and uses similar defenses to what those people used. Also, a lot of it is based in biology, but I haven’t seen one person pushing it who has any background whatsoever in said science. There might be a doctor or two I haven’t come across, but it almost exclusively seems to be techies talking about sectors they know quite literally nothing about.

My personal fave is still the obviously racist AI guy who claims he’s not racist despite his mountain of racist posts that people dug up, and hand-waves it away as “Oh, it was just me going to logical dead ends”… riiiiight.

I don’t remember anything like what they’re bringing up in any bioethics class I’ve ever taken. Pretty much seems to be eugenics 2.0… with a splash of Mother Teresa’s “the poor deserve it, god wants it” nonsense that made me despise the evil little trash bag.

A very nicely written column. Well done, Cecil!

I know the column was about a specifically secular belief, but it would be interesting to compare to the themes of the Prosperity Gospel of many conservative Christian groups. There is, similarly, a remote, long-term benefit (of disputed existence, though I would note that believers generally accept an eternal heavenly afterlife) which can be used to justify sinful behaviors and attitudes in the short term so as to be able to have more resources for helping build the heavenly kingdom.

Good to have you back, Cecil!

Elsewhere, I proposed Doomsday-Discounted Longtermism:

By the doomsday argument, the fact that we’ve been born so ‘early’ within the potential lifespan of the human race entails that it’s unlikely for there to ever be more than about a trillion people to be born, and with 50% likelihood, the total number will be no more than about twice that of humans born up to now. Consequently, any current interventions should be weighted by the likelihood that sufficiently many people will come into existence to benefit from them.

This narrows the gap between longtermist prescriptions and the things we should actually be doing, like fighting climate change: there are unlikely to be trillions upon trillions of people yet to be born; hence, we should concentrate on those most likely to come into existence – but they’re exactly the people to whom what we do about immediate risks will matter most.

So, you get to do what ought to be done anyway, but you can add a snazzy techno-philosophy edge to it: a win-win as I see it.
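
To make that concrete, here’s a toy sketch of what the discounting might look like in code. The uniform-birth-rank prior and the figure for humans born so far are my illustrative assumptions, not part of the original proposal:

```python
# Doomsday-discounted value, Gott-style: assume your birth rank is uniform among
# all humans who will ever exist, so P(at least N people ever exist) ~ (born so far) / N.
HUMANS_SO_FAR = 117e9   # rough estimate of people born to date

def chance_humanity_reaches(total_births):
    return min(1.0, HUMANS_SO_FAR / total_births)

def discounted_beneficiaries(future_people):
    """Future beneficiaries weighted by the chance that many people ever exist."""
    return future_people * chance_humanity_reaches(HUMANS_SO_FAR + future_people)

for future_people in (1e9, 1e12, 1e30):
    print(f"{future_people:.0e} hoped-for beneficiaries -> "
          f"discounted value {discounted_beneficiaries(future_people):.2e}")

# The discounted value saturates around the number of people born so far, no matter
# how grandiose the scenario -- which is what pulls the focus back to the near term.
```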

I say, the best way to fight crazily speculative ideas about future contingencies is with yet crazier ones!

Another major problem with longtermism is that there is (almost) always more future. Today, the suffering of billions is justified by the happiness of future trillions. But 10,000 years from now, the suffering of trillions could be justified by the happiness of future quadrillions. If a planet of misery can pay for a galaxy of joy, why shouldn’t a galaxy of misery pay for a galactic cluster of joy? And so on forever. If some universal overlord/overclass can convince themselves that cruel exploitation of the vast universe of sentient beings is necessary so that someday their descendants will avert the heat death of the universe, then they are morally obliged to own harems of unwilling sex slaves.
If you posit exponential population growth in the future, the future population will always outnumber the present, and you never actually have to reach nirvana to justify your present or past actions.
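
A tiny illustration of that regress (the growth rate, horizon, and head count are arbitrary assumptions, just to show the shape of the problem):

```python
GROWTH = 1.02        # assume population multiplies by 2% per generation
PRESENT = 8e9        # people alive at "generation zero"

for generation in (0, 100, 10_000):
    alive_now = PRESENT * GROWTH ** generation
    next_100_gens = sum(PRESENT * GROWTH ** (generation + k) for k in range(1, 101))
    print(f"generation {generation:>6}: alive now {alive_now:.2e}, "
          f"next 100 generations {next_100_gens:.2e} ({next_100_gens / alive_now:.0f}x larger)")

# Whatever generation you happen to stand in, the next hundred outnumber you by the
# same factor, so the "suffer now for the future" argument never reaches a last payer.
```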

Just sounds like longtermism is an excuse to make as much money for yourself at the expense of others while trying to convince them, and yourself, that you really are a good person and not raping the earth.

What about effective altruism’s evil twin, Roko’s basilisk? The possibility of near-infinite future torment as opposed to mere extinction?

Good starter, Cecil. Longtermism is truly as nuts as it sounds. I keep waiting for Greta Thunberg to make mincemeat of it.

My specialty as a historian is studying how people in the past saw the Future. Some were remarkably prescient; many were comically off the mark. All, however, shared one inescapable flaw: they could only extrapolate the Future from their known present.

Asimov’s Foundation series is indeed the best popular failure to examine. His psychohistory was obviously taken from a mixture of popular beliefs and isms of the 1930s: from the sf-backed revival of technocracy, to the theories of Edward Bernays, “the father of public relations,” who also wrote a book called Propaganda in 1928, to the various governments that practiced propaganda before the war, which was raging when Asimov started his stories.

Asimov came very close to understanding why psychohistory couldn’t work. When his editor/mentor John Campbell told him to throw in an obstacle to the Foundation’s work, Asimov came up with the Mule - an unpredictable mutant. But we know now - and he should have known then, considering how the atomic bomb suddenly changed global politics even though it had been forecast for decades - that the Future is Mules all the way down. The unpredicted and overlooked drive every minute of the present and therefore its Future.

Doing good right now in the present for ourselves and our children and grandchildren is mandatory. Telling them to suffer for wholly unknown futures containing beings as imaginary as Hari Seldon and the Mule is despicable lunacy.

@John_W.Kennedy, not because he has any particular knowledge in this that I know of, but because he was a prolific poster in Cecil commentary and not much else.

Thanks for a thought provoking column.

One would have thought The Notorious S.B.F. would have been familiar with the statistical math problem called “Gambler’s Ruin,” which shows the eventual problem with double-or-nothing wagers.
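
For anyone who wants the gambler’s-ruin flavor of the arithmetic, a minimal sketch (the 51% figure is simply the one from the Cowen interview):

```python
p_win = 0.51
for rounds in (10, 100, 1000):
    print(f"chance of never going bust through {rounds} all-in wagers: {p_win**rounds:.2e}")
# roughly 1.2e-03, 5.7e-30, and 4e-293 -- eventual ruin is all but guaranteed.
```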

I think there is something to the idea of effective altruism. But our best minds cannot predict when earthquakes will occur, the weather in a month, or when the Cleveland Browns will finally win a Super Bowl.

They can only use math to show the Cleveland Browns will never win a Super Bowl. Don’t people in Cleveland deserve to be blissful? How is the idea so different from promising faithful serfs a glorious afterlife as a reward for renouncing Earthly pleasures and forgoing demands for fairness? Surely better people dream of a worthy super yacht slightly bigger than that owned by some lesser plutocrat? We should be pleased that any time is devoted to considering the greater good, although clearly charity begins at home.

I’ve heard people explain why it’s better to give $100 to a homeless organization than to give $1 to 100 different people who are homeless. Does that type of thinking align with this philosophy?

I don’t think so. The philosophy suggests it is better to earn more so one can give more. If the amount is the same, it is more a question of distribution. If it is given in lieu of reasonable taxes, then it is not as altruistic as advertised.

Good column. Someone should distribute it to the WEF idiots at Davos, who are all about longtermism and other justifications for bringing them more wealth and power.