Is longtermism really the most dangerous secular belief system in the world today?
Most dangerous? Piffle. Longtermism – in its extreme form, the idea that threats like evil robots destroying humanity should get priority over hiccups like climate change – is a blessing in disguise.
I acknowledge not everybody thinks this. Philosopher Emile Torres, who has written extensively about longtermism, calls it “quite possibly the most dangerous secular belief system in the world today,” which can be used to justify the priorities of rich white people while ignoring the problems of everybody else.
OK, so longtermism has its problematic aspects. But come on, Emile. Must we dwell on the negatives?
Before getting into the nuances of this miracle philosophy, I’d better address what, for many, is likely to be a more pressing question: “Why the F should I care about longtermism? I’ve never heard of longtermism.”
Until a couple of months ago, neither had I. Then it appeared on a tech mailing list I’m on, with no indication of what it was, although from the context I deduced it was definitely bad. I saw it again the next day, and in seemingly no time thereafter longtermism was sprouting like zits all over the Internet. For this we can thank Samuel Bankman-Fried, a/k/a SBF, the notorious ex-CEO of the FTX financial exchange, who in addition to having built a vast cryptocurrency empire was a proponent of the philosophical movement known as effective altruism, one of the weirder offshoots of which is longtermism. By virtue of his spectacular flameout, SBF has managed to nuke all three, or at any rate materially retard their progress, which is a considerable accomplishment. OK, allegedly vaporizing at least $7 billion in other people’s money was a lot of collateral damage. But you can’t say the man hasn’t done the world any good.
You see where I’m headed with this. But first we need to explore what longtermism is all about.
The starting point is effective altruism, referred to hereinafter as EA. EA, we’re told, “aims to find the best ways to help others, and put them into practice.” This sounds noble, if sappy. Never fear. The innocuous words conceal unsuspected depths.
The central problem of EA is figuring out the best way to help others. Some important EA concepts:
- Earning to give means you strive to make a big pile of money with the intention of donating, say, 10% to some cost-effective charity. You can see where this would have a better chance of budging the needle, human improvementwise, than working in a soup kitchen. In fact, the idea inspired SBF to go into the cryptocurrency business.
- Expected value (EV) is the net benefit of an act multiplied by the odds of it happening. This gets into some math, but the upshot is, you want the greatest good for the greatest number. In other words, the more people who benefit, not just now but in the future, the higher the EV and the greater the collective good. If humanity were to survive and multiply until the heat death of the universe and colonize the whole of the cosmos – grandiose scenarios like this often figure in EA theorizing – total EV potentially runs into the gigajillions of “blissful lives,” another common EA term.
- An existential risk is anything that threatens the extinction of humanity. This is the worst thing that could possibly happen, in the EA view of things, since, after an existential catastrophe, EV drops from gigajillions to zip. A lot of EA people think there’s a good chance superintelligent machines will exterminate humankind, whereas climate change will wipe out the Bangladeshes of the world but spare affluent folk like us who can take precautions, and eventually the population will recover. Ergo, evil robots are worse than climate change and are what we should really be worrying about.
Given all the above, longtermists can easily demonstrate mathematically that even the tiniest non-zero reduction in existential risk, however farfetched, outweighs all short-term do-gooder efforts, even those with a decent chance of success. In other words, never mind trying to eradicate malaria or global poverty, let’s focus on threats thousands or even millions of years down the road. That’s longtermism.
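The arithmetic is easy to reproduce. Here’s a toy Python sketch – every number in it is invented for illustration, though EA writers trade in similarly speculative figures:

```python
# Back-of-the-envelope longtermist arithmetic. All numbers here are
# made up for illustration, not taken from any EA publication.

future_lives = 1e35        # hypothetical future people if humanity fills the cosmos
risk_reduction = 1e-10     # a minuscule, unverifiable cut in extinction risk
ev_longtermist = future_lives * risk_reduction

lives_saved_now = 1_000_000    # a concrete near-term win, e.g. malaria control
success_odds = 0.9             # with a solid chance of actually working
ev_near_term = lives_saved_now * success_odds

# The speculative bet "wins" by about nineteen orders of magnitude.
print(f"longtermist EV: {ev_longtermist:.1e} expected lives")
print(f"near-term EV:   {ev_near_term:.1e} expected lives")
```

Multiply a big enough imaginary future by any nonzero probability and it swamps every real-world intervention – which is the whole trick.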
You may say: that’s nuts.
Of course it’s nuts. Longtermism is predicated on so many crackbrained assumptions it’s hard to believe it was dreamed up by adults. To cite a few:
- Whatever Classical Utilitarians may think, there’s no good reason to think 20 billion blissful people is necessarily better than 10 billion, and given finite resources, a lesser number seems more sustainable than a larger one. The idea that there’s a moral imperative to cram the cosmos full of happy beings … get out.
- People who think the future is knowable are kidding themselves – on the contrary, some claim, history is a long string of unpredictable black swan events. The belief that we can make meaningful predictions about the impact of current actions on the distant future isn’t science, it’s mysticism. That’s not to say we should ignore reasonably well-established threats to our descendants such as climate change or resource exhaustion. But let’s not get ridiculous. If you start out with bizarro assumptions, don’t be surprised if you get bizarro results.
- Your existential risk may be my off-the-wall speculation. People have been fretting about superintelligent AI or its granddaddy, Frankenstein’s monster, for more than 200 years. OK, autonomous machines might present dangers that require precautions. But how hard is that? Nobody ever heard of Asimov’s Three Laws of Robotics?
Lest you think this is a purely academic debate, longtermist thinking has had real-world consequences. In a much-cited interview prior to FTX’s meltdown, economist Tyler Cowen asked SBF:
Let’s say there’s a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing?
SBF, after some weaseling, tacitly acknowledged that he might: “Maybe you [win] an enormously valuable existence.”
No, you don’t. Double or nothing, in Cowen’s formulation, doesn’t mean you play until you win. It means you play until you lose. (This is a variant of a confounding type of game called the St. Petersburg paradox.) As even some EA proponents ruefully acknowledge, this was the strategy that led to SBF’s downfall – he kept doubling down on risky bets until he lost everything. That was bad news for some venture capitalists, cryptocurrency speculators, and other such folk. But – and here we get to my point – it had no impact on anyone else.
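The math here is easy to check. A short Python sketch of Cowen’s game (my stakes, not SBF’s actual portfolio) shows why each round looks like a smart bet while the overall strategy guarantees ruin:

```python
# Cowen's game: 51% you double your stake, 49% you lose everything.
# Now keep playing, double or nothing, round after round.

p_win = 0.51

for rounds in (1, 10, 50, 100):
    survive = p_win ** rounds    # odds you haven't gone bust yet
    payoff = 2.0 ** rounds       # your stake if you somehow did survive
    # Expected value keeps growing (it equals 1.02 ** rounds), but the
    # chance of ever seeing any of it collapses toward zero.
    print(f"{rounds:3d} rounds: survival odds {survive:.2e}, EV {survive * payoff:.2f}")
```

Each round taken alone has an expected value of 1.02 times the stake, so the EV of the whole series rises forever – that’s the St. Petersburg flavor of the paradox. But after 100 rounds the probability you still have anything is smaller than your odds of winning the lottery several times running.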
That’s the beauty of longtermism. It’s so manifestly crazy, and has so clearly resulted in disaster, that it’s inherently self-limiting – a danger chiefly to those who buy into it. Maybe it didn’t single-handedly tank cryptocurrency, but it has surely hastened the day, and meanwhile it has thinned the herd of billionaire tech bros, in the best Darwinian tradition. What could be bad about that?
– CECIL ADAMS
After some time off to recharge, Cecil Adams is back! The Master can answer any question. Post questions or topics for investigation in the Cecil’s Columns forum on the Straight Dope Message Board, boards.straightdope.com/.