I’ve read SlateStarCodex in the past. The author has some interesting essays that have made me think, but ultimately he struck me as the kind of person who thinks you can divorce emotion from reason and achieve some state of rational perfection, and his “rational” defense of the oppression of women left a bad taste in my mouth.
I wonder if Clearer Thinking with Spencer Greenberg came from this movement? I love that podcast, but it would explain a lot. They have interviewed Effective Altruists before.
Re: Harlan Ellison, I wasn’t trying to claim the Rationalist movement was born of that story, and I apologize for my imprecise language. I meant that this fanatical group of people behaves as if the story were some kind of prophecy, which is silly. It’s hard to imagine it not having an influence on the philosophical idea of the Singularity, though. People get their ideas from science fiction all the time, and I’m hard-pressed to believe people deeply interested in AI had never read the story.
Not sure what you’re talking about. AFAIK the problem with effective altruism is that it went down a utilitarian rabbit hole of assigning more value to millions of potential lives in the far future than to saving lives today. It’s not like it’s the movement’s fault that Sam Bankman-Fried turned out to be a fraudster.
Yeah, this is what rationalism is supposed to be and it’s not related to Cartesian rationalism.
Ziz took ideas from rationalism but combined them with messed-up pop psychology, anarchism, and who knows what else. And she and her friends/followers/cult members took her ideas seriously. Most people don’t really act on their beliefs, and that’s a good thing: I think it’s been mentioned before that most pro-lifers do not act like hundreds of thousands of babies are being murdered each year. The vast majority of vegans don’t attack factory farms or try to force other people to give up meat, despite thinking animals are undergoing terrible suffering. Christians don’t spend all their time proselytising, despite thinking non-Christians are going to suffer for eternity in hell. This group did act on their beliefs, and a lot of people died.
I recently saw this account of what happened, it’s okay to post as far as I can determine:
IIRC Ellison’s short story collection Deathbird Stories has a warning in the front. The warning makes it clear that it is not there for humor or publicity. It goes on to say that you should take frequent breaks while reading the book to avoid permanent damage to your psyche.
The idea that money is best spent for the most utilitarian good is, in my view, fine, though it’s not really what drives most donors. (Also it must be inherently subjective, as any metric you use to determine the maximum possible good will have some other, competing metric that might be just as good. This is the kind of thing that would cause me analysis paralysis.) Where it gets shaky is that Effective Altruism’s major proponents are rich people who have convinced themselves that hoarding wealth is fine because eventually it’s all going to a good place. I believe SBF took this to the extreme: he rationalized himself right into prison. That’s not all there is to it, but there’s a lot of that.
Yeah, I think that damage has been done. If I could undo reading that, I would.
I recalled your challenges with analysis and empathy paralysis in a fun hijack in the Pit (again, Pit warning). Making good choices requires a degree of self and world analysis that a lot of the tech-bro subset of Effective Altruists don’t bother with, IMHO. Instead, just as you pointed out, they mostly seem to be arguing from an end state where they’re filthy rich, and work backwards to assemble a “moral” justification for that end state.
NOT that they’re alone in this by any means. Humans are very good at being ruthless / criminal / evil to get what they want, and then, later on, attempting to buy their way into respectability / heaven / etc. via their blood money. It’s endemic to the species, I suspect.
And of course (full circle back to cults) the same problem applies: letting any cult dictate what counts as the greatest good is risky. If a single person (or small group) can dictate to the rest what is good and what is evil, then you run the risk of abandoning your own ability to judge, because the definition of “good” becomes that of the cult or religion to which you have granted authority. For all my known distrust and dislike of organized religion, the most wide-reaching ones generally (yes, HUGE simplification and generalization) act to dilute the power of specific individuals and so reduce that risk. But even there it isn’t gone, and modern mass communication arguably makes it easier for a single populist leader (religious or otherwise) to reach out and bypass all the traditional methods of dilution.
I mean, if you really believe in EA, then fraud, as long as it makes people who believe in EA richer, is justified, because it is better for all those millions who benefit from EA than letting all those ineffective altruists have that money. Hence, as Galbraith said, “a truly superior moral justification for selfishness.” (Of course, he was talking about conservatives when he said that, as EA was still in the future. It applies no less here.)
Textual attribution nitpick: While I agree that avoiding any or all Rowling-related content is a perfectly reasonable response to her raging transphobic bigotry, AFAICT the concept of the so-called “Amulet of Tyranus” or “Phylactery” in the Harry Potter universe is a fanfic invention, described in the Harry Potter Fanon Wiki.
Rowling herself, AFAIK, never wrote about any such amulet or anything she called a “phylactery”.
He explained a lot of phenomena I’d been seeing but had trouble putting into words, and that’s… kind of a relief in some ways. Like “no, you’re not imagining it - and now you can explain it to other people too”. But I don’t really know what you mean by divorcing emotion from reason. Something like that is, I guess, part of rationalism, but that may or may not be what you’re referring to.
I don’t think I’ve seen this and I’m not sure I want to.
Oh good. That was the impression I got from what you wrote, and it’s a very misleading one. I kind of doubt the story had much influence, since it’s about an evil AI, whereas what Yudkowsky and most of the other AI researchers are worried about is an AI that is uncontrollable (because it’s many times smarter than us) but not ‘aligned’, i.e. it has been programmed with goals that don’t correctly reflect human values and morality, and pursues those goals despite causing great harm to humanity. Possibly Roko might have been influenced by it when he came up with his basilisk?
Utilitarianism never appealed to me; it’s too prone to giving counterintuitive results. My philosophy is that if your theory of morality tells you to do something obviously wrong (like for example murdering someone in order to prevent animal suffering via some galaxy-brain method, or committing fraud in order to give the money to charity), you should abandon that theory, not do the wrong thing.
I haven’t really looked into it, but given how (in)accurate this thread has been on rationalism generally, I’m not going to take this on trust.
Welp, I’ll cross that off my reading list.
The horcruxes have a lot in common with the DnD version of phylacteries. I’d guess both were inspired by folklore, though.
I could take it or leave it. I’m probably more Kantian, but certainly I’ve made utilitarian arguments before. I’m just a pragmatist. I absolutely have a moral compass but it doesn’t neatly fit into any sort of philosophical framework, though Buddhism is close enough.
Now that I’ve (mostly) recovered from that story, I want to emphasize that many people within that internet subculture, which birthed this cult, actually believed a future AI could create an eternal, tormenting hell for people, and were deeply disturbed by it. This is a real belief that people who identify as Rationalists actually hold. Whether that represents most Rationalists… sounds like probably not.
Now that I have some psychological distance from the story, it kind of illustrates how silly this idea is. Elements of the story don’t make sense. The AI can somehow keep humans alive, kill them, and then “rejuvenate” them, which implies a corporeal existence where it has the power of life and death yet also controls their reality. But the story also establishes that the characters can kill themselves if they are quick about it, yet somehow not be brought back to life? Also, this AI hates humanity because… somehow it’s limited, or can’t be creative, yet it seems to devise countless ways to torture humans. It just doesn’t wash. Is there any scientific basis or rationale whatsoever for believing this would ever be possible? It sounds like pure fantasy to me.
I’ve encountered plenty of AI doomers on Twitter, and not one of them is worried about Roko’s Basilisk or this kind of ‘malevolent AI’ scenario. (Not even Roko! He’s an annoying alt-right type who posts about issues that would be irrelevant if eternal-torture-by-superintelligence were imminent.) They expect AIs to harm people in the same ways multinational corporations harm people: they are focused on profit or some other goal at the expense of things we think are more important, like not polluting the environment or not selling addictive products that damage people’s health. The difference is that we can regulate the corporations, whereas AI doomers believe we won’t be able to control the AI, and it will be vastly more powerful than any multinational.
Most notably, the story of Koschei the Deathless, a Russian sorcerer who found a way to hide his death in inanimate objects. And the Prydain books contain a wizard named Morda, who channeled all of his life force into one of his fingers, then cut the finger off and hid it, which might be inspired by Welsh myth (most of that series was).
Roko’s Basilisk, meanwhile, is just “Pascal’s Wager, but assume that God is evil”.
From what I’ve been reading on various sites it’s a reference to the Simurgh from the webfiction Worm, who is often nicknamed “Ziz” in the Worm fandom. Considering that the Simurgh is a malignant mind-controlling alien being also called “the Hopekiller” in-universe, it’s creepily appropriate for a cult leader. From here:
Some rationalists were surprised, and a bit put off, when Ziz announced that she would now be known as Ziz. The name comes from Worm, a roughly 7,000-page serial fantasy story that many rationalists have read. Ziz is an alias used by a monster called the Simurgh, part of a group of villains called the Endbringers.
The Simurgh has an unsettling power, a reader of Worm told me. She’s an infohazard: anyone “who has encountered the Simurgh for too long, listened to the Simurgh for too long, becomes a liability. Because at some point in the future they will go crazy and cause a bunch of destruction.”
Makes sense. The other two most prominent Endbringers are Leviathan and Behemoth, which fits with “Ziz”; and the Simurgh was thought benevolent at first and looks angelic, which IIRC was why she got her name. Before she drove everyone in the capital of Switzerland murderously insane, at least.
Still, if I were a cult leader I wouldn’t pick a name that almost literally advertises what I am. It’s like somebody offering you a deal while calling themself Beel Z. Bub.