Well, like most CTs, the clues are right there for you to see, if only you'd shed the scales from your eyes. E.g., the Eye of Providence atop the pyramid on the reverse of the Great Seal of the United States.
It’s so they can snicker behind their hand or something.
You’ve completely missed the point of “rationalism”. That is, there are no clues. It’s pure “logic” and deductive reasoning, with conclusions drawn from first principles and from which you must not waver no matter how many “facts” and how much “evidence” may be heaped against you. You must commit yourself to timeless decision-making with the singular goal of bringing about the good singularity, lest you be imprisoned and damned for all eternity by the evil singularity that is as sure to come as water follows rain if you do not first bring about the good one.
It wouldn’t be rationalism if it didn’t. You’ve got to be a double-good thought-wizard (one of those two terms I just made up, try and guess which!) to really understand.
Anyway, I listened to the Behind the Bastards podcast on this (I think there were 4 episodes in all?) so I’m kind of channeling that. Would highly recommend.
The phrase “Logic is a means of going wrong with confidence” is one that occurs to me.
Logic is only as good as your initial assumptions and goes off the rails as soon as you make a mistake, so “pure logic” is at best extremely unreliable unless you can and do check the results against reality. And that’s when you’re using actual, strict logic rather than the “it sounds smart so it must be logical” pseudo-logic that “Rationalists” use.
Rationalists are a good example of the general rule that when a movement makes a show of calling itself something, it probably isn’t.
That’s the other kind of Rationalism. They aren’t related. This one is supposed to care about facts and evidence. It’s the people who say some fact or piece of evidence has made them “update their priors”, because it’s all about Bayesian inference. It’s very caricaturable; it’s just nothing like your caricature.
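If it helps, the “updating your priors” bit is just Bayes’ rule applied to whatever evidence comes in. Here’s a minimal sketch with completely made-up numbers (the 0.3 prior and both likelihoods are purely for illustration):

```python
# Toy Bayesian update: how much should one piece of evidence E shift
# my belief in hypothesis H? All numbers are invented for illustration.

prior = 0.3                # P(H): belief in H before seeing the evidence
p_e_given_h = 0.8          # P(E | H): chance of seeing E if H is true
p_e_given_not_h = 0.2      # P(E | not H): chance of seeing E if H is false

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e

print(f"prior: {prior:.2f} -> posterior: {posterior:.2f}")  # 0.30 -> 0.63
```

The point being that evidence is supposed to move your belief by a calculable amount, not confirm whatever you already concluded from first principles.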
I managed to read that and some other Ellison works in college, but I don’t recall much of anything except it was really weird.
To think that this spawned a cult is simultaneously bizarre and natural.
And yes, by all accounts Ellison was an enormous asshole. Although the one incident I heard about at my college was precipitated by someone trying to cause mischief.
It didn’t. The two have nothing to do with each other. The only connection is that some Rationalists are concerned about a future AI that tortures humans. The AI in the book doesn’t even resemble the Rationalist one (see Roko’s Basilisk).
Not really that either, but there is at least a real connection there. I have in fact read Harry Potter and the Methods of Rationality. It’s, uh… sort of odd, but I found it enjoyable as a kind of philosophical treatise. It’s more accurate to say that it’s part of the Rationalist “canon” than that it’s an inspiration per se. The author, Eliezer Yudkowsky, is definitely a member of the “AI will kill us all” camp (though I’m not sure of his current position on Roko’s Basilisk).
The Ziz cult seems to have combined several things that are not themselves extremist but are at least a little fringe, like Rationalism, anarchism, veganism, transgenderism, and sleep modification (ism?). Maybe more? Take all of those to an extreme, mix 'em together, and you have a real cult.
Yes, I agree. More accurate. That’s just the one that jumped out at me. Say what you want about Ellison’s short story: it’s at least mainstream (and short). But seriously, a 600,000-plus-word work of HP fan fiction? That’s how you know you’re dealing with a bunch of wing-nuts (no offense; I mean specifically that they seem to have been so heavily influenced by it, to the point of it becoming, as you say, part of their “canon”).
I can’t tell if there’s any direct influence there. HPMOR is part of the Rationalist canon just by virtue of being written by Yudkowsky and featuring a character that mostly acts like one (he’s pretty much a stand-in for Yudkowsky himself). But the “normal” Rationalists don’t treat it as a religious text, just as something you might read to understand their beliefs. You could also just read the LessWrong website.
Did the Zizians treat it as an actual religious text inspiring action? That I can’t tell. I think the answer is no, though, and the connection here is just Rationalism rather than something explicitly HPMOR related.
IIRC his position was that Roko was an idiot to post the idea. Roko’s basilisk is something non-rationalists have picked up and popularised, it was never a big part of their thinking. The real fear of AI doomers is unaligned AI, not evil AI. Because AI doesn’t inherently have human emotions; it doesn’t naturally care any more about human life than ant life, or the ‘lives’ of stars. We can try to train it to care and have particular values, but this is in practice hard, verging on impossible, because it’s so difficult to specify the full range of desired and undesired behaviours. The AI ends up being rewarded for a proxy goal rather than the real goal: kind of similar to how humans evolved to want sex, rather than to want kids directly. As long as contraception didn’t exist, this worked fine, but now we are no longer achieving the goals our genes ‘trained’ us to attain.
We’ve already seen this kind of behaviour in real-life AIs. I’m just hoping there will turn out to be a limit, or diminishing returns, on intelligence, so we don’t end up with super-intelligent AI.
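To make the proxy-goal point concrete, here’s a deliberately silly toy simulation (the numbers, the “behaviours”, and the gaming step are all invented for illustration, not a model of any real training setup): if the measurable proxy only roughly tracks what you actually care about, optimising hard on the proxy tends to select exactly the behaviours that game the measurement.

```python
# Toy illustration of optimising a proxy instead of the real goal (Goodhart's law).
# All numbers and "behaviours" are invented purely for illustration.
import random

random.seed(0)

# Each candidate behaviour has a true value (what we actually care about)
# and a proxy score (what the training signal can measure). The proxy
# mostly tracks the true value, but only noisily.
behaviours = []
for _ in range(1000):
    true_value = random.gauss(0, 1)
    proxy_score = true_value + random.gauss(0, 1)  # noisy measurement
    behaviours.append((proxy_score, true_value))

# Add a handful of behaviours that "game" the proxy: they score very highly
# on the measurement while being actively bad by the true measure.
for _ in range(10):
    behaviours.append((random.uniform(6, 8), random.uniform(-3, -1)))

best_by_proxy = max(behaviours, key=lambda b: b[0])
best_by_truth = max(behaviours, key=lambda b: b[1])

print(f"selected by proxy: proxy={best_by_proxy[0]:.2f}, true={best_by_proxy[1]:.2f}")
print(f"actually best:     proxy={best_by_truth[0]:.2f}, true={best_by_truth[1]:.2f}")
# Maximising the proxy picks one of the gaming behaviours, since by construction
# their proxy scores exceed anything the honest behaviours can reach.
```

Real reward hacking is obviously subtler than hard-coding the gaming behaviours, but the selection effect is the same: whatever scores highest on the proxy wins, whether or not it serves the real goal.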
But was it bad to post that because it caused a bunch of people unneeded mental distress, or because it was an actual, functional “basilisk” that guaranteed that anyone who looked at it would be tortured by an AI in the future (unless they dedicated their lives to bringing about the AI)?
Right. I Have No Mouth, and I Must Scream is about an AI that hates all of humanity for creating it at all. I don’t recall if the reasons are explained, though being an immortal, hyperintelligent AI with no one to talk to is probably part of it. Roko’s Basilisk is really the opposite: threatening to torture people in the future for not bringing it about earlier, thus reducing the total number of paperclips that can be produced.
So far, it seems that it’s hard to align AIs on anything at all, paperclip maximization or otherwise. The training process is inherently sloppy. The AIs don’t always do what you want, but they also don’t get hyperfocused on one thing to the exclusion of all else. They’re useful, but getting them to do things is like pushing on a wet noodle. They just don’t do things on their own, and they get “distracted” easily and wander in random directions if you don’t refocus them.
Someone on Twitter dug up this early reference to it, but it’s still not the original discussion:
Comment from Yudkowsky:
Yes. The AIs we’ve got are pretty unlike what anyone was expecting, which shows how hard it is to predict the future. Like you say, they don’t do things on their own; they aren’t agentic. And that’s a good thing. I don’t think concern about them is at all irrational, given how many people working in the industry seem to agree with it.
Based on what’s described of Ziz in the first Behind the Bastards episode, the level of grandiosity plus deviation from reality seems to point to schizophrenia. I had a schizophrenic uncle who was incredibly intelligent. It was often difficult to understand when he was talking about actual astrophysics and when he was doing his delusion thing. I imagine a similar issue is at play here.