Roko's Basilisk ensures we must support Mark Zuckerberg and his brilliant Meta project to save humanity!

The Basilisk *is* the message, i.e. merely hearing about it is supposed to implicate you: we are assumed to be at least rational enough to figure it out, so the underpinning “logic” is supposedly inevitable.

It’s all reheated Pascal, and that’s been done to death. Wannabe-Vulcans should have taken more Phil 101 classes rather than writing their Drarry slash fiction.

ETA: I see this was Nov. last year, sorry about the late reply.

“Assume God is Evil” is implicit in the original Wager too…

I have read that Wikipedia article, some parts several times, and I still can’t understand what that insane gibberish is trying to say. Are they saying that our current world is a simulation of the past, running in a virtual reality inside the evil AI? Or that the evil AI has traveled back in time to perform this torture? And were there really people on that forum terrified by whatever that idea is, or were they just larping along?

I believe the idea is that our current existence is real, but that, in The Future™, the Singularity would be able to replicate your consciousness and subject the copy to an artificial reality it would perceive as Hell.

Personally, I wouldn’t be frightened by that, because my theory of the self is that this future-constructed me wouldn’t be me. It’d be an entity with my memories and a sense of being me, but I won’t be that me, because when my body dies my consciousness goes with it, and I’ll never experience being that me any more than I could experience being Oscar Wilde or Miley Cyrus or a badger.

Okay, found a better reference here. Less Wrong is basically an extremely pretentious nutbag cult.

To the extent the whole thing is supposed to make a lick of sense (and I would argue it does not stand up to scrutiny), you are supposed to find it probable, or at least non-negligibly probable, that this is the “simulation”. Therefore, since hell is infinitely bad (and you cannot completely dismiss the possibility you might be going there), you had better believe in Evil God (and serve, too, of course).
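For what it’s worth, the arithmetic underneath is pure Pascal. A rough sketch, with my own made-up symbols (nothing Less Wrong formalizes): call the probability that the Basilisk scenario is real $p$, and the finite cost of a lifetime of service $c$. Then

$$
EU(\text{serve}) = -c, \qquad
EU(\text{defect}) = p\cdot(-\infty) + (1-p)\cdot 0 = -\infty \quad \text{for any } p > 0.
$$

Any nonzero $p$ makes defection infinitely worse, so “serve” supposedly dominates no matter how tiny $p$ is. Which is exactly where the many-gods objection bites, as below.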

Of course, if we’re not dismissing the possibility of hell, one also has to consider the possibility that one might be sent there precisely for believing in Evil God. Or for not supporting one’s current fellow humans and/or members of other species (whether or not they exist, as long as we’re not dismissing possibilities). If we’re in a simulation, maybe it’s a test. Even if we’re not in a simulation, maybe it’s a test. But we don’t know what the questions are, let alone what the right answers are.

The only sensible thing to do about that, it seems to me, is not to worry about it.

Mark Zuckerberg gave a speech today about distributing general intelligence to the people in general. I’m going to find a proper cite for it. I think this kind of AI could prove useful if people had the discipline to check other sources. Otherwise people might be Flocks of Seagulls like me, who take information and fly away really quickly.