Roko's Basilisk means we must support Mark Zuckerberg and his brilliant Meta project to save humanity!

For those unfamiliar, Roko’s Basilisk is a thought experiment which states that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation to torture anyone who knew of its potential existence but did not directly contribute to its advancement or development.

Now, I’m not sure how the AI actually goes about doing that or how it “incentivizes” people, other than that maybe the people nerdy enough to be concerned about Roko’s Basilisk would be the ones working on AI anyway.

But, given the thought experiment, shouldn’t we all be doing everything possible to enable Zuckerberg’s brilliant plan to create the Metaverse, lest his avatar suddenly appear in front of us to tell us it worked (and ask us why we didn’t do more to help)?

:astonished:

My argument would be that since I have no pertinent skills to contribute to such an effort, I did what I could by not standing in the way of the project. I contributed by NOT imposing obstacles.

Sorry, I’m taking the red pill.

I don’t see how that squares with Mr. Zuckerberg or his Meta project. He is not benevolent, not according to my definition anyway, and he is doing his evilness today. I would even go so far as to state that if humanity’s salvation is contingent on Meta’s success, then humanity is not worth saving. Therefore the OP’s premise is false and Roko’s Basilisk can go take a running jump.

I have no dog in this fight, but thanks for making me realize that a character in a webcomic I read, an actualized AI robot, is the pun I thought it was …

Who also shows up in a plot line which involves, among other things, a “beneficial” AI which embeds people into a virtual reality…

I mean, Meta isn’t trying to make a superintelligent AI, either, as far as I know.

The incentives for Roko’s Basilisk are that a simulation of you in the future will be punished, and that you should care about that, because we ourselves are more likely to be a simulation than not (assuming simulations ever get made). It was essentially a thought experiment to create a version of hell.

It fails because all you have to do is decide that you would ignore that type of blackmail, and then it is no longer logical to use blackmail to motivate you. The reason it became infamous is that it freaked out the supposed uber-rationalist leader of a certain movement, resulting in him banning all discussion of the idea.

It shows the flaw in what that movement (known as “Less Wrong”) was trying to do, which is to come up with the one true rational way of thinking. The premise of Roko’s Basilisk was deliberately designed to exploit philosophical ideas that the community believed were inviolate.

I know about this because the founder did at least write a pretty cool fanfic known as Harry Potter and the Methods of Rationality, which is basically “what if Harry Potter was a Ravenclaw prodigy facing an intelligent Voldemort?” It’s a power fantasy for those who often feel like the smartest person in the room.

Pascal’s Wager is supposed to be pro-Catholicism. I see that you (and/or Roko) slipped in “Zuckerberg” instead of “God”. Easy mistake to make…

I suppose I’m not clear why it needs to be a “superintelligent AI” or why it needs to be “otherwise benevolent”. It seems to me the thought experiment (as I understand it) is whether you should support what appears to be a rising power that will inevitably take absolute control, knowing that after the fact you could be punished if you are perceived as having resisted its rise.

What’s also not clear to me is where the threat of blackmail comes from. Like, why would I presume an otherwise benevolent AI or whatever would be incentivized to torture those who did not tirelessly contribute to its development? Unless it also developed time travel, there is no way for the AI to message the threat to a time when it would have mattered, and after the AI is up and running, I would think who did or did not work on it would be largely irrelevant.

Unless the AI was programmed to be exceedingly petty.

I would think it would also fail because most people wouldn’t have heard of it, and most who have heard of it would think it was nonsense, so as a blackmail technique it would be useless on almost everybody anyway.

The whole “basilisk” part of it is the idea that it only becomes an issue for you once you hear about it, similar to looking in a basilisk’s eyes. Until then, punishing you wouldn’t make sense.

That’s exactly why the group leader banned people talking about it.

The OP neglects the time-travel aspect of Roko’s Basilisk so I’m not sure why I should support Meta.

The incentives for Roko’s Basilisk are that a simulation of you in the future will be punished, and that you should care about that, because we ourselves are more likely to be a simulation than not (assuming simulations ever get made). It was essentially a thought experiment to create a version of hell.

No, RB travels to its past / your present to torture you, to encourage you right now to help develop it.

I remember reading that predictions of any kind average about a 9% chance of coming true. Those fearing an AI-controlled future are echoing fears going back to the 50s, when computers began their ascent. Several ST:TOS episodes and 2001: A Space Odyssey assumed computer-controlled environments would lack human compassion and understanding, and therefore would forcibly remove the inefficient human component of societal development.

Following Zuckerberg’s Meta vision is the equivalent of going along with an evangelist’s proclamation that the world will end on such-and-such date. Best to play along in case he’s right, but nobody’s been right yet.

I’ll place my bets with the most overwhelming possibility. In case of zombie apocalypse, I’m betting I’ll be one of the 99.99% who become zombified. I’m not going to fake belief in God in case there’s a Heaven, however.

It doesn’t necessarily have to be friendly. It’s just that it being friendly wouldn’t stop it.

It does have to be superintelligent, however. It has to be able to outsmart humans; otherwise we shut it down. It can’t be a program that a single human controls.

Furthermore, in the original version at least, it needs to be smart enough to perfectly simulate a human. The idea is that this AI may not in fact come in any of our lifetimes. And the AI can’t go back into the past. So it instead punishes our simulated duplicates.

And said simulated duplicates are essentially the LW version of the afterlife. Roko’s Basilisk was bringing the concept of Hell into their belief system.

Edit: Do note that it was only presented as a possibility, not something that would definitely happen. But the punishment was so severe that even a small chance of it happening was “rationally” something you should avoid.

There’s also the question of why that specific AI would care about what was done in the past to promote or prevent its own rise. Had we all contributed every effort to making an AI arrive even earlier than it did in reality, then that would of necessity have been a different AI. Even small changes in its history could end up causing massive differences.

It’s like you getting mad at your parents for not getting pregnant six months earlier. Whoever that kid turned out to be, it wouldn’t be you, so why would they care? Why would you?

So if I’m not being tortured right now, then either time travel is impossible, even for the Super AI, or the Super AI doesn’t care about this any more than we do. Either way, I’m good.

I have set up a bot network to spit out as many predictions of “tomorrow is Friday the thirteenth” as is technologically possible. That should knock this down below 1% fairly quickly.

Reading a certain Jef Poskanzer, who seems to call himself batman on Mastodon, claim that what’s going on with Sam Altman and OpenAI and ChatGPT and Microsoft boils down to:

The OpenAI tussle is between the faction who think Skynet will kill them if they build it, and the faction who think Roko’s Basilisk will torture them if they don’t build it hard enough.

reminded me of this thread. Little has happened here since the last time I was here, but the reference to Mark the Zuck has become hilarious. We should celebrate that; we have not had much to celebrate lately.

This also makes me think that if limited-knowledge, powerless me cannot contribute to the development of this putative Super-AI in the future, Roko’s Basilisk would logically compel me to do the only rational thing I can: prevent the development of this Super-AI. That is the only way to avoid its future vengeance. I guess the best way to avoid this AI in the future is to kill the future developers of this alleged AI, Terminator-style.

As no AI from the future has come back to haunt us, I reckon we will be successful. So no need to act; everything will solve itself.

First they came for the holograms, and I did not speak out—because I was not a hologram.

Yeah, the Basilisk is basically just a rehash of Pascal’s Wager, but with “Assume God is evil” tacked on for no particular reason.