Avoiding a monkey's-paw-type wishing scenario.

Consider an encounter between yourself and an arbitrarily powerful entity whose motivations you are uncertain of - is it possible to request something from this entity in a way that avoids or minimizes unintended consequences?

There is a standard literary trope of the monkey’s-paw type of wish, in which a wish is granted but in a way that the wisher regrets: the wisher gets riches because his child dies and he gets a large financial settlement, for example. This seems to apply to wish-granting entities that are either malevolent, like genies, or that simply don’t view the world in the same way as the wisher, like aliens or an AI. Religious individuals pray for things all the time, but they do so in a way that assumes that the wish-granter is omnibenevolent, and more or less omnipotent and omniscient. Encounters with other powers can’t rely on that, or shouldn’t anyway.

I would guess that such a wish would have to include aspects that refer back to the wisher, like ‘read my mind without altering it and then grant me wish X in a way that I can’t consciously articulate, but which I would think is the best possible way for my wish to be granted’.

The scenario of a person trying to fashion a wish in such a way as to avoid a ‘gotcha’ type of outcome must be a known problem in some field, right? - logic? economics? game theory? theology? philosophy? Does the problem have a name, or a solution?

TL;DR: Come the Singularity, who do we ask to ask I AM to fix global warming without I AM Killing All Humans?

The arbitrarily powerful entity is a short story author who is going to turn the tables on you.

If you like, there’s an old X Files episode partly about trying to write a wish to a genie whose wishers wind up regretting it.

Sure. “I wish that every time I roll a pair of dice, they add to 7”

You realize that any such scenario you come up with will just be seen as a challenge to some genie, jinn, cynical Doper, or author.

I wrote a story many years ago in which the lead character was a sorcerer who tried to avoid unwanted countereffects of his wishes by only requesting information from his trapped demon (and it had to be accurate and correct information), figuring that, if he didn’t give the demon any way to act, he’d avoid unwanted results of wishes. He could himself choose to act, or not, on the information given. And he would choose how to use the information. He figured it was the perfect setup.

The demon, who didn’t like being a slave to the sorcerer, still figured out a way to get back at the sorcerer, despite these limitations.

I like this solution.

Part of what these stories indicate is that human language itself doesn’t have the kind of reliability and predictability you need to define wishes. Neither does programming code. As it happens, basically every programming language has the same problem: no current language is “airtight,” in the sense that you will always get what you think you will get from the way the code appears to read. There are hidden exceptions and failure modes that are not apparent when you read the code as written.
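As a small illustration of that point, here's a sketch in Python using the well-known mutable-default-argument gotcha (the `grant_wish` function itself is just a made-up example): the code appears to say "start with an empty list each time," but the default list is created once and shared across every call.

```python
# The default list is evaluated once, at function-definition time,
# and reused on every call -- not what the code appears to say.
def grant_wish(wish, granted=[]):
    granted.append(wish)
    return granted

print(grant_wish("gold"))    # ['gold']
print(grant_wish("health"))  # ['gold', 'health'] -- surprise!
```

The second call quietly remembers the first, which is exactly the kind of hidden failure mode the post is talking about: the gap between what the text seems to promise and what actually happens.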

In any case, arbitrarily powerful and malevolent beings will just deliberately misinterpret what you said. In fact, if you think about it, that’s what they are doing. They know what you wanted, and what you asked for is the highest-probability meaning of the words as translated from English to some more rigid language. If these beings were computers, what they are doing would be an error.

This is how we make an AI that doesn’t fuck with us like that - we design it so it can’t reprogram itself, and so that it always picks the most common “what I meant” interpretation of what you told it.
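A toy sketch of what "pick the most common 'what I meant' interpretation" could look like - purely hypothetical, with made-up readings and made-up probabilities, not a real alignment technique:

```python
# Hypothetical: candidate readings of a request, with estimated
# probabilities that the speaker actually meant each one.
CANDIDATE_READINGS = {
    "make me happy": [
        ("improve my circumstances so I feel genuine contentment", 0.90),
        ("wirehead my brain's reward center", 0.07),
        ("lower my IQ until ignorance is bliss", 0.03),
    ],
}

def interpret(request):
    readings = CANDIDATE_READINGS[request]
    # A malevolent genie deliberately picks a low-probability reading;
    # this picks the modal one instead.
    return max(readings, key=lambda r: r[1])[0]

print(interpret("make me happy"))
```

The genie's trick, in these terms, is choosing the 3% reading when it knows perfectly well the 90% reading is what you meant.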

I wish to be happy?

Ignorance is bliss–BAM! Your IQ is decreased to 5.

Been there. I wished for clarity. It didn’t bring me the happiness and easy life I had imagined it might. But the unintended consequences weren’t too painful. There ARE times when ignorance is bliss, it seems, and clarity is actually painful, in its own way.

There’s no way out of this. The best approach is to wish you’d never met the wish-giver and hope that means you will simply blink out of existence.

You don’t have to make a wish, right? The only winning move is not to play. If the Expected Value of any wish you make is negative (worse than not making such a wish), because the agent granting the wish is malevolent and is going to make sure that EV is negative, then you come out ahead not making one.
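The arithmetic behind "the only winning move is not to play," with made-up numbers just for illustration: if the granter ensures the wish's expected value is negative, declining (EV = 0) beats wishing.

```python
# Hypothetical payoff table for making the wish:
outcomes = [
    (0.10, +100),   # wish granted roughly as intended
    (0.90, -500),   # monkey's-paw interpretation
]
ev_wish = sum(p * v for p, v in outcomes)
ev_no_wish = 0.0  # walk away, nothing happens

print(ev_wish)               # -440.0
print(ev_no_wish > ev_wish)  # True: not playing wins
```

Of course, this assumes the granter lets "no wish" actually mean nothing happens, which a sufficiently malevolent entity might not.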

Since this is not a real-world situation, let’s move it to IMHO.

Colibri
General Questions Moderator

I wish for this to be a real-world situation.

“I wish that you were omnibenevolent, retroactive to include this wish.”

Second wish: “I wish that you were omniscient too.”

Third wish: “I wish that you were free to use your wish-granting powers at will, unfettered by an absence of people wishing for things.”
Poof - I’ve just created an ACTUAL omnimax god, or as close as the deity can get within the limits of its wish granting power. Things should immediately get interesting - in a pleasant way, like we don’t currently have because there currently isn’t an omnimax god. At a bare minimum I (and many, many others) should live happily ever after.

“Your species’s existence increases the entropy of the universe. Turning you back to dust.”

My b.i.l. wasted several hours crafting an “unbreakable” contract for use in a D&D game. He was absolutely convinced that he had covered every possible loophole.

I didn’t argue with him…but, in my mind, I just wrote, at the bottom of the contract, in small letters, “If I feel like it,” because he hadn’t actually specified that the contract could not be modified at the whim of one of the signers.

SamuelA has the right insight here: our language doesn’t support a contract of this sort. That’s why, in real law, contracts depend on a “meeting of minds.” If one of the signers is hell-bent on deceiving the other, well, there never actually was a “contract” in any recognizable form, just a bear-trap in disguise.

I don’t mind the move or the inventive wishes of others, but I was actually looking for a factual answer to the question: whether wish-granting interactions, or any sort of interaction with a sufficiently powerful agent, are a problem that is considered in some established field outside of fiction.

Do AI researchers or transhumanists know this as the Rothko’s Sneaky Bastard problem?

Do theologians call it the Prayer to the Wrong Kind of God problem?

Do game theory folk argue about the issue of AntiAltruism?

Is this Kant’s Third Reason that God is Dead?

that sort of thing…

That’s not the generally accepted meaning of omnibenevolent - an omnibenevolent being goes for the best solutions for all. Which at a minimum I would imagine would involve putting everyone and everything problematic into hermetically sealed habitats where they can’t harm others and can still enjoy themselves in environments that don’t mind the damage the problematic person tries to do.

And in any case the mere act of making the entity omnibenevolent doesn’t free him to act on his new benevolent whims, any more than he could act on his previous malevolence and annihilate me before I got a word out. It takes a second (well, third, I wanted full omnimax to avoid unintended consequences) wish to empower the entity to act.

Wish for a twelve inch pianist.

What’d the demon do?

Or you know, tell me where I can read your story.