The recent post about another paradox reminded me of something I came across some years ago in a column by Martin Gardner in Scientific American. It may have been called “Newcomb’s Paradox”. At the time, there was no ‘resolution’ to it (not that such a thing is possible in a true paradox). I’m curious to know if there have been any developments or if any of the Dopers would care to offer their insights.
The problem is this:
**Suppose there is a very clever, very insightful being who is an expert on predicting human behaviour. In fact, this being has been nearly 100% accurate in predicting the behaviour of individual humans since records were first kept, eons ago. Thankfully, the being is benevolent and wants to reward humans, although he doesn’t like greediness. He offers you, a human, the following scenario:
There are two closed boxes “A” and “B”. The being gives you a choice. You can take both boxes or only box “B”.
Now, in box “A” the being places $1000.
In box “B” he places either $0 or $1,000,000 depending on whether he has predicted that you will take both boxes (you’re greedy) or just box “B” respectively (not greedy).
You make your decision after he places the cash.**
What would you choose in order to maximize your gain?
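For anyone who wants to see why the two camps each think they're obviously right, here's a small sketch of the expected-value side of the argument. The predictor accuracy `p` is my own illustrative parameter; the puzzle just says "nearly 100%".

```python
def expected_payoffs(p):
    """Return (one_box, two_box) expected winnings, given the
    predictor is correct with probability p."""
    # One-boxing: with probability p the being foresaw it and put
    # $1,000,000 in box B; otherwise B is empty.
    one_box = p * 1_000_000
    # Two-boxing: you always get the $1,000 in box A; with probability
    # (1 - p) the being wrongly predicted one-boxing, so B also holds
    # the $1,000,000.
    two_box = 1_000 + (1 - p) * 1_000_000
    return one_box, two_box

one, two = expected_payoffs(0.99)
print(f"one box: ${one:,.0f}, two boxes: ${two:,.0f}")
```

At high accuracy, one-boxing wins by a mile on expected value. But the other camp's dominance argument is just as simple: the cash is already in the boxes when you choose, so whatever box B contains, taking both gets you $1,000 more. Hence the standoff.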
The main thing I recall about this paradox was that whichever “side” people take, they are convinced that those taking the other viewpoint are being either silly, argumentative or just plain stupid!
(BTW, I apologize if this “paradox” is now passé or hackneyed. If so, I missed it.)