Nash’s work wasn’t the beginning or end of PD analysis. It was just a demonstration that his equilibrium theory describes a well-known finite-iteration game (the finitely iterated prisoner’s dilemma). A good deal of other analysis followed, showing that in iterated games, players learn each other’s patterns and cooperation tends to emerge. Robert Axelrod held a number of tournaments which demonstrated empirically that in iterated games, cooperative strategies do better while defecting strategies do worse. Nash didn’t contest this; he just observed that these were different conditions from the ones his equilibrium deals with.
I could’ve been clearer that I was referring to people communicating and revealing their preferences and tendencies over the iterated game, but I’m not really interested in debating what the “officially defined” prisoner’s dilemma is. There are variations that reveal a lot more interesting dynamics than the artificially limited version that Nash formally treated.
The point is that if we’re able to communicate with one another, and we’re able to infer one another’s strategies from past observation, then the preconditions for the Nash equilibrium aren’t met, so it’s stupid to behave as if we’re in an artificially constrained information environment that requires us to accept a sub-optimal equilibrium.
Why do you think that probability works differently at smaller scales? I would dispute this. In fact I’d suggest the opposite. Given a smaller group, people would be expected to have more intimate knowledge of each other’s habits and tendencies, which would tip the calculus toward cooperation (in cases where everyone has cooperative habits).
A lot of this discussion is really predicated on your basic assumptions about human nature. If you assume that nature always favors maximizing self-survival, then you’ll reflexively pick red. But a lot of that belief is a byproduct of so-called social Darwinism, and it doesn’t really hold if you look at deep evolutionary history, not just in humans but other species.
We’re not trapped in a zero-sum game where we can’t communicate, cooperate, or assess cooperativeness based on past behavior. Maybe the red-button/blue-button game assumes this, but it’s not stated as such, in which case it can’t really be described as the finite blind prisoner’s dilemma.
If we assume that prior consultation is possible, then if we can persuade everybody to press red, then nobody dies.
And if we can persuade fifty percent plus one to press blue, then nobody dies.
I don’t believe that it’s possible to persuade everybody to do anything at all. It is often possible to persuade just over half of people to do something.
If there’s communication and time to prepare, then as soon as it becomes clear red is in the majority, the blues will fall like dominoes; it’ll go from 51% to 99.9% in no time flat. Getting a comfortable majority for red, where there’s no individual downside, is a lot easier than convincing everybody that everyone else who is pinky-swearing they’ll vote blue really will.
Yup. But that’s not the prisoner’s dilemma. “The lesson” of the prisoner’s dilemma as it was defined was that it’s stupid not to betray.
The dilemma in the OP is a variant of that classic prisoner’s dilemma (as I said, the only real difference is that rather than the outcome being decided by everyone cooperating, it’s decided by the majority cooperating). So while it’s a really interesting, nuanced thought experiment, it’s no more representative of real life than the classic prisoner’s dilemma.
In real life there’s basically no such thing as a one-off decision like that; there is always a history, and future decisions, with the same people. So it’s not remotely valid to use it as an analogue for real-life society.
Because that’s literally how probability works. With 6 billion players, it’s both completely impossible to predict any exact outcome and vanishingly unlikely that the second “cooperate” case (everyone picking red) occurs — so unlikely as to be not worth considering.
Neither fact is true with 3 players. In fact “cooperate case B” (everyone chooses red) is far more likely (given it involves no one risking a significant chance of death) than “cooperate case A” (2 or 3 people choose blue). How is case B exploitive but case A is not?
Only if you believe that Nash’s work was the beginning and end of the prisoner’s dilemma problem, and ignore the next 50 years of research and exploration into different variations on the game. You can pretend Axelrod’s work never happened if you wish, and people often do when they want to rationalize selfishness, but the results are undeniable.
What law of probability would suggest that if a phenomenon has a 2/3rds probability in 30 million people, it would have some different probability in a sample of 3 people? Granted, in smaller samples, there’s a higher probability of outlier results in any given trial due to sampling effects, but they won’t be biased toward either side.
Maybe. But if the vehemence of the arguments in this thread is anything to go by, it’s also possible that if we were given a few months to discuss amongst ourselves as a species, the result would be several months of utter chaos that would leave significantly fewer people alive to be killed by the button overlords.
I think you may be missing another big difference. In the Prisoner’s Dilemma, the dilemma arises from the fact that while defecting is rational for each person, cooperation with your fellow prisoners yields a higher payoff for each. In the present scenario, however, the rational decision for each person (pressing red) also results in the highest payoff for the collective (100% survival). Unlike the Prisoner’s Dilemma, pushing red does not put others in a more dire predicament; the only thing affecting their fate is their own decision. It is only when people pick the irrational choice (the blue button) that the collective payoff gets reduced. It is therefore a false dilemma, masquerading as a real one.
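A minimal sketch of that structural difference, with illustrative payoff numbers I’m assuming for the classic PD (they’re not from this thread) and survival (1) or death (0) for a toy two-player majority version of the red/blue game:

```python
from itertools import product

# Classic two-player PD: table[(my_move, their_move)] -> my payoff.
# These specific numbers are an assumed, conventional example.
PD = {
    ("defect",    "defect"):    1,
    ("defect",    "cooperate"): 5,
    ("cooperate", "defect"):    0,
    ("cooperate", "cooperate"): 3,
}

def dominant(table):
    """Return a move that is never worse than the alternatives, whatever the opponent does."""
    moves = sorted({m for m, _ in table})
    for mine in moves:
        rivals = [m for m in moves if m != mine]
        if all(table[(mine, t)] >= table[(r, t)] for t in moves for r in rivals):
            return mine
    return None

def red_blue(mine, theirs):
    """Survival in a toy two-player red/blue game: red always lives; blue lives only with a strict majority of blues."""
    blues = [mine, theirs].count("blue")
    return 1 if mine == "red" or blues > 1 else 0

RB = {(m, t): red_blue(m, t) for m, t in product(["red", "blue"], repeat=2)}

# In the PD the dominant move (defect) leads to the collectively worst pair;
# in red/blue the dominant move (red) is also collectively optimal.
print(dominant(PD), PD[("defect", "defect")], PD[("cooperate", "cooperate")])  # defect 1 3
print(dominant(RB), RB[("red", "red")])                                        # red 1
```

In other words, both games have a dominant move, but only in the PD does following it leave everyone worse off than mutual cooperation would have.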
Except Nash’s work was “the end” of the prisoner’s dilemma as it was defined. It wasn’t a nice, heart-warming analogy; it was a mathematical proof of the best outcome in the case of the prisoner’s dilemma as a formally defined mathematical construct (with rational players, perfect knowledge, and no history or future). It just transpires that (despite what Nash and other libertarian thinkers postulated) the classic prisoner’s dilemma is not a very good model for real human society.
The model Axelrod and others came up with, of multiple prisoner’s dilemmas in which the participants can see their opponents’ history, is a much better model for human society.
But that’s not what is being suggested in the OP. The red/blue dilemma as it’s described here is a classic one-off prisoner’s dilemma, with no history, just like the one Nash made proofs about. So it’s interesting, but it can’t be used to infer real-life social outcomes.
Several people who have said they would press the blue button say they would do so even if they knew it would result in their deaths. They are essentially saying they would commit suicide: a great sacrifice for them that infinitesimally moves the needle towards their desired outcome of nobody dying. And in the unlikely event that the majority ends up pressing blue, everyone is better off.
So how is that fundamentally different from taking singular, definitive action now: leaving all their worldly goods to the poor and destitute (even if it leads to their death)? It would be a great sacrifice for them, but it would infinitesimally move the needle towards their desired outcome of preventing the deaths of others. And in the unlikely event that a majority of the population also did this, we would end poverty and hunger in an instant and everyone would be better off.
In short, I’m pointing out that unless they already live their lives like a Gandhi-like figure, this blue button-pushing altruism is inconsistent with how most people currently live their lives.
Absolutely. Again, that’s how probability works. If there is a 2/3 chance of each result, the probability that all three people get that result is (2/3)³ ≈ 29.6%, a bit under 30%.
If there are 6 billion people, the probability of them all getting the same result is effectively zero (literally so small that a dice calculator runs out of precision and rounds to zero).
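To make the arithmetic concrete, here’s a quick sketch (using the thread’s assumed 2/3 chance per person) of how the all-same-choice probability collapses as the number of players grows:

```python
# Probability that all n independent players make the same 2/3-likely choice.
# The 2/3 figure is the assumption used in this thread, not an empirical value.
p = 2 / 3
for n in (3, 10, 100, 6_000_000_000):
    print(n, p ** n)

# n=3 gives (2/3)**3 = 8/27, about 29.6% ("a bit under 30%"); by n=100 it is
# already around 2.5e-18, and at 6 billion it underflows to 0.0 in double
# precision -- exactly the "calculator rounds to zero" effect described above.
```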
But that’s not even the main reason the cases are different. With 6 billion people you are purely dependent on probability; you cannot meaningfully infer anything about individual choices. With three people you absolutely can. The chance that any of the three people will actually risk death is insignificant, and there is nothing exploitive about you choosing not to risk death.
As I pointed out from the very beginning of this thread, my first preferred option in the original scenario would be to not press any buttons (red or blue).
So if I were to instead be given the choice between choosing red or doing nothing, choosing not to press the button is not inconsistent with my original preference in the first scenario.
Of course, there is one big difference: not pressing a button in the first scenario doesn’t necessarily result in one’s death, while it could in the second. But since pressing a button in this case can only result in people’s deaths and actively cause harm—while doing nothing is relatively neutral—I think most people (including myself) would choose not to press the red button in that case—because while most people have some self-interest, most people are also not narcissistic psychopaths.
Though that’s not inherent in the model Nash’s equilibrium applies to. All that is, really, is a table of outcomes, represented as numbers for good or bad, for the four cases (you betray, they betray, both betray, both cooperate). The famous case was the one where all-cooperate has the best outcome, but a lone betrayal has a better outcome for the betrayer than all-betray.
But all such cases have a mathematically provable Nash equilibrium (an outcome where no player can provably do better by unilaterally changing their choice).
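That claim can be brute-force checked. A sketch (the payoff numbers are assumed conventional values, not from the thread) verifying that mutual betrayal is a Nash equilibrium of the famous table even though mutual cooperation pays everyone more:

```python
# table[(row_move, col_move)] -> (row payoff, col payoff); assumed example numbers.
PD = {
    ("defect",    "defect"):    (1, 1),
    ("defect",    "cooperate"): (5, 0),
    ("cooperate", "defect"):    (0, 5),
    ("cooperate", "cooperate"): (3, 3),
}

def is_nash(table, profile):
    """True if neither player can do better by unilaterally switching moves."""
    moves = {m for m, _ in table}
    row, col = profile
    row_ok = all(table[(row, col)][0] >= table[(r, col)][0] for r in moves)
    col_ok = all(table[(row, col)][1] >= table[(row, c)][1] for c in moves)
    return row_ok and col_ok

print(is_nash(PD, ("defect", "defect")))        # True: no better unilateral choice
print(is_nash(PD, ("cooperate", "cooperate")))  # False: either player gains by defecting
```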
My next preferred option in the original scenario would be to press the red button, so if this were the case there would be functionally no difference with respect to the original scenario (i.e. doing nothing or pressing red).
But as I mentioned upthread, an even more insidious twist would be adding a rule to the original scenario that anyone not pressing a button was guaranteed to die. In this scenario, I’d still pick red, because not picking any button is eliminated immediately (for me, anyway), and the remaining two options reduce to the original scenario (red or blue).
I think I would be burdened with even more survivor’s guilt, though.
Yup, though I suspect “no button” and “red button” combined would have a much bigger percentage IRL than just “red button” alone (or vice versa for blue).
Personally I read “is expected to press” as meaning that not pressing is equivalent to pressing blue, except you’re not included in the total of blues, so there is no advantage to anyone. Of course, as with all of these, it relies on the classic game-theory assumption that everyone is rational and perfectly understands the repercussions of their choice. The fact that even in this case a significant number of people wouldn’t press a button shows how bad that approximation is.
In fact the “black button” case discussed above (where, if more than one person presses the black button, anyone who presses it lives and anyone who doesn’t dies) is actually just the classic prisoner’s dilemma. The “all betray” case is 1.0; the “some betray” case is 1.0 for betrayers and 0.0 for everyone else. Technically the “all cooperate” case (no one presses black) isn’t represented by the classic prisoner’s dilemma table, but with 6 billion players that is effectively impossible, so it doesn’t affect the outcome.
Interestingly (and again showing the weakness of the “perfect rational players” assumption), I think the global outcome of the black button game is better than the red/blue one, even though the probability of all the non-black-pressers dying is greater than the probability of all the non-red-pressers dying. It will be obvious to almost everyone that not pressing black is a death sentence, so almost everyone will press black.
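A sketch of the black-button payoffs as described above (survival 1.0, death 0.0; what happens to a lone black-presser isn’t specified upthread, so I’m assuming they die):

```python
def black_payoff(pressed_black, num_black):
    """Survival for one player in the black-button game described above."""
    if num_black == 0:
        return 1.0          # "all cooperate": nobody pressed, everyone lives
    if pressed_black and num_black > 1:
        return 1.0          # betrayers live once at least two have betrayed
    return 0.0              # non-pressers die; a lone presser dying is my assumption

# With billions of players the all-cooperate row is effectively unreachable,
# so not pressing black is a near-certain death sentence:
print(black_payoff(False, 2_000_000_000))  # 0.0
print(black_payoff(True,  2_000_000_000))  # 1.0
```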
That’s not a single decision, but multiple decisions that would take an extreme amount of effort. It’s not like a single button press, which is easy and requires only one conscious decision.
You asked, I answered. If you’re not calling the blue button pressers horrible people, then you wouldn’t want them to be hypocrites. It thus follows you should welcome explanations that allow them not to be.
Selling everything and giving it to the poor is not a thing that’s actually easy to do. And, remember, these people aren’t dying after they do this: they have to keep on living. Dying is an out from the consequences of the choice, but in a way where they aren’t pulling the trigger – someone else is actually doing the killing, and doing so because they chose the altruistic option. And they at least believe they may not actually die.
Your argument that they are inconsistent is flawed because you’re only looking at it in a narrow way, and not the full reality. Button pressing is easy. Living with the consequences of perfect radical altruism is not.
And you don’t know what they have in fact willingly given up for others, either.
I think this is the problem with the entire experiment. I think people have a sense of right (altruistic) and wrong (selfish) and they are trying to look for those things in the two options. The part I struggle with is, unlike most hypothetical experiment examples, the selfish option in this one does not come at the expense of others. If everyone chooses the most selfish option, then everyone is just as well off as if everyone chooses the altruistic option. Normally that is not the case in these experiments. I think people are picking blue due to an assumption that this is like other experiments.
Imagine the same experiment, but there is a counter on the wall that reads, “Number of people so far that have selected blue: 0.” If zero people have selected blue, then no one is set to die. Are you going to be the first person to pick blue, and therefore create a hazard that requires half the population to vote to save you, or are you going to pick red so that no one has to die? The hazard is created by the first person to pick blue. Picking blue is only altruistic if someone before you already picked blue and, this is the part I am struggling with, absolutely no one should ever be picking blue to begin with. Selecting blue kills people. If you don’t select blue, no one dies. So, who are these people selecting blue, causing death where no death needs to exist?
I brought up examples of babies and toddlers previously because I spend a lot of time with children of that age, and they are the only people I regularly encounter who lack the wisdom and situational awareness not to press a death button when it is clearly the better choice to press the other button. I struggle to understand how seemingly intelligent adults (as I assume everyone on this board would be) would be unable to see that people only die if someone presses blue, so no one should be pressing blue. We keep hearing that 25 to 40% of people will press blue, and I struggle to see how that many people can fail to understand the premise of the experiment. Even the dumb people I know are intelligent enough not to make that mistake. No one is going to die unless someone presses blue. Are there really that many people who don’t understand the experiment and can’t help but make an unnecessary self-sacrifice?
This is not a case of the red button harming others. This is not even a case of the blue button definitely helping others (because 50% is required for the blue button to do anything). And the part I find most frustrating about the blue button pushers is they don’t seem to acknowledge how much danger they are putting everyone else in. The only reason anyone would even consider pushing the blue button is because we assume other people will have already pushed it. By pushing the blue button you are asking other people to risk their lives in order to save you. But the only reason you need to be saved is because you purposely put your life at risk.
It is rude at best (and absolutely not altruistic of me) to step off a cliff and ask for help from the people around me as I hang off the edge of the cliff with only my grip of the ledge to save me from death. They now need to risk their lives to save me and they would not need to do that if I had not purposely stepped off the cliff to begin with. I gave an example of a toddler pushing the blue button before a parent could stop him because it would take an accidental pushing of the blue button for the scenario to be at all justified. No one should be creating a hazard for those around them by pushing the blue button to begin with. The counter for how many people have pushed the blue button should always remain zero and we should recognize the first person to press the blue button is far from altruistic.
If that is your objection, what if we simplify it further and simply consider committing suicide? It’s as easy as pulling the trigger of a gun or taking an overdose of sleeping pills.
In an overcrowded world, each additional person contributes to the immiseration of others. Committing suicide would alleviate that (at great personal sacrifice). Recall that following the Black Death in Western Europe, the standard of living markedly increased due to labor shortages, and serfdom was eliminated in many places.