A) You choose a number and roll a six-sided die up to 12 times. Each time it comes up you get $10; if it does not come up at all in 12 consecutive tries you lose $100.
B) You choose a number and roll a sixty-sided die 120 times. Each time it comes up you lose $100; if it does not come up at all in any of the 120 tries you win $100.
If my mathematics and logic are correct, no, they are not the same. In (A), your expected winnings would be 12 × 10 × 1/6 = $20, while your expected loss would be 100 × (5/6)[sup]12[/sup] ≈ $11.22. Net winnings: $8.78.
In (B), your expected loss would be 120 × 100 × 1/60 = $200, while your expected winnings would be 100 × (59/60)[sup]120[/sup] ≈ $13.31. Net winnings: -$186.69.
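These numbers are easy to double-check; a quick Python sketch of the arithmetic above, nothing more:

```python
# Game A: 12 rolls of a d6, +$10 per hit, -$100 only if zero hits in 12 rolls.
ev_win_a = 12 * 10 * (1 / 6)          # expected winnings from hits
ev_loss_a = 100 * (5 / 6) ** 12       # expected loss from the no-hit penalty
net_a = ev_win_a - ev_loss_a

# Game B: 120 rolls of a d60, -$100 per hit, +$100 only if zero hits in 120 rolls.
ev_loss_b = 120 * 100 * (1 / 60)      # expected loss from hits
ev_win_b = 100 * (59 / 60) ** 120     # expected win from the no-hit bonus
net_b = ev_win_b - ev_loss_b

print(round(net_a, 2))   # 8.78
print(round(net_b, 2))   # -186.69
```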
If you meant to say $10 in the first part of (B), then the total expected loss from hits, 120 × 10 × 1/60 = $20, would equal the total expected win in (A). If not, then it seems intuitively apparent that (B) is a worse proposition.
[edit]
Another point is that the maximum possible wins and losses in the two cases are clearly different. E.g. the maximum possible win in (A) is $120 (12 hits × $10), while in (B) it’s $100.
A) Say you roll 20 times and your number comes up only once, on the 8th try. On the 8th try you win $10 and the miss counter resets to zero; you then lose $100 on the 20th roll (12 consecutive misses), for a total of -$90. Had it come up twice, say on the 8th and 20th tries, you’d win a total of $20. If your number never comes up at all, the round ends after exactly 12 rolls, so you have to roll at least 12 times when there are no wins.
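Reading A this way (every hit pays $10 and resets the miss counter; 12 straight misses cost $100 and end the round), a quick Monte Carlo sketch in Python — the seed and the number of rounds are my own choices, not part of the game:

```python
import random

random.seed(0)

def play_round(sides=6, limit=12):
    """Roll until our number comes up (+$10) or `limit` straight misses (-$100)."""
    misses = 0
    while misses < limit:
        if random.randrange(sides) == 0:  # our chosen number came up
            return 10
        misses += 1
    return -100

rounds = 100_000
avg = sum(play_round() for _ in range(rounds)) / rounds
print(round(avg, 2))   # comes out near the exact expectation of about -$2.34 per round
```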
B) Each win is worth about $100/120 ≈ $0.83 per roll, while each loss costs $100.
Roughly speaking, B can be looked at this way: if you play continuously, you collect $100 for approximately every 120 throws. But you average about 2 hits per 120 throws, so you lose about $200 and thus net about -$100 each time.
More precisely, B can be treated as a negative binomial distribution. Let the number of failures be r = 120 and p = 1/60. The expected number of ‘successes’ (which in this case are hits on our number) is pr/(1-p) ≈ 2.03. In other words, you will lose about $203 for every $100 you make. The net loss per completed run is about $103.
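The negative-binomial mean, plus a simulation to sanity-check it (the trial count and seed are my own choices):

```python
import random

# Negative-binomial view of B: keep rolling a d60 until 120 misses accumulate;
# every hit along the way costs $100, and the completed run of misses pays $100.
r, p = 120, 1 / 60
mean_hits = p * r / (1 - p)               # expected hits before the 120th miss
print(round(mean_hits, 2))                # 2.03

random.seed(1)

def hits_before_r_misses(r, sides=60):
    """Count hits on our number before the r-th miss."""
    misses = hits = 0
    while misses < r:
        if random.randrange(sides) == 0:  # rolled our number
            hits += 1
        else:
            misses += 1
    return hits

trials = 20_000
avg = sum(hits_before_r_misses(r) for _ in range(trials)) / trials
print(round(avg, 2))                      # close to 2.03
```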
For A, we can recast it as a different, simpler game A’: if X occurs, you lose $100; otherwise, you win $10. [This is practically identical to A as stated in post #3, except that you get to pick a number. X is in this case “12 misses in a row”.]
X has probability p = (5/6)[sup]12[/sup] ≈ 0.1122; the probability of winning is 1 − p.
A’ has expectation -$100p + $10(1 − p). This game yields a net loss of about $2.34 per round.
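The A’ expectation worked out exactly, in a couple of lines of Python:

```python
# A' exactly: lose $100 if 12 straight misses occur (probability p), else win $10.
p = (5 / 6) ** 12
ev = -100 * p + 10 * (1 - p)
print(round(p, 4))    # 0.1122
print(round(ev, 2))   # -2.34
```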
The reason they’re not the same is that the variability in the number of wins scales as the square root of the number of trials, while you’re scaling the stakes by the actual number of trials.
Is B the same as C) you choose a number and roll a six-sided die 12 times; each time it comes up you lose $10, and if it does not come up at all in 12 tries you win $10? Are B and C the same?
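Putting the two expectations side by side helps; a Python sketch using the payoff rules as stated for B and C above:

```python
# C: 12 rolls of a d6, -$10 per hit, +$10 if no hits at all.
ev_c = -12 * 10 * (1 / 6) + 10 * (5 / 6) ** 12
# B: 120 rolls of a d60, -$100 per hit, +$100 if no hits at all.
ev_b = -120 * 100 * (1 / 60) + 100 * (59 / 60) ** 120

print(round(ev_c, 2))        # -18.88
print(round(ev_b, 2))        # -186.69
print(round(10 * ev_c, 2))   # -188.78: scaling C's stakes by 10 gets close to B,
                             # but not identical, because (5/6)**12 != (59/60)**120
```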
A and B are both bad, and B is much worse because you stand to lose about twice as much as you can win. Yet intuitively, B seems better when viewed as A vs C, in the sense that “avoiding” a number feels easier than “hitting” one. Is there a simple way to explain to a non-English-speaking person that, while they are both bad, B is much, much worse?