Selecting randomly from five options, using only a coin

That method gives the first object a one-half chance of being eliminated in the first round. That sounds wrong to me, but I haven’t thought it through completely.

-FrL-

True, but it also gives every object a 1/2 chance of being eliminated in the 1st round.

First round:
1 - h
2 - t
3 - t
4 - h
5 - h

Objects 2, 3 are eliminated; 1, 4, 5 are still in.

Round 2…

Just to add: you don’t just keep cycling till there is one left. If you start a round you have to complete it, and if all objects happen to be eliminated in a round, then that round can’t count.

BTW, for the technique I would like to refer to as the ElfNasty ShagBabe Method :), there’s an easy way to reduce the likelihood of having to redo your trial: find a multiple of your number of outcomes that comes close to (but no bigger than) a power of two. E.g. if you have 5 outcomes and three flips, you have a 3/8 chance that you’ll hit an unclaimed “raffle ticket” and have to do it over, but if you do five flips and assign each outcome six numbers, you only have a 2/32 chance of having to start over.
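
In case it helps, here’s a rough Python sketch of that raffle-ticket trick for the 5-outcome case (the function names are my own, and random.randrange just stands in for a fair coin):

import random

def coin_flip():
    """One fair coin flip: 0 = tails, 1 = heads."""
    return random.randrange(2)

def pick_one_of_five():
    """Flip 5 coins for a number 0..31, give each of the 5 options six
    "raffle tickets" (30 tickets total), and redo the whole trial on the
    2 leftover numbers (a 2/32 chance)."""
    while True:
        ticket = 0
        for _ in range(5):
            ticket = 2 * ticket + coin_flip()   # build a 5-bit number 0..31
        if ticket < 30:                         # 30 = 5 options * 6 tickets each
            return ticket // 6 + 1              # tickets 0-5 -> option 1, 6-11 -> option 2, ...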

That makes it make more sense to me.

-FrL-

There is a technique to do this, but I don’t know if you’d call it simple. I don’t remember all the details, but it goes something like this.

Flip a coin some number of times.

Code the pattern of heads/tails as a binary fraction and convert it to a base-10 decimal.

In the OP’s case, if the decimal is 0 thru .2, you’ve chosen option 1.
.2 thru .4, option 2.

Etc.

Hit enter too soon.

Alternatively, code the 5 probability cutoffs beforehand into binary fractions and start flipping. When the pattern puts you between two adjacent cutoffs, you’re through.
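
I read this as a kind of interval halving: each flip narrows down where the binary fraction can land, and you stop as soon as the remaining interval fits entirely inside one of the five equal slots. A rough Python sketch of that reading (the names and the equal-fifths cutoffs are my own assumptions):

import random

def pick_by_binary_fraction(n=5):
    """Flip coins to build a binary fraction x = 0.b1b2b3... in [0, 1).
    After k flips, x is pinned down to the interval [num/2^k, (num+1)/2^k).
    Stop as soon as that whole interval fits inside a single slot
    [j/n, (j+1)/n); option j+1 is the answer."""
    num, denom = 0, 1
    while True:
        num = 2 * num + random.randrange(2)      # append one random bit
        denom *= 2
        lo_slot = num * n // denom               # lowest slot the interval touches
        hi_slot = ((num + 1) * n - 1) // denom   # highest slot the interval touches
        if lo_slot == hi_slot:
            return lo_slot + 1                   # options numbered 1..n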

The problem with your system, rowrbazzle, is that unless your number of choices is a power of two or a factor of a power of two, one or more of the outcomes will be overrepresented. E.g. if you have five choices, then the probability of 2 of them will be 1/8, and the probability of 3 of them will be 1/4.

Wow. Am I chopped liver. :frowning:

Sorry, somehow I discounted yours because I read it as you assigning an option to each of five different outcomes of four coin flips and ignoring the rest (I didn’t catch that each line had four separate outcomes on it). Now I see that your suggestion is exactly the same as mine, just not specified in a general way. I dub this the Zut Method.

er, three

If we generalize this method to flip the coin N times in each round, the question becomes: what value of N is best, in terms of the smallest expected number of coin flips?

If we flip the coin N times per round, there are 2^N possible outcomes, and the number of “left over” outcomes (that will cause us to do another round) is M = 2^N mod 5

The probability of picking one of these “left over” outcomes, and thus having to do one more round, is p = M/2^N

So, the probability of finishing on the K^th round is (1-p)*p^(K-1)
This means that the expected number of coin flips before we finish is



E[coin flips] = Sum    (Prob of exactly K rounds) * (Num coin flips in K rounds)
              = Sum_{K=1 to Infinity}    (p^(K-1)*(1-p)) * (N*K)
              = N*(1-p)*Sum_{K=1 to Infinity}  p^(K-1)*K
              = N*(1-p)*1/(1-p)^2
              = N/(1-p)


So, the expected number of coin flips is N/(1-p), which can be written explicitly as



                              N
E[coin flips] = -------------------------
                           2^N mod 5
                  1  -  ------------------
                              2^N


Plugging in N=3, 4, 5, etc, we get



       N        E[coin flips]
------------------------------
       3         4.8000
       4         4.2667
       5         5.3333
       6         6.4000
       7         7.1680
       8         8.0314
       9         9.0353
      10        10.0392


So, the minimum is at N = 4.

Therefore, it is best to flip your coin four times per round, which will result in an expected number of coin flips equal to 4.27
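
For what it’s worth, here are a few lines of Python that just plug N into that formula and reproduce the table (the names are mine):

def expected_flips(n_flips, n_choices=5):
    """N / (1 - p), where p = (2^N mod 5) / 2^N is the chance a round is wasted."""
    outcomes = 2 ** n_flips
    p_redo = (outcomes % n_choices) / outcomes
    return n_flips / (1 - p_redo)

for n in range(3, 11):
    print(n, round(expected_flips(n), 4))   # the minimum is at n = 4 (about 4.2667)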

Actually you can do it in one flip.

Assign each of the options a block of compass headings of 72 degrees: 0-72, 72-144, 144-216, etc. Draw an arrow on the coin and flip. You then read the compass heading of the arrow and see which of the five blocks of headings it corresponds to.

This is only partly in jest… :wink:

Any even number can in principle be made into a die. If it’s a multiple of 4, then you make a bipyramid, like the d8. If it’s not, then you can make, as you describe it, a spindle shape like the d10. But for much more than 10, the faces of such a die start to get impractical, since they’re so long and skinny.

On the original question, I can do better than the 4-coin method. First, flip three coins to get a number from 1 to 8. If the result is 1-5, you’re done. If it’s 6, 7, or 8, though, you don’t just start over, since that would be wasting randomness. Subtract 5, and you have a number from 1 to 3. Flip another coin, and now you have 1-6. Now, if the result is 1-5, take it, and if it’s a 6, only then do you start over from scratch. I calculate that this method would average only 3.6 flips.
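
If I’ve understood the method right, here’s a quick Python sketch of it (names are mine; the Monte Carlo at the end is just a sanity check that the five options come out about equally often and that the average is near 3.6 flips):

import random

def flip():
    """One fair coin flip: 0 or 1."""
    return random.randrange(2)

def pick_one_of_five():
    """Returns (choice, flips_used).  Flip three coins for a number 1..8;
    1-5 is the answer.  On 6-8, keep the leftover randomness (a value 1..3),
    flip one more coin to stretch it to 1..6 and take 1-5; only a 6 forces
    a restart from scratch."""
    flips = 0
    while True:
        r = 4 * flip() + 2 * flip() + flip() + 1   # uniform on 1..8
        flips += 3
        if r <= 5:
            return r, flips
        r = 2 * (r - 6) + flip() + 1               # leftover 1..3 stretched to 1..6
        flips += 1
        if r <= 5:
            return r, flips
        # r == 6: start over from scratch

trials = 200000
counts, total = [0] * 5, 0
for _ in range(trials):
    choice, used = pick_one_of_five()
    counts[choice - 1] += 1
    total += used
print(counts, total / trials)   # roughly equal counts, average near 3.6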

Damn, I was just coming here to post something similar.

The way I thought about it was that with three coin flips:

  • If you get the 1-5, you’re done.
  • If you get 6, you will do a face-off between choices 1 and 2, with one more coin flip
  • If you get 7, you will do a face-off between choices 3 and 4, with one more coin flip
  • If you get 8, you will do a face-off between choices 5 and X, with one more coin flip

If you get 8 and in the face-off X wins, you start again with three coin flips.

This is, in effect, an analogous method to Chronos’ proposal.

By my calculations, the expected number of coin flips is 3.6178

I wonder if one can do better than this.

You can make practical rolling-pin style dice of pretty much any size.

Huh, I got 18/5, or exactly 3.6, in my calculation. I figured as follows:

There is a 5/8 chance of needing exactly 3 flips, and a 3/8 chance of needing more.
If you need more, there’s a 5/6 chance of needing exactly 4, and a 1/6 chance (or 1/16 of the total) of having to start over.
If you have to start over, then at that point your expected number of additional flips is exactly equal to the expected number at the beginning, for a total of 4 plus that.

So if n is the expected number of flips, then we have:

n = (5/8)3 + (3/8)(5/6)4 + (1/16)(n + 4).

Solving that for n, I get n = 18/5. What was your method?

And (:3=, I’m not sure exactly what that link was supposed to be, but whatever it was, it didn’t work. I think I can picture the type of device you’re talking about, though: It’d just be an n-sided, almost cylindrical, prism, perhaps with some sort of convex endcaps so the ends wouldn’t be stable, I imagine? Even for those, for large numbers, it might be difficult to tell exactly which face is on top.

I agree with you.

After I posted, I realized I made a miscalculation, and that the value was 3.6, just like your value.

I’m still curious if we can beat 3.6 coin flips.

I don’t think 3.6 can be beat. This isn’t really a proper proof, though.

For the probabilities to be equal, none of them can be greater than 1/5. 1/5 in binary is 0.00110011… as a repeating binary fraction.

A first approximation is 0.0011, or 3/16 for each number. We know the minimum number of flips for any selection is three. We can assign 2/16 (one three-flip outcome) to each of our numbers, giving them all an equal chance of being settled in three flips. The remaining 1/16 per number, 5/16 in all, gets ‘assigned’ on the fourth flip.

Our repeating sequence is 4 digits long; thus the closest approximation to 1/5 will also repeat every four flips. We repeat our method in this way, so each pass through contributes a factor of 0.0001, or 1/16, times the same 3/16 approximation.
I would hypothesize that the best solution (i.e. lowest expected number of flips) will be one that approximates the fraction most closely. The simple ‘three-flips, repeat’ is using 0.001… or 1/7 and clearly is not ideal.

I also think that the best-case expectation can be calculated from the repeating fraction. It seems that, reading the digits left to right, any 1 in the repeating number indicates a flip count at which choices can be discriminated (for all choices). The length of this number is how long until repetition is necessary. Since this method of repeating yields the closest approximation to the repeating fraction the longer it takes, this method will be optimal, if my first assumption is correct.

The equation for n choices, with a repeating fraction (1/n converted to binary) of length l, is then:

E = (E + l)/2^l + n * ( SUM (k=1 to l) : k*I(k)/2^k )

(where I(k) = the kth digit of the repeating fraction, reading left to right)

Using this case as an example:

We have 1/5, which is 0011 repeating. The equation would be:
E = (E + 4)/2^4 + 5*( 3/2^3 + 4/2^4 )

which solves to E = 18/5 = 3.6, matching the above.

If we had 9 choices, the binary equivalent is 0.000111000111…
My guess for minimum expectation is :

E = (E + 6)/2^6 + 9 * ( 4/2^4 + 5/2^5 + 6/2^6 )

or 4 2/3
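
If I’ve followed the formula correctly, it’s easy to turn into a few lines of Python and check both cases exactly (this assumes n is odd, so that 1/n has a purely repeating binary expansion; the function names are mine):

from fractions import Fraction

def repeating_block(n):
    """Binary digits of the repeating block of 1/n (n odd), by long division."""
    digits, x = [], 1
    while True:
        x *= 2
        digits.append(x // n)
        x %= n
        if x == 1:                      # remainder is back to 1: block complete
            return digits

def expected_flips(n):
    """Solve E = (E + l)/2^l + n * sum_k k*I(k)/2^k for E, where I(k) is the
    kth digit (left to right) of the repeating block and l its length."""
    digits = repeating_block(n)
    l = len(digits)
    s = sum(Fraction(k * d, 2 ** k) for k, d in enumerate(digits, start=1))
    return (Fraction(l, 2 ** l) + n * s) / (1 - Fraction(1, 2 ** l))

print(expected_flips(5))   # 18/5 = 3.6
print(expected_flips(9))   # 14/3 = 4 2/3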

I would approach it more from an information theory point of view… At no time am I throwing away perfectly good random information, and at no time do I produce more random information than I’m certain I need. So there’s no way to make the method more efficient.

For comparison, if we wanted 9 choices, my method would be:
Throw 4 coins. If the result is 1-9, keep it. Otherwise we have 7 possible results.
Throw another coin, giving us 14 results. If the result is 1-9, keep it, otherwise we have 5 results.
Throw one more coin, giving us 10 results. If the result is 1-9, keep it, otherwise start over from scratch.
This would give a 9/16 chance of needing 4 flips, a 9/32 chance of needing 5, a 9/64 chance of needing 6, and a 1/64 chance of needing more.

Calculating that all out, I do indeed get that for 9 choices, an average of 4 2/3 flips would be needed. So it looks like our analyses agree.
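
Just to tie the two analyses together, here’s a short exact calculation of the expected flips for this “never waste the leftovers” method for any odd number of choices (a sketch under that odd-n assumption; the approach and names are my own). It confirms 18/5 for five choices and 14/3 for nine:

from fractions import Fraction

def expected_flips_no_waste(n):
    """Expected coin flips for the keep-the-leftover-randomness method,
    assuming n is odd: each flip doubles the number of equally likely
    states, a complete group of n is peeled off as soon as possible, and
    a restart only happens when a single leftover state remains."""
    flips = Fraction(0)      # expected flips spent before the state returns to 1
    running = Fraction(1)    # probability we are still flipping
    s = 1                    # number of equally likely leftover states
    while True:
        flips += running     # one more flip happens with probability `running`
        s *= 2
        if s >= n:
            running *= Fraction(s % n, s)   # chance we missed every group of n
            s %= n
            if s == 1:       # back where we started: `running` is the restart chance
                return flips / (1 - running)

print(expected_flips_no_waste(5))   # 18/5 = 3.6
print(expected_flips_no_waste(9))   # 14/3 = 4 2/3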