What are the chances . . .

[hijack]
Isn’t probability fun? I am not going to insist that my interpretation of one of the previous solutions is correct. Instead, please excuse me: I am going to hijack the thread in hopes of shedding a little light on this GQ (and a lot of the other probability GQs).

There is a tricky concept that always comes up in these types of probability questions. With the exception of zigaretten’s friend, most people do not have trouble calculating the probability of a single, simple event like rolling a die or flipping a coin. The trouble starts when the problem includes multiple steps (e.g. several die rolls or several coin flips). As each step is taken and more information is gained, the probability of the final result changes.

For example, what is the probability of flipping a coin twice and getting heads both times? Before the test is even started, there are 4 possible outcomes, one of them the success case.
Case 1: H-H success
Case 2: H-T
Case 3: T-H
Case 4: T-T
So the probability is 25%.

Okay, so now you flip the coin the first time and it comes up heads. What is the probability of two heads now? There are two remaining outcomes, one of them success:
Case 1: (H)-H success
Case 2: (H)-T
So 50%.

Now, what if the first flip was tails?
Case 1: (T)-H
Case 2: (T)-T
Obviously, 0%.

See, after the first step, the first flip is a fact; the probability of the first flip no longer matters. It doesn’t matter how unlikely it was to happen; all we know is that it happened.
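To put numbers on that, here is a tiny C++ sketch (mine, purely for illustration) that enumerates the four outcomes above and counts the success case before and after the first flip is known to be heads:


#include <iostream>
using namespace std;

int main()
{
	// Enumerate the four equally likely two-flip outcomes (0 = tails, 1 = heads).
	int total = 0, bothHeads = 0;			// before any flip is known
	int firstHeads = 0, firstHeadsThenHeads = 0;	// first flip already a head

	for (int first = 0; first <= 1; ++first)
	{
		for (int second = 0; second <= 1; ++second)
		{
			++total;
			if (first && second) ++bothHeads;
			if (first)			// the first flip is a fact
			{
				++firstHeads;
				if (second) ++firstHeadsThenHeads;
			}
		}
	}

	cout << "Before flipping: " << bothHeads << " of " << total << endl;		// 1 of 4
	cout << "After one head:  " << firstHeadsThenHeads << " of " << firstHeads << endl;	// 1 of 2
	return 0;
}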

People tend to differ on the solution to these problems because they are calculating the probability from different steps. The trick is to figure out exactly which step the question is asking about, then figure out which outcomes are fixed facts and therefore do not enter into the probability calculation.

I have seen a bunch of solutions here and the funny thing is that all of them are correct. However, not all of them are exactly answering the OP. So when you are arguing for your solution, don’t keep checking to see if your solution makes sense; check to see if your solution answers the exact problem posed in the OP.
[/hijack]

You’re past taking the coin out, so there are no longer 6 possible outcomes. zigaretten describes three remaining possibilities as:

(1) B R
(2) B-1 B-2
(3) B-2 B-1

But, as (2) and (3) are indistinguishable. The question asked is, “What is the probability that the bottom is blue also?” Not, “Which blue side is on the bottom?”

So there are at that point only two remaining possibilities.

bup said:

And Cabbage said:

And that does not enter into the calculation as you’re past that point at the time in question.

Your chances of flipping a coin four times and getting heads each time are 1/16. If you’ve already flipped it twice, your chances of getting heads on the remaining two flips are 1/4 regardless of what happened on the first two flips.

Doc Cathode said:

That wasn’t a mistake. You were right. The question is not, “What are the chances that you drew the blue/red and put it on the table blue side up?”

The first part should read:

You’re past taking the coin out, so there are no longer 6 possible outcomes. zigaretten describes three remaining possibilities as:

(1) B R
(2) B-1 B-2
(3) B-2 B-1

But, as (2) and (3) are indistinguishable, you only have two possible outcomes relevant to the question. The question asked is, “What is the probability that the bottom is blue also?” Not, “Which blue side is on the bottom?”

There was a thread once on a similar question that turned surprisingly ugly. I refuse to get involved this time, even though I know I’m right. :smiley:

Just because two events are indistinguishable does not mean that they are the same event.

It doesn’t matter if they’re the same event - either one satisfies the condition of being blue.

No. Each event in this case has a 1/3 chance of occurring, and 2 of them satisfy the condition. They’re mutually exclusive, so the probability of either one happening is 2/3.

I’m going to write a program to simulate this, let it run for about a million trials, and post the results (and code). Let’s see what happens there.

I’m a swinging voter and have been swayed both ways whilst reading the above posts. I have come to agree with the 66%. It has been said that what has already happened is irrelevant, but it’s not really. The question (as I understand it) is

“you are looking at a blue side, there are two coins with blue sides, which of these coins is the one you are looking at?”.

Now, because the blue/blue coin has blue on both sides, it is more likely that you are looking at this coin. If the blue/red coin were weighted so that it always landed blue side up, then the chances of the other side being blue would be 1/1, i.e. even odds. This is not the case, though: sometimes when the blue/red coin is chosen it comes up red, and that event is discarded, which lowers the probability that you are looking at the blue/red coin.

The simple answer (I hope):

Think of it this way: there are a grand total of three blue sides, and it should be clear that each of them is equally likely to come up.

One of the three has, in fact, come up. Exactly two of the blue sides are opposite another blue side (2 out of 3). The remaining blue side is opposite red (1 out of 3).

Therefore, probability the other side is blue=2/3; probability the other side is red=1/3.
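Put in conditional-probability terms (my framing of the same count, nothing new):

P(other side blue | blue showing) = P(blue showing | blue/blue coin) × P(blue/blue coin) / P(blue showing)
= (1 × 1/3) / (1/2)
= 2/3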

OK, the simulation I ran agrees with the 2/3 answer. In half the instances (500,528), the coin came up blue. In two thirds of those cases (.666372), the other side of the coin came up blue as well.

The probability of this happening if the chances are really 50/50 is very small.

For those who are curious, here’s my code:


#include <iostream>
#include <stdlib.h>
#include "coin.h"
using namespace std;

int main()
{
	srand(123);

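	// The three coins: blue/blue, blue/red, and red/red.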
	Coin* Box = new Coin[3];
	Box[0].SetBottom(blue);
	Box[0].SetTop(blue);
	Box[1].SetBottom(blue);
	Box[1].SetTop(red);
	Box[2].SetBottom(red);
	Box[2].SetTop(red);

	int n = 1000000;	// total trials
	float m = 0;		// trials where the top face came up blue
	int success = 0;	// ...of those, bottom also blue
	int failure = 0;	// ...of those, bottom red
	int loc = 0;		// which coin was drawn this trial

	for (int j = 0; j < n; ++j)
	{
		loc = rand() % 3;	// draw one of the three coins at random
		if (rand() % 2)		// flip it half the time so either side can land up
		{
			Box[loc].Flip();
		}
		if (Box[loc].GetTop() == blue)	// only count draws where blue is showing
		{
			++m;
			if (Box[loc].GetBottom() == blue)
			{
				++success;
			}
			else
			{
				++failure;
			}
		}
	}

	float p = success / m;
	float q = failure / m;

	cout << "m: " << m << "
p: " << p << "
q: " << q << '
';

	delete[] Box;
	return 0;
}

And here’s the coin class:


#ifndef COIN_H
#define COIN_H

enum color {red, blue};

class Coin
{
	public:
		Coin(color top = red, color bottom = red);
		color GetTop();
		color GetBottom();
		void SetTop(color top);
		void SetBottom(color bottom);
		void Flip();
	private:
		color _top;
		color _bottom;
};

#endif

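// coin.cpp: implementation of the Coin class declared above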
#include "coin.h"

Coin::Coin(color top, color bottom)
{
	_top = top;
	_bottom = bottom;
}

void Coin::Flip()
{
	color temp = _bottom;
	_bottom = _top;
	_top = temp;
}

color Coin::GetBottom()
{
	return _bottom;
}

color Coin::GetTop()
{
	return _top;
}

void Coin::SetBottom(color bottom)
{
	_bottom = bottom;
}

void Coin::SetTop(color top)
{
	_top = top;
}

Of course, one could argue that my choice of seed affects the distribution. Such a person is advised to try any seed they like. :slight_smile:

2/3 is the correct answer.
Here’s another way to get there:
Having picked a coin with a blue face up, you are left with three faces of unknown color. If you use a jeweler’s saw to split the two coins with blue faces in half, you can discard the blue face you’ve already seen, and you are left with just the 3 faces that could still be hidden (plus new shiny metal faces, which are irrelevant). Two of the remaining faces are blue and one is red, so if you mix them up and choose one, you have a 2/3 chance of getting a blue face.

Sawing the coins in half is legal because the color of one face of a coin is completely independent of the color of the other face of the coin.

OK, I got curious and tried to calculate the odds of getting my results if the two outcomes were equally likely. I had to scale the number of trials back to 300 (it doesn’t like 500528! for some reason ;)). Anyway, the probability of getting 200 or more successes in 300 trials when the expected number is 150 turns out to be .4007437608e-8. With more trials, that only gets smaller. Of course, this doesn’t rule out the possibility that I got unlucky, but I doubt it.

This was done using a binomial (n, 1/2) distribution and calculating P(X >= 2/3*n).
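If anyone wants to reproduce that tail probability without evaluating huge factorials directly, here is a minimal C++ sketch (mine, separate from the simulation code above; the function name logChoose and the n = 300 / threshold = 200 figures just mirror the scaled-down calculation) that sums the binomial terms in log space:


#include <cmath>
#include <iostream>
using namespace std;

// log of the binomial coefficient C(n, k), via lgamma so large n does not overflow
double logChoose(int n, int k)
{
	return lgamma(n + 1.0) - lgamma(k + 1.0) - lgamma(n - k + 1.0);
}

int main()
{
	int n = 300;		// number of trials
	int threshold = 200;	// 2/3 of the trials
	double tail = 0.0;

	// P(X >= threshold) for X ~ Binomial(n, 1/2); each term is C(n, k) / 2^n
	for (int k = threshold; k <= n; ++k)
	{
		tail += exp(logChoose(n, k) - n * log(2.0));
	}

	cout << "P(X >= " << threshold << ") = " << tail << endl;
	return 0;
}


Working in log space with lgamma instead of raw factorials is what would let the same calculation run on the full 500,528 blue-up trials.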

Uh, ultrafilter, the OP clearly stated that the coins were in a jar, not a box… :slight_smile:

The question, restated, is:

You have two coins (the all red one is irrelevant). One is blue on one side and red on the other. The other is blue on both sides. You have one coin visible on a table, with a blue side showing. The other coin is not visible. What is the probability that the bottom of the coin on the table is blue also?

So the divergent paths being pursued here seem to be that either:

A.) There are only two possible outcomes:

1.) It’s red

2.) It’s blue

1/2

B.) If you number the sides as r1, b1, b2 and b3 and say that b1 is showing, then three possible outcomes as to which side is down on the table are:

1.) r1

2.) b2

3.) b3

There is a 1/3 chance that any one of those will come up and 2 of the three are blue.

2/3

C.) There are two coins, A (r/b) and B (b/b). You have no way of knowing which coin you have, but turning it over will reveal that it is either:

1.) A

2.) B

1/2

The basic probability is the number of favorable outcomes divided by the number of possible outcomes, and the spin we’ve got going (which is, of course, why people come up with these things) comes from deciding what the number of possible outcomes should be. So, if a coin has been drawn and displayed, what are the conditional possibilities?

So, and I am 25 years past my one and only stat course, if we have two coins, as described above, and were going to see one side first and then another, would we have:

# blues / # possibles * # remaining blues / # remaining possibles = probability blue

3/4 * 2/3 = .5 probability blue?

Yeah, still workin’ on the booger in that.

Remember how they used to thin the herd with word problems?

Okay, number of possible outcomes v. number of coin sides yet to be viewed - crux o’ the biscuit, methinks.

I’m not up for a statistics death-o-rama, but I’m having a hard time getting away from 1/2, even though I appreciate some (not all) of the 2/3 guys’ thoughts.

Like I said, I think my simulation pretty conclusively demonstrates that it’s not 1/2. That’s the hell of probability–it’s counterintuitive like you wouldn’t believe. It takes quite some time to get used to it.

beatle, how about this:

Say we change the problem a little. Say somebody puts a little black dot on the blue side of the B/R coin, and also puts a little black dot on one side of the all blue coin.

Now, let’s change the problem to, “The face of the coin shows blue with a little black dot. What’s the probability of the other side being blue?”. It seems clear to me here that it’s 50/50.

Now, same set up (i.e., we still have the little black dots). Instead, now we ask, “The face of the coin shows blue. It may or may not have a little black dot. What’s the probability of the other side being blue (with or without a little black dot)?”

Now, that little black dot serves no purpose–we’re completely ignoring it. This question is therefore identical to the OP. But now (compared to my previous problem) there’s an extra blue face (the face w/o a little black dot) that can come up and satisfy the hypothesis of the question (“the face of the coin shows blue, with or without a little black dot”). Does this persuade you that now the chance of the other side being blue is greater than 50/50 (as it was in my previous problem)?
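For what it’s worth, here is a quick simulation of Cabbage’s two variants (my own code, not part of ultrafilter’s program; the face codes are just labels I made up):


#include <cstdlib>
#include <ctime>
#include <iostream>
using namespace std;

// Face codes: 0 = red, 1 = plain blue, 2 = blue with the little black dot.
// Coin 0: blue-with-dot / plain blue   (the all-blue coin, dot on one side)
// Coin 1: blue-with-dot / red          (the blue/red coin, dot on its blue side)
// Coin 2: red / red
int main()
{
	srand(static_cast<unsigned>(time(NULL)));

	int faces[3][2] = { {2, 1}, {2, 0}, {0, 0} };

	long dotUp = 0, dotOtherBlue = 0;	// condition: dotted blue face showing
	long blueUp = 0, blueOtherBlue = 0;	// condition: any blue face showing

	for (long i = 0; i < 1000000; ++i)
	{
		int c = rand() % 3;		// draw a coin at random
		int side = rand() % 2;		// let either side land up
		int up = faces[c][side];
		int down = faces[c][1 - side];

		if (up == 2)			// dotted blue showing
		{
			++dotUp;
			if (down != 0) ++dotOtherBlue;
		}
		if (up != 0)			// blue showing, dot or not
		{
			++blueUp;
			if (down != 0) ++blueOtherBlue;
		}
	}

	cout << "P(other side blue | dotted blue up): "
	     << double(dotOtherBlue) / dotUp << endl;
	cout << "P(other side blue | any blue up):    "
	     << double(blueOtherBlue) / blueUp << endl;
	return 0;
}


Run it and the first ratio hovers around 1/2 while the second hovers around 2/3, which is exactly the distinction the black dot is meant to illustrate.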

There is certainly no math I can add to this, only perspective.
Regardless of what the chances were before you pulled a coin out and slapped it down, they have been nullified because you did.
The OP does not ask what the chances are of blue side ‘x’ coming up, only the chances of a (any) blue side coming up.
The million-sided-cube solution worked for the Monty Hall problem (given how it was interpreted) because you were dealing with the number of cards, not the number of faces (the backs of the cards being unable to eliminate, for the viewer, a particular card). In this case the ‘back’ of the coin showing eliminates the red/red. The OP is past that part of the equation.
At the point where the OP poses the question, there can be only two possibilities, each having the same statistical probability.

I just realized my last sentence was poorly worded. What I was trying to say is, “Does this persuade you that now the chance of the other side also being blue is greater than 50/50 since there’s now an extra blue face to consider (compared to my previous problem), and the chance there was 50/50”?