If someone has 4 chances at doing something, they have 100% better odds than if they had 2 chances, right? By what percentage is it better if they had 3 chances, or 1 chance?
I’m no good at this sort of thing.
It depends. If each chance is mutually exclusive then your chances are proportional to the number of goes you have at it. For instance, you are twice as likely to turn up the Ace of Spades if you deal yourself two cards than if you deal yourself one, and if you deal yourself all 52 cards then it’s guaranteed. But if the chances aren’t mutually exclusive then the math is harder. For instance, you have a 50% shot of throwing a head with one toss of a fair coin, but you don’t double your chances (to 100%) by tossing two coins: there are four equally likely possibilities, Hh, Th, Ht and Tt, and one of them still leaves you without a head.
(You do, however, halve your chances of throwing no heads by tossing an extra coin - from a 50% chance of no heads to a 25% chance of no heads - and halve it again with yet another toss and so on. But there is still a finite though small chance of never throwing a head.)
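Here’s a quick Python sketch of that, assuming a fair coin and independent tosses (the function name is just mine):
[code]
# Chance of at least one head in n independent tosses of a fair coin:
# 1 minus the chance of "no heads at all", i.e. 1 - 0.5**n.
def at_least_one_head(n):
    return 1 - 0.5 ** n

for n in range(1, 5):
    print(f"{n} toss(es): {at_least_one_head(n):.4f}")
# 1: 0.5000, 2: 0.7500, 3: 0.8750, 4: 0.9375
[/code]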
It depends.
Pick a letter from a, b, c, d, e or f.
Is it a? Probability = 1 in 6 = 16.66%
Is it b? Probability, well, there are six possible answers and now we’ve chosen two of them. Probability = 2 in 6 = 33.33%
Is it a c? 3 in 6 = 50%
If you’ve guessed wrong five times the chance that it’s f = 100%
roll a dice and everything changes
let’s try and roll a 1
1st roll - there’s a one in six chance = 16.66%
2nd roll - there’s a one in six chance = 16.66%
So if you add them up the chances are 33.33% right?
Wrong
The chances of screwing up the first time are 5 in 6
The chances of screwing up the second time are 5 in 6
If you do the maths, then there are 25 ways of screwing up, 10 ways of rolling a single 1 and one way of rolling two 1s.
That works out to 30.55%
Roll the dice three times and the probability of a 1 rises to 42%
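If anyone wants to check those figures, here’s a quick Python sketch (assuming a fair die and independent rolls):
[code]
# Chance of rolling at least one 1 in k independent rolls of a fair die:
# 1 minus the chance of missing every single time, i.e. 1 - (5/6)**k.
def at_least_one(k):
    return 1 - (5 / 6) ** k

for k in (1, 2, 3):
    print(f"{k} roll(s): {at_least_one(k) * 100:.2f}%")
# 1 roll: 16.67%, 2 rolls: 30.56%, 3 rolls: 42.13%
[/code]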
Ok, I understand what you’re saying. However, I do not need to know what the odds are when it’s 1 in 52, or 6. I need to know how much better the odds get when it’s 1 chance, 2 chances, 3 chances or 4.
All things being equal, and in the best possible circumstances, 4 chances is 100% better than 2. I understand that if we were talking about the lottery a 100% increase in chances still means that the chances of winning are very slim. That’s not what I’m talking about. I just need to know what percentage increase in chance there is if I have 4 chances rather than if I have 3 chances or 1 chance.
Just to expand a bit on the above, the key is whether the chances are independent of each other or not.
The dice are independent (each has a 1 in 6 chance of showing a 1); pulling cards sequentially from a pack is not (the first card has a 1 in 52 chance of being the ace of spades, and the second one has a 1 in 51 chance if the first card isn’t).
When the chances are independent you are correct; when they’re not it gets complicated.
Incidentally, TD’s example with the letters is mistaken. The first chance is indeed 1 in 6, but because we now know that a is not the right answer we have a 1 in 5 chance. Then a 1 in 4 chance when b is not the right answer, then a 1 in 3, then a 1 in 2.
Probability can get rather complicated!
Try starting here.
Someone should probably check my maths, but, by my count…
If the chance of succeeding at something is 1/n on a given attempt, then the chances of succeeding at least once after p attempts is 1 - (1-1/n)^p.
My reasoning is that if the chance of success is 1/n, then the chance of failure is (1-1/n). To say you succeed at least once out of p attempts is the same as saying you don’t fail every single time. The chances of failing every time is (1-1/n)^p, so, the chance of succeeding once is 1 - (1-1/n)^p.
So, for your example, if we ignore what n is, if p goes from 2 to 4, the probability goes from (1-(1-1/n)^2) to (1-(1-1/n)^4), giving us a proportional increase in the chance of success of (1-1/n)^2 - so indeed, the percentage increase you get in odds depends on the chance of success on a single attempt.
Concrete example: You are trying to roll a six on a six-sided die. This means n is 6. So the chance of succeeding after two attempts is given by the above formula as 0.3056. If we set p to four (ie the chances of rolling at least one six on four dice), our chances go up to 0.5177. This is a 69% increase - but if we were talking about an eight-sided die and trying to roll an eight, going from two rolls to four rolls wouldn’t yield a 69% increase - it’d be something else. So there’s no simple answer to your question - the answer depends on how easy the task being attempted is.
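That calculation as a rough Python sketch, assuming independent rolls (the function name is just mine):
[code]
# Chance of at least one success in `attempts` tries, each with probability 1/n,
# and the percentage increase in that chance going from 2 attempts to 4.
def chance(n, attempts):
    return 1 - (1 - 1 / n) ** attempts

d6_two, d6_four = chance(6, 2), chance(6, 4)
print(f"d6: {d6_two:.4f} -> {d6_four:.4f}, up {(d6_four / d6_two - 1) * 100:.0f}%")   # ~69%

# The same 2 -> 4 jump on an eight-sided die gives a different percentage increase:
d8_two, d8_four = chance(8, 2), chance(8, 4)
print(f"d8: {d8_two:.4f} -> {d8_four:.4f}, up {(d8_four / d8_two - 1) * 100:.0f}%")   # ~77%
[/code]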
In general, if you have a task with an n chance of success on one attempt, going from a attempts to b attempts will yield a percentage chance increase of (1-1/n)^(b-a) * 100. I think. I dunno, someone back me up, here?
ETA: Oh, yeah, this is all assuming the tasks are independent.
Math error, will repost
Bugger it. When I said:
What I of course meant was:
In general, if you have a task with a 1 in n chance of success on one attempt, going from a attempts to b attempts will yield a percentage chance increase of (1-1/n)^(b-a) * 100.
I… think that’s right. Though I’m growing less and less confident in my maths.
The general way to solve this isn’t too difficult to express, but might be tough to understand depending on how much math you know.
It’s important to note that each “chance”, as you’ve called them, is something that has a probability associated with it. What exact value it is depends on what your chances apply to. For example, 1 in 6 for dice throws, or 1 in 2 for coin tosses. Written as a fraction (i.e. 1/6 or 1/2), call this value p.
The one big assumption I’ll make is that for each chance, that probability p is the same. For example, dice throws: each chance you have to throw the dice, the odds of rolling a 6 stay the same. The term for this is that each chance is independent. Independence lets us calculate the probability of separate outcomes by simply multiplying them together (see below). If independence doesn’t apply (as in pulling cards off a single deck, for instance), you’ll need to know more about the problem to define a ratio for chances.
Working with the assumption of independent chances, the probability that you succeed at least once in x chances is 1 minus the probability that you fail all x chances. Because if you do not fail every single time, then you must succeed at least once (or so they say).
It’s pretty easy to express that. The probability of failure is (1-p). Each time we fail, we multiply by that value. If we fail all x times that gives us:
(1-p)[sup]x[/sup] (the failure value multiplied by itself x times).
Then subtract that from 1 to give us the chance that we succeed:
1-(1-p)[sup]x[/sup]
If you want to know the ratio between two numbers of chances, then you divide one by the other. Let’s consider one value of x to be c1 chances (4 in the first case given) and the other to be c2 (2 in the OP).
This is expressed as:
[1-(1-p)[sup]c1[/sup]] / [1-(1-p)[sup]c2[/sup]]
Keep in mind that p is restricted to the range 0 to 1. Technically you could have p be 0 - it would mean that no matter how many chances you got, you’d never win. Similarly, if p is 1, you get no improvement from more chances, because you only need 1 chance to win. Other values don’t make sense for probability.
Since the top and bottom are so similar, and we exclude 0 as a value for p, this formula can be written as an expression involving c1-c2 as the highest power. (That may or may not mean much to you, but it’s useful when visualizing the graph. 2 chances to 1, for instance, is just a straight line).
I can’t really show it here, but graphing this will show it best. What happens is this: the highest ratio you can get is arbitrarily close to the ratio of the chances (which is 4/2 in your first problem, where you proposed the solution of 100%). This only happens when your probability p gets arbitrarily close to 0.
As p increases, the probability ratio smoothly moves toward 1 (whether the ‘base ratio’ of chances was greater than or less than 1).
Hopefully it’s clear why it approaches 1. When p is really close to 1, that means you have a really good probability of succeeding on any given chance. So the ratio isn’t as important; what matters more is how many chances you got. In fact, as you increase the total number of chances, the curve will flatten and get quite close to 1 for lower and lower values of p. For instance, if you compare 20 chances to 40 chances, p doesn’t need to be very big (only about 1/4) before it becomes nearly as likely that you win within 20 tries as within 40. This is the case even though you’re dealing with the same ratio (4/2) as in the first problem.
The bottom line: those extra chances won’t help you, unless you really need them (i.e. when p is low). If I were more awake I could probably give an estimate of a value for p where the odds are essentially equal, assuming a decent number of chances (not the low numbers in the OP, though).
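If you want to see that flattening without graphing it, here’s a rough Python sketch comparing 20 chances to 40 chances at a few values of p (independent tries assumed):
[code]
# Ratio of "win at least once in 40 tries" to "win at least once in 20 tries"
# for various single-try probabilities p. As p grows, the ratio drops toward 1.
def win_at_least_once(p, tries):
    return 1 - (1 - p) ** tries

for p in (0.001, 0.01, 0.1, 0.25):
    ratio = win_at_least_once(p, 40) / win_at_least_once(p, 20)
    print(f"p = {p}: ratio = {ratio:.3f}")
# p = 0.001 -> 1.980, 0.01 -> 1.818, 0.1 -> 1.122, 0.25 -> 1.003
[/code]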
@ Francesca: It still depends on whether each try is independent.
Assume I show you four cards, face down, and ask you to guess the one which is an ace.
If you pick up each card you guess and keep it, the tries are not independent; taking 4 tries has 100% chance of success since you can check all four cards. 1 try has 25% probability of success, 2 tries has 50%, 3 tries has 75%.
On the other hand, if after each guess I shuffle the 4 cards and lay them down again, so you don’t know which ones you’ve already checked, the tries become independent; 1 try still has 25% probability, but 2 tries has 44%, 3 tries 58%, 4 tries 68%.
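Both sets of numbers are easy to reproduce; a quick Python sketch (the reshuffled case treats each guess as an independent 1-in-4 shot):
[code]
# Guessing which of 4 face-down cards is the ace.
# Keeping each guessed card: tries are NOT independent, chance is simply tries/4.
# Reshuffling after each guess: tries ARE independent, chance is 1 - (3/4)**tries.
for tries in range(1, 5):
    keep = tries / 4
    reshuffle = 1 - (3 / 4) ** tries
    print(f"{tries} tries: keep = {keep:.0%}, reshuffle = {reshuffle:.0%}")
# keep: 25%, 50%, 75%, 100%; reshuffle: 25%, 44%, 58%, 68%
[/code]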
If you are flipping a coin or rolling a die, each try is independent and it doesn’t matter how many tries you’ve had before. (If the system is not independent but the pool of “things to be tested” is large, like a state lottery, you can treat it like it was independent.)
If the system is independent and the probability of success is small, then the probability of success after multiple tries becomes essentially the probability per try times the number of tries. So buying two lottery tickets doubles your chances, three triples it, etc.
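A quick sketch of that small-p approximation, with a made-up lottery-sized probability (the 1-in-10-million figure is just for illustration):
[code]
# When p is tiny, 1 - (1-p)**k is almost exactly k*p,
# which is why k lottery tickets multiply your chance by roughly k.
p = 1e-7  # hypothetical chance of a single ticket winning
for k in (1, 2, 3):
    exact = 1 - (1 - p) ** k
    print(f"{k} ticket(s): exact = {exact:.3e}, k*p = {k * p:.3e}")
[/code]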
If we take 2 chances out of x as your baseline, then 4 chances is 100% better, 3 chances is 50% better, and 1 chance is 50% worse. This is because 4 is 100% more than 2, 3 is 50% more than 2, and 1 is 50% less than 2. I’m just surprised that this should be causing you any difficulty at all. As stated, this only applies if all your chances are mutually exclusive (drawing tickets for a lottery, say, where there is only one winning ticket out of however many are in the pot).
You’re surprised that this should be causing me any difficulty? Thanks very much.
I understand that 3 chances is 50% better than 2 and 1 chance is 50% worse. What I don’t understand is how much worse 3 chances is than 4.
3 chances is 3/4 as good as 4 chances. 3 chances is 75% as likely to succeed as 4 chances. Or alternatively, 4 chances is 33% more likely to succeed than 3 chances.
Thanks guys, I appreciate the help. However, I’m sorry, but when you start saying P * (1 - P), you lose me.
Perhaps it would help if I told what I’m actually trying to work out:
I have a friend who is about to begin IVF. She has 4 follicles from which they can take eggs. This is not very many. I am trying to cheer her up by saying “it’s <this much> better than if you only had 1/2/3 follicles”.
I realise that there are a huge number of variables when it comes to IVF. I realise it’s complicated. I was just trying to simplify it and say something positive.
Ok. So 4 chances is 33% better than 3 chances, 100% better than 2 chances and … 133% better than 1 chance?
No, 4 chances is 4 times more than one chance; 300% “better”.
But 4 times 1 is 4! Why is it not 400% then?
I can speak Latin and Ancient Greek you know
Well, you can express the difference as the straight ratio between the two options, or as the “extra benefit” over the lesser option. If something is “twice as large”, that’s 200% of the original, or 100% extra. If something is “five times as large”, that’s 500% of the original, or 400% extra.
As the ratio, 4 chances is 133% of 3 chances, 200% of 2 chances, or 400% of 1 chance. As the “extra benefit”, 4 chances is 33% of 3 chances, 100% of 2 chances, or 300% of 1 chance.
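Or, as a throwaway bit of Python doing nothing but that arithmetic:
[code]
# Pure arithmetic: 4 chances compared with 1, 2 and 3 chances,
# both as "percent of" and as "percent more than".
for base in (1, 2, 3):
    percent_of = 4 / base * 100
    percent_more = (4 / base - 1) * 100
    print(f"4 vs {base}: {percent_of:.0f}% of, {percent_more:.0f}% more")
# 4 vs 1: 400% of, 300% more; 4 vs 2: 200% of, 100% more; 4 vs 3: 133% of, 33% more
[/code]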
Sorry. I think you were making the question harder than necessary by talking about chances, when what you’re actually having difficulty with is “How much bigger a percentage is 4 than 3?” or similar. And yes, I was surprised, because from previous posting of yours that I’ve seen I naturally assumed you’d be kicking ass and taking names over simple arithmetic.
So, to elaborate: 4 is 400% of 1, but it’s 300% more than 1, because 1 is already 100% of itself. And if ever you find yourself floundering over “percentage of” something, just remember that “per cent” means “by 100” (as you know, what with the Latin), which further means “*divided* by 100”, and “of” (as in “I’ll have three of them”) just means “multiplied by” in this context.
Which means that “400% of 1 is 4” just means “400 divided by 100, times 1, is 4”.
So having 4 chances is 100% better than 2 (take your 2, you have to add another 100% of it to get to 4). It’s 33% better than 3 (take your 3, you have to add another 33% of it to get to 4). It’s 300% better than 1 (take your 1, you have to add another 300% of it to get to 4).
I hope that’s more helpful than me saying Fancy not knowing that.
Footnote: And it’s not “symmetrical”. To get from 1 to 4 you have to add 300% of 1. But to get from 4 back to 1 you only have to lose 75% of 4. And, of course, having 4 chances, or any chances, is infinitely better than having no chances.