Tough Math and Statistics question

Okay.

I have this problem. I have two sets of experiments.

Each has a variable number of trials. For this example, let’s say each experiment has 3000 trials.

Each trial is a simple Yes/No test, with an average “yes” rate of about 1%, although it can vary from 0% to 25% (the high end being exceedingly rare).

Here’s what I need to know.

The first experiment is the “control”.

The second experiment has a change.

I need to know if the sample size and response rate were enough to consider the change statistically significant (say, at the 95% confidence level).
In other words, if Experiment 1 has 1000 trials and 10 Yes’s (1%) and Experiment 2 has 2000 trials and 25 Yes’s (1.5%), how likely is it that what I’m doing in Experiment #2 is better than Experiment #1? How likely is it just a “random event”?

Thanks!

Phil

This isn’t so much a “tough” problem as it is a “basic homework for an introductory stats class” problem. If you can get ahold of any general statistics textbook, it’ll tell you how to do this. I don’t have my book with me, so I can’t give you the formula, but if no one has answered by the time I get home, I’ll find it and post it.

PhilAlex - if you were in my intro to psych stats class, and I were to see this on the SDMB, I’d probably think you were not doing the reading… alas, you are not in my intro to psych stats class - and I’m teaching methods this semester anyway - but be sure to brush up on your chi-square before posting fundamental stats problems here :wink:

OK, so you’ve got two sample sizes, n[sub]1[/sub] and n[sub]2[/sub]. Let p[sub]1[/sub] and p[sub]2[/sub] be the observed proportions of “yes” answers in the two samples (these estimate the underlying rates). You’re trying to decide whether |p[sub]1[/sub] - p[sub]2[/sub]| is significantly different from 0.

The test statistic is |p[sub]1[/sub] - p[sub]2[/sub]|, and its standard error is sqrt(p[sub]1[/sub](1 - p[sub]1[/sub])/n[sub]1[/sub] + p[sub]2[/sub](1 - p[sub]2[/sub])/n[sub]2[/sub]). In short, the more standard errors the observed difference is away from 0, the less likely it is to be pure chance, i.e. the more likely the difference is significant.
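If you want to plug the OP’s numbers in, here’s a quick Python sketch of that calculation (10 of 1000 vs. 25 of 2000). I’m using the unpooled standard error exactly as written above; a textbook test would often pool the two rates, but with numbers this close it barely changes the answer.

[code]
from math import sqrt, erf

# The OP's example numbers (control vs. changed experiment)
yes1, n1 = 10, 1000
yes2, n2 = 25, 2000

p1, p2 = yes1 / n1, yes2 / n2   # observed proportions: 0.01 and 0.0125

# Standard error of the difference, per the formula above (unpooled)
se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

z = abs(p1 - p2) / se           # how many standard errors apart the two rates are

# Two-sided tail probability from the standard normal, via the error function
p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

print(f"z = {z:.2f}, p = {p_value:.2f}")
# Prints roughly z = 0.62, p = 0.53 -- far short of the 1.96 standard
# errors needed for significance at the 95% level.
[/code]

So with those particular counts, the bump from 1% to 1.25% could easily be chance; you’d need a bigger difference or a lot more trials before calling it real.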

You state that on average “yes” answers for the control group happen 1% of the time. The equation for the standard deviation is the square root of the variance:

sqrt(E[(x - E(x))[sup]2[/sup]])

where E(k) is the expected value of k, which is basically the average. Let’s call a “yes” answer a 1, and a “no” answer a 0. For the control group, you have 10 answers valued at 1, and 990 answers valued at 0. So, the average answer is (10 · 1 + 990 · 0)/(10 + 990) = 0.01. So, E(answer) = 0.01.

Next, you have to get E[(answer - E(answer))[sup]2[/sup]], or E[(answer - 0.01)[sup]2[/sup]]. Remember that 10 of the answers have value 1, and 990 have value 0. So, the expected value is (10 · 0.99[sup]2[/sup] + 990 · 0.01[sup]2[/sup])/(10 + 990). I’ll leave it up to you to calculate the actual answer. Take the square root of that, and you’ll have your standard deviation.
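If you want to check your arithmetic afterward, here it is carried through in a few lines of Python (nothing beyond the formulas just described):

[code]
n_yes, n_no = 10, 990
n = n_yes + n_no

mean = (n_yes * 1 + n_no * 0) / n                         # E(answer) = 0.01
var = (n_yes * (1 - mean)**2 + n_no * (0 - mean)**2) / n  # 0.0099, i.e. p*(1 - p)
std = var**0.5                                            # about 0.0995

print(mean, var, std)
[/code]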

So, you now have the average and the standard deviation. Compare the difference between the test group (0.015) and the control group (0.01) to the standard deviation, and use the table in the back of your statistics book that gives the probability of a measurement falling a given number of standard deviations away from the mean (the binomial here is close enough to normal for that table to apply).

You can’t use that method on categorical data, because the results depend entirely on what encoding you use for the different categories. If you pick 1/2 for yes and 49/50 for no, you won’t see a significant difference; but if you were to pick 1 for yes and 1000 for no, you would. What makes 0 and 1 the right choice?

The raw numbers you see will be different, yes, but the important thing is the ratio of the difference between the two averages to the standard deviation of each sample set. Let’s suppose that in the two experiments, 15 out of 1250 respondents say yes in the first and 50 out of 2000 respondents say yes in the second.

So, using 1/2 for “no” and 49/50 for “yes”:

15 yes out of 1250:
E(x) = (49/50 * 15 + 1/2 * 1235)/1250 = .50576
V(x) = ((49/50 - .50576)^2 * 15 + (1/2 - .50576)^2 * 1235)/1250 = .0027316224
std. dev. = .05226492514105420585

50 yes out of 2000:
E(x) = .512

Difference of the expected values = .00624

For the original method (0 for no, 1 for yes) we get:
15 yes out of 1250:
E(x) = (1 * 15 + 0 * 1235)/1250 = .012
V(x) = ((1-.012)^2 * 15 + (0-.012)^2 * 1235)/1250 = .011856
std. dev. = .10888526071052959553

50 yes out of 2000:
E(x) = .025

Difference between the two expected values = .013

Examining, for each encoding, the ratio of the difference between the means to the standard deviation:

.00624 / .05226492514105420585 = .11939173323522981968
.013 / .10888526071052959553 = .11939173323522981967

It’s these ratios that are the important thing. The only reason they differ at all is rounding: switching encodings just rescales both the difference in means and the standard deviation by the same factor, so the ratio is unchanged. The smaller the ratio is, the more likely the difference is merely noise rather than a significant effect.
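If you’d rather see it than take my word for it, here’s a short Python sketch that recomputes the ratio under a few encodings, using the same hypothetical 15-of-1250 and 50-of-2000 counts (plus the 1-vs-1000 encoding mentioned earlier):

[code]
# Ratio of (difference in means) to (standard deviation of experiment 1)
# under several different encodings of yes/no.

def mean_and_std(n_yes, n_total, yes_val, no_val):
    n_no = n_total - n_yes
    mean = (n_yes * yes_val + n_no * no_val) / n_total
    var = (n_yes * (yes_val - mean)**2 + n_no * (no_val - mean)**2) / n_total
    return mean, var**0.5

for yes_val, no_val in [(1, 0), (49/50, 1/2), (1, 1000)]:
    m1, s1 = mean_and_std(15, 1250, yes_val, no_val)   # experiment 1
    m2, _  = mean_and_std(50, 2000, yes_val, no_val)   # experiment 2
    print(f"yes={yes_val}, no={no_val}: ratio = {abs(m2 - m1) / s1:.6f}")

# Every encoding prints the same ratio (about 0.119392), because any
# recoding of 0/1 is a linear rescaling: it multiplies the difference in
# means and the standard deviation by the same factor, which cancels.
[/code]

Whatever two values you pick for the categories, the standardized comparison comes out the same; 0 and 1 are just the most convenient choice, not a privileged one.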

Minor note: the OP states that 25 out of 2000 is 1.5%. It’s actually 1.25%.