stat. significance test possible?

I just switched to a new ADD medication and I was looking at the info sheet that comes with it. Table 3 compares the percentages of patients reporting side effects for the treatment group vs. the placebo group. The numbers of patients are given: Strattera n=269, placebo n=263. So, let’s say, for nausea, 12% of the treatment group reported it, whereas 5% of the placebo group reported it.

I can’t recall. Is that enough information to do a significance test? If so, refresh my memory on how to do it.


No. You also need the standard deviation, or some other statistic such as the p value (e.g., p < .05).

Percentages, or averages, are meaningless without knowing how messy (variable) the data is.

“Is that enough information to do a significance test?”

Yes. For nausea, the relative risk for people taking the active drug is 2.41 (the Taylor series 95% confidence interval is 1.29 - 4.48) compared with those taking the placebo. The Yates corrected Chi square is 7.43 and the p value is 0.0064.
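For anyone who wants to check those relative-risk numbers, here’s a minimal sketch using the rounded 12% and 5% figures from the OP (the exact patient counts would shift the answers slightly, which probably explains any small mismatch with the values quoted above):

```python
import math

# Rounded figures from the info sheet (a sketch, not the exact counts)
p1, n1 = 0.12, 269   # treatment group: proportion reporting nausea
p2, n2 = 0.05, 263   # placebo group

rr = p1 / p2  # relative risk

# Taylor-series (log-transform) 95% confidence interval for the RR
se_log_rr = math.sqrt((1 - p1) / (n1 * p1) + (1 - p2) / (n2 * p2))
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

With these rounded inputs the interval comes out very close to the (1.29, 4.48) quoted above.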

I think that there might be enough info for a non-parametric test. Maybe someone more experienced will come along while I look it up.

If you can do a significance test, it would be on the reporting of nausea, not on the effect itself, which is most likely what you are really interested in.

Unless you have other information, this is nonsense posted by a troll.

FranticMad: Not quite. If the variable of interest is normally distributed, then yes, you need both the mean and standard deviation to describe it, and to do any meaningful stats on it. If the variable is Poisson (mean=Std. Dev) or binomial (as in this case, where mean=np and stdev=something I’ve forgotten, but can be determined from n and p), then knowing the mean and sample size tells you enough about the distribution to test for differences. 'Course, I’m not sure off the top of my head what that test would be, but Yeah sounds credible to me… Because you do have more information: you know that the variable is binomially distributed.

I’m pretty sure viking’s correct. In this case, the outcome is binomially distributed. The difference in proportions is asymptotically normal. The test would be a test for the difference of two binomial proportions. The form of the test statistic reduces to (sorry for the lack of coding):
Z=(p1 - p2)/sqrt[((p1q1)/n1) + ((p2q2)/n2)]
where p1 is the proportion of the treatment group that reported nausea, q1 = 1 - p1, and n1 is the size of the treatment group. There’s an analogue for the placebo group (p2, q2, n2).
and with the numbers given:
Z=(0.12 - 0.05)/0.023941 = 2.924
Since the critical values for a 5% significance level test are +/- 1.96, the difference is significant at that level. We would reject the null hypothesis that there is no difference in the frequency of occurrence of nausea between treatment and placebo groups. Conclusion: There is a statistically significant difference in the frequency of occurrence of nausea between treatment and placebo groups.
The estimated variance of a binomially distributed random variable is npq, for those who were wondering.
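For anyone following along at home, the calculation above is a few lines of plain Python (a sketch using the rounded percentages from the OP):

```python
import math

# Two-sample test for a difference of binomial proportions
# (rounded percentages from the info sheet; n's from the OP)
p1, n1 = 0.12, 269   # treatment: proportion reporting nausea
p2, n2 = 0.05, 263   # placebo

se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z = (p1 - p2) / se

# Two-sided test at the 5% level: reject if |z| > 1.96
print(f"Z = {z:.3f}, significant at 5%: {abs(z) > 1.96}")
```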

Sure, IF you know the distribution. But the information given does not include any such thing. No matter what kind of distribution (normal, Poisson, etc.) exists theoretically, any one sample is just that – a sample. The data may be skewed, leptokurtic, or otherwise anomalous because of sampling errors and methodological flaws.

If Yeah has accurate numbers (to two decimal places), where did they come from?

It may or not be correct. But son, I’ll make the judgements as to who’s trolling around here.

In a biostatistical analysis, the method that I posted above would be the method of comparison. The distribution of the data can be inferred by the description of the question: either nausea does or does not occur. It is the outcome of n Bernoulli trials. The test statistic converges stochastically to N(0,1) and that is all we need to know.

If you don’t agree with me, how would you propose answering the question? This is a typical homework question in any biostats text.
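If anyone wants to convince themselves of that stochastic convergence, a quick simulation under the null (same true nausea rate in both arms; the 8% rate here is an arbitrary illustrative choice, not anything from the study) shows the statistic behaving like a standard normal:

```python
import math
import random

random.seed(42)

def z_stat(x1, n1, x2, n2):
    """Two-proportion z statistic, same form as posted above."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) / se

# Simulate both arms under the null hypothesis: identical true rate
n1, n2, p = 269, 263, 0.08
zs = []
for _ in range(5000):
    x1 = sum(random.random() < p for _ in range(n1))
    x2 = sum(random.random() < p for _ in range(n2))
    zs.append(z_stat(x1, n1, x2, n2))

mean = sum(zs) / len(zs)
sd = math.sqrt(sum((z - mean) ** 2 for z in zs) / len(zs))
print(f"mean of Z ~ {mean:.2f}, sd of Z ~ {sd:.2f}")  # close to 0 and 1
```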

Bonehead question: It looks like I can just plug in the numbers for the list of side effects, one at a time, into the Z formula and test it against +/-1.96. Am I correct?

Thanks, everybody!

js, you COULD just do what the authors of the study you quote did. They (Michelson et al. Atomoxetine in Adults with ADHD: Two Randomized Placebo-Controlled Studies. Biol Psychiatry 2003;53:112-120) used Fisher’s Exact Test and calculated a p of 0.03.

(Of the 263 subjects taking placebo, 13 [4.9%] reported nausea. Of the 269 taking the active drug, 33 [12.3%] reported nausea.)

Michelson et al. may have gotten it wrong, but their analysis was good enough for the editors and reviewers at Biological Psychiatry.

Fisher’s exact is a neato nonparametric analogue to the test I gave. It doesn’t make any assumptions about normality, but (IIRC) computes a bunch of contingency table probabilities. It wouldn’t by any means be a wrong way to look at it.
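Those contingency-table probabilities are hypergeometric, and with the exact counts quoted from the paper the whole test fits in a few lines. Here’s a from-scratch sketch of the standard two-sided version (summing every table probability no larger than the observed one); scipy.stats.fisher_exact does the same thing in one call:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]."""
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d

    def hyper(k):  # P(top-left cell = k) with all margins fixed
        return comb(row1, k) * comb(row2, col1 - k) / comb(n, col1)

    lo, hi = max(0, col1 - row2), min(col1, row1)
    probs = [hyper(k) for k in range(lo, hi + 1)]
    p_obs = hyper(a)
    # Sum the probabilities of all tables at least as extreme as observed
    return sum(p for p in probs if p <= p_obs * (1 + 1e-9))

# Exact counts quoted above: 33/269 nausea on drug, 13/263 on placebo
p = fisher_exact_two_sided(33, 269 - 33, 13, 263 - 13)
print(f"Fisher's exact p = {p:.4f}")
```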

On a side note js, the people in my department that do psychiatric biostats are extremely excited about atomoxetine and its potential for treatment of ADD, especially in people with tendencies toward addictions. I don’t know if that’s the case for you, but I hope you do well with it.

Yes, but the more tests you do, the more likely you are to get a false positive. That is why pollsters say a result is accurate within certain limits “nineteen times out of twenty”.

My apologies for jumping to a conclusion, Yeah – I didn’t realize that you were quoting numbers from the study. When someone quotes numbers I am eager to see where they came from.

One benefit of all this is that it took me back to my text “Statistical Methods for Research Workers” by Fisher, 14th Edition 1970. If I have any remaining crankiness, it is about the ease with which clinical research applies statistical methods without good experimental designs, or examining sampling errors.

Even a distribution based on Bernoulli trials assumes that the events are truly random, or independent. Until I read the study itself, I won’t know if they included intent-to-treat patients in their numbers, which can greatly distort the distribution, and therefore the p-values.

My experience in reading clinical trials is that they do not test side effects with the same standardized tests that they apply to main effects. I am compulsively skeptical, I suppose.

Yeah, thanks for the cite.

Thanks, me too! :slight_smile: