Straight Dope Message Board > Main stat. significance test possible?

#1
02-15-2003, 08:07 PM
 js_africanus Guest Join Date: Nov 2002
stat. significance test possible?

I just switched to a new ADD medication and I was looking at the info sheet that comes with it. Table 3 compares the percentage of patients reporting side effects for the treatment group vs. the placebo group. The numbers of patients are given: Strattera n=269, placebo n=263. So, let's say, for nausea, 12% of the treatment group reported it, whereas 5% of the placebo group reported it.

I can't recall. Is that enough information to do a significance test? If so, refresh my memory on how to do it.

Thanks.
#2
02-15-2003, 08:14 PM
 FranticMad Guest Join Date: Jan 2003
No. You also need the standard deviation, or some other statistic such as a p value, e.g. p<.05

Percentages, or averages, are meaningless without knowing how messy (variable) the data are.
#3
02-15-2003, 08:32 PM
 Yeah Guest Join Date: Jan 2000
"Is that enough information to do a significance test?"

Yes. For nausea, the relative risk for people taking the active drug is 2.41 (the Taylor series 95% confidence interval is 1.29 - 4.48) compared with those taking the placebo. The Yates corrected Chi square is 7.43 and the p value is 0.0064.
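[For the curious, Yeah's figures can be reproduced in a few lines. This is a sketch that assumes the counts are obtained by rounding the stated percentages to whole patients (12% of 269 → 32 on drug, 5% of 263 → 13 on placebo); with those counts it matches the relative risk and Yates chi-square quoted above.]

```python
# Sketch: relative risk and Yates-corrected chi-square for a 2x2 table.
# Counts are an assumption, rounded from the percentages in the OP:
# 12% of 269 -> 32 with nausea on drug; 5% of 263 -> 13 on placebo.
a, b = 32, 269 - 32   # drug: nausea, no nausea
c, d = 13, 263 - 13   # placebo: nausea, no nausea
n = a + b + c + d

# Relative risk of nausea on drug vs. placebo
rr = (a / (a + b)) / (c / (c + d))

# Yates continuity-corrected chi-square for the 2x2 table
chi2 = n * (abs(a * d - b * c) - n / 2) ** 2 / (
    (a + b) * (c + d) * (a + c) * (b + d))

print(f"RR = {rr:.2f}, Yates chi-square = {chi2:.2f}")  # RR = 2.41, 7.43
```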
#4
02-15-2003, 08:34 PM
 dlknox Guest Join Date: Apr 2002
I think that there might be enough info for a non-parametric test. Maybe someone more experienced will come along while I look it up.
#5
02-15-2003, 11:27 PM
 aahala Guest Join Date: Mar 2002
If you can do a significance test, it would be on the reporting of nausea, not on the effect itself, which is most likely what you are really interested in.
#6
02-16-2003, 01:03 AM
 FranticMad Guest Join Date: Jan 2003
Quote:
 Originally posted by Yeah "Is that enough information to do a significance test?" Yes. For nausea, the relative risk for people taking the active drug is 2.41 (the Taylor series 95% confidence interval is 1.29 - 4.48) compared with those taking the placebo. The Yates corrected Chi square is 7.43 and the p value is 0.0064.
Unless you have other information, this is nonsense posted by a troll.
#7
02-16-2003, 02:50 AM
 viking Guest Join Date: Dec 2002
FranticMad: Not quite. If the variable of interest is normally distributed, then yes, you need both the mean and standard deviation to describe it, and to do any meaningful stats on it. If the variable is Poisson (where the variance equals the mean) or binomial (as in this case, where mean=np and stdev=something I've forgotten, but can be determined from n and p), then knowing the mean and sample size tells you enough about the distribution to test for differences. 'Course, I'm not sure off the top of my head what that test would be, but Yeah sounds credible to me... Because you do have more information: you know that the variable is binomially distributed.
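[viking's point can be made concrete: for a binomial count, the sample size and the mean pin down everything else, including the half-remembered standard deviation, which is sqrt(np(1-p)). A quick sketch using the treatment-group numbers from the OP:]

```python
from math import sqrt

# For a binomial count X ~ Bin(n, p), n and the mean np determine the
# whole distribution: p = mean / n, and sd = sqrt(n * p * (1 - p)).
n = 269              # treatment-group size from the OP
mean = 0.12 * n      # 12% reported nausea
p = mean / n         # recovers p = 0.12
sd = sqrt(n * p * (1 - p))
print(f"p = {p:.2f}, sd of the count = {sd:.2f}")  # p = 0.12, sd = 5.33
```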
#8
02-16-2003, 09:52 AM
 Becky Guest Join Date: Jul 2000
I'm pretty sure viking's correct. In this case, the outcome is binomially distributed. The difference in proportions is asymptotically normal. The test would be a test for the difference of two binomial proportions. The form of the test statistic reduces to (sorry for the lack of coding):
Z=(p1 - p2)/sqrt[((p1*q1)/n1) + ((p2*q2)/n2)]
where p1 is the proportion of the treatment group that reported nausea, q1 = 1 - p1, and n1 is the size of the treatment group. There's an analogue for the placebo group (p2, q2, n2).
and with the numbers given:
Z=(0.12 - 0.05)/0.023941 = 2.92
Since the critical values for a 5% significance level test are +/- 1.96, the difference is clearly significant at that level. We would reject the null hypothesis that there is no difference in the frequency of occurrence of nausea between treatment and placebo groups. Conclusion: There is a statistically significant difference in the frequency of occurrence of nausea between treatment and placebo groups.
The estimated variance of a binomially distributed count is npq (so the variance of the sample proportion is pq/n), for those who were wondering.
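[Becky's test statistic is easy to check numerically; a minimal sketch of the two-proportion z-test, using the percentages and sample sizes from the OP:]

```python
from math import sqrt

# Two-sample z-test for a difference of binomial proportions (unpooled
# standard error), using the figures from the OP.
p1, n1 = 0.12, 269   # treatment group: proportion with nausea, sample size
p2, n2 = 0.05, 263   # placebo group

se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z = (p1 - p2) / se
print(f"Z = {z:.2f}")  # Z = 2.92, beyond the 5% critical value of 1.96
```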
#9
02-16-2003, 10:23 AM
 FranticMad Guest Join Date: Jan 2003
Sure, IF you know the distribution. But the information given does not include any such thing. No matter what kind of distribution (normal, Poisson, etc.) exists theoretically, any one sample is just that -- a sample. The data may be skewed, leptokurtic, or otherwise anomalous because of sampling errors and methodological flaws.

If Yeah has accurate numbers (to two decimal places), where did they come from?
#10
02-16-2003, 11:27 AM
 manhattan Charter Member Charter Member Join Date: Aug 1999 Posts: 9,127
Quote:
 Originally posted by FranticMad Unless you have other information, this is nonsense posted by a troll.
It may or may not be correct. But son, I'll make the judgements as to who's trolling around here.
__________________
"We hope that next time the rockets will be more accurate and effective in getting rid of this virus." Walid Jumblatt on Paul Wolfowitz, October 2003

"This process of change has started because of the American invasion of Iraq... The Syrian people, the Egyptian people, all say that something is changing." Walid Jumblatt, February 2005
#11
02-16-2003, 01:48 PM
 Becky Guest Join Date: Jul 2000
In a biostatistical analysis, the method that I posted above would be the method of comparison. The distribution of the data can be inferred by the description of the question: either nausea does or does not occur. It is the outcome of n Bernoulli trials. The test statistic converges stochastically to N(0,1) and that is all we need to know.

If you don't agree with me, how would you propose answering the question? This is a typical homework question in any biostats text.
#12
02-16-2003, 06:19 PM
 js_africanus Guest Join Date: Nov 2002
Quote:
 Originally posted by Becky I'm pretty sure viking's correct. ... The estimated variance of a binomially distributed random variable is npq, for those who were wondering.
Bonehead question: It looks like I can just plug in the numbers for the list of side effects, one at a time, into the Z formula and test it against +/-1.96. Am I correct?

Thanks, everybody!
#13
02-16-2003, 06:34 PM
 Yeah Guest Join Date: Jan 2000
js, you COULD just do what the authors of the study you quote did. They (Michelson et al. Atomoxetine in Adults with ADHD: Two Randomized Placebo-Controlled Studies. Biol Psychiatry 2003;53:112-120) used Fisher's Exact Test and calculated a p of 0.03.

(Of the 263 subjects taking placebo, 13 [4.9%] reported nausea. Of the 269 taking the active drug, 33 [12.3%] reported nausea.)

Michelson et al. may have gotten it wrong, but their analysis was good enough for the editors and reviewers at Biological Psychiatry.
#14
02-16-2003, 07:25 PM
 Becky Guest Join Date: Jul 2000
Fisher's exact is a neato nonparametric analogue to the test I gave. It doesn't make any assumptions about normality, but (IIRC) computes a bunch of contingency table probabilities. It wouldn't by any means be a wrong way to look at it.
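[The "bunch of contingency table probabilities" can be computed directly from the hypergeometric distribution. Here's a from-scratch sketch of the two-sided test (summing every table with the same margins that is no more probable than the observed one), applied to the counts Yeah quoted; note the p-value it returns need not match the published 0.03 exactly, since the paper doesn't say precisely how that figure was computed.]

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins whose probability does not exceed that of the observed table.
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):  # probability of the table whose top-left cell is x
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))  # tolerance for float ties

# Counts from the study as quoted above: 33/269 on drug, 13/263 on placebo
p = fisher_exact_two_sided(33, 269 - 33, 13, 263 - 13)
```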

On a side note js, the people in my department that do psychiatric biostats are extremely excited about atomoxetine and its potential for treatment of ADD, especially in people with tendencies toward addictions. I don't know if that's the case for you, but I hope you do well with it.
#15
02-16-2003, 10:55 PM
 FranticMad Guest Join Date: Jan 2003
Quote:
 Originally posted by js_africanus Bonehead question: It looks like I can just plug in the numbers for the list of side effects, one at a time, into the Z formula and test it against +/-1.96. Am I correct? Thanks, everybody!
Yes, but the more tests you run, the more likely you are to get a false positive. That is why pollsters say a result is accurate within certain limits "nineteen times out of twenty".
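[FranticMad's caution can be quantified: if each test is run at the 5% level, the chance of at least one false positive across k independent tests is 1 - 0.95^k. A quick sketch, with the Bonferroni correction noted as one standard remedy:]

```python
# Family-wise error rate when running k independent tests, each at alpha.
alpha = 0.05
for k in (1, 5, 10, 20):
    fwer = 1 - (1 - alpha) ** k  # P(at least one false positive)
    print(f"{k:2d} tests: family-wise error rate = {fwer:.2f}")
# Prints 0.05, 0.23, 0.40, 0.64 -- at 20 side effects you'd more likely
# than not "find" at least one spurious difference.
# One standard remedy (Bonferroni): run each of the k tests at alpha / k,
# which keeps the family-wise rate at or below alpha.
```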
#16
02-16-2003, 11:05 PM
 FranticMad Guest Join Date: Jan 2003
My apologies for jumping to a conclusion, Yeah -- I didn't realize that you were quoting numbers from the study. When someone quotes numbers I am eager to see where they came from.

One benefit of all this is that it took me back to my text "Statistical Methods for Research Workers" by Fisher, 14th Edition 1970. If I have any remaining crankiness, it is about the ease with which clinical research applies statistical methods without good experimental designs, or examining sampling errors.

Even a distribution based on Bernoulli trials assumes that the events are truly random, or independent. Until I read the study itself, I won't know if they included intent-to-treat patients in their numbers, which can greatly distort the distribution, and therefore the p-values.

My experience in reading clinical trials is that they do not test side effects with the same standardized tests that they apply to main effects. I am compulsively skeptical, I suppose.
#17
02-16-2003, 11:51 PM
 js_africanus Guest Join Date: Nov 2002
Yeah, thanks for the cite.

Quote:
 Originally posted by Becky On a side note js, the people in my department that do psychiatric biostats are extremely excited about atomoxetine and its potential for treatment of ADD, especially in people with tendencies toward addictions. I don't know if that's the case for you, but I hope you do well with it.
Thanks, me too!
