Second Hand Smoke: In Defense of the 90 Percent Confidence Interval

Cecil has covered second hand smoke twice, and he took issue with the fact that many studies relied on a 90 percent confidence interval.

Original Column on Second Hand Smoke

Followup

These columns are from all the way back in 2000, so maybe it’s unfair for me to criticize what Cecil said that long ago. However, to my knowledge, he has not changed his stance on it so here I am.

I will be quoting Merchants of Doubt by Naomi Oreskes and Erik Conway in this thread, specifically the chapter “What is Bad Science? Who Decides?”, which is about the skepticism surrounding second hand smoke.

The text says, “The fear of [being wrong] asks us to play dumb. That makes sense when we really don’t know what’s going on in the world—as in the early stages of a scientific investigation.” But this was not the beginning of an investigation. There were already decades’ worth of data telling us that first hand smoke caused cancer. It was hardly necessary to tread lightly when dealing with whether inhaling a known carcinogen had carcinogenic effects. Unless tobacco smoke has a threshold effect (no effect at lower exposures), common sense tells us that passive smoking will have effects similar to first hand smoking. Common sense is often wrong, but we go with common sense until something comes along to tell us otherwise.

The text says, “What if we already have strong, independent evidence to support a cause-and-effect relationship? Let’s say you know how a particular chemical is harmful, for example, that it had been shown to interfere with the cell function of laboratory mice. Then you might argue that it is reasonable to accept a lower statistical threshold when examining effects in people, because you already have good reason to believe that the observed effect is not just chance.”

A 90 percent CI means that there is a 9/10 chance that the results are not by chance. At the end of his follow-up article, Cecil declares that the health risk of passive smoking “hasn’t been proven yet.” I don’t mean to quibble over semantics, but it’s very hard to “prove” something like this. Lung cancer does occur naturally sometimes, so we can never be absolutely sure what caused someone’s cancer. You can’t prove these sorts of things the same way you can prove that 1 + 1 = 2.

It’s important to remember that the majority of the studies that used a 90 percent CI showed that passive smoking can be harmful. So even though any given study had a 1/10 chance of being a false positive, the majority of them said something was happening.
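To put a rough number on that argument, here is a minimal sketch. The figures are purely illustrative assumptions, not from the actual studies: it supposes ten independent studies, each with a 10% chance of a false positive if second hand smoke were actually harmless, and asks how likely it is that a majority would come up positive by chance alone.

```python
from math import comb

def prob_at_least_k_positives(n, k, p):
    """Probability of at least k positive results out of n independent
    studies, if each study has probability p of a (false) positive."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: 10 independent studies, 10% false-positive
# rate each, majority = 6 or more positives.
p_majority_by_chance = prob_at_least_k_positives(10, 6, 0.10)
print(f"P(majority positive by chance alone) = {p_majority_by_chance:.6f}")
```

Under those (assumed) conditions, a majority of positive results by pure chance is very unlikely, which is the consistency point the OP is making.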

“Consistency—not any arbitrary significance level—is the real gold standard of scientific evidence,” say the authors of Merchants of Doubt. And these studies had shown consistently that something was up.

The mere fact that there is only a “small increase”, in Cecil’s words, of risk does not mean the risk is not there. If there were no evidence that tobacco smoke caused cancer, that would be one thing. But there is evidence. And it doesn’t take the apparently magical 95 percent CI to assess whether a known toxin produces effects in smaller doses. And that’s all second hand smoke is, really: a bystander smoking less than you. It doesn’t become an entirely new substance when it is second hand.

So no, Cecil, having a 90 percent CI is not “fudging the numbers”. If an effect is real, it is going to consistently show up more often than not, whether the CI is 95 percent or 51 percent.

A p-value between .05 and .10 indicates weak confidence, while a p-value above .10 indicates no confidence in the relationship. What Cecil said wasn’t really wrong, based on the information available at the time.

Also, I don’t think your understanding of a confidence interval is correct.

Was it a p-value or a confidence interval that they were looking at? Cecil made it sound like it was a CI, although if so he should really report what the range is…

A 90% confidence interval means, roughly, that if you repeated the experiment many times (assuming the data points are normally distributed, which many large data sets are to a good approximation, though there are exceptions), the interval you construct would capture the true value you are measuring (presumably the increased risk of whatever disease) about 90% of the time. One consequence is that a 90% confidence interval will necessarily be narrower than a 95% confidence interval. A confidence interval is inherently talking about the range of values that are plausible, given the value you measured. In his second post, Cecil did talk about studies of cardiovascular disease using the 95% CI.
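That repeated-experiment reading can be checked with a quick simulation. This is a stdlib-only sketch under assumed conditions (normally distributed data with a known true mean); 1.645 and 1.96 are the standard normal cutoffs for 90% and 95% intervals.

```python
import random
from math import sqrt
from statistics import mean, stdev

random.seed(1)
TRUE_MEAN, N, TRIALS = 5.0, 50, 10_000

def covers(z):
    """Draw one sample of size N and check whether the z-based CI
    around the sample mean contains the true mean."""
    xs = [random.gauss(TRUE_MEAN, 2.0) for _ in range(N)]
    half_width = z * stdev(xs) / sqrt(N)
    return abs(mean(xs) - TRUE_MEAN) <= half_width

# z = 1.645 gives a ~90% interval; z = 1.96 gives a ~95% one,
# which is also a wider interval for the same data.
cov90 = sum(covers(1.645) for _ in range(TRIALS)) / TRIALS
cov95 = sum(covers(1.96) for _ in range(TRIALS)) / TRIALS
print(f"90% CI coverage: {cov90:.3f}, 95% CI coverage: {cov95:.3f}")
```

The observed coverage lands near the nominal 90% and 95% (slightly low, since this sketch uses normal rather than t cutoffs), and the 95% interval covers more often precisely because it is wider.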

A p-value, on the other hand, is used to test a binary statement: is this hypothesis true or not? Many people think that a p-value of 0.05, for example, means that the hypothesis has a 95% chance of being true, but this is quite incorrect. A p-value of 0.05 means that, if the hypothesis were false (say, there is no correlation between second hand smoke and cancer), then you would still have a 5% chance of seeing a correlation at least as strong as the one you saw, just by chance.
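That definition can also be demonstrated by simulation. In this sketch (an assumed two-sample z-test on made-up data, not anything from the actual smoking studies), both groups are drawn from the same distribution, so the null hypothesis is true by construction; about 5% of runs should still come out with p < .05, and about 10% with p < .10.

```python
import random
from math import erfc, sqrt
from statistics import mean, stdev

random.seed(2)

def p_value_no_effect(n=100):
    """Two-sided two-sample z-test p-value when both groups come from
    the SAME distribution, i.e. there is no real effect."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = sqrt(stdev(a)**2 / n + stdev(b)**2 / n)
    z = (mean(a) - mean(b)) / se
    return erfc(abs(z) / sqrt(2))  # P(|Z| >= |z|) under the null

ps = [p_value_no_effect() for _ in range(5000)]
frac05 = sum(p < .05 for p in ps) / len(ps)
frac10 = sum(p < .10 for p in ps) / len(ps)
print(f"fraction with p < .05: {frac05:.3f}")
print(f"fraction with p < .10: {frac10:.3f}")
```

The fractions come out close to .05 and .10: a p-value threshold is a false-positive rate under the null, not the probability that the hypothesis is true.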