Quick remark in the "Broadcast Power" article

In the classic article on Broadcast Power, Uncle Cecil at one point remarks:

“Scientists nowadays worry about the possible injurious effects of the electric fields around wires”

This is true, but somewhat misleading. The evidence for electromagnetic fields contributing to the incidence of any disease (including cancer) is inconclusive at best and non-existent at worst.

To be meaningful, an epidemiological study has to show a “strong correlation” between a suspected risk factor and a particular disease. A “strong correlation” is one in which the people in a randomly selected group who were exposed to the suspected risk have at least twice the incidence of the disease of those from the same group who were not exposed. A correlation weaker than this 2:1 minimum should not be viewed as significant, since weak correlations can and do arise merely by chance. To date, as far as I know, no study has shown a strong correlation between electromagnetic fields and any form of cancer.
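
For concreteness, here is a rough sketch (in Python, with invented counts) of what that comparison amounts to:

```python
# Rough sketch (all counts invented) of the "twice the incidence" rule:
# compare the disease rate among the exposed members of a group with the
# rate among the unexposed members.

exposed_cases, exposed_total = 30, 1000      # hypothetical exposed subgroup
unexposed_cases, unexposed_total = 12, 1000  # hypothetical unexposed subgroup

risk_exposed = exposed_cases / exposed_total
risk_unexposed = unexposed_cases / unexposed_total
relative_risk = risk_exposed / risk_unexposed

print(f"relative risk = {relative_risk:.2f}")
# 0.030 / 0.012 = 2.5, so under the rule described above this would count as
# a "strong correlation"; anything between 1:1 and 2:1 would not.
```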

Sadly, many other things thought (or once thought) to be hazardous fall into this “sketchy-at-best evidence for risk” category – including dioxin, Alar, and that modern dreaded demon, second-hand smoke.

This remains a controversial subject. I agree that effects from low-level exposure to electromagnetic fields have not been proven, and am skeptical that they ever will be. But I was skeptical about Thomas Jefferson having had a kid with Sally Hemings, too. So let’s just say the final word on this subject has yet to be written.

On a more general note, I think your “twice the incidence” condition is nonsense. What is your exact calculation here? Simple counterexample: If 10% of people normally die of cancer (I’m guessing here; accept it for the sake of the argument) and a certain agent increases that rate to 15% in the population of a medium-sized town, that is certainly significant even though the incidence is not doubled.
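
To put numbers on that (the 10% baseline is my guess above; the town size of 20,000 is just as made up), a plain one-sample proportion test already settles it:

```python
# Sketch of the example above: 10% assumed baseline, 15% observed in a town
# of 20,000 (town size invented). One-sample proportion test, normal approximation.
import math

baseline = 0.10
observed = 0.15
n = 20_000

se = math.sqrt(baseline * (1 - baseline) / n)
z = (observed - baseline) / se
p_two_sided = math.erfc(z / math.sqrt(2))

print(f"z = {z:.1f}, p = {p_two_sided:.2g}")
# z comes out around 24: the excess is wildly significant even though
# 15% is nowhere near twice the 10% baseline.
```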

Significance is usually couched in probabilistic terms. If an observed result would have occurred by chance with a probability below a certain threshold (often 5%), then it is considered significant. Among other things, this means that the required increase in the incidence of a disease depends on the size of the sample (i.e. the population). If the sample is large enough, an increase of 10%, 1%, or even less can be highly significant.
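
A small sketch of that dependence (the 10% baseline and the sample sizes are invented; 1.96 is the usual two-sided 5% critical value):

```python
# How the smallest excess over a 10% baseline that clears a two-sided 5%
# significance bar shrinks as the sample grows (normal approximation).
import math

baseline = 0.10
z_crit = 1.96  # two-sided 5% critical value

for n in (100, 1_000, 10_000, 100_000, 1_000_000):
    excess = z_crit * math.sqrt(baseline * (1 - baseline) / n)
    print(f"n = {n:>9,}: detectable excess = {excess:.4f} "
          f"({excess / baseline:.1%} of the baseline)")
# With a million people, an excess of well under 1% of the baseline rate
# is already statistically significant.
```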

By that standard, by the way, dioxin and second-hand smoke have both been proven to be dangerous, as far as I know.

Holger

TheIncredibleHolg writes:

Yes, but sample size, risk factors, and probability are not independent of each other. Hypothetical examples: if the sample size is three, a huge risk factor for or against the allegedly causative agent may be imputed, although at a very uncertain probability level. Likewise, if the computed risk factor is very low, the probability must again be very uncertain.
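
A rough illustration of that first point, with invented counts and the standard log-relative-risk confidence interval:

```python
# Sketch of the sample-size-of-three point, with invented counts: the usual
# log-relative-risk confidence interval blows up when the numbers are tiny.
import math

def rr_with_ci(cases_exp, n_exp, cases_unexp, n_unexp, z=1.96):
    rr = (cases_exp / n_exp) / (cases_unexp / n_unexp)
    se_log = math.sqrt(1/cases_exp - 1/n_exp + 1/cases_unexp - 1/n_unexp)
    return rr, rr * math.exp(-z * se_log), rr * math.exp(z * se_log)

# One case among three exposed people vs. a 10% rate among 1,000 controls:
print(rr_with_ci(1, 3, 100, 1_000))        # RR about 3.3, CI roughly 0.7 to 17
# The same relative risk from a large study is a different story:
print(rr_with_ci(333, 1_000, 100, 1_000))  # RR about 3.3, CI roughly 2.7 to 4.1
```
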
Moreover, both laboratory experiments and epidemiological studies tend to be short-term compared to real-world exposures (there are obvious reasons why this must be so).
In the hypothetical example that you offer, the raw sample size would be in the thousands, if not tens of thousands, and the increase in risk would undoubtedly be considered significant. OTOH, such an epidemiological study would require 50 or 60 years to complete. There would also have to be adjustments for “confounding factors” (what about the three-pack-a-day smokers? what about those who were only exposed to the supposed carcinogen from the ages of twelve to eighteen?), and the sample would immediately dissolve into a number of subsamples, possibly too small to yield significant results, or requiring a much larger effect (i.e., risk factor) to reach significance. “Meta-analysis” can further cloud the picture, since the individual studies concerned may not have been controlled in the same way.
The model and dose rate used in the laboratory can also be confusing. Using a linear, non-threshold model, and dose rates comparable to those usually considered necessary to compensate for the required, but unachievable, long-term exposure, we would conclude that, e.g., retinol and selenium are both highly toxic and should be avoided.
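
A toy version of that extrapolation, with every number invented purely for illustration:

```python
# Sketch of a linear, no-threshold extrapolation: take an effect seen at an
# enormous lab dose and scale it straight down to everyday doses.
# All numbers below are invented.

lab_dose = 1_000.0       # hypothetical lab dose (mg/day)
lab_excess_risk = 0.30   # hypothetical excess risk observed at that dose

slope = lab_excess_risk / lab_dose  # excess risk per mg/day under the model

for everyday_dose in (0.05, 0.5, 5.0):
    print(f"{everyday_dose:>5} mg/day -> predicted excess risk "
          f"{slope * everyday_dose:.5f}")
# The model assigns some risk to every dose, however small; apply it naively
# to a nutrient and you will "prove" the nutrient is a hazard.
```
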
Finally, of course, negatives cannot be proven. It can justly be said, “There is no evidence that X causes Y”. Others, invoking the Precautionary Principle, can say, “But this (lack of) evidence doesn’t prove that a little more of X, or a little longer exposure to it, or its synergistic effects in conjunction with Z, won’t cause Y”. The benefits of any given agent are (relatively) easily measurable; the costs, particularly in light of the Precautionary Principle, are not.
Given these considerations, there is no evidence that either EMF or dioxin, in any dose short of immediately lethal, causes cancer. The evidence on STS (second-hand tobacco smoke) is less clear-cut, but seems not to be of a sort that would allow us to declare it a proven carcinogen.


“Kings die, and leave their crowns to their sons. Shmuel HaKatan took all the treasures in the world, and went away.”

Thanks, Akatsukami, for that elaborate discussion of the pitfalls of scientific studies. I think we agree that tracer’s “twice the incidence” rule is BS.

I’d love to see some definitive studies on dioxin and second-hand smoke, but I don’t have any sources right now, and no time to go looking for them. No luck in Cecil’s archives, either. I do seem to remember, though, that the latter was conclusively shown to be harmful. Or maybe it’s just that I can’t believe the smoker keeps all the carcinogens in his/her own lungs. (NRN; I know there are other factors.) Also, cancer is not the only disease in the world, though it is one of the worst.

Holger

Not strictly germane to the topic, but this article (http://www.newscientist.com/ns/19990710/thepowerof.html) discusses how a statistical principle can indicate whether a data set, as used in an epidemiological study, is “suspicious”.

“Kings die, and leave their crowns to their sons. Shmuel HaKatan took all the treasures in the world, and went away.”

The “twice the incidence” rule is called a “strong association” in various statistical arenas. An association greater than 1:1 but less than 2:1 is called a “weak association”.

The strong-association cutoff of 2:1 was determined over many decades of performing studies and applying more basic statistical methods. It weeds out the vast majority of associations that arise due to minor selection errors and associations that arise due to dumb luck. This is especially important in studies with samples smaller than several thousand cases, or where risk exposure is determined by vague questions (“Were you ever exposed to second-hand smoke at some time in your life? Please rate your level of exposure on a scale of 1 to 5.”).
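
A quick simulation makes the point; the 10% baseline rate, the 200-per-group study size, and the 1.5 “weak” bar are all invented for illustration:

```python
# Simulated sketch: a small study of an agent that truly does nothing.
# How often does pure luck alone produce a "weak" (>= 1.5) or "strong" (>= 2.0)
# ratio between the two groups? Baseline rate and study size are invented.
import random

random.seed(0)
baseline = 0.10
n_per_group = 200
trials = 10_000

weak = strong = 0
for _ in range(trials):
    exposed = sum(random.random() < baseline for _ in range(n_per_group))
    unexposed = sum(random.random() < baseline for _ in range(n_per_group))
    if unexposed == 0:
        continue
    ratio = exposed / unexposed  # equal group sizes, so this is the relative risk
    weak += ratio >= 1.5
    strong += ratio >= 2.0

print(f"chance of ratio >= 1.5 by luck: {weak / trials:.1%}")
print(f"chance of ratio >= 2.0 by luck: {strong / trials:.1%}")
```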

As an example, TheIncredibleHolg wrote: “If 10% of people normally die of cancer … and a certain agent increases that rate to 15% in the population of a medium-sized town, that is certainly significant even though the incidence is not doubled.”

But how can the experimenters be certain that “a certain agent” increased the cancer rate to 15%, and not that the town just happened to have a higher cancer rate independent of the agent? Such “disease clusters” are relatively common. Just because Agent X is present does not mean Agent X caused the increased cancer rate. Requiring a strong correlation of at least 2:1 is ONE way that such statistical flukes can be eliminated.
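
A back-of-the-envelope sketch of why such clusters turn up so often (baseline rate, town size, number of towns, and the “alarming” rate are all invented):

```python
# Scan enough small towns and some will show an elevated rate by luck alone.
import math

baseline = 0.10
town_size = 200
n_towns = 1_000
alarm_rate = 0.15

# One-sided chance that a single town reaches the alarm rate by chance
# (normal approximation to the binomial).
se = math.sqrt(baseline * (1 - baseline) / town_size)
z = (alarm_rate - baseline) / se
p_one_town = 0.5 * math.erfc(z / math.sqrt(2))

p_any_town = 1 - (1 - p_one_town) ** n_towns
print(f"one given town clusters by luck: {p_one_town:.3f}")
print(f"at least one of {n_towns} does:  {p_any_town:.3f}")
```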

I thought that was just what I discussed; the probability of such a cluster occurring by chance is calculated to determine whether the observation is significant. And as Akatsukami elaborated, these calculations are A LOT more complex than whether there is a 2:1 ratio. In any case, all you get is a probabilistic statement because ANY event MIGHT occur just by chance, but some are simply unlikely.

I do realize that my phrasing was inaccurate; make that “if the rate increases to 15% in the presence of a certain agent”. My point should be clear, though: with your 2:1 rule, you can easily miss a lot of significant correlations while still allowing insignificant ones to pass. It’s just too simple.
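
A small sketch of both failure modes, with invented counts and the usual log-relative-risk confidence interval:

```python
# Both failure modes of a fixed 2:1 cutoff, with invented counts.
import math

def rr_ci(cases_exp, n_exp, cases_unexp, n_unexp, z=1.96):
    rr = (cases_exp / n_exp) / (cases_unexp / n_unexp)
    se = math.sqrt(1/cases_exp - 1/n_exp + 1/cases_unexp - 1/n_unexp)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Missed by the 2:1 rule: RR of only 1.5, but from a huge study, so the
# interval stays well clear of 1 and the excess is almost certainly real.
print(rr_ci(1_500, 100_000, 1_000, 100_000))
# Passed by the 2:1 rule: RR of 2.5 from a handful of cases; the interval
# easily includes 1, so it may be nothing at all.
print(rr_ci(5, 200, 2, 200))
```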

I’ve never seen such a rule applied in any serious study. The only thing I can imagine is that in a particular study with a particular set of parameters, a ratio of 2:1 would have been employed as a threshold of presumed significance. But we can discuss your sources anytime.

Holger

… and now, here we are, 2 years later, and I can finally come clean about where I got the notion that a 2:1 association is supposed to act as a magical cut-off point in any statistical investigation.

Namely:

I got it from a sarcastic, highly biased on-line book titled Science Without Sense. I, therefore, speak with the authority of someone who’s read something somewhere about epidemiology, so you should all bow down and worship my opinions.