When you do a statistical test at the conventional 5% significance level, there is a 1 in 20 chance of getting a false positive: the test telling you there is a link when there is none. This is built into the test, and short of demanding a stricter threshold nothing can be done about it; it is why no statistical test can ever be 100% proof of anything. There is no way to tell a false positive from a real positive by looking at the result.
Doing multiple tests is like rolling dice. If you roll one die and get a six, the chance of getting that six was 1 in 6. If you keep rolling dice until you get a six, the chance of eventually getting one is effectively 100%, not 1 in 6. Similarly with statistical testing: if you keep doing tests, eventually you will produce a false positive.
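To put numbers on it (a minimal sketch in Python, assuming each test is independent and run at the 5% level):

    # Chance of at least one false positive among n independent
    # tests, each run at the 5% significance level.
    def false_positive_chance(n, alpha=0.05):
        return 1 - (1 - alpha) ** n

    for n in (1, 5, 14, 20, 60):
        print(n, round(false_positive_chance(n), 3))
    # 1  -> 0.05
    # 5  -> 0.226
    # 14 -> 0.512
    # 20 -> 0.642
    # 60 -> 0.954

After only 14 tests it is better than even money that at least one "positive" is spurious.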
One of the banes of modern science is the computer, which allows researchers with no understanding of statistics to do statistical tests. It is quite normal for such researchers to collect their data, then repeatedly break it into categories and run tests on each one. When the computer tells them they've got a positive result, they think they have found something useful and publish it, not understanding that it is just a false positive. It has been estimated that more than half of all published research is nonsense based on false positives.
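You can watch this happen with a quick simulation (a sketch, not anyone's actual analysis; the data is pure noise, so every "link" it finds is false by construction):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # 100 category splits of data with NO real effect anywhere:
    # both groups are drawn from the same distribution.
    hits = 0
    for _ in range(100):
        a = rng.normal(size=30)
        b = rng.normal(size=30)
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:
            hits += 1

    print(hits, "'significant' links found in pure noise")
    # Typically around 5 of the 100 splits come up "significant".

Dredge through enough categories and the computer will always hand you something publishable.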
The statistically naive believe that if a paper is published suggesting a link, that must be solid evidence the link exists. In fact, any crackpot theory that has prompted research will result in papers showing positive links. Papers finding contradictory links are not uncommon. Nor does the size of a study have any effect on the chance of finding a false positive: it remains 1 in 20 per test done, no matter how big the study.
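The same sort of simulation shows it (again a sketch on pure noise; the sample sizes are arbitrary):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # 1000 tests on no-effect data at each sample size: the false
    # positive rate stays near 5% however large the groups get.
    for n in (20, 200, 2000):
        hits = sum(
            stats.ttest_ind(rng.normal(size=n), rng.normal(size=n))[1] < 0.05
            for _ in range(1000)
        )
        print(f"n={n:>4} per group: false positive rate {hits / 1000:.3f}")

A bigger study buys you power against false negatives (see below), not protection against false positives.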
Studies that find negative results are much more interesting, because the chance of a negative result being a false negative depends on the size of the study and the strength of the effect. If a big study finds no link, that is much stronger evidence that there is no link than any number of positive studies is evidence that there is one.
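To see how study size drives the false negative rate (a back-of-the-envelope sketch using the normal approximation for comparing two proportions; the 5%-to-6% disease rates are invented for illustration):

    import math

    def power_two_proportions(p1, p2, n, z_crit=1.96):
        # Approximate power of a two-sided 5% test comparing two
        # proportions, n subjects per group (normal approximation).
        p_bar = (p1 + p2) / 2
        se0 = math.sqrt(2 * p_bar * (1 - p_bar) / n)    # SE if no effect
        se1 = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
        z = (abs(p1 - p2) - z_crit * se0) / se1
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))   # Phi(z)

    # Chance of detecting a real rise in disease rate from 5% to 6%:
    for n in (500, 5000, 20000):
        print(f"n={n:>5} per group: power ~ {power_two_proportions(0.05, 0.06, n):.2f}")
    # Roughly 0.10, 0.59 and 0.99: the small study almost always
    # misses the effect, the big one almost never does.

That is why a null result from a huge study means something, while a null result from a tiny one means almost nothing.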
The results of a study looking at the links between dietary fat and heart disease, heart attacks, bowel cancer and breast cancer were published in JAMA earlier this year. The study involved following 45,000 people for 8 years, and resulted in categories containing thousands of people (most tests are based on categories of a couple of dozen people). It found no link between fat and any of these things. The chance of that being a false negative is astronomically small. It is about as solid a proof as statistics is ever likely to provide.
There was no attempt to break fat consumption into trans fat and non-trans fat, but one would assume that trans fat made up a reasonable part of the total fat consumed. If it accounted for even 5% of the fat consumed, any harm it did should have produced a link that showed up in the results, given the size of the study. Consequently, one must conclude that trans fat is harmless.
Jim