on trans fat

Cecil,

Regarding your statement that nobody has any clue why trans fats are not good for you - you surprise me. Please refer to the following couple of books for starters:

Understanding Fats & Oils: Your Guide to Healing With Essential Fatty Acids
by Michael T. Murray, Jade Beutler

Fats That Heal, Fats That Kill: The Complete Guide to Fats, Oils, Cholesterol and Human Health
by Udo Erasmus

These two books have in-depth explanations of why trans fats are bad.

Ciao

Welcome to the Straight Dope Message Boards, wish, we’re glad to have you with us.

When you start a thread, it’s helpful to other readers if you provide a link to the Column in question. Keeps us all on the same page, and saves lots of searching. In this case: What’s the truth on transfat?

I found the article informative and useful.* However, the best part was Cecil’s phrase “when nobody knew trans fats from transvestites.” I need to see if I can use the insult “You don’t know a trans-fat from a transvestite” in conversation soon. Well, maybe I should wait until after the holidays and family gatherings.

*My sister pointed out that I’m now the same age my dad was when he had his first quadruple coronary bypass. Can’t be. He was old when that happened. Anyway, it was a realization that I do need to do a lot better job of taking care of myself and I’d rather listen to Cecil’s advice than a doctor’s.

Because we all know what a hot-bed of transvestites *that *is bound to be. :wink:

When you do a statistical test there is 1 chance in 20 of getting a false positive, of the test telling you there is a link when there is none. This is built into the test and nothing can be done about it; it is why no statistical test can ever be 100% proof of anything. There is no way to tell a false positive from a real positive.

Doing multiple tests is like rolling dice. If you roll one die and get a six, the chance of getting that six was 1 in 6. If you keep rolling dice until you get a six, the chance of eventually getting a six approaches 100%, not 1 in 6. Similarly, with statistical testing, if you keep doing tests you will eventually produce a false positive.
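The dice analogy works out like this (a quick illustrative sketch, not from the original post):

```python
# Probability of getting at least one six in n rolls of a fair die:
# P = 1 - (5/6)**n, which climbs toward 100% as n grows.

def p_at_least_one_six(n_rolls: int) -> float:
    """Chance of seeing at least one six in n independent rolls."""
    return 1 - (5 / 6) ** n_rolls

print(p_at_least_one_six(1))   # one roll: ~0.167 (1 in 6)
print(p_at_least_one_six(25))  # 25 rolls: ~0.99, a six is nearly certain
```

The same arithmetic applies to tests run at the 95% level: swap 5/6 for 0.95 and "rolls" for "tests".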

One of the banes of modern science is the computer, which allows researchers with no understanding of statistics to do statistical tests. It is quite normal for such researchers to collect their data, then repeatedly break it into categories and do tests on it. When the computer tells them they’ve got a positive result, they think they have found something useful and publish it, not understanding that it is just a false positive. It has been estimated that more than half of all published research is nonsense based on false positives.

The statistically naive believe that if a paper is published suggesting a link that must be solid evidence that there is a link. In fact, any crackpot theory that has prompted research will result in papers showing positive links. Papers finding contradictory links are not uncommon. Nor does the size of a study have any effect on the chance of finding a false positive. It remains 1 in 20 per test done no matter how big the study.

Studies that find negative results are much more interesting, because the chance of a negative result being a false negative is dependent on the size of the study and the strength of the effect. If a big study finds a negative link, that is much stronger evidence that there is no link than any number of studies that find positive links.

The results of a study looking at the links between dietary fat and heart disease, heart attacks, bowel cancer and breast cancer were published in JAMA earlier this year. The study involved following 45,000 people for 8 years, and resulted in categories containing thousands of people (most tests are based on categories of a couple of dozen people). It found no link between fat and any of these things. The chances of it being wrong are astronomical. It is about as solid a proof as statistics is ever likely to provide.

There was no attempt to break fat consumption into trans fat and non-trans fat, but one would assume that trans-fat would have made up a reasonable part of the total fat consumed. If it accounted for even as much as 5% of fat consumed, it should have produced a link that would have shown up in the results, given the size of the test. Consequently, one must conclude that trans fat is harmless.

Jim

I don’t think he was saying that we don’t know why they’re bad for us, so much as that we’re not sure why their chemical composition and the way they are made make them a lot MORE bad for us than unsaturated fats, and of course saturated fats.

You say these books give the reasons. Personally I don’t know why they are worse for us. Could you give a cited example or explanation? I know it’s a pain to look through big books just to find simply stated answers, but it would be helpful.

You speak very intelligently, but I am very confused by some of your explanations. I agree that no statistical test can ever be 100%, but where did you come up with this 1 in 20 chance? You make it sound like this is a fact that covers all statistics, which isn’t true. Did you mean that THESE statistics, referring only to tests on TRANS FAT, have a 1 in 20 chance of being a false positive? If so, could you cite a source or example? It’s good to have a healthy and skeptical view of statistics until all the details have been examined, but you have not demonstrated how this applies to your argument.

I believe that this number depends on the number of standard deviations used to determine a correlation. With two standard deviations the confidence level is about 95%, or a 1 in 20 chance of a false positive. It is fairly common to use two standard deviations, but it is not necessary or universal. It would be a simple process to analyze the data to a higher certainty level. To determine the probability of a false positive one must look at the individual study itself.
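The “two standard deviations ≈ 95%” rule of thumb can be checked directly. Here is a small sketch (illustrative, not from the post) using the normal error function from Python’s standard library:

```python
import math

def coverage(k: float) -> float:
    """Fraction of a normal distribution within k standard deviations of the mean."""
    return math.erf(k / math.sqrt(2))

print(round(coverage(2), 4))  # ~0.9545: roughly 1 in 20 falls outside
print(round(coverage(3), 4))  # ~0.9973: a stricter, three-sigma cutoff
```

Moving from two to three standard deviations is exactly the kind of “higher certainty level” choice described above.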

Sorry, but I can be hard to understand sometimes.

You collect a set of data. This set is called an event. Each time you collect data you will get a different set of numbers, and so a different event. You put the numbers through some mathematics, which tells you how likely a result at least as extreme as yours was to occur by chance. The value this gives you is called the p-value. So, for example, you might find that the particular set of numbers you got when you conducted your research had a p-value of 0.02, or 2/100, or 2%. A set of numbers like that is rare enough that it would happen by chance only 2 times in 100.

You name a confidence level. The normal level is 95% (95/100, or 0.95). Which is 100% - 5% (or 5/100 or 1/20 or 0.05). 95% is not particularly good; it is used for historical reasons.

Then you make the following argument. If numbers like these are common, then you say they happened by chance. If not, then you say something must be responsible for them being this way. The cut-off point is your confidence level. So your set of data happens 2% of the time, 2% < 5%, so you can say that it probably didn’t happen by chance, i.e. that something must have caused it to happen, i.e. that you have found statistical significance at the 95% level.
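As a concrete sketch of this decision rule (the coin-flip data here is invented for illustration): 15 heads in 20 flips of a supposedly fair coin happens to give a p-value of about 2%, which falls below the 5% cutoff.

```python
import math

def binom_p_value(heads: int, flips: int) -> float:
    """One-sided exact p-value: chance of at least this many heads from a fair coin."""
    extreme = sum(math.comb(flips, k) for k in range(heads, flips + 1))
    return extreme / 2 ** flips

p = binom_p_value(15, 20)
print(round(p, 4))  # ~0.0207, i.e. about 2%
print(p < 0.05)     # True: significant at the 95% level
```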

This is basically what a statistical test is. They can be more complex, but there is always an associated confidence level, and in practice it’s nearly always 95%.

The problem comes in when you do multiple tests. If something happens only 5% of the time, then it’s going to happen in 5% of the tests that you do. So in every test that you do there is a 5% chance that you will get a result that looks significant, but that really has occurred by chance. That’s called a false positive. 1 in 20 tests done return a false positive.

Jim

Your argument is true as far as it goes, but misleading in spots.

The confidence interval for any study is a matter of choice, one that is normally decided upon before the study begins, but not always. It is not uncommon to see results that are declared to be statistically significant at the .05 level but not at the .01 level, for example. A larger sample size can affect the results because one can safely choose a stricter significance level.

The major study from which trans fats results have been taken is the Harvard Nurses’ Health Study, which contains 120,000 participants. Other epidemiological studies also have large populations and consequently extremely low significance levels (the lower the better).

A summary page on the Harvard site demonstrates this:

0.05 would produce one false positive in 20. However, 0.0001 would only produce one false positive in 10,000.

You can throw out generalities about bad statistics, but that says absolutely nothing about the particular studies used. It’s highly unlikely that all of these are statistical nonsense or that the Harvard School of Public Health somehow hasn’t noticed.

This is not the same as saying that the case is closed and future studies might not give better results. Or that some studies haven’t already given different results. Causation in epidemiological studies is one of the hardest problems to work out: too many conflating factors. However, I don’t find any of your screed to be very convincing. At least give a cite and a link to the particular study you mention so that we can verify your interpretation of it.

In my opinion (not a scientifically justified one), even a 1 in 20 chance of the study being incorrect is fine, because it validates my understanding of the way fat deposits behave. We are all fairly well convinced that saturated fats are worse than cis-unsaturated fats. This is attributed to the way that saturated fats structure themselves. A cis-unsaturated fat necessarily has a kink in the structure that prevents efficient stacking, so it does not collect as well in deposits. Saturated fats, on the other hand, stack very efficiently because there is no structurally enforced kink. The result can easily be observed in the melting points of the pure substances: a cis-unsaturated fat will invariably be an oil at room temperature, while a saturated fat will be a solid. This is why they hydrogenate vegetable oil in the first place.

The trans-unsaturated fats have an enforced “linearity”, so they too will stack very efficiently. Once again, the result can be seen in the melting point of the pure substance. (Look at the Wikipedia article on trans fats; it gives some good visual demonstrations.) In addition, trans-unsaturated fats are basically unnatural, so the body won’t automatically know how to deal with them.
To me, the study only confirms what I would expect. In my opinion, trans-fats are poison and should be treated that way. They should be regulated and eliminated. I don’t know why anyone would support trans-fats unless they worked for an industry that depends on them. You can put them in your body, but I would like to avoid them. I think they may be very bad.

Absolutely true, and studies that use higher confidence levels are the ones that tend to be more reliable. I was speaking generally, however, and in general most studies use 95%.

The study that showed that fats in general are harmless is drawn from the same data.

The important thing to remember about the Harvard Nurses Study is that it is a data dredge. It contains data on thousands of conditions and thousands of factors. The researchers regularly pull data from it and at least once in every twenty tries they find something they can publish. :slight_smile: The negative results it produces are the useful ones, because the size of the study reinforces them so heavily, but they rarely get published.

So you have to ask, how many studies have been conducted and how many tests did they involve? If 40 researchers have done an average of 5 studies each, each involving an average of 50 tests, that’s your 10,000 tests and you should see 1 that is significant at 0.0001. And quite a lot that are significant at lesser levels. How much trawling was done through the data, and how many tests carried out, to get the ones you quoted?
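The back-of-envelope arithmetic here (using the post’s hypothetical researcher and study counts) works out as:

```python
# Hypothetical numbers from the post: 40 researchers, 5 studies each,
# 50 tests per study. How many chance "hits" should that volume produce?
researchers, studies_each, tests_per_study = 40, 5, 50
total_tests = researchers * studies_each * tests_per_study

print(total_tests)            # 10000 tests in all
print(total_tests * 0.0001)   # ~1 expected false positive at the 0.0001 level
print(total_tests * 0.05)     # ~500 expected at the usual 0.05 level
```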

I don’t know whether trans-fats are bad or not. I do know that if they are harmful, and if there were a reasonable percentage of them consumed in the studies published in JAMA, they should have swayed at least one of the four results into significance. The odds against them doing so have a lot more than four zeros after them. :slight_smile:

The studies are:

Beresford et al, Low-Fat Dietary Pattern and Risk of Colorectal Cancer, JAMA 295 (6) p 643

Howard et al, Low-Fat Dietary Pattern and Risk of Cardiovascular Disease, JAMA, 295 (6) p 655

Prentice et al, Low-Fat Dietary Pattern and Risk of Invasive Breast Cancer, JAMA, 295 (6) p 629

1 in 20 chance of a test being incorrect. If the study required 20 tests to be carried out, then it would be expected to produce one false positive. If 20 tests produced one result, that would be the false positive.

Jim

You cannot combine 200 small studies into one large study and claim any better confidence interval for the mass. That would indeed be an example of the statistical chicanery you referred to. However, I know of no set of data that does anything of the sort.

Looked at fat consumption on risk of colorectal cancer.

Looked at weight gain in women.

Looked at fat and the risk of breast cancer.

None of these studies could possibly be invoked to say one way or the other whether trans fats affects LDL levels and the consequent risk of heart attack or stroke. I still don’t understand how they are relevant to your argument.

I don’t know where you got the idea that “Low-Fat Dietary Pattern and Risk of Cardiovascular Disease” is a study on weight gain in women. As the name says, it’s an attempt to link fat consumption to cardio-vascular disease.

The good folk at Harvard decided to prove once and for all that fat was bad for you. They picked out three things fat was supposed to be responsible for: breast cancer, bowel cancer, and cardiovascular disease, and dug through their data for the numbers. They did five tests, looking for links with bowel cancer, breast cancer, cardiovascular disease, heart attacks, and the last two taken together, and found no link whatsoever. Those three papers were the result. Given the size of the categories tested, it’s about as conclusive as you could get without real research.

Jim

I thought I took that off the same page of JAMA abstracts as the other two, but I don’t know now how I got that statement. It’s wrong.

However, so are you in your overall conclusions. And in several ways.

First, you misinterpret the overall findings from those particular studies. You can find any number of interpretations of the study. Here are some trenchant comments:

http://docnews.diabetesjournals.org/cgi/content/full/3/4/3

Exactly the opposite of your conclusions, in other words.

Second, you refuse to mention any of the number of studies looking directly at trans fat and its effects. You cannot simply dismiss those.

It may yet turn out to be true that trans fat play a limited role. As I have said and will continue to say, epidemiological studies are hard to control and hard to interpret. However, any statement that these particular studies dismiss trans fat worries are fallacious.

Sorry, wrong data dredge. I should have said the Women’s Health Initiative. :frowning:

Isn’t post-hoc reasoning wonderful?

“Together, from a statistical perspective, these differences between the original design and reality led to a problem of having only a 40% chance of finding a true difference (if one existed) in CHD incidence between the dietary intervention group and the comparison group.” The odds of a false negative depend on the size of the categories and the strength of the supposed increase. Given the size of the categories, the odds of a false negative are astronomical. Adjusting by 40% still leaves you with odds that are astronomical.

“Second, the CHD incidence rate in the comparison group was only about two-thirds of what was expected in the original design.” According to the American Heart Association, the rates per “100,000 people were […] 125.1 for white females and 160.3 for black females (preliminary)” in 2003. The rate per 100,000 in this survey was (according to me) about 208. That’s not 2/3. Even comparing against the rate for black males (241.1, the highest) doesn’t get to 2/3. You need a rate of around 311. So that 40%'s been inflated.

What does it say when a bunch of researchers spend 8 years trying to prove a point, and then when they don’t get the result they want attack their own research?

“Second, you refuse to mention any of the number of studies looking directly at trans fat and its effects. You cannot simply dismiss those.”

I’m not dismissing them, I’m saying that they are little pinpricks whereas the WHI study is a whopping great hole. Statistical testing is all about probability. It would take a lot of positive results to counterbalance the negative one offered by the WHI.

Jim

I noted that you said "It’s not about Big Brother, you dopes. (“Will they ban sugar and salt next?” Sheesh.) We’re talking about an industrial product used in food preparation because it’s cheap and convenient, not because it makes anything taste better." And isn’t that the point! The argument is not whether “The Government” will take away my right to eat trans fat. The argument is: do industry/business interests have the right to feed you a non-food item that is unhealthy for you? Does business have the right to feed you anything they want? Don’t you have a right to be protected from industry putting non-food items into your meal? Can business put plastic in your meal and that’s OK? Can pharmaceutical companies put arsenic in your medicine, as they did with patent medicines before Teddy Roosevelt started the FDA to protect people against such doings? Of course not! So why should food companies be able to put additives in your food that are at the very least unhealthy and at most lethal?

One big disappointment with the article: the main question is whether trans-unsaturated fats are worse than saturated fats. I agree with Cecil that the data are pretty convincing that trans fat consumption increases the risk of cardiovascular disease. But so does saturated fat consumption.

The reason this matters is the motivation for the original article: we’re in the midst of another food scare. Trans fats have become a boogeyman, and food manufacturers are making a big show of eliminating them. But what are they replacing them with? Fried-food makers can make do with low-saturation oils (KFC, for example). But many foods require fats that remain solid at room temperature (Oreos, for example), and in these cases manufacturers are using things like palm oil that are just as saturated and probably just as bad.

This is becoming another example of a depressing trend of demonizing one food category as a quick fix for all our eating woes, and as before the solution may be more dangerous than the problem (wow, Oreos are trans-fat-free! Now I can eat a whole bag!).

I am alarmed and confused by this assertion:

Why would you measure per serving rather than measure a larger amount and divide by the number of servings?

For instance:

if something is 0.4 grams per serving and you have 1,000 servings worth of it, it should be rather easy to find that that amount of content has 400g of trans fat.

At least you’d be able to see that 1,000 servings of it doesn’t have 0.0 grams of trans fat, as a container of 12 servings that are “hard to measure” would be allowed to assert.
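A minimal sketch of the labeling quirk being described, assuming the commonly cited US rule that trans fat under 0.5 g per serving may be declared as 0 g (the function and numbers here are mine, for illustration):

```python
def label_grams(grams_per_serving: float) -> float:
    """What the label may show for trans fat in one serving.

    Assumes amounts under 0.5 g may be declared as 0 g, and larger
    amounts are rounded to the nearest half gram.
    """
    if grams_per_serving < 0.5:
        return 0.0
    return round(grams_per_serving * 2) / 2

per_serving, servings = 0.4, 1000
print(label_grams(per_serving))   # label: 0.0 g per serving
print(per_serving * servings)     # actual: 400.0 g across 1000 servings
```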

I don’t understand the rationale behind what a serving is, and it is clearly a wretched standard if it essentially allows junk-food manufacturers the ability to hide in mathematical obscurity.

Why not, then, list any product at a serving amount that allows someone to list 0 trans fats? Does this small container of dip really have 12 servings? Is a bag of chips really 7 servings? Ludicrous.