Wednesday, 30 October 2013

Good Week for Saturated Fat!!

Saturated fat has had some good press over the last couple of weeks.

Dr. Perlmutter was on the Dr. Oz show extolling the benefits of butter. 

BMJ published an article by British cardiologist Aseem Malhotra in which he states that saturated fat and cholesterol are not the major issue causing heart disease. (1)

Catalyst, a news show on the ABC network in Australia, aired part 1 of a two-part series featuring many doctors and researchers who believe that saturated fat and cholesterol do not cause heart disease. (Part 2 will air next week.) It included a quote about George Mann, a retired professor of biochemistry and medicine at Vanderbilt University who worked on the Framingham Heart Study:

"One of the Framingham researchers became so dismayed with the results, he wrote a scathing review of the whole diet-heart hypothesis, saying that people had been misled 'by the greatest scientific deception of our times, the notion that animal fat causes heart disease'."

Recent Research


A 2010 meta-analysis published in the American Journal of Clinical Nutrition concluded that: (2)

"A meta-analysis of prospective epidemiologic studies showed that there is no significant evidence for concluding that dietary saturated fat is associated with an increased risk of CHD or CVD."

Earlier this year a group re-examined data from the Sydney Diet Heart Study, which was done in the 1970s.  In this study the experimental group decreased the saturated fat in their diet and increased the amount of omega-6 polyunsaturated fat. (Polyunsaturated fats are the ones the various health agencies claim are heart healthy.)

When originally published in the 70s it did not include any data on cardiovascular deaths.  I find this rather odd since it was called the Sydney Diet HEART Study.  Because the CVD death data was omitted, I have seen this study used to support the theory that saturated fat causes heart disease, since cholesterol was lowered in the low saturated fat group.

When the data was recovered and analyzed, it showed that the group eating more saturated fat had a lower rate of CVD deaths than the group eating more polyunsaturated fats, which led to the conclusion: (3)

"Advice to substitute polyunsaturated fats for saturated fats is a key component of worldwide dietary guidelines for coronary heart disease risk reduction. However, clinical benefits of the most abundant polyunsaturated fatty acid, omega 6 linoleic acid, have not been established. In this cohort, substituting dietary linoleic acid in place of saturated fats increased the rates of death from all causes, coronary heart disease, and cardiovascular disease. An updated meta-analysis of linoleic acid intervention trials showed no evidence of cardiovascular benefit. These findings could have important implications for worldwide dietary advice to substitute omega 6 linoleic acid, or polyunsaturated fats in general, for saturated fats."

If all this is true, why did all the health organizations start giving out dietary recommendations to cut saturated fat in the first place?

 Some bad science.

In the 1950s a researcher named Ancel Keys presented an observational study comparing the percentage of fat in the diet to deaths from heart disease in 6 countries, shown below.  Thanks to Peter from Hyperlipid for the graphs.

As you can see, Keys's graph showed a correlation between fat intake and heart disease; it is almost a perfectly straight line.  There were a few problems with his study.  First, it was an observational study, so although it can show correlations it cannot provide any information on cause and effect.  I discussed the limitations of observational studies in a previous post.

Keys used 6 countries (Japan, Italy, England and Wales, Australia, Canada and the USA) in his study, but data for 22 countries was available when he did it.  Choosing a different 6 countries shows a negative correlation between fat in the diet and deaths from heart disease.

When all 22 countries are plotted on a graph, the points are scattered all over and there isn't much of a correlation.  There are countries like the Netherlands that eat a lot of fat and have a low rate of heart disease deaths.  In Finland they eat less fat than in the Netherlands, yet their rate of heart disease deaths is over 3 times as high.  The red dots below are data added from some hunter-gatherer populations that eat a high fat diet and have very low rates of heart disease (Masai, Inuit, Tokelau and a few others).
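To see just how much cherry-picking can matter, here is a small sketch with made-up, purely illustrative numbers (NOT Keys's actual country data): one hand-picked subset of points gives a near-perfect positive correlation, a different hand-picked subset gives a strong negative one, and the full set shows almost no correlation at all.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical (fat % of calories, heart disease deaths) points.
# Subset A: looks like a perfect "fat causes heart disease" line.
subset_a = [(10, 1), (20, 3), (25, 4), (30, 5), (35, 6), (40, 7)]
# Subset B: the opposite story, drawn from the same imaginary dataset.
subset_b = [(15, 6), (20, 5), (25, 4), (30, 3), (35, 2), (38, 1)]
all_points = subset_a + subset_b

for name, pts in [("subset A", subset_a), ("subset B", subset_b), ("all", all_points)]:
    xs, ys = zip(*pts)
    print(f"{name}: r = {pearson(xs, ys):+.2f}")
# prints:
#   subset A: r = +1.00
#   subset B: r = -1.00
#   all: r = +0.16
```

Same imaginary dataset, three very different headlines, depending entirely on which points you choose to plot.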


Politics in the American Heart Association

Keys's study was initially met with skepticism.  The American Heart Association (AHA) did not immediately accept the diet-heart hypothesis.  They acknowledged the correlation in the study, but a correlation alone was no proof that reducing fat in the diet would reduce heart disease deaths.  Clinical trials showing that a reduction in dietary fat led to lower heart disease deaths needed to be done before they could give a diet recommendation to the American people.  Yet by 1961 the AHA had changed its position and was recommending a low fat diet.  Does this mean the clinical trials had been done that proved a high fat diet caused heart disease?  No, those trials had not been done.  What had changed was that Keys and a few other like-minded people were now on the AHA committee that made the decision to support Keys's research.

The government will make things better, right??

In 1968 a Senate Select Committee on Nutrition and Human Needs was formed, headed by George McGovern.  Originally its mandate was to eliminate malnutrition, but in the 70s it started looking into the link between diet and chronic disease.  At the time there was controversy in the scientific community over whether lowering the amount of fat in the diet would lead to improvements in health.  The committee sided with Keys and the AHA despite protests from other researchers.  When the dissenting researchers told McGovern that there wasn't enough evidence to make these recommendations, McGovern replied,

 “we Senators don’t have the luxury that a research scientist does of waiting until every last shred of evidence is in.”  

The following clip is from Tom Naughton's documentary "Fat Head":

They were so sure the theory that saturated fat causes heart disease was right that they decided to release the guidelines without proof they were correct.  They assumed that once the research was done it would validate the theory. (You know what they say about assumptions.)  For the last 40 years they have done study after study, spending billions of dollars, but have yet to prove that the theory is correct.  During that time people have been following that advice, and we have become fatter and sicker.  Obesity and diabetes rates are skyrocketing.  I am hoping that the recent publicity will convince people that the natural saturated fat we have been eating for thousands, if not millions, of years is healthy, and that the man-made vegetable oils, sugars and refined grains that we have only been eating for about 100 years are the real problems in our diets.

Wednesday, 2 October 2013

Re-Post: Why Nutrition Science Is So Bad

I am posting this article again to put it on the front page, as I will be talking to Phil on 1150 AM about it today.

When I first started looking into nutritional science, I was shocked at how badly it was done.  To be fair, nutrition science is very difficult.  There are so many lifestyle and nutrition choices that can have a positive or negative influence on health that it is hard to isolate the effect of any one food or group of foods.  There are two main types of studies: observational (epidemiological) studies and controlled experiments.  Tom Naughton has a really good video called “Science for Smart People” that explains the difference in an informative and humorous way.

The gold standard of science is the controlled experiment.  In a controlled experiment, you hold all variables constant except for the one variable you are studying.  For example, if you wanted to study the effect of a fertilizer on growing plants, you would plant two groups of the same seeds.  You would use the same soil and the same size pot, put the pots in the same area where the temperature and light exposure are the same, and give them the same amount of water.  You would put fertilizer in one but not the other.  If the plants in the fertilized pot grow faster and larger, this would support your hypothesis that fertilizer helps plants grow.

It is very hard to do this in a nutritional study.  The only way to be sure of the quality and quantity of food consumed would be to lock people in a metabolic ward where the food could be controlled.  You would have to weigh and measure all the food served, as well as all the food that wasn’t eaten.  Since most people are interested in being healthy for the rest of their lives, and not just for the next 6 months, these studies would have to last 10 to 20 years to be really meaningful.  Not too many people would volunteer to be locked in a metabolic ward for 20 years! (Well, maybe Al Bundy would volunteer.)  Even if you could do a study like this, it might not be relevant to the real world.  In a metabolic ward you have to stick to the diet; there are no other food choices.  In the real world, there are food choices on almost every street corner.  People have to be able to stay on a nutritional lifestyle long term for it to be useful.

Very few controlled studies are done in nutritional science.  Almost all the studies we hear about are observational studies.  In these studies, information is gathered on what people eat, usually with food frequency questionnaires.  The people are followed for a number of years and their health outcomes are observed.  Correlations are then made between the food people ate and their health outcomes.  These types of studies have many limitations, the biggest being that they cannot provide any information on cause and effect.  Observational studies are useful for coming up with a new hypothesis, but then the hypothesis has to be tested in a controlled experiment.  A couple of hypothetical studies from Tom Naughton's video might explain why.

If we were to do a study comparing the BMI of marathon runners to the BMI of the average person, we might find that running marathons is correlated with a lower BMI.  The conclusion of the study could say that running in marathons was linked, or correlated, or associated with a lower BMI.  It could not say that running in marathons causes you to become lean and have a lower BMI, even though most people hearing about this study would think exactly that.  It conforms to our preconceived notions about exercise and weight, so we assume that causation was proved by the study.  Another hypothetical study illustrates why this is not the case.  If we did a similar study but used professional basketball players and height, we would find that playing basketball is correlated with being taller than the average person.  Does that mean that playing basketball causes you to grow taller?  If you are 5’6” and want to be 6’ tall, can you play basketball for a few years and expect to grow?  Of course not!  Playing basketball doesn’t cause you to grow taller; it’s just that if you are tall you are more likely to play professional basketball.  The same logic applies to the other study: running marathons might not make you lean; it’s just that lean people are more likely to run marathons.

When something in a study (A) is correlated with something else (B), it is easy to jump to the false conclusion that A is causing B.  There is no way to know from an observational study alone: A may be causing B; B may be causing A; or a third variable, C, may be causing both A and B.  This third variable C is what is called a confounding variable.  An example of this is that ice cream sales in Florida are correlated with shark attacks.  Does this mean that eating ice cream causes shark attacks?  Maybe the sharks like the ice cream dripping down your chin, so they are more likely to attack.  Of course this is silly.  When it is hot, people eat more ice cream and they also go swimming more, which leads to more shark attacks.

Observational studies are full of confounding variables.  There are two main groups of people that influence the outcomes of nutritional studies: people who are very health conscious and do whatever they can to be healthy, and those who do not care about their health and make food and lifestyle choices purely based on pleasure.  People who are health conscious tend to have better health than people who are not.  They follow the health advice that has generally been given over the last 40 years: they smoke less, drink less alcohol, exercise more, take vitamins, eat fewer calories, eat less sugar, eat fewer refined processed foods, and eat more vegetables.  Any one of these variables could be contributing to their good health, but from an observational study you can’t tell which one.  You would need a controlled experiment to do that.
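The ice cream and shark attack example can be sketched as a quick simulation (the numbers are invented, purely for illustration): temperature drives both variables, so they come out strongly correlated even though neither causes the other, and the correlation shrinks once you only look at days with roughly the same temperature.

```python
import random

random.seed(0)  # make the illustration reproducible

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# The confounder: daily temperature.  It drives BOTH ice cream sales and the
# number of swimmers (and hence shark attacks); the two never affect each other.
days = 500
temp = [random.uniform(15, 35) for _ in range(days)]
ice_cream = [2.0 * t + random.gauss(0, 5) for t in temp]  # sales depend on temp
shark = [0.5 * t + random.gauss(0, 3) for t in temp]      # attacks depend on temp

r_all = pearson(ice_cream, shark)

# Hold the confounder roughly constant: only days between 22 and 28 degrees.
band = [i for i in range(days) if 22 <= temp[i] <= 28]
r_band = pearson([ice_cream[i] for i in band], [shark[i] for i in band])

print(f"correlation over all days:        {r_all:+.2f}")
print(f"within a narrow temperature band: {r_band:+.2f}")
```

The first number comes out strongly positive and the second close to zero: control for the confounder and the "ice cream causes shark attacks" correlation mostly evaporates.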

Another problem with observational nutritional studies is that the data collected is not very reliable.  It is usually collected using food frequency questionnaires, in which people are asked to recall what they ate in the last day, month, year, or even four years.  You can find an example of a questionnaire here.  Most people can’t remember what they ate last Tuesday, let alone three years ago.  People who see themselves as healthy tend to overestimate foods they consider healthy and underestimate foods they consider unhealthy.  Who wants to admit that their breakfast consisted of Twinkies and Oreos?  If the data the study is based on is not accurate, how useful is the study?

Some scientists have such a strong belief in what the outcome of their study will be that they become biased.  Since there are so many confounding variables, you can make the outcome of a study say pretty much whatever you want it to say.  Most studies will try to account for these variables, but it is almost impossible to know exactly how much of a part each one played in someone’s health.  Some researchers will use a third variable to link two items when there is no direct link.  A good example of this is saturated fat, cholesterol and heart disease.  Many studies (such as Dr. Jolliffe's Anti-Coronary Club experiment mentioned here) claim to show that saturated fat causes heart disease even when their data show that people who ate more saturated fat had a lower incidence of heart disease.  They do this by saying that saturated fat intake was associated with higher cholesterol, which is claimed to be a marker for heart disease (even though it has never been proven that high cholesterol causes heart disease).

This is why we get so many mixed messages from the so-called “nutrition experts”.  The way observational studies are reported gives the impression that they determine cause and effect when they really only show correlation.  In one group of people a certain food may be correlated with high cancer rates; in another group it may be correlated with low cancer rates.  The truth may be that the food has NO impact on cancer rates.  As long as we remember the limitations of these studies, we won’t get sucked into false assumptions.  So the next time you hear about the latest study telling you to stay away from a certain food, or that another food is a miracle cure, remember that 99% of these studies don’t actually prove anything.

It is time to stop funding these types of observational studies.  How many studies do we need that generate sensational headlines but do not add to our knowledge?  We have enough hypotheses about health and nutrition.  We need to start doing controlled experiments to find out which approaches to diet and nutrition result in healthy outcomes.  With obesity-related diseases and health care costs skyrocketing, we need to find these answers now.