
June 30, 2007

Figure Flaws — Did this diet really work?

News services around the world have reported on a “Family-based weight program effective for children and teens.” Not surprisingly, the stories were all taken from the Yale University press release announcing that its weight management program had been shown to be a success. But had it really?

The study said to prove the benefits of the program was just published in the Journal of the American Medical Association, and it’s typical of the strengths and foibles found in diet studies.

These “figure flaws” will astound you. So let’s start from the top.


Study design and interventions

The news correctly reported that this was a randomized clinical trial of 209 fat children. But that’s where accuracy in the news ended.

The study participants were healthy fat children (above the 95th percentile on CDC growth curves) enrolled at the Yale Pediatric Obesity Clinic and seeking to lose weight. The children had to have an involved parent or caregiver, speak English, and have no physical or psychological health problems. There was no true control group of similarly healthy fat children outside the clinical setting, receiving no intervention.

The children, average age of 12, were randomized, with two-thirds to receive intensive weight management and the rest to get standard clinical weight management, with the promise of enrollment in Yale’s Bright Bodies program after the study. The standard clinical intervention consisted of diet and exercise counseling by a registered dietician — which included the usual diet instructions to decrease juice and sweetened drinks, switch to low-fat milk and foods, read labels, bring healthy lunches to school, decrease sedentary activities, and get regular exercise — and psychological counseling by a social worker.

The intensive intervention group was secondarily randomized into a structured diet plan group and a group undergoing Yale’s proprietary Bright Bodies program. However, 83% of the kids dropped out of the diet plan group during the first 6 months and the researchers had to abandon that arm of the study.

Bright Bodies is a program created ten years ago by dietician Mary Savoye-Desanti. It uses a curriculum called Smart Moves, which includes intensive nutrition classes to teach young people, and their parents and caregivers, how to make “better food choices” with low-fat foods and portion control. Behavioral modification classes teach children to “become more aware of overeating triggers” and how to control urges to eat, and parents are taught to role model healthy behavioral change. Finally, “children are encouraged to exercise 30 to 40 minutes five times per week,” with high-intensity cardiovascular workouts supervised by exercise physiologists. According to Salon writer Amy Benfer this week, Bright Bodies doesn’t hesitate to confront really fat kids about their eating and exercise habits, believing that being overweight is a health issue requiring aggressive intervention. Responding to those concerned that calling attention to the fact that kids are fat could make them feel bad about themselves, Ms Savoye said she believes being overweight is worse for their self-esteem.

For the first six months of this study, the kids attended exercise sessions twice a week (100 minutes per week) and were instructed to exercise at home three additional times a week; they and their families attended Smart Moves nutritional/behavioral sessions once a week; and the kids had weekly weigh-ins. During the final six months, they attended weekly exercise and nutrition/behavioral modification classes and had biweekly weigh-ins.


So what happened?

Only 28 kids from the standard clinical weight management group stayed in the program for the full year — meaning 60% dropped out.

Only 56 kids in the Bright Bodies intervention group were still in the study at the end of the year — meaning 53% dropped out.

So, this study ended up with data on only 84 of the original 209 participants — 40% of what they started out with! It would seem a lot easier to show favorable results when one eliminates 60% of the unfavorable ones, wouldn’t it?
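
For the curious, here’s a quick back-of-the-envelope check of those percentages in Python, using only the numbers reported above; the starting group sizes are inferred from the stated dropout rates, not quoted from the paper.

```python
# Rough check of the dropout arithmetic described above.
# Starting group sizes are back-calculated from the stated dropout
# percentages -- estimates for illustration, not figures from the paper.
total_randomized = 209

completers_standard = 28       # finished standard clinical management
completers_bright_bodies = 56  # finished the Bright Bodies arm
completers = completers_standard + completers_bright_bodies  # 84

retained = completers / total_randomized
print(f"completers: {completers} of {total_randomized} "
      f"({retained:.0%} retained, {1 - retained:.0%} lost)")

# Back out the implied starting sizes from the reported dropout rates.
implied_standard_start = completers_standard / (1 - 0.60)            # ~70
implied_bright_bodies_start = completers_bright_bodies / (1 - 0.53)  # ~119
print(f"implied starting sizes: about {implied_standard_start:.0f} standard, "
      f"{implied_bright_bodies_start:.0f} Bright Bodies")
```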

But the researchers included all of the original kids from both intervention groups in their analysis. How’d they do that?

The magic of computer modeling. Here’s how they described it:

Multiple imputation with data augmentation under the multivariate normal model using PROC MI from SAS was performed to impute missing outcome data. The details of this process are described by Allison. The imputation was conducted on continuous missing data with log transformations applied for normality where necessary.

Did you get that? :)

Dr. Joseph Schafer, associate professor in the Department of Statistics and The Methodology Center at Pennsylvania State University, explained multiple imputation:

Imputation, the practice of ‘filling in’ missing data with plausible values, is an attractive approach to analyzing incomplete data. It apparently solves the missing-data problem at the beginning of the analysis. However, a naive or unprincipled imputation method may create more problems than it solves, distorting estimates, standard errors and hypothesis tests, as documented by Little and Rubin and others.

In Rubin's method for ‘repeated imputation’ inference, each of the simulated complete datasets is analyzed by standard methods, and the results are combined to produce estimates and confidence intervals that incorporate missing-data uncertainty. Rubin addresses potential uses of MI primarily for large public-use data files from sample surveys and censuses. With the advent of new computational methods and software for creating MI's, however, the technique has become increasingly attractive for researchers in the biomedical, behavioral, and social sciences whose investigations are hindered by missing data.

If this sounds to you like it’s a fancy way of saying they filled in the blanks with guesses, you’re not the only one.
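
To see what that means in the concrete, here is a minimal sketch of the idea in Python (not the study’s SAS code), using entirely made-up numbers: missing values are replaced with random draws from a model fit to the observed data, the analysis is repeated on each “completed” dataset, and the results are pooled with Rubin’s rules so the reported uncertainty reflects how much was guessed.

```python
# Minimal multiple-imputation sketch with made-up numbers -- not the
# study's SAS PROC MI code, and simplified (a full implementation would
# also draw the model parameters from their posterior, as PROC MI's
# data augmentation does).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 12-month weight changes (lb); NaN = child dropped out.
observed = np.array([1.5, -2.0, 0.5, np.nan, 3.0, np.nan, -1.0, np.nan, 2.5, 0.0])
missing = np.isnan(observed)

M = 20  # number of imputed ("completed") datasets
estimates, variances = [], []
for _ in range(M):
    filled = observed.copy()
    # Fill in the blanks with draws from a normal model fit to what was seen.
    mu, sigma = np.nanmean(observed), np.nanstd(observed, ddof=1)
    filled[missing] = rng.normal(mu, sigma, size=missing.sum())
    estimates.append(filled.mean())                   # analysis on this dataset
    variances.append(filled.var(ddof=1) / len(filled))

# Rubin's rules: average the estimates and widen the variance to reflect
# the extra uncertainty from guessing the missing values.
q_bar = np.mean(estimates)
within = np.mean(variances)
between = np.var(estimates, ddof=1)
total_var = within + (1 + 1 / M) * between
print(f"pooled mean change: {q_bar:.2f} lb, standard error: {total_var ** 0.5:.2f}")
```

The more the imputed datasets disagree with one another, the bigger that pooled standard error gets, which is the method’s way of admitting that more missing data means more guessing.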

The researchers said they also assumed the missing data were missing at random, meaning that whether a child dropped out was unrelated to the values that went unobserved. Dr. Paul D. Allison of the University of Pennsylvania, author of Missing Data, which is referenced in this study, has cautioned, however, about relying on the conditions described by Rubin: “The problem is that it’s easy to violate these conditions in practice. There are often strong reasons to suspect that the data are not missing at random. Unfortunately, not much can be done about this.” [To fill in missing data in their computer modeling, other diet researchers have carried forward the last recorded weight of drop-outs, assuming no regain.]

David L. Cassell, a mathematical statistician, senior computing specialist for the EPA, and co-director of Design Pathways in Corvallis, OR, is considered an expert on the SAS statistical software. His caution to researchers using PROC MI is especially relevant given the percentage of missing data in this study: “I also have seen it misused. Are you missing just a few percent of the observations? Okay, then MI should work well for you. Are you missing 30% of the data points? If so, then nothing is going to solve all your problems, not even PROC MI. So be forewarned...”


The findings

Despite all of this statistical help, few would view the findings of this study as convincing evidence of effectiveness.

After 6 months of intense intervention in the Bright Bodies program, the kids had lost an average of 5.7 pounds, but during the second half of the year they were rebounding, as is seen with all weight loss interventions. By the one-year mark they were already nearly a pound above where they started. Conveniently, the study stopped there.

The single most important point to recognize about this weight loss study, as with virtually all studies promoting a particular diet or weight loss plan, is that it’s too short-term to credibly demonstrate effectiveness. No sustained weight loss has been shown, nor have any of the temporary changes in health risk indices been shown to translate into actual health outcomes. The children were, on average, 12 years old at the start of the study, a period of especially rapid growth and development. Was yo-yo dieting of any benefit to their health?

As we’ve learned, the scientific evidence and obesity experts, including those with the National Institutes of Health and the Federal Trade Commission’s scientific expert committee, have concluded that no weight loss program can be evaluated until it demonstrates sustained weight loss for at least five years, because virtually all weight is regained within five years. That’s the rule, not the exception. Diet studies shorter than five years are little more than dieting stunts.

It is reasonable to ask why, if this program has been in existence for ten years, there has been no research to demonstrate its long-term effectiveness. These same researchers published a smaller, nearly identical short-term study more than two years ago — how are those 25 children today? Instead of following children for years to see how their program has affected long-term health, they conducted another short-term study. Nor has any study been done to see whether there have been long-term harmful effects on these children from dieting and the invariable yo-yoing, such as damage to their self-esteem, eating disorders, poorer academic performance, or harm to their health.

Ms Savoye-Desanti also admitted in the press release that the “expense incurred in operating such a program [as Bright Bodies Smart Moves] is substantial.” She promised that future studies will look at cost-benefit analysis.


The take-home message is found not in this particular study or program, but in the realization that it is by no means unique among weight loss studies. You won’t find a long-term study of any weight loss intervention — even the programs and diets that have been around for decades — proving it to be safe and effective for weight loss, or showing it to improve actual clinical health and longevity, because there aren’t any.


© 2007 Sandy Szwarc
