By Dixie Vogel on February 17, 2020

Scientific Research

Sorting through fact and fiction

Imagine spotting the headline, "Study shows chocolate accelerates weight loss!"

If you continued reading, you'd learn that a team of German researchers from the non-profit Institute of Diet and Health published a weight-loss study in the International Journal of Medicine. Daily doses of dark chocolate were found to accelerate weight loss by 10 percent and lower cholesterol.

This was a real study, reported with similar headlines in 2015. I remember seeing articles about it widely shared on social media. Before you start stocking up on candy bars, there's some bad news.

The conclusions are bogus, even though the study itself wasn't fabricated. Instead, it was "engineered."

A Harvard biologist and science writer, John Bohannon, partnered with filmmakers working on a documentary about junk science in the diet industry to create the study and to prove a point.

A medical doctor did conduct an actual clinical trial. Subjects were recruited and randomly assigned different diets. Statistically significant benefits from eating chocolate were found, and the results were published accurately.

"It was, in fact, a fairly typical study for the field of diet research," Bohannon says. "Which is to say: It was terrible science. The results are meaningless, and the health claims that the media blasted out to millions of people around the world are utterly unfounded." 

For many non-scientists, reading the research paper wouldn't have set off alarms. So learning more about how this prank was pulled off can help us better evaluate research in the quest to become our own food gurus.

Let’s break it down piece by piece.

Was the Study Well Designed?

Sound experimental design requires controlling for as many factors that could impact the outcome as possible. The chocolate study assigned diets and sent subjects off with instructions but didn't monitor the food eaten. No effort was made to address additional factors expected to impact weight, such as calories consumed or activity levels. Ignoring reasonable alternative explanations for an outcome is a major reason to be suspicious.
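
To see how much an uncontrolled factor can matter, here is a minimal, purely hypothetical simulation in Python (all the numbers are invented for illustration): weight change is driven only by calorie intake, yet an apparent "chocolate effect" shows up simply because one group happens to eat less.

```python
# Hypothetical sketch: an uncontrolled confound (calorie intake) can create an
# apparent "chocolate effect" even when chocolate does nothing at all.
import numpy as np

rng = np.random.default_rng(42)
n = 20                                    # invented number of subjects

# Assume weight change depends only on daily calorie deficit -- not chocolate.
calorie_deficit = rng.normal(150, 100, n)              # kcal/day, uncontrolled
weight_loss_kg = 0.01 * calorie_deficit + rng.normal(0, 0.5, n)

# Deliberately exaggerated confound: the "chocolate" arm happens to eat less.
chocolate = calorie_deficit > np.median(calorie_deficit)

print(f"mean loss, chocolate group: {weight_loss_kg[chocolate].mean():.2f} kg")
print(f"mean loss, control group:   {weight_loss_kg[~chocolate].mean():.2f} kg")
# The gap is driven entirely by uncontrolled calorie intake, not by chocolate.
```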

How subjects are selected and assigned test conditions, how measurements are taken, the wording on questionnaires, and even unspoken expectations of the researchers can all compromise results. Ethical researchers realize this danger and attempt to minimize the impact of external factors. Expect a well-designed study to address this possibility and describe measures taken to counter it.

Who is Behind the Research?

Where a study was conducted, who authors the study, and where it was published can provide useful clues regarding credibility.

Studies conducted at accredited academic institutions must meet quality and ethical standards before being allowed to proceed. Studies done outside a university setting may certainly be worthwhile, but it's worth investigating the sponsoring organization and lead authors to uncover potential conflicts of interest.

Although a vested interest isn't hard proof a study is bogus, you can expect institutions conducting legitimate research to require disclosure. Discovering conflicts of interest that aren't transparently presented is a strong call to question credibility.

Research authors are positioned as experts, so a history of scientific publications and a degree in a related field are reasonable factors to consider as signals of expertise. In the chocolate study, "Johannes Bohannon" had no prior publications because the pseudonym was invented for the hoax. A Google Scholar search can help locate prior publications, although it doesn't verify the quality of the publishing journal.

In fact, the publishing journal matters. We depend on scientific journals to employ peer review, a process where scientists with appropriate expertise critically evaluate research submissions. The flaws in the chocolate study should have been identified in the peer-review process long before it was accepted for publication.

Unfortunately, a growing number of scientific-sounding journals claim to employ peer review but seem willing to publish any submission, provided the author pays a fee. Such a business model allows people with financial or ideological agendas to produce impressive-sounding research citations without impressive science. A pay-for-publication journal published the chocolate study.

It's not always obvious which journals are which, even to experienced scholars, so there are lists maintained to help identify those not employing peer review. Although far from foolproof, an online search with the name of a publication and the term "predatory journal" can aid in discovering if a journal has been identified as questionable.

Evaluating Research for Yourself

It's important to evaluate research with an eye toward skepticism. This is particularly true when making personal decisions based on science. Even well-intentioned researchers can suffer from unintentional bias. Solid science will weather healthy scrutiny.

While the thought of trudging through dense research papers may be intimidating, the chocolate hoax is a strong reminder it's simply not enough to accept media summaries as fact. It's not necessary to master the intricacies of statistical analysis or understand every word of a given research paper to develop an informed opinion about its legitimacy.

In the chocolate study, there were many indications something was amiss. The lack of control for known factors that impact weight, the short duration of the study, and the small number of participants are all clear nudges to question the conclusions. The chocolate study's heavy reliance on statistical significance, without presenting the raw data or employing additional, more robust statistical tests, adds to the list of causes for skepticism.
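
To make that statistical point concrete, here is a rough Python sketch. The group size and the number of outcomes measured are assumptions loosely modeled on the hoax; the point is only that a tiny study measuring many outcomes will often turn up at least one "statistically significant" result purely by chance.

```python
# Sketch of the multiple-comparisons problem: with many measured outcomes and
# no real effect anywhere, a "significant" finding still appears most of the time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group, n_outcomes, n_sims = 5, 18, 2000   # assumed, roughly hoax-sized

spurious_hits = 0
for _ in range(n_sims):
    for _ in range(n_outcomes):
        treated = rng.normal(size=n_per_group)   # no real effect in either group
        control = rng.normal(size=n_per_group)
        if stats.ttest_ind(treated, control).pvalue < 0.05:
            spurious_hits += 1                   # at least one outcome "works"
            break

print(f"Studies with at least one spurious 'significant' finding: "
      f"{spurious_hits / n_sims:.0%}")           # roughly 1 - 0.95**18, about 60%
```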

Anyone who delved deeper into the chocolate study might have discovered the primary author had no prior scholarly publications. They might also have realized the sponsoring organization was nothing more than a website front without any history, or even found reason to question the publishing journal's integrity.

Finally, those dedicated enough to embark upon a review of the scientific literature would not have located a body of work drawing similar conclusions. No matter how appealing the results, a single study does not make for reliable scientific evidence.

Identifying Good Research

Unfortunately, there isn't a single indicator that will consistently distinguish good science from impressive-looking noise. Best practices vary with the topic under study, but there are many readily identifiable criteria that can boost confidence in a study's validity. A good study may not meet every one of these guidelines, but it should meet most of them.

  • The research appears to be conducted at an established institution with a verifiable history of scientific work.
  • The research appears to be led by a qualified author with a history of related work.
  • The research appears to be conducted by parties without hidden stakes in the results.
  • The research appears to be published in a peer-reviewed, scientific journal.
  • A sizable group of subjects is sampled. While the experimental design and statistical models employed will determine how many subjects are required for meaningful conclusions, more is usually better (see the sketch after this list).
  • Subjects have been randomly selected and appear representative of the group being studied.
  • Subjects have been randomly assigned test conditions and if possible, followed over an extended period.
  • Test procedures are fully described, with attempts to control for any external or confounding variables clearly outlined.
  • A full reporting of the raw data, including the measurements taken and the statistical analysis, is presented, with a reasonable explanation for any data points excluded from consideration.
  • If statistical significance between groups is reported, additional statistical tests are also part of the overall analysis.
  • Statistical significance is not presented as proof of the original hypothesis.
  • Results are mostly consistent with existing research, or a rational explanation is offered otherwise. Results have been (or at least could be) replicated.
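
As a rough illustration of the sample-size point above, the Python sketch below assumes an invented "modest but real" effect and simulates how often studies of different sizes would detect it at all. The specific numbers are arbitrary; the pattern is the point.

```python
# Sketch of why larger samples inspire more confidence: with a genuine but
# modest effect, tiny studies usually fail to detect it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect, n_sims = 0.3, 2000          # a modest, genuine effect (invented)

for n in (10, 50, 200):                  # subjects per group
    detected = 0
    for _ in range(n_sims):
        treated = rng.normal(true_effect, 1, n)
        control = rng.normal(0, 1, n)
        if stats.ttest_ind(treated, control).pvalue < 0.05:
            detected += 1
    print(f"n = {n:3d} per group -> real effect detected in "
          f"{detected / n_sims:.0%} of simulated studies")
```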

As Bohannon asserts, "If some news article seems to be giving you diet advice—a bold claim like eating this or not eating that is good or bad—you don't have to read further. The scientific consensus has not crystallized around diet and human health outcomes to the degree that you can make any claims yet." 

Regardless of the field of research, however, it makes good sense to require more compelling proof for more dramatic claims. By becoming research-savvy, we can avoid being fooled and reserve votes of confidence for sound science. Then, we at least have the option to make decisions well-grounded in what has truly been discovered about the world so far.

Learn more about how statistics can fool us here.

Dixie Vogel

Dixie Vogel is a writer born and raised in northeast Kansas. Though she didn't grow up on a full-fledged farm, her earliest memories feature cows, pigs and poultry from her family's hobby farm. Dixie retained a deep respect for the work of farmers and ranchers, and seeks to share their stories and insight on modern...