If a marketing team lacks someone who understands the research and science behind a product, problems with messaging and labeling can ensue.

Risa Schulman, PhD, President

August 11, 2015

The fallacy of the PubMed punch-up: Don’t let the numbers fool you

Most product developers know that research is the underpinning of any dietary supplement ingredient and the foundation for developing a marketing story. But only a portion of companies have a science person on the development team to help them really understand what that research is saying. This can lead to a whole slew of problems when developing messaging points and labeling. Leaving aside unsubstantiated or non-compliant claims (which are prone to happen when the science is not well understood), I want to focus on one particular inaccuracy I see all the time. I call it the PubMed punch-up.

It goes like this: Someone involved with developing the marketing copy takes the name of the ingredient in question and a health benefit, such as, “ingredient X and heart,” puts it into the search box in PubMed and clicks “enter.” Up comes the number of references the search engine found for that term. A sufficiently impressive number then becomes the talking point: “100s of peer-reviewed scientific studies on ingredient X,” or, more boldly, “100s (or 1000s!) of studies proving ingredient X’s benefit.”
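For the technically inclined, that raw hit count is all the punch-up actually captures. Here is a minimal sketch in Python of the same one-click search, using NCBI’s public E-utilities API (the search term is a placeholder, not a real ingredient; heavy use of the API should include an api_key parameter per NCBI’s usage policy):

```python
import json
import urllib.parse
import urllib.request

# NCBI E-utilities esearch endpoint for PubMed (public).
ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_hit_count(term: str) -> int:
    """Return the raw number of PubMed records matching a search term --
    the headline number, before anyone screens a single study."""
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": term,
        "retmode": "json",
        "retmax": 0,  # we only want the count, not the record IDs
    })
    with urllib.request.urlopen(f"{ESEARCH_URL}?{params}") as resp:
        data = json.load(resp)
    return int(data["esearchresult"]["count"])

# "ingredient X" is a stand-in; substitute any real ingredient name.
print(pubmed_hit_count('"ingredient X" AND heart'))
```

Note that the count is all this returns: a tally of records matching the words, with no information about what any of those records actually studied.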

Now, if this is true, there is nothing wrong with these statements (aside from the use of the word “proving,” which we’ll leave for another day), but most often, things are hairier than that. If you dig deeper into those studies, usually only a fraction of them talk about the benefit of interest, and an even smaller fraction is relevant for substantiating the claim.

Let’s take an example. I searched on the name of a fairly well-known, moderately well-researched ingredient in the area of joint health, otherwise chosen at random. The name alone brought up 431 references. The name plus the joint health benefit brought up 96 references. Then I dug in. I considered any study involving joint function, inflammation and pain to be relevant to the claim of interest.

Out of 96 references there were:

  • 6 human studies testing the benefit of interest

  • 0 animal studies testing the benefit of interest

  • 2 in vitro studies testing the benefit of interest

  • 20 reviews/overviews

and

  • 17 human studies on the benefit of interest, but using other ingredients

  • 33 studies using the ingredient, but for other benefits

  • 18 “other” (safety [3], study protocol with no data [1], complementary health use surveys [3], ethnobotany [7], veterinary use [1], totally unrelated [3]).

For the purposes of naming the number of studies that show the efficacy of the ingredient for the benefit of interest, the total is six. Maybe eight, depending on the nature of the in vitro studies. (A reminder that review/overview articles are excellent tools and can be useful for substantiation, but mostly recapitulate already published data. Meta-analyses, on the other hand, can offer additional data.)

And, properly speaking, the six human studies would have to be evaluated individually to determine whether they were well done and whether the data strongly support efficacy for the benefit of interest. But even so, six human studies is actually a very nice number from which to begin compiling substantiation for a claim. However, it is a far cry from the original 96, or 431.
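Pulling the records behind the count is where that real work starts. As a rough sketch of the first triage pass, again in Python against the public E-utilities API, one can at least separate reviews and trials from everything else using PubMed’s own publication-type tags (a coarse filter only; deciding whether a trial tested this ingredient for this benefit still means reading the paper, and the search term remains a placeholder):

```python
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def fetch_summaries(term: str, retmax: int = 100) -> list:
    """Run an esearch, then pull esummary records for the matching PMIDs."""
    q = urllib.parse.urlencode({"db": "pubmed", "term": term,
                                "retmode": "json", "retmax": retmax})
    with urllib.request.urlopen(f"{EUTILS}/esearch.fcgi?{q}") as resp:
        ids = json.load(resp)["esearchresult"]["idlist"]
    if not ids:
        return []
    q = urllib.parse.urlencode({"db": "pubmed", "id": ",".join(ids),
                                "retmode": "json"})
    with urllib.request.urlopen(f"{EUTILS}/esummary.fcgi?{q}") as resp:
        result = json.load(resp)["result"]
    return [result[uid] for uid in result["uids"]]

# First-pass sort by PubMed's publication-type tags. This only separates
# reviews and trials from the rest; relevance to the ingredient and the
# benefit of interest still has to be judged by hand.
for rec in fetch_summaries('"ingredient X" AND joint'):
    pubtypes = rec.get("pubtype", [])
    bucket = ("review" if "Review" in pubtypes
              else "trial" if any("Trial" in p for p in pubtypes)
              else "other")
    print(f"[{bucket:6}] {rec['title']}")
```

Even a filter like this only narrows the reading list; every category in the tallies here came from looking at the records themselves.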

Here is another tally, one that shows a different but typical kind of pattern, this time for a well-known heart health ingredient. I searched “ingredient and heart” and came up with 250 hits. In reviewing the references, I cast a very wide net, including anything related to arteries, circulation, blood pressure, cholesterol, antioxidant activity and pertinent blood components.

There were:

  • 5 human studies testing the benefit of interest

  • 9 animal studies testing the benefit of interest

  • 1 in vitro study testing the benefit of interest

  • 15 related reviews/overviews

and

  • 2 human studies using the ingredient, but for other benefits

  • 23 animal studies using the ingredient, but for other benefits

  • 13 in vitro studies using the ingredient, but for other benefits

  • 18 unrelated reviews/overviews

  • 49 “other” (adverse events [1], metabolic constituents [1], processing [1], testing somewhat related ingredients [8], testing completely unrelated ingredients [14], totally unrelated topics [24])

The total number of studies out of 250 that show the efficacy of the ingredient for the given health benefit? Five, with a maximum of 15 if these particular animal and in vitro studies are useful for claim substantiation. (Animal studies can sometimes provide important mechanism-of-action data, but for the purposes of showing efficacy of an ingredient in humans, they are usually of marginal import.)

For this example, I broke out the studies in finer detail, to underscore the necessity of going through the results of a search carefully to see what they actually cover. Of the total human, animal and in vitro studies cataloged above, fully 80 percent were unrelated to the benefit of interest, and at least 30 percent were unrelated to the ingredient altogether.

(You may have noticed that the bullet points for this example total only 135 references, not 250. After reference number 135, the refs were so far off the mark that cataloging them further didn’t add anything, which drives the percentages just noted even higher.)

This is not to say that there aren’t ingredients that are very well researched and have higher numbers of relevant studies (perhaps 10 to 30 relevant human studies, some good meta-analyses and detailed mechanism-of-action papers), but these most likely have many hundreds or thousands of hits (I spared myself cataloging so many!), and the pattern seen above would remain the same (comment below if you want to find out why).

So I think you can now understand why breaking off in the middle of a meeting to punch up the results of a PubMed search, to prove a point about how much research there is, most often does not deliver the level of evidence you think it does.

And you can also understand why, when I see “100s of studies showing ingredient X works” (and certainly if it says 1000s!), what I read is, “I am sensationalizing my science and haven’t had an expert look at this!”



What other issues can arise when the science and research isn't well understood by a company's entire team?

About the Author(s)

Risa Schulman

PhD, President, Tap~Root

Risa Schulman, PhD, is a functional food and dietary supplement expert, professional speaker and writer. Drawing on 18 years of experience on the leadership teams of companies such as POM Wonderful, Solgar Vitamins and Mars Botanical (a division of Mars, Inc.) and 7 years as a consultant, she now heads the Tap~Root consulting firm. Dr. Schulman and her team assist prominent and pioneering food, dietary supplement and cosmeceutical companies, ingredient suppliers and companies shifting into these spaces in straddling the science-regulatory-marketing challenges of product development and launch. The company also provides technical and scientific expertise to companies investing in or consulting to these industries, including law firms, investment companies and design firms.

