Most product developers know that research is the underpinning of any dietary supplement ingredient and the foundation for developing a marketing story, but only a fraction of companies have a science person on the development team to help them really understand what that research is saying. Leaving aside unsubstantiated or non-compliant claims (which tend to crop up when the science is not well understood), I want to focus on one particular inaccuracy that I see all the time.
I call it the PubMed Punch Up. (View the infographic here)
Someone involved with developing the marketing copy takes the name of the ingredient in question and a health benefit, such as, “ingredient X and heart,” puts it into the search box in PubMed and clicks “enter.” Up comes the number of references the search engine found for that term. A sufficiently impressive number then becomes the talking point, “100’s of peer reviewed scientific studies on ingredient X” or, more boldly, “100’s (or 1000’s!) of studies proving ingredient X’s benefit.”
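That raw hit count is trivially easy to pull programmatically, which is part of why it gets abused. Here is a minimal sketch of the "Punch Up" using NCBI's public E-utilities API; the search term is a placeholder, not a real ingredient query.

```python
# Minimal sketch of the "PubMed Punch Up": fetch the raw hit count for a
# search term via NCBI's public E-utilities esearch endpoint.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_url(term: str) -> str:
    """Build an esearch URL that returns only the match count (retmax=0)."""
    return EUTILS + "?" + urllib.parse.urlencode(
        {"db": "pubmed", "term": term, "retmax": 0})

def pubmed_hit_count(term: str) -> int:
    """Fetch the raw number of PubMed records matching `term`."""
    with urllib.request.urlopen(esearch_url(term)) as resp:
        return int(ET.parse(resp).getroot().findtext("Count"))

# e.g. pubmed_hit_count("ingredient X AND heart") returns a headline number
# that says nothing about how many hits actually test the benefit of interest.
```

The point of the sketch is that the number comes back with zero context: it is a count of records matching a text query, not a count of studies testing anything.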
Now, if this is true, there is nothing wrong with these statements (aside from the use of the word “proving,” which we’ll leave for another day), but most often, things are hairier than that.
If you dig deeper into those studies, usually only a fraction of them talk about the benefit of interest, and an even smaller fraction is relevant for substantiating the claim.
Let’s take an example. I searched on the name of a fairly well-known, moderately well-researched ingredient in the area of joint health that I otherwise chose at random. The name alone brought up 431 references. The name plus the joint health benefit brought up 96 references. Then I dug in. I considered any study involving joint function, inflammation and pain to be relevant to the claim of interest.
Out of 96 references there were:
• 6 human studies testing the benefit of interest
• 0 animal studies testing the benefit of interest
• 2 in vitro studies testing the benefit of interest
• 20 reviews/overviews
• 17 human studies on the benefit of interest, but using other ingredients
• 33 studies using the ingredient, but for other benefits
• 18 “other” (safety, study protocol with no data, complementary health use surveys, ethnobotany, veterinary use, totally unrelated)
For the purpose of counting the studies that show the efficacy of the ingredient for the benefit of interest, the total is six. Maybe eight, depending on the nature of the in vitro studies. Review/overview articles are excellent tools and can be useful for substantiation, but they mostly recapitulate already published data. Meta-analyses, on the other hand, can offer additional data.
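The gap between the headline hit count and the usable evidence can be made concrete with a quick tally of the categories above. The category labels below are my own shorthand for this example, not any official taxonomy.

```python
# Tallying the categorized references from the joint-health example above.
# Category labels are illustrative shorthand, not an official taxonomy.
counts = {
    "human, benefit of interest": 6,
    "animal, benefit of interest": 0,
    "in vitro, benefit of interest": 2,
    "reviews/overviews": 20,
    "human, benefit of interest, other ingredients": 17,
    "ingredient, other benefits": 33,
    "other (safety, protocols, surveys, etc.)": 18,
}

total_hits = sum(counts.values())                 # the 96 search results
relevant = counts["human, benefit of interest"]   # studies usable for the claim

print(f"{relevant} of {total_hits} hits directly test the claimed benefit")
# 6 of 96 hits directly test the claimed benefit
```

Six of ninety-six, before anyone has even read the six papers. The headline number overstates the directly relevant evidence by a factor of sixteen.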
Of course, the six human studies would still have to be evaluated individually to determine whether they were well done and whether the data strongly support efficacy for the benefit of interest. Even so, six human studies is actually a very nice number with which to begin compiling substantiation for a claim. However, it is a far cry from the original 96, or 431.
Here is another tally that shows a different, but typical, kind of pattern, this one for a well-known heart health ingredient. Searching “ingredient and heart” returned 250 hits. In my review of the references, I cast a very wide net, including anything related to arteries, circulation, blood pressure, cholesterol, antioxidant activity and pertinent blood components.
• 5 human studies testing the benefit of interest
• 9 animal studies testing the benefit of interest
• 1 in vitro study testing the benefit of interest
• 15 related reviews/overviews
• 2 human studies using the ingredient, but for other benefits
• 23 animal studies using the ingredient, but for other benefits
• 13 in vitro studies using the ingredient, but for other benefits
• 18 unrelated reviews/overviews
• 49 “other” (adverse events, metabolic constituents, processing, testing somewhat related ingredients, testing completely unrelated ingredients, totally unrelated topics)
The total number of studies out of 250 that show the efficacy of the ingredient for the given health benefit? Five, with a maximum of 15 if these particular animal and in vitro studies are useful for claim substantiation. Animal studies can sometimes provide important mechanism of action data, but for the purposes of showing efficacy of an ingredient in humans, they are usually of marginal import.
For this example, I broke the studies out in finer detail to underscore the necessity of going through a search’s results carefully to see what they actually cover. Of the total human, animal and in vitro studies catalogued above, fully 80% were unrelated to the benefit of interest, and at least 30% were unrelated to the ingredient at all.
(You may have noticed that the total number of references in the bullet points for this example equals only 135 and not 250. After reference number 135, the refs were so far off the mark that it didn’t add anything to catalog them further, which drives the percentages just noted above even higher.)
This is not to say that there aren’t ingredients that are very well researched and have higher numbers of relevant studies (like maybe 10-30 relevant human studies, some good meta-analyses and detailed mechanism of action papers), but most likely these have many hundreds or thousands of hits and the pattern seen above would remain the same.
So, I think you can now understand why breaking off in the middle of a meeting to punch up a PubMed search to prove a point about how much research there is most often does not deliver the level of evidence you think it does.
And, you can also understand why, when I see, “100’s of studies showing ingredient X works” (and certainly if it says 1000’s!) what I read is, “I am sensationalizing my science and haven’t had an expert look at this!”
But the implications of what I’ve outlined here go deeper than just the claims themselves. The deadliest part of the PubMed Punch Up, and other superficial methods of surveying the literature, is that it invites detrimental decision making about overall product efficacy.
I’ve had clients who, when I tell them that the research isn’t strong enough to support the main claim they’re going out with, say to me, “But I have a whole file of papers on this!” The problem is, they didn’t know how to interpret those papers in the context of scientific efficacy and the regulatory structure. This impacts product platform, main messaging, and even target market. It’s especially painful if this conversation comes up close to the launch date.
In the current climate where we are working hard to establish industry credibility by focusing on transparency and product quality, we must not overlook a twin concern of product integrity: evidence for efficacy. Without it, a product with an impeccable source pedigree will still fall short of the goal of bringing a superior offering that delivers on its promises. Bottom line: you can’t develop a great product if you don’t understand your science.
This undertaking calls for a highly specialized pool of talent that is in short supply: people who have a deep command of science but also of regulatory principles, and who can marry the two with compelling marketing in mind. This allows for integrated and solid development of the product platform and its resultant messaging, and it should be done very early in the process. It is preferable to schedule the “have it out” meetings with the R&D, regulatory, marketing and legal teams all present, since the outcome should be a product of synthesis rather than of compromise.
We need a concerted effort to train more people to be able to walk the tightrope wearing both science and regulatory shoes at the same time. It is easier to train those who already have a strong science background in the regulatory parameters than the other way around. Barring that, there are senior members of the industry who have a firm handle on this nexus and can be brought in as consultants. Either way, it is an essential step in elevating ourselves to the gold standard that we are so striving to meet.