Meta-analysis claims its latest victim: glucosamine

By Duffy MacKay, N.D., Vice President, Scientific and Regulatory Affairs, Council for Responsible Nutrition

Every day millions of Americans search for relief from joint pain. Options run the gamut from prescription and OTC drugs to exercise, diet, and dietary supplements. None of these options is perfect for everyone, and several carry risks that may outweigh their benefits. Clinical studies have demonstrated that glucosamine and chondroitin can slow the loss of cartilage and fluid in joints and provide relief from pain. So glucosamine and chondroitin remain viable oral therapies with the best risk/benefit ratio among the available products that address the gradual wear and tear on joints.

Then along comes the most recent meta-analysis of glucosamine and chondroitin published in the British Medical Journal (Wandel et al. Effects of glucosamine, chondroitin, or placebo in patients with osteoarthritis of hip or knee: network meta-analysis. BMJ. 2010;341:c4675), which arrived with these headline-grabbing recommendations:

  • "Coverage of costs by health authorities or health insurers for these preparations and novel prescriptions to patients who have not received other treatments should be discouraged."
  • "Health authorities and health insurers should not cover the costs of these preparations, and new prescriptions to patients who have not received treatment should be discouraged."

Despite having published the study, the British Medical Journal (BMJ) editors and statistical advisers conducted a post-publication analysis of the article at their annual review meeting. The BMJ has now concluded that it does not support these statements, declaring that they are not directly based on the article's results. But the mea culpa comes a little too late, because the damage has been done.

The methodology of this meta-analysis was heavily (and, for the record, justifiably) criticized for breaking the cardinal rule of meta-analysis: you should not combine studies with significantly different designs that measure different outcomes, i.e., studies that are simply too dissimilar to point to a single conclusion. In this case, the authors, armed with no new data, performed a mash-up of old data from studies designed to evaluate pain outcomes with others designed to look at physical cartilage changes on x-ray. Furthermore, the investigators excluded randomized controlled trials (RCTs) involving fewer than 100 subjects as well as studies using lower doses. Of the 58 reports they identified in the literature, they included data from only 10 RCTs. This methodology conveniently excluded a large portion of the evidence base but left the authors with 10 very dissimilar studies.

Scientists perform meta-analyses because a single clinical trial can fail to give clear-cut results that generalize to a broad population, particularly when the patient numbers in any one trial are insufficient. Meta-analyses use statistical techniques to combine the results from similar trials, increasing the effective number of subjects and supporting more general conclusions. Even so, meta-analysis is not an exact statistical science that provides definitive clinical guidance to practitioners facing complex clinical problems like arthritis. Interpretation of meta-analyses is inherently limited by statistical issues arising from heterogeneity (design differences) among the studies analyzed. Experts agree that extrapolating meta-analysis findings to clinical practice must be done cautiously.
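To see why heterogeneity matters, here is a deliberately simplified sketch in Python of the kind of arithmetic behind a fixed-effect meta-analysis: each study's result is weighted by the inverse of its variance, and Cochran's Q and the I-squared statistic indicate how much of the spread between studies goes beyond chance. The effect sizes below are made up for illustration and are not data from the Wandel et al. paper.

    # Illustrative sketch only: fixed-effect, inverse-variance pooling plus the I^2
    # heterogeneity statistic. The numbers below are hypothetical, NOT data from
    # the Wandel et al. analysis.

    studies = [(-0.30, 0.10), (-0.05, 0.08), (-0.45, 0.20)]   # (effect size, standard error)

    weights = [1 / se ** 2 for _, se in studies]              # inverse-variance weights
    pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)

    # Cochran's Q and I^2: how much of the between-study spread exceeds chance
    q = sum(w * (es - pooled) ** 2 for (es, _), w in zip(studies, weights))
    df = len(studies) - 1
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    print(f"Pooled effect: {pooled:.2f}")
    print(f"I^2 heterogeneity: {i_squared:.0f}% (high values argue against pooling at all)")

A real analysis would use a dedicated statistical package and, where appropriate, a random-effects model, but even this toy calculation shows the point: a single pooled number can conceal studies that were measuring very different things.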

Undoubtedly, meta-analysis has its advantages over the more subjective narrative literature review. A well-conducted meta-analysis using objective, quantitative methods to combine evidence from separate but similar studies can identify tendencies within the research and point toward overall trends in the literature. However, critics have noted that meta-analyses can be misused, employing statistical tricks to make unjustified assumptions or to produce oversimplified generalizations; say, if your agenda were to demonstrate that a treatment does not work and that third-party payers therefore should not reimburse for it. Perhaps the authors had underlying political motivations to discourage the British government from covering the cost of these supplements, especially since they are sold as drugs and are covered in other EU countries.

It is refreshing to see that the editors of BMJ decided to do the right thing and reassess the policy recommendations of the study. They concluded that the BMJ does not support these particular statements from the article and determined that those policy recommendations do not add usefully to the article or necessarily flow from the study's results. Unfortunately, the damage has already been done to the joint health category broadly: the over-reaching conclusions were disseminated in a respected journal and widely reported in the popular consumer press, which is unlikely now to report on the BMJ editors' clarification. At least we can applaud the BMJ editors for giving it a second look.

Read the post, "Report from BMJ post publication review meeting," by BMJ's Deputy Editor.

(The CRN Blog represents the view of the author and does not necessarily reflect the view of CRN’s Board of Directors or serve as an official position of the association.)
