The explosion of systematic reviews – have we gone too far?

A fascinating recent paper from Professor John Ioannidis of Stanford University, published in the Milbank Quarterly, has examined the explosion in the publication of systematic reviews and meta-analyses over recent years. Publications between 1991 and 2014 increased by 2,728% for systematic reviews and 2,635% for meta-analyses – far in excess of the rate of publication of new randomised controlled trials. Of concern, meta-analyses frequently overlap in scope, are published close together in time, and add no additional or novel information. Sometimes these studies reach identical conclusions; sometimes differences in inclusion criteria lead to differing conclusions, leaving the reader confused. Furthermore, Ioannidis estimates that up to 20% of meta-analyses go unpublished and 20% are flawed beyond repair. He estimates that only around 3% are both of decent quality and clinically useful.

A number of ways in which reviews can be improved are discussed, including standardised reporting, prospective registration of review protocols, improvements in trial design and consistency of outcome reporting, and greater availability of individual-patient data. The problem of duplication could be addressed by the use of collaborative tools that allow for continuously updated prospective meta-analyses as new data are published (such as open collaborative meta-analysis). Use of network meta-analysis tools would even allow data on new treatments for a condition to be added as they become available.
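To make the idea of a continuously updated (cumulative) meta-analysis concrete, here is a minimal sketch in Python. It assumes each trial reports an effect estimate (for example a log odds ratio) and its standard error, and simply re-pools the evidence with fixed-effect inverse-variance weighting each time a new trial is published; the numbers are illustrative only, and a real prospective meta-analysis would also involve a registered protocol, random-effects modelling and, ideally, individual-patient data.

```python
import math

def pooled_estimate(trials):
    """Fixed-effect inverse-variance pooled effect and its standard error.

    Each trial is a (effect, standard_error) tuple; these are assumed,
    illustrative inputs rather than real trial data.
    """
    weights = [1.0 / (se ** 2) for _, se in trials]
    total_weight = sum(weights)
    pooled = sum(w * effect for w, (effect, _) in zip(weights, trials)) / total_weight
    pooled_se = math.sqrt(1.0 / total_weight)
    return pooled, pooled_se

# Continuously updated review: re-pool the evidence as each new trial appears.
trials = []
for new_trial in [(-0.35, 0.20), (-0.10, 0.15), (-0.25, 0.10)]:
    trials.append(new_trial)
    effect, se = pooled_estimate(trials)
    print(f"After {len(trials)} trial(s): pooled effect = {effect:.2f} (SE {se:.2f})")
```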

One area where I disagree slightly with Ioannidis is his assertion that a high proportion (17%) of reviews are non-informative and hence not useful. Often, a systematic review will conclude that there is a paucity of quality data regarding an intervention and that further high-quality studies are required. I have written previously about the Cochrane approach to such topics, which often results in empty reviews (due to the lack of randomised evidence). Whilst this approach is undoubtedly non-informative, a well-performed narrative systematic review will not only inform the reader of the lack of evidence (an important finding in itself), but also examine the existing (often flawed) evidence to help inform the design of future trials.

Despite the reservations expressed in his paper, Ioannidis still sees value in properly performed systematic reviews and meta-analyses conducted by experienced teams free of commercial bias and other competing interests. The challenge is to maintain the reputation of these important studies in the face of increasing dilution by biased and flawed reviews.