Background
The question of how to determine when a systematic review needs to be updated is of considerable importance. Changes in the evidence can have significant implications for clinical practice guidelines and for clinical and consumer decisionmaking, which depend on up-to-date systematic reviews as their foundation. The rapidity with which new research findings accumulate makes it imperative that the evidence be assessed periodically to determine the need for an update. Identifying updating signals would be particularly useful to inform stakeholders when new evidence is sufficient to consider updates of comparative effectiveness reviews (CERs).1
Systematic reviews are commonly updated at a preset time after publication.2 For example, since 2002, the Cochrane Collaboration's policy has been to update Cochrane reviews every 2 years.3 Such updates involve an investment of time and effort that may not be appropriate for all topics. A 2005 study compared 254 Cochrane updates performed in 2002 with the original reviews from 1998; only 23 (9 percent) showed a change in conclusion, a finding that supports a prioritized approach, rather than an automatic time-based approach, to determining the need for an update.4
The science of identifying signals for updating systematic reviews has been developing for the past decade. Prior to 2001, no explicit methods or criteria existed to determine whether evidence-based products remained valid or whether the evidence underlying them had been superseded by newer work. Since the late 1990s, the Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Center (EPC) program has commissioned studies to develop methods to assess the need for updating evidence reviews. Two methods have been developed. First, the Southern California Evidence-based Practice Center (SCEPC), based at the RAND Corporation, conducted a study to determine whether AHRQ's clinical practice guidelines needed to be updated and how quickly guidelines go out of date. The SCEPC developed a method that combines expert opinion with an abbreviated search of the literature published since the original systematic review.5,6 In 2008, the SCEPC adapted this method (hereafter referred to as “the RAND method”) to assess the need for updating the CERs that had been prepared to that point.7 In parallel, a second method was devised at the University of Ottawa EPC (UOEPC). This method assessed the predictors of the need to update systematic reviews,8 and was then tested using 100 meta-analyses published from 1995 to 2005.9 The method did not involve external expert judgment, but instead relied on capturing a combination of quantitative and qualitative signals of the need to update a report (hereafter referred to as “the Ottawa method”).
A series of subsequent methods projects led to the development of the Surveillance Program. In early 2008, AHRQ determined that, to meet its intended objectives, the Effective Health Care Program should assess whether the CERs completed to that point needed updating. The SCEPC was tasked with conducting this assessment. As part of this project, the SCEPC proposed a model for a program of regular surveillance of AHRQ CERs.7
In 2010, AHRQ commissioned a pilot study to compare the results of the RAND and Ottawa methods for identifying signals of the need for updating. Three evidence reports on omega-3 fatty acids (omega-3 FA) were chosen as test cases: the effectiveness of omega-3 FA for preventing and treating neurological disorders;10 the effectiveness of omega-3 FA for preventing and treating cancer;11 and the effects of omega-3 FA on risk factors and intermediate markers for cardiovascular disease.12 The pilot report concluded that the data support the use of either method because, in general, the two provide similar signals of a possible need to update systematic reviews.13,14 Additionally, the report hypothesized that a hybrid model might offer advantages over either individual model.
AHRQ then commissioned the current Surveillance Program to evaluate 42 CERs using the RAND and/or Ottawa methods for identifying signals indicating the need for updating. Figure 1 illustrates the overall process of the Surveillance Program as developed and conducted by the Ottawa and RAND EPCs.
In brief, 6 months after the release of a CER, a limited literature search on the CER topic is conducted in five general medical journals and five specialty journals. The researchers conducting the assessment abstract any relevant studies into evidence tables. At the same time, a combination of local subject matter experts and experts from the original report (members of the Technical Expert Panel or Peer Review Panel) are contacted and asked to review the original conclusions and to share their awareness of any new findings that might change a conclusion and therefore prompt an update. If the original report included meta-analyses, the new studies are also examined for evidence of a quantitative signal. The findings from the literature review and the expert poll are combined in a summary table, and signals of the need to update are then determined on a conclusion-by-conclusion basis and for the CER as a whole. The EPCs then prepare a mini-assessment containing the original conclusions, the summary table, the evidence table, and a recommendation as to whether the priority for updating the CER is low, medium, or high. This determination is based on the number and types of conclusions deemed out of date.
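The surveillance workflow can be summarized as a decision procedure over per-conclusion signals. The following Python sketch is purely illustrative: the signal names, data fields, and numeric thresholds are hypothetical assumptions, since the report states only that the priority determination rests on the number and types of out-of-date conclusions, as judged by the reviewing EPC, without publishing fixed cutoffs.

    # Illustrative sketch of the per-conclusion signal aggregation described
    # above. All field names and thresholds are hypothetical; the actual
    # Surveillance Program determination rests on reviewer judgment, not a
    # fixed algorithm.
    from dataclasses import dataclass

    @dataclass
    class Conclusion:
        text: str
        new_evidence_found: bool    # from the limited literature search
        expert_flagged: bool        # from the expert poll
        quantitative_signal: bool   # from new data bearing on a meta-analysis

    def conclusion_out_of_date(c: Conclusion) -> bool:
        """A conclusion signals a possible need to update if any source flags it."""
        return c.new_evidence_found or c.expert_flagged or c.quantitative_signal

    def updating_priority(conclusions: list[Conclusion]) -> str:
        """Classify a whole CER as low/medium/high priority for updating.

        The fractions used here are invented for illustration only.
        """
        flagged = sum(conclusion_out_of_date(c) for c in conclusions)
        share = flagged / len(conclusions) if conclusions else 0.0
        if share >= 0.5:
            return "high"
        if share > 0.0:
            return "medium"
        return "low"

In practice, the choice among low, medium, and high also weighs which conclusions are affected (for example, a signal against a central efficacy conclusion would presumably count for more than one against a peripheral conclusion), which is why the sketch above should be read as a simplification rather than the program's actual rule.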