Structured Abstract
Objectives:
This project addressed Agency for Healthcare Research and Quality (AHRQ) methods guidance to its Evidence-based Practice Center (EPC) program on grading the strength of evidence (SOE) for therapeutic interventions. We focused on inter-rater reliability testing of the two main components of the AHRQ approach to grading SOE for specific outcomes: (1) scoring evidence on the four required domains (risk of bias, consistency, directness, and precision), separately for randomized controlled trials (RCTs) and observational studies, and (2) developing an overall SOE grade, given the scores for the individual domains.
Data Sources and Methods:
We conducted inter-rater reliability testing using data obtained from two published comparative effectiveness reviews (CERs). We designed 10 exercises (5 positive outcomes [benefits] and 5 harms [adverse effects]); all 10 included RCTs, and 6 of the 10 included one or more observational studies.
Eleven pairs of reviewers (22 participants) took part in the exercises. Each reviewer independently completed each exercise; subsequently, each pair reconciled their independent responses.
We calculated summary statistics to describe agreement among reviewers and their difficulty in making each rating assessment. We used logistic regression analysis to describe the relationship between domain scores and the final SOE grade, both in relation to the specific grade selected and the level of agreement among reviewers. We also examined the change in independent reviewer ratings following reconciliation within reviewer pairs.
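As an illustration only, the sketch below shows, on invented data, the kind of computations involved: Cohen's kappa as one possible summary of inter-rater agreement on a single domain, and a logistic regression relating domain scores and the inclusion of observational studies to a binarized SOE grade. The data, variable names, and the choice of kappa and of a regularized logistic regression are assumptions for this sketch, not taken from the report.

# Hypothetical sketch; all data below are invented for illustration.
import numpy as np
from sklearn.metrics import cohen_kappa_score
from sklearn.linear_model import LogisticRegression

# Invented ratings from two independent reviewers on the "consistency" domain
# (1 = consistent, 0 = not consistent) across 10 exercises.
rater_a = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
rater_b = np.array([1, 0, 0, 1, 0, 1, 1, 1, 1, 1])
print(f"Cohen's kappa: {cohen_kappa_score(rater_a, rater_b):.2f}")

# Invented per-rating predictors (evidence rated consistent, rated precise,
# observational studies included) and a binarized SOE grade as the outcome
# (1 = moderate/high, 0 = low/insufficient).
X = np.array([
    [1, 1, 0], [1, 1, 0], [1, 1, 1], [1, 1, 1], [1, 0, 0],
    [1, 0, 1], [0, 1, 0], [0, 1, 1], [0, 0, 0], [1, 0, 0],
])
y = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 1])
model = LogisticRegression().fit(X, y)
print("Coefficients (consistent, precise, obs. studies):", model.coef_[0])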
Results:
The level of inter-rater agreement among independent reviewers on domain scores varied considerably, from substantial (RCT risk of bias and directness) to slight (observational study risk of bias); agreement on all other domains was moderate or fair. Agreement was generally better for RCTs than for observational studies, and agreement among reconciled reviewer pairs was as good as or better than agreement among individual independent reviewers.
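The qualitative labels above (slight, fair, moderate, substantial) are the conventional Landis and Koch descriptors for the kappa statistic; the sketch below applies those standard cut points, which are assumed here rather than quoted from the report.

# Conventional Landis and Koch (1977) interpretation bands for kappa;
# the cut points are standard conventions assumed for this sketch.
def describe_kappa(kappa: float) -> str:
    """Map a kappa statistic to a qualitative agreement label."""
    if kappa < 0.00:
        return "poor"
    if kappa <= 0.20:
        return "slight"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "almost perfect"

print(describe_kappa(0.15))  # slight
print(describe_kappa(0.70))  # substantial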
Agreement on independent reviewers' SOE grades was generally poorer than agreement on domain scores. Overall agreement was slight, and it was not appreciably better when limited to the exercises that included only RCTs. Neither agreement on domain scores nor agreement about the level of difficulty in evaluating particular domains predicted the overall SOE grades.
When evidence was limited to RCTs, higher SOE grades (moderate or high) were associated with domain scores rating the RCT evidence as consistent and precise. The inclusion of observational studies in addition to RCTs in an exercise was a strong predictor of a lower SOE grade, namely insufficient or low.
Conclusions:
Our findings demonstrate that the conclusions reached by experienced reviewers based on the same evidence can differ greatly, particularly when reviewers face bodies of evidence that do not lend themselves to meta-analysis and must rely more heavily on their own judgment. Of particular concern is how to deal with (a) outcomes that are evaluated through a combination of RCTs and observational studies, (b) outcomes that are evaluated through more than one measure, and (c) evidence that appears to show no difference.
We conclude that additional methodological guidance is needed, including more detail and examples, supported by more training, particularly on how best to evaluate the "thornier" bodies of evidence discussed above. Some potential for disagreement will always exist, however, even among experienced reviewers. EPC reviewer teams therefore need to be transparent about how they have conducted this task; doing so will help ensure that stakeholders can be confident in the review team's interpretation of the evidence.
Our study provided only a first approximation of reviewers’ rationales for differences in SOE decisions. Additional research is needed to understand gaps in guidance that should be filled, areas of insufficient understanding of the guidance itself and how best to overcome that deficit, and complex decisions that may still need to be left to the review team’s substantive expertise.
Prepared for: Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services,1 Contract No. 290-2007-10056-I. Prepared by: RTI International–University of North Carolina Evidence-based Practice Center, Research Triangle Park, NC
Suggested citation:
Berkman ND, Lohr KN, Morgan LC, Richmond E, Kuo TM, Morton S, Viswanathan M, Kamerow D, West S, Tant E. Reliability Testing of the AHRQ EPC Approach to Grading the Strength of Evidence in Comparative Effectiveness Reviews. Methods Research Report. (Prepared by RTI International–University of North Carolina Evidence-based Practice Center under Contract No. 290-2007-10056-I.) AHRQ Publication No. 12-EHC067-EF. Rockville, MD: Agency for Healthcare Research and Quality. May 2012. www.effectivehealthcare.ahrq.gov/reports/final.cfm.
This report is based on research conducted by the RTI International–University of North Carolina Evidence-based Practice Center (EPC) under contract to the Agency for Healthcare Research and Quality (AHRQ), Rockville, MD (Contract No. 290-2007-10056-I). The findings and conclusions in this document are those of the authors, who are responsible for its contents; the findings and conclusions do not necessarily represent the views of AHRQ. Therefore, no statement in this report should be construed as an official position of AHRQ or of the U.S. Department of Health and Human Services.
The information in this report is intended to help health care decisionmakers—patients and clinicians, health system leaders, and policymakers, among others—make well-informed decisions and thereby improve the quality of health care services. This report is not intended to be a substitute for the application of clinical judgment. Anyone who makes decisions concerning the provision of clinical care should consider this report in the same way as any medical reference and in conjunction with all other pertinent information, i.e., in the context of available resources and circumstances presented by individual patients.
This report may be used, in whole or in part, as the basis for development of clinical practice guidelines and other quality enhancement tools, or as a basis for reimbursement and coverage policies. AHRQ or U.S. Department of Health and Human Services endorsement of such derivative products may not be stated or implied.
1. 540 Gaither Road, Rockville, MD 20850; www.ahrq.gov