NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.
Structured Abstract
Objectives:
To systematically review and compare empirical evaluations of the influence of specific types of bias on effect estimates in randomized controlled trials (RCTs) reported in systematic reviews.
Data sources:
MEDLINE®, the Cochrane Library, and the Evidence-based Practice Center methods library located at the Scientific Resource Center. Additional studies were identified from reference lists and technical experts. We included meta-epidemiological studies (studies drawing from multiple meta-analyses), meta-analyses, and simulation studies (in relation to reporting bias only) intended primarily to examine the influence of bias on treatment effects in RCTs.
Review methods:
Approaches to minimizing potential biases considered in the review included selection bias through randomization (sequence generation and allocation concealment); confounding through design or analysis; performance bias through fidelity to the protocol, avoidance of unintended interventions, patient or caregiver blinding and clinician or provider blinding; detection bias through outcome assessor and data analyst blinding and appropriate statistical methods; detection/performance bias through double blinding; attrition bias through intention-to-treat analysis or other approaches to accounting for dropouts; and reporting bias through complete reporting of all prespecified outcomes. Two people independently selected, extracted data from, and rated the quality of included studies. We did not pool the results quantitatively due to the heterogeneity of included studies.
Results:
Our review of 4,844 abstracts identified a total of 38 studies of trials (48 publications) that met our inclusion criteria. Of these, 35 had usable evidence. Some studies examined the effect of more than one type of bias on effect estimates. We reviewed 23 studies on allocation concealment, 14 studies on sequence generation, 2 studies on unspecified bias in randomization, 2 studies on confounding, 2 studies on fidelity to protocol and unintended interventions, 4 studies on patient and/or provider blinding, 8 studies on assessor blinding, 2 studies on appropriate statistical methods, 18 studies on double blinding, 15 studies on attrition bias, and 9 studies on selective outcome reporting.
Although a trend toward exaggeration of treatment effects was seen across bodies of evidence for most biases, the magnitude and precision of the effect varied widely across studies. We generally found evidence that was precise and consistent in direction of effect for assessor and double blinding, specifically in relation to subjective outcomes, and for selective outcome reporting. Evidence was generally consistent in direction of effect but with variable precision across studies for allocation concealment, sequence generation, and assessor blinding of objective or mixed outcomes. In contrast, evidence was generally inconsistent and imprecise in relation to confounding, adequate statistical methods, fidelity to the protocol, patient/provider blinding, and attrition bias.
Studies differed markedly on a number of dimensions, including the measures/scales used to assess biases, the required thoroughness of reporting of trial conduct, approaches to statistical modeling and adjustment for potential confounding, types of outcomes, and stratification by treatment or condition. Within many meta-epidemiological studies, the included meta-analyses or trials varied along these dimensions as well.
Conclusions:
Theory suggests that bias in the conduct of studies would influence treatment effects. Our review found some evidence of this effect in relation to some aspects of RCT study conduct. When bias was present, the treatment effect was commonly exaggerated, but estimates in individual studies were rarely precise. However, because this evidence is limited and uncertain with respect to the magnitude of the impact, it does not follow that systematic reviewers can forgo assessment of risk of bias. Because of the complexity of evaluating precision in meta-epidemiological studies built from potentially heterogeneous meta-analyses or trials, we cannot be sure that studies were sufficiently powered. We suggest that systematic reviewers consider subgroup analyses with and without studies that have flaws related to specific biases of importance for the review questions. Future studies evaluating the impact of biases on treatment effect should follow the lead of the BRANDO study and use modeling approaches that include careful construction of large datasets of trials (and eventually observational studies) designed to examine the effect of specific aspects of study conduct and the interrelationships among bias concerns.
Contents
- Preface
- Acknowledgments
- Technical Expert Panel
- Peer Reviewers
- Background
- Methods
- Results
- Results of Literature Searches
- Overview of Included Studies
- Selection Bias: Allocation Concealment
- Selection Bias: Sequence Generation (Randomization)
- Confounding
- Performance Bias: Fidelity to Protocol, Unintended Interventions, or Cointerventions
- Performance Bias: Patient/Caregiver or Provider Blinding
- Detection Bias: Assessor Blinding
- Detection Bias: Valid Statistical Methods
- Detection/Performance Bias: Double Blinding
- Attrition Bias
- Reporting Bias: Selective Outcome Reporting
- Discussion
- Key Considerations Across Studies
- Specific Biases: Key Considerations
- Performance Bias: Fidelity to Protocol, Unintended Interventions or Cointerventions
- Performance/Detection Bias: Double Blinding
- Limitations of the Review Process
- Research Gaps and Implications for Future Research
- Implications for Reviewers
- References
- Appendix A Empirical Evidence of Bias Search Strategies
- Appendix B Screening and Abstraction Forms
- Appendix C Excluded Full-Text Article List
- Appendix D Detailed Study Characteristics
- Appendix E Study Quality Assessment
Prepared for: Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services,1 Contract No. 290-2007-10056-I. Prepared by: RTI-UNC Evidence-based Practice Center, Research Triangle Park, NC
Suggested citation:
Berkman ND, Santaguida PL, Viswanathan M, Morton SC. The Empirical Evidence of Bias in Trials Measuring Treatment Differences. Methods Research Report. (Prepared by the RTI-UNC Evidence-based Practice Center under Contract No. 290-2007-10056-I.) AHRQ Publication No. 14-EHC050-EF. Rockville, MD: Agency for Healthcare Research and Quality; September 2014. www.effectivehealthcare.ahrq.gov/reports/final.cfm.
This report is based on research conducted by the RTI International-University of North Carolina Evidence-based Practice Center under contract to the Agency for Healthcare Research and Quality (AHRQ), Rockville, MD (Contract No. 290-2007-10056-I). The findings and conclusions in this document are those of the authors, who are responsible for its contents; the findings and conclusions do not necessarily represent the views of AHRQ. Therefore, no statement in this report should be construed as an official position of AHRQ or of the U.S. Department of Health and Human Services.
The information in this report is intended to help health care decisionmakers—patients and clinicians, health system leaders, and policymakers, among others—make well-informed decisions and thereby improve the quality of health care services. This report is not intended to be a substitute for the application of clinical judgment. Anyone who makes decisions concerning the provision of clinical care should consider this report in the same way as any medical reference and in conjunction with all other pertinent information (i.e., in the context of available resources and circumstances presented by individual patients).
This report may be used, in whole or in part, as the basis for development of clinical practice guidelines and other quality enhancement tools, or as a basis for reimbursement and coverage policies. AHRQ or U.S. Department of Health and Human Services endorsement of such derivative products may not be stated or implied.
None of the investigators have any affiliations or financial involvement that conflicts with the material presented in this report.
- 1. 540 Gaither Road, Rockville, MD 20850; www.ahrq.gov
Related NLM Catalog Entries
- Savović J, Jones H, Altman D, Harris R, Jűni P, Pildal J, Als-Nielsen B, Balk E, Gluud C, Gluud L, et al. Influence of reported study design characteristics on intervention effect estimates from randomised controlled trials: combined analysis of meta-epidemiological studies. Health Technol Assess. 2012 Sep;16(35):1-82.
- Lin JS, O'Connor E, Rossom RC, Perdue LA, Burda BU, Thompson M, Eckstrom E. Screening for Cognitive Impairment in Older Adults: An Evidence Update for the U.S. Preventive Services Task Force. 2013 Nov.
- Henderson JT, Whitlock EP, O'Conner E, Senger CA, Thompson JH, Rowland MG. Low-Dose Aspirin for the Prevention of Morbidity and Mortality From Preeclampsia: A Systematic Evidence Review for the U.S. Preventive Services Task Force. 2014 Apr.
- Wang Y, Parpia S, Couban R, Wang Q, Armijo-Olivo S, Bassler D, Briel M, Brignardello-Petersen R, Gluud LL, Keitz SA, et al. Compelling evidence from meta-epidemiological studies demonstrates overestimation of effects in randomized trials that fail to optimize randomization and blind patients and outcome assessors. J Clin Epidemiol. 2024 Jan;165:111211. Epub 2023 Nov 7.
- Bialy L, Vandermeer B, Lacaze-Masmonteil T, Dryden DM, Hartling L. A meta-epidemiological study to examine the association between bias and treatment effects in neonatal trials. Evid Based Child Health. 2014 Dec;9(4):1052-9.