The Empirical Evidence of Bias in Trials Measuring Treatment Differences

Methods Research Reports

Investigators: Nancy D. Berkman, PhD, MLIR, P. Lina Santaguida, PhD, Meera Viswanathan, PhD, and Sally C. Morton, PhD.

Rockville (MD): Agency for Healthcare Research and Quality (US); 2014 Sep.
Report No.: 14-EHC050-EF

Structured Abstract

Objectives:

To comprehensively and systematically review and compare empirical evaluations of the influence of specific types of bias on effect estimates in randomized controlled trials (RCTs) reported in systematic reviews.

Data sources:

MEDLINE®, the Cochrane Library, and the Evidence-based Practice Center methods library located at the Scientific Resource Center. Additional studies were identified from reference lists and technical experts. We included meta-epidemiological studies (studies drawing from multiple meta-analyses), meta-analyses, and simulation studies (in relation to reporting bias only) intended primarily to examine the influence of bias on treatment effects in RCTs.

Review methods:

The review considered approaches to minimizing potential biases: selection bias, through randomization (sequence generation and allocation concealment); confounding, through design or analysis; performance bias, through fidelity to the protocol, avoidance of unintended interventions, and patient or caregiver blinding and clinician or provider blinding; detection bias, through outcome assessor and data analyst blinding and appropriate statistical methods; detection/performance bias, through double blinding; attrition bias, through intention-to-treat analysis or other approaches to accounting for dropouts; and reporting bias, through complete reporting of all prespecified outcomes. Two reviewers independently selected studies, extracted data, and rated the quality of included studies. We did not pool the results quantitatively because of the heterogeneity of the included studies.

Results:

From our review of 4,844 abstracts, a total of 38 studies of trials (48 publications) met our inclusion criteria; 35 of these provided usable evidence. Some studies examined the effect of more than one type of bias on effect estimates. We reviewed 23 studies on allocation concealment, 14 on sequence generation, 2 on unspecified bias in randomization, 2 on confounding, 2 on fidelity to protocol and unintended interventions, 4 on patient and/or provider blinding, 8 on assessor blinding, 2 on appropriate statistical methods, 18 on double blinding, 15 on attrition bias, and 9 on selective outcome reporting.

Although a trend toward exaggeration of treatment effects was seen across bodies of evidence for most biases, the magnitude and precision of the effect varied widely across studies. We generally found evidence that was precise and consistent in direction of effect for assessor and double blinding, specifically in relation to subjective outcomes, and for selective outcome reporting. Evidence was generally consistent in direction of effect but with variable precision across studies for allocation concealment, sequence generation, and assessor blinding of objective or mixed outcomes. In contrast, evidence was generally inconsistent and imprecise in relation to confounding, adequate statistical methods, fidelity to the protocol, patient/provider blinding, and attrition bias.
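The abstract does not report a single summary metric shared by the included studies, but meta-epidemiological analyses (for example, the BRANDO study cited in the Conclusions) conventionally express this kind of exaggeration as a ratio of odds ratios comparing trials with and without a given flaw; the formulation below is offered only as that conventional convention, not as the metric of any particular included study:

$$
\mathrm{ROR} = \frac{\mathrm{OR}_{\text{trials with the flaw}}}{\mathrm{OR}_{\text{trials without the flaw}}}
$$

When benefit corresponds to an odds ratio below 1, an ROR less than 1 indicates that trials with the flaw report more favorable (exaggerated) treatment effects.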

Studies differed markedly on a number of dimensions, including the measures or scales used to assess bias, the required thoroughness of reporting of trial conduct, approaches to statistical modeling and adjustment for potential confounding, types of outcomes, and stratification by treatment or condition. Within many meta-epidemiological studies, the included meta-analyses or trials varied along these dimensions as well.

Conclusions:

Theory suggests that bias in the conduct of studies would influence treatment effects. Our review found some evidence of this effect in relation to some aspects of RCT study conduct. When bias was present, the treatment effect was commonly increased, but the estimates from individual studies were rarely precise. However, the fact that this evidence is limited and uncertain with respect to the magnitude of the impact does not necessarily imply that systematic reviewers can forgo assessment of risk of bias. Because of the complexity of evaluating precision in meta-epidemiological studies developed from potentially heterogeneous meta-analyses or trials, we cannot be sure that the included studies were sufficiently powered. We suggest that systematic reviewers consider subgroup analyses conducted with and without studies that have flaws related to specific biases of importance for the review questions. Future studies evaluating the impact of biases on treatment effects should follow the lead of the BRANDO study and use modeling approaches built on carefully constructed large datasets of trials (and eventually observational studies) designed to examine the effect of specific aspects of study conduct and the interrelationships among bias concerns.
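As a concrete illustration of the suggested subgroup (sensitivity) analysis, the sketch below pools trial results with and without trials flagged for one flaw (here, inadequate allocation concealment). It is a minimal fixed-effect inverse-variance example under assumed conventions, not the analytic method of this report or of the included studies; all trial names and data are hypothetical.

```python
# Minimal sketch (not the report's method): inverse-variance fixed-effect
# pooling of log odds ratios, run with and without trials flagged as having
# inadequate allocation concealment. All trial data below are hypothetical.
import math

# Each tuple: (trial name, log odds ratio, standard error, adequate_concealment)
trials = [
    ("Trial A", -0.45, 0.20, True),
    ("Trial B", -0.30, 0.25, True),
    ("Trial C", -0.80, 0.30, False),   # flagged: inadequate concealment
    ("Trial D", -0.65, 0.22, False),   # flagged: inadequate concealment
]

def pooled_or(subset):
    """Fixed-effect inverse-variance pooled odds ratio with 95% CI."""
    weights = [1.0 / se ** 2 for _, _, se, _ in subset]
    log_ors = [lor for _, lor, _, _ in subset]
    total_w = sum(weights)
    pooled = sum(w * lor for w, lor in zip(weights, log_ors)) / total_w
    se_pooled = math.sqrt(1.0 / total_w)
    lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    return math.exp(pooled), (math.exp(lo), math.exp(hi))

all_trials = trials
low_risk_only = [t for t in trials if t[3]]  # exclude flagged trials

for label, subset in [("All trials", all_trials),
                      ("Adequate concealment only", low_risk_only)]:
    or_, (lo, hi) = pooled_or(subset)
    print(f"{label}: OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Comparing the two pooled estimates shows how much the flagged trials shift the summary effect; a random-effects model or a formal test of interaction could be substituted depending on the methods specified in a given review.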

Prepared for: Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services,1 Contract No. 290-2007-10056-I. Prepared by: RTI-UNC Evidence-based Practice Center, Research Triangle Park, NC.

Suggested citation:

Berkman ND, Santaguida PL, Viswanathan M, Morton SC. The Empirical Evidence of Bias in Trials Measuring Treatment Differences. Methods Research Report. (Prepared by the RTI-UNC Evidence-based Practice Center under Contract No. 290-2007-10056-I.) AHRQ Publication No. 14-EHC050-EF. Rockville, MD: Agency for Healthcare Research and Quality; September 2014. www.effectivehealthcare.ahrq.gov/reports/final.cfm.

This report is based on research conducted by the RTI International-University of North Carolina Evidence-based Practice Center under contract to the Agency for Healthcare Research and Quality (AHRQ), Rockville, MD (Contract No. 290-2007-10056-I). The findings and conclusions in this document are those of the authors, who are responsible for its contents; the findings and conclusions do not necessarily represent the views of AHRQ. Therefore, no statement in this report should be construed as an official position of AHRQ or of the U.S. Department of Health and Human Services.

The information in this report is intended to help health care decisionmakers—patients and clinicians, health system leaders, and policymakers, among others—make well-informed decisions and thereby improve the quality of health care services. This report is not intended to be a substitute for the application of clinical judgment. Anyone who makes decisions concerning the provision of clinical care should consider this report in the same way as any medical reference and in conjunction with all other pertinent information (i.e., in the context of available resources and circumstances presented by individual patients).

This report may be used, in whole or in part, as the basis for development of clinical practice guidelines and other quality enhancement tools, or as a basis for reimbursement and coverage policies. AHRQ or U.S. Department of Health and Human Services endorsement of such derivative products may not be stated or implied.

None of the investigators have any affiliations or financial involvement that conflicts with the material presented in this report.

1 540 Gaither Road, Rockville, MD 20850; www.ahrq.gov

Bookshelf ID: NBK253181; PMID: 25392898
