
Viswanathan M, Berkman ND, Dryden DM, et al. Assessing Risk of Bias and Confounding in Observational Studies of Interventions or Exposures: Further Development of the RTI Item Bank [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2013 Aug.


Methods

We convened a Working Group of six systematic review experts, epidemiologists, and trialists and sought their input over the course of three planned conference calls and related email exchanges. When some members could not attend a scheduled call, we arranged an alternate call for them. Between conference calls, Working Group members responded to call notes through numerous electronic exchanges.

We asked the Working Group members to comment on the goals and activities of the project. Their input resulted in an expansion of our original objective from further refinement of the RTI Item Bank to also include the development of a process framework to consider the effect of confounding across the body of observational study evidence.

As a precursor to the project, we reviewed the evidence on approaches to assessing the risk of bias in studies to understand how sources of bias might differ between RCTs and observational studies (Appendix A). A previously developed taxonomy of observational studies (Appendix B)30,31 offers an approach to grouping studies with similar designs, or with similar design features, that may relate to bias. Systematic reviewers can use this characterization of study design features to guide the choice of questions needed for risk of bias assessments of different observational study designs. Studies with different designs, or with different design features, may require (some) different questions for risk of bias assessments. For example, studies identified by the taxonomy as “noncomparative” include case reports and case series, which have no comparison group; questions to assess risk of bias must therefore be selected with this in mind (e.g., they should not ask about comparability between groups). Another important study design feature is whether both the intervention/exposure and the outcome assessment were prospective. When exposure/intervention status was identified retrospectively, there may be concerns about recall bias and misclassification that are less worrisome in studies that collect and classify exposure information prospectively, particularly if they use standard definitions or criteria.
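The design-dependent selection of questions described above can be sketched as a simple lookup. This is a minimal illustration only: the design labels echo the report's taxonomy, but the question identifiers, the groupings, and the selection rules are invented for this sketch and do not come from the item bank itself.

```python
# Hypothetical sketch of selecting risk-of-bias questions by study design.
# Question IDs and the mapping rules are illustrative assumptions, not the
# actual RTI Item Bank contents.
CORE_QUESTIONS = {"selection", "outcome_measurement", "precision"}
COMPARATIVE_QUESTIONS = {"group_comparability", "confounding"}
RETROSPECTIVE_QUESTIONS = {"recall_bias", "exposure_misclassification"}

def questions_for(design, prospective_exposure=True):
    """Return the set of question IDs applicable to a study design."""
    qs = set(CORE_QUESTIONS)
    if design != "case series":      # noncomparative designs have no comparison group
        qs |= COMPARATIVE_QUESTIONS  # so comparability questions do not apply
    if not prospective_exposure:     # retrospective exposure ascertainment raises
        qs |= RETROSPECTIVE_QUESTIONS  # recall-bias and misclassification concerns
    return qs

print(sorted(questions_for("case series")))
print(sorted(questions_for("cohort", prospective_exposure=False)))
```

The point of the sketch is structural: a reviewer applying the bank filters the question pool by design features before assessing any individual study.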

The study team reviewed the item bank to identify a core set of questions that were needed to evaluate the risk of confounding, bias, and lack of precision of individual studies for a specific, limited set of commonly used observational study designs (case series, case-control, cohort, and cross-sectional).

We revised the bank to consolidate questions addressing the same bias concerns (e.g., use of valid and reliable measures) and to set aside questions that were likely unnecessary because they were limited to study reporting, were redundant, or, based on discussions with the Working Group, were not relevant for evaluating risk of bias or precision.

Based on the revised item bank, we asked Working Group members to rank the specific questions that needed to be included in the bank to evaluate the risk of bias, confounding, and lack of precision of an observational study for each of the four design types, and to identify the subset of questions relevant only to a particular design (case series, case-control, cohort, or cross-sectional). Working Group members received the revised version of the bank, which included 16 of the original 29 questions. They were asked to rate the importance of each question on a five-point scale: very important, somewhat important, a little important, not at all important, and not applicable/exclude. They were also asked whether they agreed with the study team's recommendation to eliminate each of 10 questions. (One question had been eliminated based on earlier discussions with the Working Group, which concluded that no question was needed to establish whether a study was prospective, retrospective, or mixed [independent of the conduct of the study]; two others were eliminated through question consolidation.) We also sought additional comments from Working Group members on the readability of each question. We had intended to conduct a modified Delphi process with two rounds of voting, but terminated the process after a single round because of a poor response rate (three of six participants voted).
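Mechanically, the single voting round amounts to tallying each member's rating per question. The sketch below is an assumption-laden illustration: the five-point scale is taken from the report, but the votes and the majority retention rule are invented here, not part of the study's method.

```python
from collections import Counter

# Rating scale from the report; the retention rule below is a hypothetical
# example of how a tally might be summarized, not the study team's rule.
SCALE = ["very important", "somewhat important", "a little important",
         "not at all important", "not applicable/exclude"]

def tally_round(votes):
    """Count the ratings one question received and flag whether a simple
    majority rated it at least 'somewhat important' (illustrative rule)."""
    counts = Counter(votes)
    favorable = counts["very important"] + counts["somewhat important"]
    retain = favorable > len(votes) / 2
    return counts, retain

# Three of six members voted, as in the report's single completed round.
votes = ["very important", "somewhat important", "very important"]
counts, retain = tally_round(votes)
print(counts["very important"], retain)  # 2 True
```

With only three of six members voting, any such rule rests on a small denominator, which is consistent with the study team's decision to stop after one round.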

This document was revised in response to peer review and public comment.
