NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.
Structured Abstract
Objectives:
To examine the empirical evidence for associations between a set of proposed quality criteria and estimates of effect sizes in randomized controlled trials across a variety of clinical fields and to explore variables potentially influencing the association.
Methods:
We applied quality criteria to three large datasets of studies included in a variety of meta-analyses covering a wide range of topics and clinical interventions consisting of 216, 165, and 100 trials. We assessed the relationship between quality and effect sizes for 11 individual criteria (randomization sequence, allocation concealment, similar baseline, assessor blinding, care provider blinding, patient blinding, acceptable dropout rate, intention-to-treat analysis, similar cointerventions, acceptable compliance, similar outcome assessment timing) as well as summary scores. Inter-item relationships were explored using psychometric techniques. We investigated moderators and confounders affecting the association between quality and effect sizes across datasets.
Results:
Quality levels varied across datasets. Many studies did not report sufficient information to judge methodological quality. Some individual quality features were substantially intercorrelated, but a total score did not show high overall internal consistency (α 0.55 to 0.61). A factor analysis-based model suggested three distinct quality domains. Allocation concealment was consistently associated with slightly smaller treatment effect estimates across all three datasets; results for the other individual criteria varied. In dataset 1, the 11 individual criteria were consistently associated with lower estimated effect sizes. Dataset 2 showed some unexpected results; for several dimensions, studies meeting quality criteria reported larger effect sizes. Dataset 3 showed some variation across criteria. There was no statistically significant linear association of a summary scale or factor scores with effect sizes. Applying a cutoff of 5 or 6 criteria met (out of 11) differentiated high- and low-quality studies best. The effect size difference for a cutoff at 5 was -0.20 (95% confidence interval [CI]: -0.34, -0.06) in dataset 1, and the corresponding ratio of odds ratios in dataset 3 was 0.79 (95% CI: 0.63, 0.95). Associations indicated that low-quality trials tended to overestimate treatment effects. This observation could not be replicated in dataset 2, suggesting the influence of confounders and moderators. The size of the treatment effect, the condition being treated, the type of outcome, and the variance in effect sizes did not sufficiently explain the differential associations between quality and effect sizes but warrant further exploration in explaining variation between datasets.
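The ratio-of-odds-ratios comparison described above can be sketched as follows. The orientation of the ratio and all pooled estimates are illustrative assumptions, not figures from the report:

```python
import math

def ratio_of_odds_ratios(or_low, se_low, or_high, se_high, z=1.96):
    """Compare pooled odds ratios of low- vs. high-quality trial strata.

    or_low / or_high: pooled odds ratios for the two quality strata;
    se_low / se_high: standard errors of the corresponding log odds ratios.
    Returns the ratio of odds ratios and its 95% confidence interval,
    assuming the two strata are estimated independently.
    """
    log_ror = math.log(or_low) - math.log(or_high)
    se = math.sqrt(se_low ** 2 + se_high ** 2)  # independence assumed
    return (math.exp(log_ror),
            (math.exp(log_ror - z * se), math.exp(log_ror + z * se)))

# Hypothetical pooled estimates: low-quality trials show a stronger
# apparent benefit (OR further from 1) than high-quality trials.
ror, ci = ratio_of_odds_ratios(0.60, 0.20, 0.75, 0.20)  # ror ≈ 0.80
```

A ratio below 1 in this orientation indicates that low-quality trials report odds ratios further from the null than high-quality trials, i.e., an apparent overestimation of treatment effects.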
Conclusions:
Effect sizes of individual studies depend on many factors. The conditions where quality features lead to biased effect sizes warrant further exploration.
Contents
- Preface
- Acknowledgements
- Executive Summary
- Background
- Methods
- Results
- Discussion
- References
- Appendixes
- Appendix A References Dataset 1: Back Pain, 216 Trials
- Appendix B References Dataset 2: EPC Reports, 165 Trials
- Appendix C References Dataset 3: Published “Pro-bias” Dataset, 100 Trials
- Appendix D Comparison Fixed-Effects Model Results
- Appendix E Comparison Random Effects Meta-regression Results
- Appendix F Quality Rating Form
Prepared for: Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services,1 Contract No. HHSA 290-2007-10062-I. Prepared by: Southern California Evidence-based Practice Center, Santa Monica, CA
Suggested citation:
Hempel S, Suttorp MJ, Miles JNV, Wang Z, Maglione M, Morton S, Johnsen B, Valentine D, Shekelle PG. Empirical Evidence of Associations Between Trial Quality and Effect Sizes. Methods Research Report (Prepared by the Southern California Evidence-based Practice Center under Contract No. 290-2007-10062-I). AHRQ Publication No. 11-EHC045-EF. Rockville, MD: Agency for Healthcare Research and Quality. June 2011. Available at: http://effectivehealthcare.ahrq.gov.
This report is based on research conducted by the Southern California Evidence-based Practice Center (EPC) under contract to the Agency for Healthcare Research and Quality (AHRQ), Rockville, MD (Contract No. 290-2007-10062-I). The findings and conclusions in this document are those of the author(s), who are responsible for its contents; the findings and conclusions do not necessarily represent the views of AHRQ. Therefore, no statement in this report should be construed as an official position of AHRQ or of the U.S. Department of Health and Human Services.
The information in this report is intended to help health care decisionmakers—patients and clinicians, health system leaders, and policymakers, among others—make well-informed decisions and thereby improve the quality of health care services. This report is not intended to be a substitute for the application of clinical judgment. Anyone who makes decisions concerning the provision of clinical care should consider this report in the same way as any medical reference and in conjunction with all other pertinent information, i.e., in the context of available resources and circumstances presented by individual patients.
This report may be used, in whole or in part, as the basis for development of clinical practice guidelines and other quality enhancement tools, or as a basis for reimbursement and coverage policies. AHRQ or U.S. Department of Health and Human Services endorsement of such derivative products may not be stated or implied.
No investigators have any affiliations or financial involvement (e.g., employment, consultancies, honoraria, stock options, expert testimony, grants or patents received or pending, or royalties) that conflict with material presented in this report.
- 1 540 Gaither Road, Rockville, MD 20850; http://www.ahrq.gov