
Henriksen K, Battles JB, Keyes MA, et al., editors. Advances in Patient Safety: New Directions and Alternative Approaches (Vol. 2: Culture and Redesign). Rockville (MD): Agency for Healthcare Research and Quality (US); 2008 Aug.


Is There an Association Between Patient Safety Indicators and Hospital Teaching Status?


Abstract

Objective: We compared discharges from teaching and nonteaching hospitals for relative rates and likelihood of potentially preventable adverse events.

Methods: We applied Agency for Healthcare Research and Quality (AHRQ) Patient Safety Indicators (PSIs) to adult male patient discharges from Veterans Health Administration (VA) and non-Federal hospitals, calculated risk-adjusted PSI rates, and compared the likelihood of incurring a PSI event, controlling for case-mix and hospital characteristics.

Results: PSI rates were higher in major teaching hospitals than in nonteaching hospitals for iatrogenic pneumothorax and selected infections due to medical care in both VA and non-Federal hospitals and for postoperative pulmonary embolism or deep-vein thrombosis in non-Federal hospitals. In non-Federal hospitals, likelihood of a PSI event was higher in major teaching hospitals for decubitus ulcer and postoperative wound dehiscence in addition to those PSIs with higher stratified rates.

Conclusion: Further research is needed on the relationship of residency programs to adverse events. Differences between VA and non-Federal hospitals suggest that if residency programs increase risk to patients, the causes may be actionable at the organizational level.

Introduction

Evidence suggests that quality of care is generally higher in teaching hospitals than in nonteaching hospitals.1, 2 However, the evidence is less clear on whether hospital teaching status affects patient safety—that is, studies of rates of potentially preventable adverse events report inconsistent findings in comparisons among teaching and nonteaching hospitals.3, 4, 5 In the study described here, the Agency for Healthcare Research and Quality (AHRQ) Patient Safety Indicators (PSIs) were used to compare rates of potentially preventable adverse events in teaching and nonteaching hospitals.

In an effort to isolate the effect of teaching hospital status on PSI rates, we conducted parallel analyses of discharge data from Federal (Veterans Health Administration, VA) and non-Federal (AHRQ Nationwide Inpatient Sample) hospitals; in both analyses, we controlled for other hospital characteristics and for patient-level risk factors.

The AHRQ PSIs are based on the Institute of Medicine (IOM) definition of an adverse event: “Injury resulting from a medical intervention … not due to the underlying condition of the patient.”6 The PSIs have been developed with the goal of distinguishing injuries that often can be prevented from those that cannot, due to patient characteristics or condition and/or the riskiness of the treatment. For example, older and sicker patients are often at higher risk for incurring adverse events.4, 7, 8 Consistent with Donabedian’s model of patient care quality,9 preventable adverse events are often the result of process failures, which may in turn be related to structural characteristics, such as teaching status,3, 4, 6, 10 bed size,8 nurse staffing levels,11, 12 procedure volume,13, 14, 15 and other organizational characteristics of hospitals. Therefore, to examine the relationship between teaching status and PSI rates, it is important to account for both patient characteristics and structural characteristics that can affect processes and outcomes of care (Figure 1). Controlling for patient and facility characteristics other than teaching status, we used logistic regression analysis to estimate whether a hospitalization in a teaching hospital was more likely to incur a PSI event than a hospitalization in a nonteaching hospital.

Figure 1. Conceptual framework: Predictors of adverse events.

Studies of quality of care, in contrast to those studies focused on safety only, suggest that quality is higher in teaching hospitals than in nonteaching hospitals.1, 2 This is true for both outcome measures, such as risk-adjusted mortality,16, 17, 18 and process measures.19 The pattern is not universal; for example, one study found higher 30-day risk-adjusted surgical morbidity and mortality in VA teaching hospitals;20 another found higher rates of complications in non-Federal teaching hospitals, despite generally lower surgical mortality rates.21

Evidence on the relationship between hospital teaching status and risk of incurring a potentially preventable adverse event is much less consistent. The Harvard Medical Practice Study, using chart-abstracted data, found lower adverse event rates in teaching hospitals22 or no differences,23 but subsequent studies, generally using administrative data, have found either no difference or higher rates in teaching hospitals. Defining “teaching hospital” as membership in the Association of American Medical Colleges (AAMC) Council of Teaching Hospitals (COTH), Iezzoni and colleagues found higher rates of complications overall in teaching hospitals. However, using accreditation by the Association of Colleges of Graduate Medical Education (ACGME) as the definition of teaching hospital, they found the opposite.8 Romano and colleagues4 found the highest risk-adjusted rates of most categories of potentially preventable adverse events but lower rates of postoperative hip fracture at urban teaching hospitals. Other studies using the COTH-membership definition have found inconsistent relationships between teaching status and potentially preventable adverse events.3, 5

Findings correspond somewhat to methods: teaching hospitals appear to have higher adverse event rates when administrative data rather than chart-abstracted data are used, and when teaching hospitals are defined narrowly (COTH membership) rather than broadly (ACGME accreditation, presence of residents, or a resident-to-bed ratio). The relationship between teaching status and patient safety therefore bears further investigation, both because findings to date have not been consistent with research on the quality of care in teaching hospitals and because of the possibility of methodologic bias.

It is also important to learn whether teaching hospital characteristics—such as their relative size and volume, the complexity of care delivered in them, or more actionable aspects, such as coordination of care issues related to resident physicians—are at the root of apparent differences in adverse event rates. While the present study did not test all these specific potential explanations, it did control for size, it does include additional controls for patient demographics, and it does shed light on whether there seems to be a consistent trend across the two types of settings.

Two prior studies using administrative data found higher rates of selected infections due to medical care in teaching hospitals4, 5; one of those two studies and a third study found higher rates of postoperative pulmonary embolism or deep vein thrombosis (PE/DVT) in teaching hospitals.3, 4 Based on these studies, we hypothesized that PSI events would be more likely to occur during hospitalizations at teaching facilities than at nonteaching facilities for these two PSIs. While a number of studies have also compared failure to rescue rates in teaching and nonteaching hospitals, the findings have been inconsistent across classes of patients21, 24 and across studies,4, 5 as have studies of rates of other PSIs in teaching and nonteaching facilities. Therefore, we hypothesized that PSI events would be neither more nor less likely in hospitalizations at teaching facilities than in those at nonteaching facilities for failure to rescue and for the remaining 11 PSIs in our study.

Our study first compared risk-adjusted PSI rates among major teaching, minor teaching, and nonteaching hospital strata, separately in the VA health care system and the private sector. We then tested our hypotheses concerning a patient’s likelihood of experiencing a PSI event in major and minor teaching hospitals compared with nonteaching hospitals after controlling for patient-level and hospital-level characteristics, including nurse staffing levels and operating room procedure volume.

While previous studies have reported PSI rates in teaching and nonteaching hospitals, our study is distinctive in several ways. We tested the relationship between teaching status and 14 nonobstetric PSIs, and we performed multiple statistical analyses, whereas prior studies were limited to a few PSIs3, 5 or did not present statistical analyses.4 Our study incorporated structural variables and compared the effects of teaching hospital status in VA and non-Federal hospitals under consistent and carefully controlled conditions. Given that VA and non-Federal patient populations differ considerably, both in demographics and health status,25, 26 this afforded a new opportunity to see if any relationships between teaching status and PSIs were consistent across differing groups of hospitals. While previous studies of teaching hospital effects on PSIs have relied on AHRQ Healthcare Cost and Utilization Project (HCUP) Nationwide Inpatient Sample (NIS) data only, working with both VA and non-Federal discharge data afforded us an opportunity to increase the generalizability of our findings by applying comparable analytic methods to data from two very different health care delivery systems.

Methods

Data

The source of data for VA hospitalizations was the Patient Treatment File (PTF), an administrative database containing records of inpatient care delivered at VA facilities. Because each discharge record comprised four separate files and covered both acute and nonacute (e.g., skilled nursing) inpatient days, we built unified discharge records, with nonacute care excluded, for use with the AHRQ PSIs. The PTF and the methodology for creating acute-only records have been described elsewhere.27, 28 We included all VA acute inpatient care from hospitalizations with discharges in fiscal year 2004 (10/1/03 to 9/30/04), with certain exclusions described below. The source of data for non-Federal hospitalizations was the calendar year 2003 HCUP NIS, a stratified sample of all-payer acute inpatient care at non-Federal hospitals in 37 States.29 The NIS sampling frame covered approximately 90 percent of all U.S. hospital discharges. We estimated PSI rates for the U.S. population by applying HCUP-supplied weights, based on the NIS sampling frame, to the NIS data.30

The VA and non-VA patient populations differed in various ways, including age and sex composition (e.g., in the VA, no patients were under age 18, and 95 percent of patients were male)25 and mental health status. Therefore, to make the VA and HCUP databases as comparable as possible, we limited the discharges in this study to adult male patients. The methodology for creating acute-only VA discharge records eliminated from the VA data most pure psychiatric and substance abuse admissions, that is, those assigned to Diagnosis Related Group (DRG) Major Diagnostic Categories (MDCs) 19 (mental disorders) and 20 (alcohol/drug use disorders). We therefore excluded all discharges in MDCs 19 and 20 from both the NIS and VA datasets, except for DRG 424, a surgical DRG within the mental disorders MDC. We calculated the surgery volume for each facility based on counts of valid operating room procedures (as defined by DRG algorithms) in the VA and NIS discharge data.
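The cohort restrictions just described can be sketched as a simple record filter. This is an illustrative sketch only: the field names (age, sex, mdc, drg) and sample records are invented placeholders, not the actual PTF or NIS variable layouts.

```python
# Hypothetical sketch of the cohort exclusions: keep adult male discharges,
# drop MDCs 19 and 20 except for surgical DRG 424. Field names are
# illustrative, not the real PTF/NIS schemas.
def keep_discharge(rec: dict) -> bool:
    """Return True if the discharge record stays in the study cohort."""
    if rec["age"] < 18 or rec["sex"] != "M":
        return False  # adult males only
    if rec["mdc"] in (19, 20) and rec["drg"] != 424:
        return False  # mental disorder / substance abuse MDCs excluded
    return True

discharges = [
    {"age": 45, "sex": "M", "mdc": 5, "drg": 127},   # kept
    {"age": 60, "sex": "M", "mdc": 19, "drg": 430},  # excluded (MDC 19)
    {"age": 52, "sex": "M", "mdc": 19, "drg": 424},  # kept (surgical DRG 424)
]
cohort = [d for d in discharges if keep_discharge(d)]
```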

We used secondary data sources to obtain information for both VA and NIS hospital structural characteristics. Data from the 2003 American Hospital Association (AHA) Annual Survey Database provided information on hospital teaching status, number of beds and nursing hours (estimated from Registered Nurse and Licensed Practical Nurse Full-Time Equivalents) per adjusted patient day.15, 31 COTH-member hospitals were categorized as major teaching hospitals; hospitals with ACGME accreditation only and without COTH membership were categorized as minor teaching hospitals; hospitals with neither major nor minor teaching status were categorized as nonteaching. Three bed-size strata were created: large (≥325 staffed beds), medium (200–324 beds) and small (<200 beds). Because the VA sample was much smaller than the NIS, bed size stratum cut-points were driven by the need for an adequate count of VA facilities in each stratum. Information on Core-Based Statistical Areas (CBSA) from 2001 U.S. Census Bureau data provided an indicator of metropolitan or nonmetropolitan location of hospitals.4, 32 Metropolitan areas were urban, with 50,000+ population; micropolitan areas had a concentrated population of 10,000 to 50,000; all others were non-CBSA. Because very few VA facilities were non-CBSA, we categorized VA and NIS facilities as metropolitan and nonmetropolitan. We linked facility-level AHA and CBSA data and surgical volume to the VA and NIS patient-level discharge records via facility identifiers.

Patient Safety Indicators

We used the AHRQ PSIs to estimate rates of potentially preventable adverse events in NIS and VA data. The PSIs, tools for assessing patient safety using administrative data, are evidence-based indicators of potentially preventable adverse events. As described elsewhere,4, 33, 34 the PSIs were developed by the UC-Davis-Stanford Evidence-based Practice Center under sponsorship from AHRQ. They were developed to maximize the likelihood that flagged events are preventable and to minimize false positives at the potential expense of some false negatives.4, 35 The PSIs have good face validity, and studies suggest that a number of the PSIs have good construct validity,4, 36, 37, 38 although recent analyses also suggest that several PSIs (decubitus ulcer, postoperative hip fracture, and postoperative PE/DVT) identified events that were present at the time of admission.17, 39, 40 For our analysis, we selected 14 of the 16 nonobstetric hospital-level AHRQ PSIs. We did not include complications of anesthesia or transfusion reaction because their low occurrence rates did not support meaningful comparison.

Data Adaptation

The VA data required modification in order to apply the AHRQ PSI software. The PSIs selected discharges based on data elements in the UB-92 (1992 Uniform Bill) hospital claim, some of which were absent in the VA discharge record. Therefore, we used algorithms, described in detail elsewhere,28 to calculate variables, including principal procedure and admission type. For example, some PSI definitions excluded elective hospitalizations from the denominator and excluded cases based on length of stay. We used DRG and admission day to distinguish between elective and nonelective admissions. In addition, VA and NIS data were both modified to ensure that key data elements were as comparable as possible. For example, to minimize differences in methods for determining admission type or calculating length of stay that could affect PSI rates, we applied the VA algorithm for calculating admission type to both VA and NIS data, and we used the same length-of-stay definition for both NIS and VA hospitalizations (discharge date minus admit date, with a minimum of one).
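As a minimal illustration, the shared length-of-stay definition (discharge date minus admit date, with a floor of one day) might be computed as follows; this is a sketch of the stated definition, not the study's actual processing code.

```python
from datetime import date

def length_of_stay(admit: date, discharge: date) -> int:
    """Discharge date minus admit date, with a minimum of one day,
    per the common definition applied to both VA and NIS records."""
    return max((discharge - admit).days, 1)

los_same_day = length_of_stay(date(2004, 3, 1), date(2004, 3, 1))  # -> 1
los_week = length_of_stay(date(2004, 3, 1), date(2004, 3, 8))      # -> 7
```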

Analyses

All analyses were conducted separately on the VA and NIS databases. First, risk-adjusted PSI rates were calculated by applying AHRQ’s PSI software (Ver. 2.1, Rev. 3a)41 and the Statistical Analysis System (SAS®), Ver. 9.1, to the VA and NIS databases. Observed (raw) PSI rates (not reported here) were the number of hospitalizations flagged by the software with potential adverse events, divided by the number of hospitalizations at risk. The AHRQ software then computed risk-adjusted PSI rates using software-supplied parameter estimates from a logistic regression model that was developed by AHRQ on discharge data from all reporting hospitals in the 2002 HCUP State Inpatient Databases (SID) and included patient-level predictors of PSI events: age, sex, age-sex interactions, aggregated DRGs, and 27 comorbidities (modifications of the Elixhauser comorbidity index).42 Thus, the risk-adjusted rates reflected the sampled hospitals’ estimated PSI rates if they had the “average” case mix among all hospitals in the HCUP estimation sample. We generated overall risk-adjusted PSI rates for VA and NIS hospitals and then for VA and NIS hospitals stratified into major, minor, and nonteaching categories. We applied NIS sampling weights to the NIS data in calculating risk-adjusted rates, so that the rates would represent a national estimate. To determine whether rates differed across teaching hospital categories, we calculated 99 percent confidence intervals (CIs).
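The indirect standardization underlying such risk adjustment can be sketched as an observed-to-expected ratio scaled by a reference rate. This schematic mimics the logic described above but is not the AHRQ PSI software itself, and the numbers below are invented for illustration.

```python
def risk_adjusted_rate(flags, predicted_probs, reference_rate):
    """Indirectly standardized rate: (observed / expected) * reference rate.

    flags:           1 if the discharge was flagged with a PSI event, else 0
    predicted_probs: patient-level event probabilities from a risk model
                     (in the study, the AHRQ-supplied logistic model)
    reference_rate:  overall rate in the reference (estimation) population
    """
    observed = sum(flags)
    expected = sum(predicted_probs)
    return (observed / expected) * reference_rate

# Invented toy data: 2 observed events, 1.0 expected, reference rate 2/1,000.
rate = risk_adjusted_rate(
    flags=[1, 0, 0, 1, 0],
    predicted_probs=[0.3, 0.1, 0.2, 0.25, 0.15],
    reference_rate=0.002,
)
# observed=2, expected=1.0 -> rate = 0.004
```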

To compare the likelihood of a patient experiencing a PSI event in teaching and nonteaching hospitals, we created separate VA and NIS logistic regression models. The models were created at the discharge level of analysis, consistent with the conceptual framework shown in Figure 1. The models incorporated categorical variables for bed size and continuous variables for nurse staffing and operating room volume. The natural logarithm of operating room volume was used because the variable was positively skewed. We did not include geographic location in the model due to its high correlation with teaching status in the NIS sample and the small number of nonmetropolitan VA hospitals.

Each fixed-effects model also included a discharge-level (patient-specific) risk score, calculated using all variables and weights from the AHRQ risk-adjustment software,41 entered as an offset. We used SAS Proc Survey Logistic to provide a more robust standard error and to account for NIS sampling weights and for hospital-level cluster effects in both NIS and VA data. Eleven NIS facilities (one major teaching, two minor teaching, eight nonteaching) were missing nurse staffing data; for these, we substituted the NIS mean value. One NIS small metropolitan nonteaching facility was excluded due to its high outlier nurse staffing value (2,934 nursing hours per patient day). We calculated the relative odds of experiencing a PSI, comparing major teaching to nonteaching and minor teaching to nonteaching hospitals. To determine whether the predictors of experiencing a PSI differed across teaching hospital categories, we calculated 99 percent confidence intervals for the estimated relative odds.
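The relative odds and 99 percent confidence intervals reported below follow the standard transformation of a logistic regression coefficient and its standard error; the sketch below shows that arithmetic with made-up values for beta and its standard error, not estimates from the study's models.

```python
import math

def odds_ratio_ci(beta, se, z=2.576):
    """Odds ratio and 99% CI from a logistic regression coefficient.

    z = 2.576 is the standard normal quantile for a 99% interval.
    Returns (odds_ratio, lower_bound, upper_bound).
    """
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Invented coefficient: beta = 0.35, SE = 0.10.
or_, lo, hi = odds_ratio_ci(beta=0.35, se=0.10)
# exp(0.35) ~= 1.42, i.e., about 42 percent higher odds; the difference is
# significant at the 99% level only if the whole interval excludes 1.0.
```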

Results

VA and NIS facilities and discharges are described in Table 1. The VA data reported here are from 427,718 hospitalizations at 116 acute care VA hospitals in FY2004. This represents all but one of the 117 VA facilities providing acute inpatient care during fiscal year 2004; one hospital was excluded due to missing hospital-level data. The NIS data are from 2,381,353 unweighted hospitalizations at 992 non-Federal hospitals in 2003.

Table 1. Characteristics of VA and NIS samples: Hospitals and discharges.

Teaching hospitals comprised a majority (75 percent) of the VA hospitals and a minority (17 percent) of the NIS hospitals; 52 percent of VA hospitals and only 5 percent of NIS hospitals were major teaching facilities. The majority of major and minor teaching hospitals were in metropolitan locations in both VA and NIS. However, while nonmetropolitan facilities comprised only 8 percent of VA nonteaching hospitals, they comprised 40 percent of NIS nonteaching hospitals. Compared with the VA, a greater proportion of NIS hospitals were small. However, in both VA and NIS, more of the major teaching hospitals were large in bed size, rather than medium or small. In the VA, more of the minor teaching hospitals were small, but in the NIS, more of the minor teaching hospitals were large. Operating room volume and teaching status were associated in both VA and NIS: major teaching hospitals had the highest volume and nonteaching hospitals the lowest.

The relationship of nurse staffing to teaching status differed between the VA and the NIS: nurse staffing was slightly higher in VA major and minor teaching hospitals than in VA nonteaching hospitals, whereas it was lower in major and minor NIS teaching hospitals than in NIS nonteaching hospitals. Nurse staffing was higher overall in the VA. Average length of stay (LOS) was longest in major teaching hospitals and shortest in nonteaching hospitals in both VA and NIS; in all but major teaching hospitals, LOS was slightly longer in VA facilities. Overall, VA patients were older than patients at NIS facilities. Mean patient age was lowest in major teaching hospitals in both VA and NIS. However, age differences across teaching hospital categories were very small in the VA and larger in the NIS.

Risk-adjusted PSI rates are shown in Table 2. In both the VA and the NIS, major teaching hospitals had higher risk-adjusted rates of iatrogenic pneumothorax and selected infections due to medical care than nonteaching hospitals. The greatest differences between major teaching and nonteaching hospitals were for selected infections due to medical care: for the VA and the NIS, respectively, major teaching hospital rates were 2.6 and 3.8 per thousand, compared with 1.0 and 1.7 in nonteaching hospitals. NIS major teaching hospitals also had higher rates of postoperative PE/DVT than nonteaching hospitals. Differences in risk-adjusted rates between minor teaching hospitals and nonteaching hospitals were not significant in either the NIS or the VA.

Table 2. Risk-adjusted PSI rates (99% CI) by hospital teaching status: VA fiscal year 2004 and NIS calendar year 2003.

Regression analysis results are shown in Table 3. There were statistically significant differences between major teaching and nonteaching hospitals for five PSIs in the NIS and for none in the VA; there were no significant differences for minor teaching hospitals in either the VA or the NIS. In the NIS, a PSI event was significantly more likely in a major teaching hospital than in a nonteaching hospital for decubitus ulcer (42 percent higher odds), iatrogenic pneumothorax (45 percent), selected infections due to medical care (37 percent), postoperative PE/DVT (70 percent), and postoperative wound dehiscence (58 percent).

Table 3. Odds (99% CI) of incurring a PSI in major and minor teaching hospitals relative to nonteaching hospitals.

The results were consistent with our hypotheses that selected infections due to medical care and postoperative PE/DVT would be more likely, but for major teaching hospitals only. The results were inconsistent with our hypotheses that PSIs would be neither more nor less likely in teaching hospitals for decubitus ulcer, iatrogenic pneumothorax, and postoperative wound dehiscence.

Discussion

Comparison of teaching hospital and nonteaching hospital rates for 14 PSIs using data from two sets of hospitals with different structural characteristics and patient populations yielded moderately consistent results. Risk-adjusted PSI rates were either not different or higher in major teaching facilities compared to nonteaching facilities in both the VA and the NIS. Our findings of higher rates of selected infections due to medical care and postoperative PE/DVT in non-Federal major teaching hospitals are similar to prior studies.

Findings from our regression analyses suggest that, after accounting for other hospital characteristics and for patient risk factors, a hospitalization at a teaching facility may have similar or greater likelihood of a PSI event in comparison with hospitalization at a nonteaching facility. With a few exceptions, findings from regression analyses were consistent with the findings based on bivariate comparisons of risk-adjusted PSI rates. The inclusion in our models of variables representing hospital structural characteristics—bed size, operating room procedure volume, and nurse staffing—did not appear to have a substantial effect on the comparison between teaching and nonteaching facilities.

In a comparison of the risk-adjusted rates with the regression analyses, the VA had two significant rate differences between major teaching and nonteaching facilities and no significant differences in the regression. The NIS had three significant rate differences between major teaching and nonteaching facilities and five significant differences (including the three PSIs with significant rate differences) in the regression.

PSI events were more likely in major teaching hospitals only in the NIS, for three medical-surgical PSIs (decubitus ulcer, iatrogenic pneumothorax, and selected infections due to medical care) and two postoperative PSIs (postoperative PE/DVT and postoperative wound dehiscence). We found no commonality among these five PSIs that suggests a single explanation. For example, some indicators, such as pneumothorax and wound dehiscence, are procedure-related and potentially more sensitive to residents’ involvement in care. Others, such as decubitus ulcers, are likely more sensitive to nurse staffing and care. The five indicators cover a range from those more attributable to system weaknesses to those more attributable to individual technical error. Some indicators, such as decubitus ulcers and postoperative PE/DVT, are more sensitive to limitations of administrative data, such as lack of a “Present on Admission” data element, while others are less so.17, 39, 40

While the stratified risk-adjusted rates showed no patterns across PSIs that distinguished VA from NIS on the apparent effects of teaching hospitals on PSI rates, in the logistic regression, which controls for other structural characteristics, major teaching hospital status increased the odds of incurring a PSI in the NIS for five PSIs but did not have a significant effect in the VA. Further research is needed to learn whether these differences between VA and non-Federal hospitals are associated with actionable differences, such as use of safety protocols that are tailored to the involvement of residents in patient care or in orientation, training, and supervision of residents.

Limitations

Despite the use of two large datasets for our study, the data and methods are characterized by certain limitations. For some PSIs, events are so infrequent on average in the VA that 1 year’s data may not be adequate to detect “true” underlying event rates. The fact that the regression analysis yielded statistical significance for several PSIs in the NIS and for no PSIs in the VA may be indicative of limitations in statistical power. In general, factors such as limited clinical information and variation in coding limit the potential for adverse event detection using administrative data in comparison with chart-abstracted data.38, 43 Finally, our attempt to make the NIS and VA datasets more comparable by limiting our analysis to discharges of males aged 18 and older may limit the generalizability of our conclusions to the broader population.

Findings from our regression analyses in particular must be interpreted with caution. None of the hospital structural variables was a significant predictor of PSI events in our models for more than half of the PSIs in either the VA or the NIS, and the direction of a given variable’s effect differed somewhat across PSIs and between VA and NIS models (results not shown). This was particularly surprising in the case of the nurse staffing variable, given evidence in the literature of a relationship between nurse staffing and both quality and safety. However, that evidence also suggests that the key nurse staffing predictor of patient safety may be RN-only staffing or nursing skill mix rather than total RN plus LPN staffing.11, 12, 15 There is also evidence that the AHA nurse staffing data used in our study may overstate staffing in small, rural, and nonteaching hospitals.31 The hospital-level bed size and procedure volume variables we used may be too general to be valid predictors of adverse event rates: studies of the relationship between procedure volume and quality and safety suggest that the meaningful relationships are at the levels of specific providers and types of procedures.44

Implications for Research

This study adds to the literature on the relationship between a hospital’s teaching status and patient safety. We found significantly higher likelihood of a patient safety event occurring in major teaching hospitals relative to nonteaching hospitals for five PSIs in a nationwide sample of discharges from 992 non-Federal hospitals. Findings were not statistically significant in one year’s discharges from 116 VA hospitals. Although it is possible that quality of care may be more homogeneous across VA facilities, the lack of significant differences between teaching and nonteaching facilities in the VA may be attributable to low statistical power, which would have constrained our ability to detect any differences.

It will be important to extend our exploration of whether higher PSI rates in teaching hospitals are the direct result of the presence of residency training programs or the result of an interaction between teaching status and other aspects of hospital structure. In addition, if higher risk-adjusted adverse event rates are, in fact, associated with the presence of residency training programs, then it would be important to discern whether differences in documentation and coding or differences in structures or processes of care account for the higher rates. Recent studies of the impact of changes in resident duty hours45, 46, 47 are examples of such research. The fact that the major teaching hospital effects on PSIs differed between the NIS and the VA suggests that if residency programs increase risk to patients, the causes may well be actionable at the organizational level. Finally, further research is needed to assess the adequacy of risk adjustment in the comparison between teaching and nonteaching hospitals.

The potential policy implications of findings such as ours also underscore the need for studies testing the criterion validity of the PSIs. In addition, given the potential limitations of administrative data in detecting adverse events, it is also important for future research to use other sources, such as chart-abstracted data, to address questions similar to those addressed here.

Conclusion

This study was among the first to compare teaching and nonteaching hospitals using a regression model that incorporates hospital structural characteristics as controls and to report comparable analyses on data from VA and non-Federal hospitals. Our conclusion that PSI events may be as likely or more likely in teaching hospitals compared with nonteaching hospitals has important implications both for the further study and development of the PSIs and other tools for assessing patient safety using administrative data and for the understanding of structural factors affecting patient safety.

Acknowledgments and Disclaimers

This study was supported by grant number IIR-02-144-1 from the Department of Veterans Affairs, Veterans Health Administration, Health Services Research and Development (HSR&D) Service, and by the Agency for Healthcare Research and Quality, Center for Delivery, Organization and Markets. The authors thank Patrick S. Romano, MD, MPH, for his helpful comments on an early draft of this paper.

This paper does not represent the policies or the positions of the Department of Veterans Affairs. No official endorsement by this organization is intended or should be inferred.

References

1.
Ayanian JZ, Weissman JS. Teaching hospitals and quality of care: A review of the literature. Milbank Q. 2002;80:569–593. [PMC free article: PMC2690120] [PubMed: 12233250]
2.
Kupersmith J. Quality of care in teaching hospitals: A literature review. Acad Med. 2005;80:458–466. [PubMed: 15851459]
3.
Duggirala AV, Chen FM, Gergen PJ. Postoperative adverse events in teaching and nonteaching hospitals. Fam Med. 2004;36:508–513. [PubMed: 15243833]
4.
Romano PS, Geppert JJ, Davies S, et al. A national profile of patient safety in U.S. hospitals. Health Aff. 2003;22:154–166. [PubMed: 12674418]
5.
Thornlow DK, Stukenborg GJ. The association between hospital characteristics and rates of preventable complications and adverse events. Med Care. 2006;44:265–269. [PubMed: 16501398]
6.
Kohn LT, Corrigan JM, Donaldson MS, editors. To err is human: Building a safer health system. Washington, DC: National Academies Press; 2000. [PubMed: 25077248]
7.
Bates DW, Miller EB, Cullen DJ, et al. Patient risk factors for adverse drug events in hospitalized patients. Arch Intern Med. 1999;159:2553–2560. [PubMed: 10573045]
8.
Iezzoni L, Daley J, Heeren T, et al. Identifying complications of care using administrative data. Med Care. 1994;32:700–714. [PubMed: 8028405]
9.
Donabedian A. The definition of quality and approaches to its assessment. Vol. 1. Ann Arbor, MI: Health Administration Press; 1980. Explorations in quality assessment and monitoring.
10.
Hutter MM, Glasgow RE, Mulvhill SJ. Does the participation of a surgical trainee adversely impact patient outcomes? A study of major pancreatic resections in California. Surgery. 2000;128:286–292. [PubMed: 10923006]
11.
Needleman J, Buerhaus P, Mattke S, et al. Nurse-staffing levels and the quality of care in hospitals. N Engl J Med. 2002;346:1715–1722. [PubMed: 12037152]
12.
Person SD, Allison JJ, Kiefe CI, et al. Nurse staffing and mortality for Medicare patients with acute myocardial infarction. Med Care. 2004;42:4–12. [PubMed: 14713734]
13.
Birkmeyer JD, Siewers AE, Finlayson EV, et al. Hospital volume and surgical mortality in the United States. N Engl J Med. 2002;346:1128–1137. [PubMed: 11948273]
14.
Jha AK, Perlin JB, Kizer KW, et al. Effect of the transformation of the Veterans Affairs health care system on the quality of care. N Engl J Med. 2003;348:2218–2227. [PubMed: 12773650]
15.
Elixhauser A, Steiner C, Fraser I. Volume thresholds and hospital characteristics in the United States. Health Aff. 2003;22:167–177. [PubMed: 12674419]
16.
Allison JJ, Kiefe CI, Weissman NW, et al. Relationship of hospital teaching status with quality of care and mortality for Medicare patients with acute MI. JAMA. 2000;284:1256–1262. [PubMed: 10979112]
17.
Polancich S, Restrepo E, Prosser J. Cautious use of administrative data for decubitus ulcer outcome reporting. Am J Med Qual. 2006;21:262–268. [PubMed: 16849783]
18.
Taylor DH Jr, Whellan DJ, Sloan FA. Effects of admission to a teaching hospital on the cost and quality of care for Medicare beneficiaries. N Engl J Med. 1999;340:293–299. [PubMed: 9920955]
19.
Keeler EB, Rubenstein LV, Kahn KL, et al. Hospital characteristics and quality of care. JAMA. 1992;268:1709–1714. [PubMed: 1527880]
20.
Khuri SF, Najjar SF, Daley J, et al. Comparison of surgical outcomes between teaching and nonteaching hospitals in the Department of Veterans Affairs. Ann Surg. 2001;234:370–382. 82–83. [PMC free article: PMC1422028] [PubMed: 11524590]
21.
Silber JH, Rosenbaum PR, Schwartz JS, et al. Evaluation of the complication rate as a measure of quality of care in coronary artery bypass graft surgery. JAMA. 1995;274:317–323. [PubMed: 7609261]
22.
Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients: Results of the Harvard Medical Practice Study I. N Engl J Med. 1991;324:370–376. [PubMed: 1987460]
23.
Thomas EJ, Orav EJ, Brennan TA. Hospital ownership and preventable adverse events. J Gen Intern Med. 2000;15:211–219. [PMC free article: PMC1495442] [PubMed: 10759995]
24.
Silber JH, Williams SV, Krakauer H, et al. Hospital and patient characteristics associated with death after surgery. A study of adverse occurrence and failure to rescue. Med Care. 1992;30:615–629. [PubMed: 1614231]
25.
Petersen LA, Normand SL, Daley J, et al. Outcome of myocardial infarction in Veterans Health Administration patients as compared with Medicare patients. N Engl J Med. 2000;343:1934–1941. [PubMed: 11136265]
26.
Yu W, Ravelo A, Wagner TH, et al. Prevalence and costs of chronic conditions in the VA health care system. Med Care Res Rev. 2003;60:146S–167S. [PubMed: 15095551]
27.
Wagner TH, Chen S, Barnett PG. Using average cost methods to estimate encounter-level costs for medical-surgical stays in the VA. Med Care Res Rev. 2003;60:15S–36S. [PubMed: 15095543]
28.
Rivard P, Elwy AR, Loveland S, et al. Advances in patient safety: From research to implementation. Rockville, MD: Agency for Healthcare Research and Quality and Department of Defense; 2005. [Accessed April 28, 2008]. Applying patient safety indicators (PSIs) across healthcare systems: Achieving data comparability. Concepts and methodology. AHRQ Pub. 05-0021-2. Available at: www​.ahrq.gov/downloads​/pub/advances/vol2/Rivard.pdf. [PubMed: 21249839]
29.
Healthcare Cost and Utilization Project (HCUP). HCUP Nationwide Inpatient Sample (NIS). Rockville, MD: Agency for Healthcare Research and Quality; 2007. [Accessed April 19, 2008]. Available at: www.hcup-us.ahrq.gov/nisoverview.jsp. [PubMed: 21413206]
30.
Whalen D, Houchens R, Elixhauser R. HCUP Nationwide Inpatient Sample (NIS) comparison report, HCUP methods series report # 2008-01. 2005. [Accessed April 19, 2008]. Available at: www​.hcup-us.ahrq.gov/reports/methods.jsp.
31.
Jiang HJ, Stocks C, Wong CJ. Disparities between two common data sources on hospital nurse staffing. J Nurs Scholarsh. 2006;38:187–193. [PubMed: 16773924]
32.
Loux S, Payne S, Knott A. Advances in patient safety: From research to implementation. Rockville, MD: Agency for Healthcare Research and Quality; 2005. [Accessed April 28, 2008]. Comparing patient safety in rural hospitals by bed count. Research findings. AHRQ Pub. 05-0021-1. Available at: www​.ahrq.gov/downloads​/pub/advances/vol1/Loux.pdf. [PubMed: 21249783]
33.
AHRQ quality indicators. Patient safety indicators download. Rockville, MD: Agency for Healthcare Research and Quality; 2007. [Accessed April 19, 2008]. Available at: www.qualityindicators.ahrq.gov/psi_download.htm.
34.
Zhan C, Miller MR. Excess length of stay, charges, and mortality attributable to medical injuries during hospitalization. JAMA. 2003;290:1868–1874. [PubMed: 14532315]
35.
AHRQ quality indicators. Guide to patient safety indicators download, Ver. 2.1, Rev. 3. Rockville, MD: Agency for Healthcare Research and Quality; 2007. [Accessed April 19, 2008]. Available at: www.qualityindicators.ahrq.gov/psi_download.htm.
36.
Rosen AK, Rivard P, Zhao S, et al. Evaluating the patient safety indicators: How well do they perform on VA data? Med Care. 2005;43:873–884. [PubMed: 16116352]
37.
Rosen AK, Zhao S, Rivard P, et al. Tracking rates of patient safety indicators over time: Lessons from the Veterans Administration. Med Care. 2006;44:850–861. [PubMed: 16932137]
38.
Zhan C, Miller MR. Administrative data based patient safety research: A critical review. Qual Saf Health Care. 2003;12:ii58–ii63. [PMC free article: PMC1765777] [PubMed: 14645897]
39.
Houchens R, Elixhauser A, Romano P. How often are potential "patient safety events" present on admission? AHRQ PSNet. Rockville, MD: Agency for Healthcare Research and Quality; [Accessed April 28, 2008]. Available at: www.psnet.ahrq.gov/resource.aspx?resourceID=6891.
40.
Naessens JM, Campbell CR, Berg B, et al. Impact of diagnosis timing indicators on measures of safety, comorbidity, and case mix groupings from administrative data sources. Med Care. 2007;45:781–788. [PubMed: 17667313]
41.
AHRQ quality indicators. Patient safety indicators SAS Software documentation, Ver. 2.1, Rev. 1. [Accessed April 19, 2008]. Available at: www​.qualityindicators​.ahrq.gov/psi_archive.htm.
42.
Elixhauser A, Steiner C, Harris DR, et al. Comorbidity measures for use with administrative data. Med Care. 1998;36:8–27. [PubMed: 9431328]
43.
Romano PS. Asking too much of administrative data? J Am Coll Surg. 2003;196:337–338. [PubMed: 12595063]
44.
Halm EA, Lee C, Chassin MR. Is volume related to outcome in health care? A systematic review and methodologic critique of the literature. Ann Intern Med. 2002;137:511–520. [PubMed: 12230353]
45.
Fletcher KE, Davis SQ, Underwood W, et al. Systematic review: Effects of resident work hours on patient safety. Ann Intern Med. 2004;141:851–857. [PubMed: 15583227]
46.
Poulose BK, Ray WA, Arbogast PG, et al. Resident work hour limits and patient safety. Ann Surg. 2005;241:847–856. [PMC free article: PMC1357165] [PubMed: 15912034]
47.
Shetty KD, Bhattacharya J. Changes in hospital mortality associated with residency work-hour regulations. Ann Intern Med. 2007;147:73–80. [PubMed: 17548403]
