
Developing a Threshold for Small VA Hospitals: Evidence Brief on Quality and Safety

Mark Helfand, MD, MPH, MS, Kim Peterson, MS, Susan Carson, MPH, and Linda Humphrey, MD, MPH.

Washington (DC): Department of Veterans Affairs (US); 2013.

PREFACE

The Quality Enhancement Research Initiative's (QUERI) Evidence-based Synthesis Program (ESP) was established to provide timely and accurate syntheses of targeted healthcare topics of particular importance to Veterans Affairs (VA) managers and policymakers as they work to improve the health and healthcare of Veterans. The ESP disseminates these reports throughout the VA.

QUERI provides funding for four ESP Centers, and each Center has an active VA affiliation. The ESP Centers generate evidence syntheses on important clinical practice topics, and these reports help:

  • develop clinical policies informed by evidence,
  • guide the implementation of effective services to improve patient outcomes and to support VA clinical practice guidelines and performance measures, and
  • set the direction for future research to address gaps in clinical knowledge.

In 2009, the ESP Coordinating Center was created to expand the capacity of the QUERI Central Office and the four ESP sites by developing and maintaining program processes. In addition, the Center established a Steering Committee composed of QUERI field-based investigators, VA Patient Care Services, the Office of Quality and Performance, and Veterans Integrated Service Network (VISN) Clinical Management Officers. The Steering Committee provides program oversight, guides strategic planning, coordinates dissemination activities, and develops collaborations with VA leadership to identify new ESP topics of importance to Veterans and the VA healthcare system.

Comments on this evidence brief are welcome and can be sent to Nicole Floyd, ESP Coordinating Center Program Manager, at nicole.floyd@va.gov.

INTRODUCTION

Small hospitals have less opportunity to achieve economies of scale and face constraints on the range of services they can provide. As the demand for acute medical and surgical beds declines, the number of facilities with fewer than 30 beds is projected to increase over the next 10 years. In light of these challenges, the Veterans Health Administration (VHA) is assessing alternative strategies for delivering high-quality healthcare to Veterans in areas served by small VA hospitals. The objective of this Evidence Brief was to identify and critically evaluate evidence regarding a size threshold for small general medical/surgical hospitals to maintain safe and high-quality care.

The relationship between quality and hospital size or volume is complex. As a 1984 review put it, “It is also widely acknowledged that bigger hospitals are different kinds of organizations than smaller hospitals, with different structures, processes and output, so that size cannot be treated as an isolated variable.”[1] Several patient, provider, and hospital characteristics that are believed to affect the relationship between hospital size and measurement of quality of care are given in Table 1. In order to isolate the effect of hospital size, a study must account for the potentially confounding effects of some or all of these factors.

Table 1. Some factors known or thought to affect the relationship between hospital size and quality of care.

SCOPE AND METHODS

KEY QUESTIONS

  1. Is there a clear, consistent relationship between hospital size and quality of care?
  2. What is the evidence regarding a minimum size threshold for small general medical/surgical hospitals to maintain safe and high-quality care?

MEASURES OF HOSPITAL SIZE

We use the terms “volume” and “size” interchangeably. Measures of hospital size or volume include number of beds, average daily census, and annual discharges. In small hospitals, variation in occupancy can make the relationship between these measures less reliable,[3] but beds and annual discharges are the most accessible and commonly used measures. For the second question, we sought studies that evaluated specific threshold values or cut-off points below 100 beds.

TYPE OF HOSPITAL SERVICES

We focused on general medical and surgical services. We excluded subspecialty services, such as oncology, cardiac surgery, and specialty surgery. Research findings on physician volume for particular procedures differ sharply from findings on hospital volume or size. For example, in most of the classic studies of procedure volume and outcomes conducted in the 1980s, hospital size did not predict surgical mortality, whereas physician volume did.[4]

OUTCOMES

We included inpatient medical quality, inpatient surgical quality, and critical safety events (including mortality and all-cause readmissions). We excluded studies of cost or efficiency if neither the inputs nor the outputs had a quality component.

To find studies that used mortality as an outcome, we used the reference list of a systematic review[5] and supplemented this with an electronic search of PubMed for the period from January 1, 2010 to December 12, 2012. We relied primarily on the results from the PubMed search (2010 to 2012) to find studies of other outcomes. The PubMed search used the Medical Subject Headings ‘Hospital Bed Capacity, under 100,’ ‘Hospitals, Low-Volume,’ and ‘Hospitals, Rural,’ combined with key word searching for hospital characteristics, size and volume.
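As an illustration of how this search could be rerun, the sketch below queries NCBI's public E-utilities API with a reconstruction of the strategy described above. The exact query string is our assumption; the report's original search syntax may have differed.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical reconstruction of the strategy described above: the three
# MeSH headings OR'd together, combined with keyword terms for hospital
# characteristics, size, and volume.
TERM = (
    '("Hospital Bed Capacity, under 100"[MeSH] OR '
    '"Hospitals, Low-Volume"[MeSH] OR "Hospitals, Rural"[MeSH]) '
    'AND (hospital characteristics OR hospital size OR hospital volume)'
)

params = urllib.parse.urlencode({
    "db": "pubmed",
    "term": TERM,
    "datetype": "pdat",        # restrict by publication date
    "mindate": "2010/01/01",   # start of the report's search window
    "maxdate": "2012/12/12",   # end of the report's search window
    "retmode": "json",
    "retmax": 200,
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params

with urllib.request.urlopen(url) as resp:
    result = json.load(resp)["esearchresult"]

print(result["count"], "records; first PMIDs:", result["idlist"][:5])
```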

We used Google Scholar searches and reference lists from other articles to supplement the PubMed search. We did not apply strict inclusion criteria to these supplemental searches. Rather, we looked for studies that provide insight into the relationship between hospital size and quality, with a particular focus on studies in the VA population or in settings and populations with similarities to the VA.

All data abstraction, quality assessment, and data synthesis were first completed by one reviewer and then checked by another.

RESULTS

KEY QUESTION #1. Is there a clear, consistent relationship between hospital size and quality of care?

We found two systematic reviews that evaluated the relationship between hospital size and quality of care outcomes.[5, 6]

The earlier review (2008) presented a qualitative analysis of studies that evaluated the effects of hospital size on quality improvement, adverse events, and mortality outcomes.[6] Among the 92 articles included in the review,[6] 18 used hospital size (number of beds) as a variable of interest.[4, 7-23] Overall, as shown in Table 2, a relationship between hospital size and quality measures was either not found (for adverse events) or was inconsistent (for other measures).

Table 2. Results of Hearld systematic review (2008).[6]

The 2012 review by Fareed et al., which concluded that smaller hospitals had higher mortality, had serious flaws.[5] Among the eight studies in the Hearld 2008 review that evaluated mortality,[4, 7, 14-18, 20] only three[16-18] were included in the Fareed 2012 meta-analysis.[5] Fareed's pooled analysis of these three studies found that hospitals with more beds had lower odds of patient mortality than hospitals with fewer beds (odds ratio = 0.775; 95% CI and p-value not reported). The validity of this finding is highly questionable. First, it is based on a small subset of available studies, ignoring disparate findings from the remaining studies. Second, the Fareed review appears to have misinterpreted the data from the original study by Hartz et al. (1989).[16] The Fareed meta-analysis used Hartz's mortality rates adjusted for severity of illness alone, which favored larger hospitals (mean mortality per 1000 patients, small hospitals = 117.8 versus large hospitals = 114.7). However, when Hartz et al. adjusted for both severity of illness and hospital characteristics, mortality was significantly lower for smaller hospitals (mean mortality per 1000 patients, small hospitals = 115.6 versus large hospitals = 116.9).
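To make the direction reversal concrete, the per-1000 mortality rates quoted above can be converted to odds ratios directly. The short calculation below is a sketch using only the Hartz figures as reported; it shows the large-versus-small comparison flipping from OR < 1 to OR > 1 once hospital characteristics enter the adjustment.

```python
def odds_ratio(rate_large_per_1000: float, rate_small_per_1000: float) -> float:
    """Odds of death in large hospitals relative to small hospitals."""
    p_large = rate_large_per_1000 / 1000
    p_small = rate_small_per_1000 / 1000
    return (p_large / (1 - p_large)) / (p_small / (1 - p_small))

# Adjusted for severity of illness only: favors larger hospitals.
print(round(odds_ratio(114.7, 117.8), 3))  # 0.97 (OR < 1)

# Adjusted for severity of illness plus hospital characteristics:
# favors smaller hospitals.
print(round(odds_ratio(116.9, 115.6), 3))  # 1.013 (OR > 1)
```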

KEY QUESTION #2. What is the evidence regarding a minimum size threshold for small general medical/surgical hospitals to maintain safe and high-quality care?

Mortality

Two studies of mortality included in the 2012 systematic review by Fareed et al.[5] defined a threshold below 100 beds (Table 3).[16, 18]

Table 3. Results of studies evaluating thresholds below 100 beds.

Both studies used 30-day mortality rates derived by the U.S. Health Care Financing Administration (HCFA). In the 1989 study, after adjustment for severity of illness and various hospital characteristics, the mean mortality rate per 1000 patients was higher for the largest hospitals (87.5th percentile) than for the smallest ones (12.5th percentile), but not dramatically so.[16] In the 2001 study, the mortality rates for the lowest and highest quartiles, adjusted for hospital and market characteristics, were not significantly different.[18]

These studies probably have limited applicability to the VA hospital system. Unlike VA hospitals, most of the hospitals in these studies were private (76.4% to 85.8%), and ownership was a significant predictor of mortality in both studies. Neither study adjusted for membership in a multihospital system such as the VA. Small hospitals that belong to a multihospital system can take advantage of economies of scale and of more extensive communication and clinical resources in ways that small independent hospitals likely cannot, and as a result they could function at higher levels than independent hospitals of the same size.

The remaining 14 studies in the Fareed review evaluated bed size thresholds of 100 and above and/or evaluated the quality of subspecialty services that are not available in smaller hospitals. In our search, we found three additional studies that evaluated number of beds as a measure of hospital size.[25-27] Two of these evaluated bed size thresholds of 200.[26, 27] The other, a study of critically ill patients with congestive heart failure in British Columbia, presented a scatterplot indicating that a substantial percentage of hospitals had fewer than 50 acute care beds.[25] The scatterplot (reprinted here as Figure 1) suggests no significant relationship between bed count and mortality, a finding confirmed in a logistic regression analysis.[25]

Figure 1. Relationship between size of hospital, as indicated by number of acute care beds, and hospital mortality for patients who had congestive heart failure and who were admitted to intensive care units in British Columbia from 1994 to 1997 (from Dubord 2010).[25]

Evidence from Critical Access Hospitals

Evidence on the quality of care in Critical Access Hospitals (CAHs) is also relevant for examining the relationship between hospital size and quality of patient care. The Medicare Rural Hospital Flexibility Program of the 1997 Balanced Budget Act defined CAHs as “hospitals with no more than 25 acute care beds and located more than 35 miles from the nearest hospital.”[28]

CAHs score lower on quality measures and have higher mortality than non-CAHs. In the first of two high-quality retrospective studies that compared CAHs with non-CAHs, CAHs were less likely to meet performance indicators related to acute myocardial infarction, congestive heart failure, and pneumonia (Table 4).[28] The CAHs also had higher mortality related to these conditions. However, patient satisfaction in CAHs was higher than in non-CAH hospitals (not shown).

Table 4. Results from Joynt et al.[28]

In the other high-quality study,[29] CAHs had higher 30-day risk-standardized mortality among patients with ischemic stroke (11.9% vs. 10.9%; P<0.001). However, no difference was found between CAHs and non-CAHs in 30-day risk-standardized readmission rates for patients with ischemic stroke (13.7% vs. 13.7%; P=0.3).[29] Both studies adjusted for the potential confounding effects of differences between CAHs and non-CAHs in various patient and hospital characteristics, although it is not possible to be certain that the adjustment for case mix was adequate.

Findings from these studies may have limited applicability to the small VA hospitals that are near larger non-VA hospitals. More importantly, neither study adjusted for membership in a multihospital system or for health system features associated with membership in a multihospital system. Joynt and her colleagues as well as advocates for critical access hospitals have noted that being a member of a multihospital system is associated with better outcomes for acute coronary syndromes, congestive heart failure, and pneumonia.[30, 31] Some features of the VA system that may be lacking in non-VA comparators include centralized resource planning; linkage to high-quality clinic services1; opportunities to obtain needed resources and knowledge through interaction with specialists at larger, referral VA hospitals; use of a centralized clinical information system; collaboration on transfer policy and specialized service delivery; and quality monitoring and control systems.

Advocates for CAHs have argued that participation in voluntary multihospital quality monitoring and improvement programs may also improve outcomes in small hospitals.[33, 34] It is not clear whether this is true. A third study,[35] an annual national report on CAH quality published by the Flex Monitoring Team, reported that voluntary participation in the Centers for Medicare and Medicaid Services (CMS) Hospital Compare program increased from 41% of CAHs in 2004 to 74% in 2010. Using the 2010 data from the CMS Hospital Compare public reporting database, Casey reported that, although CAHs improved from 2006 to 2010 on process-of-care measures, they still had lower scores than non-CAH rural and urban hospitals. Data on outcome measures were sparse. In patients with acute myocardial infarction (AMI), for example, no CAH had a rate that was statistically different from the U.S. rate for all hospitals, but only 8% of CAHs reported an AMI mortality rate, and information on patient characteristics was not reported.

Relation of Quality to Efficiency

Research on hospital size and quality overlaps with research on hospital efficiency. A widely held hypothesis about this relationship is that “health care quality improvement can reduce resource use by eliminating medical errors and unnecessary procedures.”[36]

Estimates of a threshold for financial viability are based on efficiency studies. Although we did not search specifically for articles about cost or efficiency, we identified several articles that argue for evaluating quality as an input or an output in efficiency studies. As Mutter and colleagues wrote:

“There have been numerous calls in the literature for the explicit incorporation of controls for quality in frontier studies. In discussing the SFA literature, Li and Rosenman (2001) note that “much empirical work in estimating hospital costs has been criticized as failing to control for quality in the model, which gives rise to possible biases” (p. 78). Indeed, Folland and Hofler (2001), McKay, Deily, and Dorner (2002/2003), and Rosko (2004) note the difficulty in obtaining data that adequately adjusts for quality and express caution about the effects of omitting quality variables in SFA2 studies.”[37]

In 2007, for example, the AQA3 defined an “efficiency of care” measure as “… a measure of cost of care associated with a specified level of quality of care.”[38] In studies of hospital efficiency, omission of quality variables or use of “structural” measures of quality, such as teaching hospital status, can lead to biased inefficiency estimates. Including validated quality measures among the inputs can affect which hospitals are classified as “inefficient,” and efficiency measures may be sensitive to which quality measures are used.[36, 37, 39, 40]
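For readers unfamiliar with SFA, the standard composed-error cost frontier shows where quality enters. The display below is a generic textbook sketch, not the model of any study cited here:

\[
\ln C_i = f(\mathbf{y}_i, \mathbf{w}_i, \mathbf{q}_i; \beta) + v_i + u_i,
\qquad v_i \sim N(0, \sigma_v^2), \quad u_i \ge 0,
\]

where C_i is total cost for hospital i, y_i its outputs, w_i its input prices, q_i its quality measures, v_i random noise, and u_i the one-sided inefficiency term. If the quality measures q_i are omitted, the costs of producing quality are absorbed into u_i, inflating the estimated inefficiency; this is the bias the authors quoted above warn about.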

In a systematic review of 172 efficiency studies, the Southern California/RAND Evidence-based Practice Center (EPC) found that, although very few had included quality measures as inputs or outputs, those that did so provided more valid measurement of the relationship between inputs (e.g., hospital size) and efficiency.[41, 42]

There is consistent evidence that belonging to a network of hospitals improves efficiency,[43, 44] a relationship that persists when there is adjustment for quality measures.[39] However, there is less direct evidence of the effect of networks on hospital quality, and none that relates hospital size and network membership to quality or safety.

Evidence from Federal and VA Hospitals

We found no published studies that directly addressed a quality or safety threshold for Federal or VA facilities. However, six studies of Federal or VA facilities that examined efficiency or quality may be relevant to decision-making about a minimum size threshold.[45-51]

Two of the studies focused on efficiency, without accounting for its relationship with quality.[48, 49]

Four others examined efficiency in relation to quality. Two of these, which have limited relevance to defining a size threshold, were primarily intended to demonstrate the use of stochastic frontier analysis (SFA) in budgeting and resource-allocation decisions. Yaisawarng and Burgess (2006) used SFA to evaluate data on patient satisfaction, access, and costs from all VA hospitals; their purpose was primarily to compare the Veterans Equitable Resource Allocation (VERA) system to a performance-based process that used SFA.[51] Gao, Campbell, and Lovell (2006) also included patient satisfaction in their SFA model, and demonstrated benchmarking and budgeting applications of SFA by projecting an optimal budget for each VA facility.[45]

We were unable to obtain one4 of the two remaining studies in time for this report.[47]

The other study, Gao et al. (2011), is potentially very relevant. It used 2008 VA National Patient Care Database (NPCD) 30-day mortality data, Hospital Quality Alliance (HQA) process measures (for inpatients), and Healthcare Effectiveness Data and Information Set (HEDIS) indicators (for outpatients), as well as satisfaction data and data on the characteristics of all 138 VA medical centers. The study found that “half of all VA medical centers operate at a clinical efficiency level just 8 percent below the ideal” (median efficiency level, 1.08) and found only modest variation between the most (1.02) and least (1.30) efficient centers. In addition, the study found a significant correlation between greater efficiency and high performance on HQA measures, but no relationship between efficiency and 30-day, risk-adjusted mortality ratios for VA-hospitalized patients in general or intensive care unit patients in particular; there was also no difference in outpatient quality measures using HEDIS.[46]
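As a gloss on these figures, and assuming the conventional scoring in which the frontier equals 1, the efficiency level can be read as

\[
\text{efficiency level}_i = \frac{\text{observed cost}_i}{\text{estimated frontier cost}_i},
\]

so a median of 1.08 implies that the typical VA medical center's costs are roughly 8% above the estimated best-practice frontier, and the full range (1.02 to 1.30) keeps every center within about 30% of it.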

First, these results indicate that, when some aspects of quality are taken into account, variation in efficiency may be less than would be expected from estimates based on non-VA efficiency studies. Second, they support the hypothesis that a system with greater integration, accountability, and financial incentives has less variation in efficiency and better quality than comparable hospitals that are not part of an integrated system. Third, the dataset used in this study could probably be used to identify small hospitals that are outliers in efficiency, quality, or both, information that might help inform the selection of a minimum threshold for hospital size.

These results also raise the question of whether there are still opportunities to increase system integration with the aim of improving results at small hospitals. While we did not systematically investigate this question, we offer one example of a relevant study. Sales and colleagues evaluated the association between the level of coordination of cardiology consultation services and the processes for transferring patients with acute coronary syndrome from primary to tertiary hospitals for specialized diagnostic tests and therapies (e.g., cardiac catheterization and coronary revascularization).[50] The study found that the likelihood of transfer to a tertiary hospital was significantly increased by the presence of a referral coordinator at the primary VHA hospital (OR, 6.28; 95% CI, 2.92 to 13.48), but significantly decreased by the presence of a VHA staff cardiologist on site and a referral coordinator at the tertiary VHA hospital (OR, 0.45; 95% CI, 0.27 to 0.77). Consistent with the literature on the effects of membership in an integrated multihospital system, this study demonstrated that clinical integration factors, such as the presence of a referral coordinator, can have important effects on quality of care, and that some primary VHA hospitals may not be as fully integrated as others.

DISCUSSION

As part of its efforts to assess alternative strategies for making decisions about small hospitals, the VHA is interested in the evidence on the relationship between hospital size and quality of care. Specifically, the VHA requested an independent third-party evidence review to recommend an appropriate threshold for considering mission realignment for small hospitals, those with far fewer than 100 beds. An optimal approach to evaluating the reliability of a hospital size threshold as an independent predictor of healthcare quality would first involve demonstrating a consistent relationship between hospital size and quality. Then, if such a relationship exists, the next step would be to compare outcomes for patients above and below a priori established thresholds, controlling for potential confounders among patient, provider, and hospital characteristics, and replicating findings in a validation sample.

Overall, a relationship between hospital size and quality measures was either not found (for adverse events) or was inconsistent (for other measures). Further, the literature on thresholds below 100 beds is sparse, has important methodological limitations, has low applicability to the VA hospital system, and is insufficient to support any recommendations for hospital size thresholds.

The relationship between hospital size and quality of care is extremely complex because of the potential confounding influence of numerous other factors, including characteristics of patients; healthcare providers (e.g., qualifications, provider mix); organizational structure and function; scope of services; geographic area; and the capacity of other local community health systems.

For small hospitals that also have poor outcomes, a pertinent question is whether the gap in quality can be reduced by improvements in management, system integration, or other quality-related or safety-related interventions. Our review neither confirms nor refutes this hypothesis. Studies that examine the relationship between these factors and variation in outcomes among small hospitals are needed to address this hypothesis.

FUTURE RESEARCH

Studying the Factors that Affect Quality Among Small Hospitals

Additional analysis of VHA or Critical Access Hospital data could be used to examine factors that affect quality among small hospitals. Such an analysis could inform strategies to optimize the performance of small hospitals. The research question is: “Among VHA hospitals with low average daily census (ADC), what are the major factors contributing to high quality and safety of care?”

The datasets used in the 2011 VHA study by Gao et al.[46] (Table 5) and the CAH dataset used in the 2011 study by Joynt et al.[28] could be useful in answering this question. The Gao et al. dataset was constructed to evaluate efficiency but could be used to examine hospital characteristics in relation to quality and safety. Neither dataset can address all potential factors, but either one could address some of the potentially important ones.

Table 5. Data sources from 2011 VHA study by Gao et al.[46]

The dependent variables would be quality and safety measures. Independent variables (factors that might affect quality) could include any of those listed in Table 1. Of these, the factors most relevant to a VHA dataset include provider medical training, availability of specialty services (e.g., intensive care, surgical unit), type and ratio of nursing and ancillary staff, geographic isolation, rate of patient use of hospice services, and presence of a referral coordinator. Starting with the VHA data, it would be useful to incorporate measures of proximity to non-VHA facilities and to the nearest VA referral hospital. These data, alongside data on transfers and staffing, could form the basis of a measure of system integration; in particular, data such as those used by Sales et al. to study cardiovascular transfers within the VA should be sought.[50]
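To indicate the shape such an analysis could take, the sketch below regresses a quality measure on hospital size and several of the candidate factors named above. The data file and every variable name are hypothetical placeholders, not actual NPCD, HQA, or HEDIS field names.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per small VHA hospital, with a
# risk-adjusted quality composite and candidate explanatory factors.
df = pd.read_csv("small_vha_hospitals.csv")

model = smf.ols(
    "quality_composite ~ average_daily_census + has_icu + has_surgical_unit"
    " + rn_staff_ratio + miles_to_nearest_va_referral + has_referral_coordinator",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

print(model.summary())
```

Interaction terms (for example, between average daily census and the system-integration measures) could then test whether integration moderates any size effect.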

As noted in Table 1, the scope of inpatient services, such as surgery or intensive care, may be a determinant of outcomes and should be evaluated in this study. This particular factor is important because the findings could inform decisions to modify the scope of inpatient services (for example, inpatient surgery to ambulatory surgery, ED conversion to Urgent Care Clinic, closure of ICU).

Studying the Effects of Hospital Closure

In assessing the missions of small hospitals, the VHA plans to take into account not only healthcare quality but also the healthcare service capacity of surrounding community resources. Our brief review sought to compare small hospitals with larger ones. Another approach would be to compare the effect of small VHA hospitals on health outcomes and safety with that of alternative resources. We recommend examining the hospital closure literature to identify additional factors that may predict higher rates of success of nearby facilities in continuing to meet the healthcare needs of Veterans.

The scope of the review should include evidence of the effect of closure of a small hospital on the health of a community,[52, 53] particularly with respect to the availability of alternative sources of care. For example, if a smaller hospital closes in a community in which there are better performing, larger hospitals nearby, patient outcomes may actually improve as they transfer care to the larger institutions. Closure may reduce access to care, however, if other options are not available to patients because of geographic or financial limitations.

REFERENCES

1.
Scott WR, Flood AB. Review Article: Costs and Quality of Hospital Care: a Review of the Literature. Medical Care Research and Review. 1984;41(4):213–261. [PubMed: 10300077]
2.
Cutler DM. Making sense of medical technology. Health Aff (Millwood). 2006;25(2):w48–50. [PubMed: 16464902]
3.
Dalton K, Holmes M, Slifkin R. Unpredictable Demand and Low-Volume Hospitals. NC RHRP Findings Brief. 2003;75
4.
Burns LR, Wholey DR. The effects of patient, hospital, and physician characteristics on length of stay and mortality. Medical Care. 1991;29(3):251–71. [PubMed: 1997754]
5.
Fareed N. Size matters: a meta-analysis on the impact of hospital size on patient mortality. International Journal of Evidence-Based Healthcare. 2012;10(2):103–111. [PubMed: 22672599]
6.
Hearld LR, et al. Review: how do hospital organizational structure and processes affect quality of care?: a critical review of research methods. Medical Care Research & Review. 2008;65(3):259–99. [PubMed: 18089769]
7.
al-Haider AS, Wan TT. Modeling organizational determinants of hospital mortality. Health Services Research. 1991;26(3):303–23. [PMC free article: PMC1069827] [PubMed: 1869442]
8.
Anderson MA, Helms LB. Comparison of continuing care communication. Image -the Journal of Nursing Scholarship. 1998;30(3):255–60. [PubMed: 9753841]
9.
Basu J, et al. Managed care and preventable hospitalization among Medicaid adults. Health Services Research. 2004;39(3):489–510. [PMC free article: PMC1361021] [PubMed: 15149475]
10.
Bradley EH, et al. A qualitative study of increasing beta-blocker use after myocardial infarction: Why do some hospitals succeed? JAMA. 2001;285(20):2604–11. [PubMed: 11368734]
11.
Brennan TA, et al. Hospital characteristics associated with adverse events and substandard care. JAMA. 1991;265(24):3265–9. [PubMed: 2046108]
12.
Elixhauser A, et al. Volume thresholds and hospital characteristics in the United States. Health Affairs. 2003;22(2):167–77. [PubMed: 12674419]
13.
Finkelstein BS, et al. Patient and hospital characteristics associated with patient assessments of hospital obstetrical care. Medical Care. 1998;36(8 Suppl):AS68–78. [PubMed: 9708584]
14.
Foxman B, et al. Acute pyelonephritis in US hospitals in 1997: hospitalization and in-hospital mortality. Annals of Epidemiology. 2003;13(2):144–50. [PubMed: 12559674]
15.
Hannan EL, et al. A longitudinal analysis of the relationship between in-hospital mortality in New York State and the volume of abdominal aortic aneurysm surgeries performed. Health Services Research. 1992;27(4):517–42. [PMC free article: PMC1069892] [PubMed: 1399655]
16.
Hartz AJ, et al. Hospital characteristics and mortality rates. New England Journal of Medicine. 1989;321(25):1720–5. [PubMed: 2594031]
17.
Keeler EB, et al. Hospital characteristics and quality of care. JAMA. 1992;268(13):1709–14. [PubMed: 1527880]
18.
Mukamel DB, Zwanziger J, Tomaszewski KJ. HMO penetration, competition, and risk-adjusted hospital mortality. Health Services Research. 2001;36(6 Pt 1):1019–35. [PMC free article: PMC1089276] [PubMed: 11775665]
19.
Rogers AE, et al. The working hours of hospital staff nurses and patient safety. Health Affairs. 2004;23(4):202–12. [PubMed: 15318582]
20.
Silber JH, et al. Hospital and patient characteristics associated with death after surgery. A study of adverse occurrence and failure to rescue. Medical Care. 1992;30(7):615–29. [PubMed: 1614231]
21.
Slonim AD, et al. Hospital-reported medical errors in children. Pediatrics. 2003;111(3):617–21. [PubMed: 12612245]
22.
Vaughn TE, et al. Organizational predictors of adherence to ambulatory care screening guidelines. Medical Care. 2002;40(12):1172–85. [PubMed: 12458300]
23.
Zhan C, et al. The effects of HMO penetration on preventable hospitalizations. Health Services Research. 2004;39(2):345–61. [PMC free article: PMC1361011] [PubMed: 15032958]
24.
Helms LB, Anderson MA. Comparison of continuing care communication. Image -the Journal of Nursing Scholarship. 1998;30(3):256–261. [PubMed: 9753841]
25.
Dubord J, et al. In-hospital death of critically ill patients who have congestive heart failure: does size of hospital matter? Am J Med Qual. 2010;25(2):95–101. [PubMed: 20145195]
26.
Ghaferi AA, et al. Hospital characteristics associated with failure to rescue from complications after pancreatectomy. J Am Coll Surg. 2010;211(3):325–30. [PubMed: 20800188]
27.
Heidenreich PA, et al. Patient and hospital characteristics associated with traditional measures of inpatient quality of care for patients with heart failure. Am Heart J. 2012;163(2):239–45 e3. [PubMed: 22305842]
28.
Joynt KE, et al. Quality of care and patient outcomes in critical access rural hospitals. JAMA: The Journal of the American Medical Association. 2011;306(1):45. [PMC free article: PMC3337777] [PubMed: 21730240]
29.
Lichtman JH, et al. 30-Day Risk-Standardized Mortality and Readmission Rates After Ischemic Stroke in Critical Access Hospitals. Stroke. 2012;43(10):2741–2747. [PMC free article: PMC3547601] [PubMed: 22935397]
30.
Chukmaitov AS, et al. Variations in inpatient mortality among hospitals in different system types, 1995 to 2000. Medical Care. 2009;47(4):466–473. [PubMed: 19238101]
31.
Hines S, Joshi M. Variation in quality of care within health systems. Joint Commission Journal on Quality and Patient Safety. 2008;34(6):326–332. [PubMed: 18595378]
32.
Witness Testimony of Gerald M. Cross, M.D., FAAFP, Veterans Health Administration, Acting Principal Deputy Under Secretary for Health, U.S. Department of Veterans Affairs, in Joint House-Senate Field Hearing on Issues Facing Veterans in the Rural Areas of Appalachia, 5/29/2007. [2/20/2013]. http://veterans.house.gov/witness-testimony/dr-gerald-m-cross-md-faafp
33.
Lipsky MS, Glasser M. Critical access hospitals and the challenges to quality care. JAMA: The Journal of the American Medical Association. 2011;306(1):96. [PubMed: 21730248]
34.
Moscovice IS, Casey MM. Quality of Care in Critical Access Hospitals. JAMA. 2011;306(15):1653. [PubMed: 22009094]
35.
Casey M, Burlew M, Moscovice I. Critical Access Hospital Year 2 Hospital Compare Participation and Quality Measure Results. Flex Monitoring Team Briefing Paper. 2007
36.
Rosko MD, Mutter RL. Stochastic Frontier Analysis of Hospital Inefficiency A Review of Empirical Issues and an Assessment of Robustness. Medical Care Research and Review. 2008;65(2):131–166. [PubMed: 18045984]
37.
Mutter RL, Rosko MD, Wong HS. Measuring hospital inefficiency: the effects of controlling for quality and patient burden of illness. Health Services Research. 2008;43(6):1992–2013. [PMC free article: PMC2614001] [PubMed: 18783458]
38.
AQA. AQA Principles of ‘Efficiency’ Measures. 2006.
39.
Rosko MD, Mutter RL. What have we learned from the application of stochastic frontier analysis to US hospitals? Medical Care Research and Review. 2011;68(1 suppl):75S–100S. [PubMed: 20519428]
40.
Timbie JW, Normand S-L. A Comparison of Methods for Combining Quality and Efficiency Performance Measures: Profiling the Value of Hospital Care Following Acute Myocardial Infarction. Statistics in Medicine. 2007;27(9):1351–70. [PubMed: 17922491]
41.
Hussey PS, et al. A systematic review of health care efficiency measures. Health Services Research. 2009;44(3):784–805. [PMC free article: PMC2699907] [PubMed: 19187184]
42.
McGlynn EA. Identifying, categorizing, and evaluating health care efficiency measures: final report. Agency for Healthcare Research and Quality; 2008.
43.
Rosko MD, Proenca J. Impact of network and system use on hospital X-inefficiency. Health Care Management Review. 2005;30(1):69–79. [PubMed: 15773256]
44.
Rosko MD, et al. Hospital Inefficiency: What is the Impact of Membership in Different Types of Systems? Inquiry. 2007;44(3) [PubMed: 18038868]
45.
Gao J, Campbell J, Lovell K. Equitable resource allocation and operational efficiency evaluation. International Journal of Healthcare Technology and Management. 2006;7:143–167.
46.
Gao J, et al. Variations in efficiency and the relationship to quality of care in the Veterans health system. Health Affairs. 2011;30(4):655–663. [PubMed: 21471486]
47.
Harrison JP, Coppola MN. The Impact of Quality and Efficiency on Federal Healthcare. International Journal of Public Policy. 2007;2(3):356–71.
48.
Harrison JP, Coppola MN, Wakefield M. Efficiency of federal hospitals in the United States. Journal of Medical Systems. 2004;28(5):411–422. [PubMed: 15527029]
49.
Harrison JP, Ogniewski R. An Efficiency Analysis of Veterans Health Administration Hospitals. Military Medicine. 2005;170(7):607–611. [PubMed: 16130643]
50.
Sales A, et al. The association between clinical integration of care and transfer of veterans with acute coronary syndromes from primary care VHA hospitals. BMC health services research. 2005;5(1):2. [PMC free article: PMC545996] [PubMed: 15649313]
51.
Yaisawarng S, Burgess JF Jr. Performance-based budgeting in the public sector: an illustration from the VA health care system. Health economics. 2006;15(3):295–310. [PubMed: 16331724]
52.
Bindman AB, Keane D, Lurie N. A public hospital closes. Impact on patients' access to care and health status. JAMA. 1990;264(22):2899–904. [PubMed: 2232084]
53.
Buchmueller TC, Jacobson M, Wold C. How far to the hospital? The effect of hospital closures on access to care. J Health Econ. 2006;25(4):740–61. [PubMed: 16356570]

Footnotes

1

In House of Representatives hearings on April 18, 2007, Dr. Gerald Cross stated: “We looked at the quality of care comparing rural versus urban clinics. We looked at 40 standard measures of quality, they were virtually identical across the range.”[32]

2

Stochastic Frontier Analysis

3

The AQA is an alliance of over 30 organizations including the Joint Commission, the American College of Cardiology, the American College of Physicians, America's Health Insurance Plans, and the Agency for Healthcare Research and Quality. See http://www.aqaalliance.org/Files/AQAMembershipList2012.pdf for the most recent membership list.

4

The abstract of the study we were unable to obtain:[47] Harrison JP, Coppola MN. The Impact of Quality and Efficiency on Federal Healthcare. International Journal of Public Policy. 2007;2(3):356–71. “This study evaluates the efficiency of hospitals in the USA and incorporates quality as a new measure to identify the ‘value frontier’ of federal healthcare services. It analyses the technical efficiency of federal hospitals using a variable returns to scale, input oriented, Data Envelopment Analysis (DEA) methodology. Data for 157 federal hospitals in 1997 and 175 in 2000 were analysed using DEA to measure hospital efficiency. Results indicate overall efficiency in federal hospitals improved from 65% in 1997 to 68% in 2000. At the macroeconomic level this is important because it indicates that value associated with expenditures in the federal hospital industry is increasing. The study has policy implications because many federal hospitals are facing potential budget cuts or closure due to limited healthcare resources. This article provides an innovative approach to measuring cost and quality as the federal government attempts to realign scarce healthcare resources.”

Prepared for: Department of Veterans Affairs, Veterans Health Administration, Quality Enhancement Research Initiative, Health Services Research & Development Service, Washington, DC 20420

Prepared by: Evidence-based Synthesis Program (ESP), Coordinating Center, Portland VA Medical Center, Portland, OR, Mark Helfand, M.D., M.P.H., M.S., Director

Recommended citation: Helfand M, Peterson K, Carson S, Humphrey L. Developing a Threshold for Small VA Hospitals: Evidence Brief on Quality and Safety. VA-ESP Project #09-199; 2013.

This report is based on research conducted by the Evidence-based Synthesis Program (ESP) Coordinating Center located at the Portland VA Medical Center, Portland, OR and funded by the Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development, Quality Enhancement Research Initiative. The findings and conclusions in this document are those of the author(s) who are responsible for its contents; the findings and conclusions do not necessarily represent the views of the Department of Veterans Affairs or the United States government. Therefore, no statement in this article should be construed as an official position of the Department of Veterans Affairs. No investigators have any affiliations or financial involvement (e.g., employment, consultancies, honoraria, stock ownership or options, expert testimony, grants or patents received or pending, or royalties) that conflict with material presented in the report.

Created: February 2013.

Bookshelf ID: NBK384617; PMID: 27606396
