Hughes RG, editor. Patient Safety and Quality: An Evidence-Based Handbook for Nurses. Rockville (MD): Agency for Healthcare Research and Quality (US); 2008 Apr.

Chapter 45. AHRQ Quality Indicators

What Are the AHRQ Quality Indicators?

The Quality Indicators (QIs) developed and maintained by the Agency for Healthcare Research and Quality (AHRQ) are one response to the need for multidimensional, accessible quality measures that can be used to gauge performance in health care. The QIs are evidence based and can be used to identify variations in the quality of care provided on both an inpatient and outpatient basis. These measures are currently organized into four modules: the Prevention Quality Indicators (PQIs),1 the Inpatient Quality Indicators (IQIs),2 the Patient Safety Indicators (PSIs),3 and the Pediatric Quality Indicators (PDIs).4 A brief description of each module appears in Table 1.

Table 1. The AHRQ Quality Indicators modules

Origins and History

In 1994, in response to requests for assistance from State-level data organizations and hospital associations with hospital inpatient data collection systems, AHRQ developed a set of measures that used hospital administrative data provided by the Healthcare Cost and Utilization Project (HCUP), an ongoing Federal-State-private sector partnership established to develop uniform databases. These measures, called the HCUP Quality Indicators, were designed to take advantage of readily available administrative data and of quality measures previously reported in the literature.5 The original HCUP Quality Indicators included 33 measures that could identify avoidable adverse outcomes, such as in-hospital mortality and complications of procedures; the use of specific inpatient procedures thought to be overused, underused, or misused; and ambulatory care sensitive conditions. These indicators identified potential quality-of-care problems and served as the starting point for further investigation.

In 1998, under contract with AHRQ, researchers at the University of California, San Francisco (UCSF) and the Stanford University Evidence-Based Practice Center (EPC) reviewed and revised the original set of measures.5 This revision served to expand the HCUP Quality Indicators by (1) identifying quality indicators reported in the literature and in use by health care organizations, (2) evaluating both the HCUP Quality Indicators and other indicators using literature reviews and empirical methods, and (3) incorporating risk adjustment. The revised set, now known as the AHRQ QIs, originally included two modules: the PQIs, released in April 2002, and the IQIs, released in June 2002. Other modules were eventually added based on requests from the user community; specifically, the PSIs were released in May 2003, and the most recent set of measures, the PDIs, was added to the existing QI modules in February 2006. An additional module, the Neonatal Quality Indicators (NQIs), is currently under development and will be released in the near future.

Development of the AHRQ Quality Indicators

The AHRQ QIs were developed from an extensive, iterative process that included interviews with a broad spectrum of organizations representing QI users and potential users; literature reviews that identified possible quality measures; evaluation of the candidate measures, as well as of several risk-adjustment methods for use with them; empirical analysis; and validation. The process can be roughly divided into two phases: the first identifies candidate measures or indicators, and the second analyzes the potentially viable ones.

During development of the QIs, the UCSF-Stanford EPC used the Institute of Medicine’s definition of care quality to guide the development process: “the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge.”6 Based on this definition, six key questions were developed to direct the selection of measures for further evaluation. They were:

  • Which indicators currently in use or described in the literature could be defined using hospital discharge data?
  • What are the quality relationships reported in the literature that could be used to define new indicators using hospital discharge data?
  • What evidence exists for indicators not well represented in the current set of indicators—pediatric conditions, chronic disease, new technologies, and ambulatory care sensitive conditions?
  • Which indicators have literature-based evidence to support face validity, precision of measurement, minimum bias, and construct validity of the indicator?
  • What risk-adjustment method should be suggested for use with the recommended indicators, given the limits of administrative data and other practical concerns?
  • Which indicators perform well on empirical tests of precision of measurement, minimum bias, and construct validity?

Identifying Candidate Indicators

In the first phase of development, the UCSF-Stanford EPC conducted interviews with individuals affiliated with hospital associations, business coalitions, State data groups, Federal agencies, and academia about topics related to quality measurement. The interviews provided background information on measure use, suggested new indicators for potential development, and provided the names of additional individuals within the field who could be contacted for an interview. The interviews also suggested new risk-adjustment methods and assisted in framing the evaluation of potential indicators. With this information and relevant literature, the team developed a framework in which to evaluate the performance of the candidate measures. Table 2 provides an overview of the criteria used to evaluate the potential measures as well as a brief description of each.

Table 2. Criteria used to evaluate potential Quality Indicators

The research team also undertook a literature review that was structured in two phases. The first phase identified potential measures within the literature that were applicable to comparisons among providers or among geographic areas. In addition, potential indicators were identified using established databases of measures, such as those from the Joint Commission on Accreditation of Healthcare Organizations and Healthy People 2010. In the second phase of the literature review, the team performed an initial screen of the candidate indicators for relevance and accuracy. If an indicator met the criteria described in Table 2, it received a comprehensive literature review and empirical evaluation.

The next phase of development was to identify potential risk-adjustment models for each of the selected candidate measures. Users of the QIs preferred a risk-adjustment system that was (1) open, with published logic; (2) cost effective, with data collection costs minimized and any additional data collection well justified; (3) designed using a multiple-use coding system, such as those used for reimbursement; and (4) officially recognized by government, hospital groups, or other organizations. In general, the All Patient Refined-Diagnosis Related Groups (APR-DRGs) fit more of the user preferences than the other alternatives considered. In addition, the APR-DRGs were reported to perform as well as or better than other risk-adjustment systems for several conditions.7–9 The APR-DRGs are used in various AHRQ QIs; however, this method is not used with the PDIs, which use a novel and specialized risk-adjustment system that includes the data element Present on Admission (POA), the AHRQ Clinical Classification System, and stratification.

Analyzing Potential Indicators

The next step in the development process was empirical testing of all potential indicators. The primary datasets used were the HCUP Nationwide Inpatient Sample (NIS) and the State Inpatient Databases (SID). The NIS is the largest all-payer inpatient care database in the United States, consisting of approximately 8 million hospital stays per year drawn from an approximately 20 percent stratified sample of U.S. community hospitals. The SID consist of the universe of inpatient discharge abstracts in participating States, translated into a uniform format, and encompass about 90 percent of all community hospital discharges in the Nation. More recently, the Kids’ Inpatient Database was used to develop the AHRQ PDIs. This database, currently the only all-payer inpatient care database for children in the United States, contains 2–3 million hospital discharges. For more information about these databases, please go to the AHRQ website at www.ahrq.gov/data/hcup.

The data from these databases were used to test each evaluation criterion that was assessed empirically (i.e., precision, bias, and construct validity). The results for the candidate indicators were compared, and those indicators that performed poorly were eliminated. Bias tests were conducted to determine the need for risk adjustment, and finally, construct validity was evaluated to provide evidence of the nature of the relationships among potential indicators.

The next phase of indicator development used multidisciplinary clinician panel reviews. The team solicited nominations from professional clinical organizations and hospital associations, which were selected based on the applicability of the specialty or subspecialty to the candidate indicators. Nominees were chosen based on meeting certain criteria; for example, nominees were required to spend at least 30 percent of their work time on patient care, including care of hospitalized patients. The panelists were selected so that each group had a diverse membership in terms of clinical practice characteristics and settings.

The members of the panel were given a number of documents to evaluate the candidate measures. The documents provided included information about administrative data; coding from the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM); assignment of DRGs and Major Diagnostic Categories (MDCs); and specific definitions for adverse events or complications, preventability, and medical error. Candidate measure information incorporated exclusion and inclusion criteria, the clinical rationale for the indicator, and the specification criteria. A summary of literature-based evidence and empirical rates based on the NIS was also provided for reference. Finally, the panelists were given a list of potential questions regarding indicator definitions that the team planned to explore. Each panelist completed a 10-item questionnaire that asked them to determine the candidate indicator’s ability to screen out conditions present on admission, to identify conditions with high potential for preventability, to identify medical errors, or to evaluate access to high-quality outpatient care. Panelists were also asked to consider potential sources of bias, reporting or charting problems, potential ways of gaming the indicator, and possible adverse effects of implementing the measure. Finally, panelists were invited to suggest changes to the candidate indicator.

After the questionnaires were returned, the team convened a series of conference calls with the panelists to discuss their opinions regarding the candidate measures, using a modified version of the RAND/UCLA Appropriateness Method,10 developed in the 1980s to synthesize the best available scientific evidence and expert opinion on health care issues. This method is a way to reach formal agreement on how the current science is interpreted by caregivers in the real world. For the development of the QIs, the primary goal of the interaction was to allow for and encourage varied opinions about the appropriateness of an indicator; consensus was not the goal of the discussion, and agreement and disagreement on every indicator under consideration were noted. Following each conference call, modifications were made to each indicator as suggested by the panelists. The revised indicators were then redistributed to the panelists, along with questionnaires and instructions to reevaluate and again rate each indicator based on their opinion after the conference call discussions. Once the final round of questionnaires was received, the team calculated median scores to determine the degree of agreement among panelists. In addition, the team calculated scores indicating the level of acceptability of each indicator and the dispersion of ratings across the panel (a minimal sketch of such scoring follows the two lists below). The following criteria covered in the questionnaire were used to summarize the panel’s opinions on each indicator:

  • Overall usefulness of the indicator, both for internal quality improvement purposes and comparisons between hospitals
  • Likelihood that the indicator measures a complication and not a comorbidity (specifically, present on admission)
  • Preventability of complication
  • Extent to which a complication is due to medical error
  • Likelihood that a complication that occurs is charted
  • Extent that the indicator is subject to bias (systematic differences, such as case mix, that could affect the indicator in a way not related to quality of care)

For area-level indicators, panelists provided feedback on the following areas:

  • Overall usefulness of the indicator, both internally within an area and for comparisons between areas
  • Extent to which an event reflects poor access to quality outpatient care
  • Consistency in terminology for charting the principal diagnosis
  • Extent that the indicator is subject to bias
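
To make the scoring step concrete, the following Python sketch summarizes hypothetical panel ratings by median and dispersion. The 1-to-9 rating scale and the acceptability and agreement thresholds here are simplified assumptions in the spirit of the RAND/UCLA method, not AHRQ’s exact rules.

    from statistics import median

    def summarize(ratings):
        """Summarize one indicator's panel ratings (assumed 1-9 scale)."""
        med = median(ratings)
        spread = max(ratings) - min(ratings)   # dispersion across the panel
        acceptable = med >= 7                  # assumed acceptability threshold
        agreement = spread <= 2                # assumed agreement threshold
        return med, spread, acceptable, agreement

    panel_ratings = {
        "Postoperative sepsis": [7, 8, 8, 8, 7, 8, 7, 8],
        "Hypothetical indicator X": [2, 9, 5, 8, 3, 7, 4, 6],
    }

    for indicator, ratings in panel_ratings.items():
        med, spread, ok, agree = summarize(ratings)
        print(f"{indicator}: median={med}, spread={spread}, "
              f"acceptable={ok}, agreement={agree}")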

The next step in the development process involved peer review of the candidate measures. Nominations were sought for clinicians, policy advisors, professors, researchers, and managers in quality improvement to participate on this panel. The group was instructed to provide comments on the indicators with constructive suggestions for content and presentation enhancements.

Once the panel reviews and evaluations were complete, the candidate indicators went through further empirical testing to refine their definitions. An indicator may then undergo further clinical and peer review, over several rounds, until its definition is finalized. As with any measure of performance, the process of refinement is ongoing and becomes part of the measure maintenance activities of the measure developer. Figure 1 provides a graphic account of the basic development process.

Figure 1. The AHRQ Quality Indicator development and evaluation process

As a measure developer, AHRQ maintains these measures and, on an annual basis, provides revisions to them, including ICD-9-CM and DRG code updates, an update to the reference population used in calculating the QIs, and refinement of the specifications based on additional evidence in the literature and user input. A literature review is completed on one QI module every year, which allows time for new research to be completed and subsequently published in peer-reviewed journals.

What We Know About the AHRQ Quality Indicators

Measuring performance is central to improving the quality of health care. Performance measurement conveys the message of importance—that is, what is important is measured, while what is not measured is considered less important by many. The AHRQ QIs are measures of health care quality that make use of readily available hospital inpatient administrative data. The structure of the indicators consists of definitions based on ICD-9-CM diagnosis and procedure codes. Inclusion and exclusion criteria are based upon DRGs, sex, age, procedure dates, and admission type. The numerator is the number of cases flagged with the complication or situation of interest (for example, postoperative sepsis, avoidable hospitalization for asthma, or death). The denominator is the number of patients considered to be at risk for that complication or situation (for example, elective surgical patients or the county population from census data). The QI rate is the numerator divided by the denominator. As with any type of performance measure, regardless of its data source, there are advantages as well as limitations associated with using particular measures. What follows is a review of the data source used by the QIs as well as a review of the indicators by module. The strengths and limitations of the QIs are also discussed.
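
To make the arithmetic concrete, here is a minimal Python sketch of this numerator/denominator logic, using invented field names and a hypothetical set of flagging codes rather than the official AHRQ specifications.

    # Hypothetical ICD-9-CM diagnosis codes flagging postoperative sepsis.
    SEPSIS_DX_CODES = {"99591", "99592"}

    discharges = [
        # Stand-in records: surgical stays with secondary diagnoses.
        {"id": 1, "elective_surgery": True,  "secondary_dx": ["99591"]},
        {"id": 2, "elective_surgery": True,  "secondary_dx": []},
        {"id": 3, "elective_surgery": False, "secondary_dx": ["99592"]},  # not at risk
        {"id": 4, "elective_surgery": True,  "secondary_dx": ["4019"]},
    ]

    # Denominator: patients at risk (here, elective surgical stays).
    at_risk = [d for d in discharges if d["elective_surgery"]]

    # Numerator: at-risk stays flagged with the complication of interest.
    flagged = [d for d in at_risk
               if any(dx in SEPSIS_DX_CODES for dx in d["secondary_dx"])]

    rate = len(flagged) / len(at_risk)   # QI rate = numerator / denominator
    print(f"Postoperative sepsis rate: {rate:.3f}")   # 1 of 3 at-risk stays -> 0.333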

The AHRQ QIs and Their Data Sources

There are several sources of data that can be used to measure performance, and these data sources can be grouped into the following categories: administrative data (also known as billing or claims data), medical record information, patient-derived data (i.e., surveys),11 confidential reports from providers, and direct observation. All categories of data have strengths and weaknesses, and each data source should be evaluated for comprehensiveness (the completeness of data elements as they pertain to individuals) and inclusiveness (the extent to which populations in a particular geographic area are represented).12 The AHRQ QIs use data derived from administrative databases, which are considered a “by-product” of care delivery, i.e., of reimbursement to hospitals or physicians or determination of patients’ insurance eligibility.9

While administrative data were not originally intended to be used in research, these types of databases are often used by researchers in their studies and clearly offer some important advantages, such as the ability to track study subjects over time. Administrative data are also relatively inexpensive to collect and readily available to researchers, administrators, and others. Additional advantages include the large sample sizes associated with this type of data, the ease of collection without interference with the care of the patient, the population-based character of administrative data, and identifiers associated with the data that permit observations across sites and settings of care. The AHRQ QIs can be used with any administrative data set and rely largely on the ICD-9-CM diagnosis and procedure codes from individual hospitalization data, which are derived from the 2004 Uniform Bill (UB-04). Other information, such as patient identifiers, hospitalization descriptors, admission types, insurance information, and charge data, can also be found on this form. Thus, by combining the range of ICD-9-CM codes with supplementary codes such as E codes* and V codes, and through creative and clinically informed use of these codes, a picture of a patient’s clinical status and risk factors begins to form (see Table 3).
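
As a rough illustration of how these codes can be combined, the Python sketch below screens a single hypothetical UB-04-style abstract for diagnosis, V-code, and E-code prefixes; the code groupings are invented for the example and are not official specifications.

    record = {
        "principal_dx": "4280",                       # e.g., congestive heart failure
        "secondary_dx": ["25000", "V4581", "E8780"],  # comorbidity, status, external cause
        "age": 67,
        "sex": "F",
        "admission_type": "emergency",
    }

    def has_code(rec, prefixes):
        """True if any listed diagnosis starts with one of the given prefixes."""
        all_dx = [rec["principal_dx"]] + rec["secondary_dx"]
        return any(dx.startswith(p) for dx in all_dx for p in prefixes)

    # V codes capture status and history; E codes capture external causes.
    print("Diabetes comorbidity:", has_code(record, ("250",)))
    print("Prior bypass status (V code):", has_code(record, ("V4581",)))
    print("Surgical misadventure (E code):", has_code(record, ("E878",)))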

Table 3. Characteristics and Uses of Hospital Administrative Data

The AHRQ QIs are valuable because they are based on widely available data that can be used to assess quality. These QIs also have uniform definitions and standardized algorithms that can be used with virtually any administrative data set, which allows for comparisons across States, regions, communities, and hospitals.

As with any data source used to assess performance, there are a number of drawbacks to using administrative data to examine the quality of care delivered by health care providers. Despite the large number of ICD-9-CM codes available and the implied detail they contain, these codes do not have operational clinical definitions assigned, which makes assignment by coders somewhat variable. While coders are generally formally trained in coding methods and instructed to use the terminology in the medical record, clinicians seldom use a consistent lexicon in their charting. Thus, codes interpreted without clinical context, consideration of disease progression, or the interaction of comorbidities can provide an inaccurate clinical picture, limiting the usefulness of the data. Yet despite this limitation, data availability, coding systems, and coding practices are improving, which enhances our ability to identify quality problems as well as success stories that can be further studied.

The AHRQ QI Modules

Prevention Quality Indicators (PQIs)

The AHRQ PQIs are one set of quality measures that can be used to identify potential problems; follow trends over time; and ascertain disparities across regions, communities, and providers. This module focuses on preventive care services: outpatient services that help individuals either stay healthy or manage chronic illness. In these instances, inpatient data can provide information on admissions for ambulatory care sensitive conditions that evidence suggests could have been avoided, at least in part, through better outpatient care. For example, patients with diabetes may be hospitalized for complications of their disease if their condition has not been adequately monitored or if they have not received the education needed to self-manage the disease. There are currently 14 PQIs, listed in Table 4, that measure rates of admission to the hospital.

Table 4. AHRQ Prevention Quality Indicators

Factors such as poor environmental conditions or lack of patient adherence to treatment regimens can also result in hospitalization. Nonetheless, the PQIs provide a good starting point for assessing the quality of services within a community. The PQIs can be used to provide a picture of health care in the community by identifying unmet needs, monitoring how well complications are being avoided in the outpatient setting, assessing access to health care, and comparing the performance of local health care systems across communities.

The PQIs represent the current state of the art in assessing the health care system as a whole, and ambulatory care in particular, for example, how well medical complications of both acute illness and chronic conditions are prevented. The PQIs are valuable when calculated at the population or area level and when used by organizations such as public health groups, State data organizations, health plans, large health systems, and other organizations concerned with the health of populations. The PQIs are risk adjusted for age and gender and provide information about potential problems in the community that may require further analysis. The PQIs help answer questions such as:

  • Does the admission rate for diabetes complications in my community suggest a problem in the provision of appropriate outpatient care to this population?
  • How does the admission rate for congestive heart failure vary over time and from one region of the country to another?

These are just a few of the questions that the PQIs can address to assist those health care providers with responsibility for the health of a particular population. The PQIs allow for comparisons across States, regions, and local communities over time. The PQIs do not measure hospital quality, but reflect the care provided in the community.
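
As a rough illustration, the following Python sketch computes an area-level admission rate with a census-based denominator. The figures are invented, and the real PQI specifications also apply exclusions and age and gender risk adjustment.

    county_population = 250_000               # denominator: census population
    diabetes_complication_admissions = 180    # numerator: flagged discharges

    rate_per_100k = diabetes_complication_admissions / county_population * 100_000
    print(f"Admissions per 100,000 residents: {rate_per_100k:.1f}")   # 72.0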

Despite their strengths, several considerations apply when using these indicators. First, differences in socioeconomic status can explain some, but not all, of the variation in PQI rates across areas. The complexity of the relationship between socioeconomic status and PQI rates makes it difficult to delineate how much of the observed relationship is due to true difficulties in access to care for potentially underserved populations or to other patient characteristics, unrelated to quality of care, that vary systematically by socioeconomic status. Second, the evidence related to potentially avoidable hospital admissions is limited for each indicator because many of the indicators were developed as parts of sets. Finally, although relationships at the patient level between higher-quality ambulatory care and lower rates of hospital admission have been demonstrated, few studies have directly addressed whether effective treatment in outpatient settings would reduce the overall incidence of hospitalizations.

The Inpatient Quality Indicators (IQIs)

The AHRQ IQIs provide information about the quality of medical care delivered in a hospital. This measure set represents the state of the art in measuring the quality of hospital care using inpatient administrative data. The IQIs include measures in the areas of inpatient mortality; utilization of procedures for which there are questions of overuse, underuse, or misuse; and volume of procedures for which there is evidence that a higher volume is associated with lower mortality.

The IQIs that focus on volume are proxy measures of quality and represent counts of admissions in which these procedures were performed. They are based on evidence suggesting that hospitals that perform more of certain procedures—for example, those that are intensive, high-technology, or highly complex—may have better outcomes for those procedures. The provider-level volume IQIs are:

  • Esophageal resection volume
  • Pancreatic resection volume
  • Abdominal aortic aneurysm (AAA) repair volume
  • Coronary artery bypass graft (CABG) volume
  • Percutaneous transluminal coronary angioplasty (PTCA) volume
  • Carotid endarterectomy (CEA) volume

The mortality indicators for inpatient procedures cover procedures for which mortality has been shown to vary across institutions and for which there is evidence that high mortality may be associated with poorer quality of care. The mortality indicators for inpatient surgical procedures are:

  • Esophageal resection mortality rate
  • Pancreatic resection mortality rate
  • AAA repair mortality rate
  • CABG mortality rate
  • PTCA mortality rate
  • CEA mortality rate
  • Craniotomy mortality rate
  • Hip replacement mortality rate

When evaluating mortality rates, the corresponding volumes should be examined in conjunction with the mortality rate, because together they provide more information about the care delivered. For example, esophageal resection is a complex surgery, and studies have noted that providers with higher volumes have lower mortality rates. These results suggest that higher-volume providers have certain characteristics, structural or process related, that influence mortality.
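
The small Python sketch below illustrates why volume and mortality are best read together: a mortality rate built on very few cases is statistically unstable. The 20-case reliability floor is an arbitrary assumption for the example, not an AHRQ rule.

    hospitals = {
        "Hospital A": {"resections": 42, "deaths": 2},
        "Hospital B": {"resections": 5, "deaths": 1},
    }

    for name, h in hospitals.items():
        volume = h["resections"]
        mortality = h["deaths"] / volume
        stable = volume >= 20   # assumed floor for a reasonably stable rate
        print(f"{name}: volume={volume}, mortality={mortality:.1%}, "
              f"interpret with caution={not stable}")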

The mortality indicators for inpatient conditions cover conditions for which mortality has been shown to vary substantially across institutions and for which evidence suggests that high mortality may be associated with deficiencies in the quality of care. The mortality indicators for inpatient medical conditions are:

  • Acute myocardial infarction (AMI) mortality rate
  • AMI mortality rate, without transfer cases
  • Congestive heart failure mortality rate
  • Acute stroke mortality rate
  • Gastrointestinal hemorrhage mortality rate
  • Hip fracture mortality rate
  • Pneumonia mortality rate

Also included in the IQIs are utilization indicators that examine procedures whose use varies significantly across hospitals and for which questions have been raised about overuse, underuse, or misuse. High or low rates for these indicators are likely to represent inappropriate or inefficient delivery of care. The procedure utilization indicators are:

  • Cesarean section delivery rate
  • Primary cesarean delivery rate
  • Vaginal birth after cesarean (VBAC) rate, all
  • VBAC rate, uncomplicated
  • Laparoscopic cholecystectomy rate
  • Incidental appendectomy in the elderly rate
  • Bilateral cardiac catheterization rate

There are currently 28 IQIs that are measured at the provider or hospital level, as well as 4 area-level indicators that are suited for use at the population or regional level. These 4 indicators, which are utilization measures, include:

  • CABG area rate
  • Hysterectomy area rate
  • Laminectomy or spinal fusion area rate
  • PTCA area rate

The IQIs can be used by a variety of stakeholders in the health care arena to improve quality of care at the level of individual hospitals, the community, the State, or the Nation. The IQIs represent an advance in assessing quality of care using hospital administrative data. While these data are relatively inexpensive and convenient to use and represent a rich source of valuable information, they have limitations, like other data sources, and should be used carefully when assessing and interpreting the quality of health care within an institution.

The Patient Safety Indicators (PSIs)

The PSIs are a set of quality measures that use hospital inpatient discharge data to provide a perspective on patient safety.13 Specifically, the PSIs identify problems that patients experience through contact with the health care system and that are likely amenable to prevention by implementing system-level changes. The problems identified are referred to as complications or adverse events. There are currently 27 PSIs, defined on two levels: the provider level and the area level. They are risk adjusted using a model that incorporates DRGs (with and without complications aggregated); a modified comorbidity index based on a list developed by Elixhauser and colleagues;14 and age, sex, and age-sex interactions.
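
A minimal Python sketch of the observed-to-expected comparison that underlies such risk adjustment follows. The expected count would come from the model described above (DRGs, comorbidities, age, and sex); here it and the reference population rate are invented stand-ins.

    n_at_risk = 1000          # stays at risk for the complication
    observed_events = 30      # stays flagged with the complication

    # Sum of each stay's model-predicted probability of the complication
    # (stubbed here; the real risk-adjustment model supplies this).
    expected_events = 24.5

    oe_ratio = observed_events / expected_events   # >1: worse than expected

    # One common convention: scale the O/E ratio by a reference
    # population rate to obtain a risk-adjusted rate (figure invented).
    reference_rate = 0.022
    risk_adjusted_rate = oe_ratio * reference_rate

    print(f"Observed rate:      {observed_events / n_at_risk:.3f}")
    print(f"Expected rate:      {expected_events / n_at_risk:.3f}")
    print(f"O/E ratio:          {oe_ratio:.2f}")
    print(f"Risk-adjusted rate: {risk_adjusted_rate:.3f}")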

At the provider level, the PSIs present a picture of patient safety within a hospital and provide information about potentially preventable complications for patients who received their initial care and experienced the complication of care within the same hospitalization. The PSIs use secondary diagnosis ICD-9-CM codes to detect complications and adverse events. The measure set covers a variety of areas, such as selected postoperative complications, selected technical adverse events, technical difficulty with procedures, and obstetric and birth trauma. The 20 provider-level PSIs include:

  • Postoperative pulmonary embolism or deep vein thrombosis
  • Postoperative respiratory failure
  • Postoperative sepsis
  • Postoperative physiologic and metabolic derangements
  • Postoperative abdominopelvic wound dehiscence
  • Postoperative hip fracture
  • Postoperative hemorrhage or hematoma
  • Decubitus ulcer
  • Selected infections due to medical care
  • Iatrogenic pneumothorax
  • Accidental puncture or laceration
  • Foreign body left in during procedure
  • Birth trauma—injury to neonate
  • Obstetric trauma—vaginal delivery with instrument
  • Obstetric trauma—vaginal delivery without instrument
  • Obstetric trauma—cesarean section delivery
  • Complications of anesthesia
  • Death in low-mortality DRGs
  • Death among surgical inpatients with treatable serious complications (previously known as Failure to rescue)
  • Transfusion reaction (AB/Rh)

The area-level PSIs capture all cases of the potentially preventable complications that occur in a given area (e.g., metropolitan service areas or counties), either during hospitalization or resulting in subsequent hospitalization. They are specified to include the principal diagnosis, as well as secondary diagnoses, for the complications of care. The measurement specifications thus add cases in which the complication of care arose from a separate hospitalization. The seven area-level PSIs are:

  • Foreign body left in during procedure
  • Iatrogenic pneumothorax
  • Selected infections due to medical care
  • Postoperative wound dehiscence in abdominopelvic surgical patients
  • Accidental puncture or laceration
  • Transfusion reaction
  • Postoperative hemorrhage or hematoma

Widespread consensus exists that health care organizations can reduce patient injuries by improving the environment for safety—from implementing technical changes, such as electronic medical record systems, to improving staff awareness of patient safety risks. Clinical process interventions also present strong evidence for reducing the risk of adverse events related to a patient’s exposure to hospital care. These PSIs can be used to better prioritize and evaluate local and national initiatives. Some potential actions, after an in-depth analysis of the system and process of care, include the following:

  • Review and synthesize the evidence base and best practices from scientific literature.
  • Work with the multiple disciplines and departments involved in care of surgical patients to redesign care based on best practices with an emphasis on coordination and collaboration.
  • Evaluate information technology solutions.
  • Implement performance measurements for improvement and accountability.

The ability to assess all patients at risk for a particular patient safety problem, along with the relatively low cost of collecting the data, are particular strengths of administrative data. However, many important areas of interest, such as adverse drug events, cannot currently be monitored well using administrative data, and using this data source to identify patient safety events tends to favor specific types of indicators. For example, the PSIs cited in this chapter contain a large proportion of surgical indicators, rather than medical or psychiatric measures, because medical and psychiatric complications are often difficult to distinguish from comorbidities that are present on admission. In addition, medical populations tend to be more heterogeneous than surgical populations, especially elective surgical populations, making it difficult to account for case mix.

While the PSIs may be most applicable to patient safety when limited to elective surgical admissions, the careful use of administrative data holds promise for identifying problems for further analysis and study. The limitations of this measure set include those inherent in the use of administrative data, the clinical accuracy of discharge-based diagnosis coding, and indicator discriminatory power. Specifically,

  • Administrative data are unlikely to capture all cases of a complication, regardless of preventability, without false positives and false negatives (i.e., imperfect sensitivity and specificity).
  • When the codes are accurate in defining an event, the clinical vagueness inherent in the description of the code itself (e.g., hypotension) may lead to a highly heterogeneous pool of clinical states represented by that code.
  • Incomplete reporting is an issue in the accuracy of any data source used for identifying patient safety problems, as medical providers might fear adverse consequences as a result of full disclosure in potentially public records such as discharge abstracts.
  • The heterogeneity of clinical conditions included in some codes, lack of information about event timing available in these datasets, and limited clinical detail for risk adjustment all contribute to the difficulty in identifying complications that represent medical error or that may be at least in some part preventable. These factors may exist for other sources of patient safety data as well. For example, they have been raised in the context of the Joint Commission’s implementation of a sentinel event program geared to identifying serious adverse events that may be related to underlying safety problems.

Yet, despite these issues, the PSIs are a useful tool to identify areas in patient safety that need monitoring and/or intervention for improved patient care.

The Pediatric Quality Indicators (PDIs)

The AHRQ PDIs are a set of quality measures that use hospital administrative data and involve many of the same challenges associated with measure development for the adult population. These challenges include the need to carefully define indicators, establish validity and reliability, detect bias, design appropriate risk adjustment, and overcome challenges of implementation and use. However, as a special population, children require special tailoring of quality measures and risk-adjustment methodologies. The AHRQ PDIs, developed through careful, ongoing research efforts, provide a risk-adjusted tool to identify quality problems for hospitalized children as well as assess the rate of potentially preventable hospitalizations. The AHRQ PDIs currently consist of 18 indicators defined as both provider- and area-level measures. The 13 provider-level PDIs are:

  • Accidental puncture and laceration
  • Decubitus ulcer
  • Foreign body left in during procedure
  • Iatrogenic pneumothorax in neonates
  • Iatrogenic pneumothorax in non-neonates
  • Pediatric heart surgery mortality
  • Pediatric heart surgery volume
  • Postoperative hemorrhage or hematoma
  • Postoperative respiratory failure
  • Postoperative sepsis
  • Postoperative wound dehiscence
  • Selected infections due to medical care
  • Transfusion reaction

Existing risk-adjustment strategies for pediatric patients were not suitable for use with the AHRQ PDIs. Most available schemes apply to specific clinical groups and use clinical data not available in administrative databases. The APR-DRG methodology, used for risk adjustment in the adult population, was considered for the pediatric population; however, the APR-DRGs could not be used to adjust for complications because doing so resulted in over-adjustment. As a result, different risk-adjustment strategies were investigated for potential incorporation into the PDIs. Three risk-adjustment factors of particular significance to the pediatric population were identified: (1) reason for admission (including principal procedure), (2) comorbidities, and (3) age and gender. Using a modified-DRG risk adjustment combined with comorbidity adjustment based on the AHRQ Clinical Classification System and adjustment for age and gender, the AHRQ PDIs incorporate a novel and specialized risk-adjustment system. They also include stratification, another approach to accounting for case mix. Stratification allows hospitals to identify which segment of the pediatric population accounts for any elevation in rates, creating more user-friendly measures. Tailored stratification schemes are available for six of the PDIs: accidental puncture and laceration, decubitus ulcer, iatrogenic pneumothorax, postoperative hemorrhage or hematoma, postoperative sepsis, and selected infections due to medical care. Despite these efforts to account for risk, further research on pediatric risk adjustment will be important for assessing quality appropriately.
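
The following Python sketch illustrates the stratification idea: reporting a rate by population segment so that an elevated overall rate can be traced to the stratum driving it. The strata and counts are invented for the example.

    from collections import defaultdict

    cases = [
        # (stratum, complication flag) for a hypothetical PDI
        ("neonate", 0), ("neonate", 1), ("neonate", 0),
        ("cardiac surgery", 1), ("cardiac surgery", 1),
        ("other surgery", 0), ("other surgery", 0), ("other surgery", 1),
    ]

    counts = defaultdict(lambda: [0, 0])   # stratum -> [flagged, at risk]
    for stratum, flagged in cases:
        counts[stratum][0] += flagged
        counts[stratum][1] += 1

    overall_num = sum(num for num, _ in counts.values())
    overall_den = sum(den for _, den in counts.values())
    print(f"Overall rate: {overall_num / overall_den:.2f}")
    for stratum, (num, den) in counts.items():
        print(f"  {stratum}: {num}/{den} = {num / den:.2f}")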

In addition to the provider-level indicators, the PDIs also include five area-level indicators:

  • Asthma admission rate
  • Diabetes short-term complication rate
  • Gastroenteritis admission rate
  • Perforated appendix admission rate
  • Urinary tract infection admission rate

These indicators track potentially preventable hospitalizations and allow policymakers to target specific groups that appear to be developing more severe disease requiring hospitalization. Higher-than-anticipated rates may reflect poor access to care (e.g., from lack of insurance or too few primary care physicians), barriers to timely care (e.g., clinics that require daytime appointments), barriers to adherence to medical advice (e.g., language barriers), cultural influences that preclude seeking early treatment, or higher prevalence of poor health behaviors (e.g., smoking). Interventions may address any of these factors.

Area-level indicators are prone to bias from cultural factors that may be outside a health system’s control. For instance, an area with a large number of illegal immigrants may have patients presenting with more advanced disease because they delay seeking care for fear of deportation. In addition, factors such as smoking or obesity may be more prevalent in certain areas. Ideally, risk adjustment would include these factors; as a proxy, an adjustment for socioeconomic status has been included in these PDIs. However, risk adjustment for socioeconomic groups may mask true differences in access to good quality care. For this reason, risk-adjusted rates should be considered alongside raw, unadjusted rates.

Current Uses of the AHRQ Quality Indicators

There are a number of uses of the AHRQ QIs, ranging from internal quality improvement to pay-for-performance (P4P) initiatives. (See Table 5 for a list of organization types and associated uses of the QIs.) Each use has certain caveats associated with it, but the AHRQ QIs are one of many sets of performance measures that can be used for these purposes. Although the QIs were not originally developed for hospital-specific comparative quality reporting, they have been and are being used for public reporting and P4P initiatives. When various users began to apply the AHRQ QIs for public reporting and other initiatives, AHRQ undertook an analysis to determine their appropriateness for these new uses. The Agency concluded that these measures can be used for these purposes, with certain understandings. This analysis resulted in a document that provides detailed information about the use of the QIs for hospital comparative reporting and P4P, Guidance for Using the AHRQ Quality Indicators for Hospital-Level Public Reporting or Payment,15 which is available on the Web at http://www.qualityindicators.ahrq.gov. This document is currently being updated to reflect the current state of the evidence on the AHRQ QIs in relation to public reporting and will include an evidence-based reporting template that has been tested with various stakeholder groups, including consumers, providers, and others.

Table 5. Users of AHRQ QIs

Decisions on how and whether to use the AHRQ QIs or any other measure set are local matters and depend on various local issues such as data availability and quality, legislative mandates, confidentiality issues and data use agreements, and resources, to name a few. AHRQ will continue to provide evidence that will inform and further clarify hospital-specific public reporting issues and other issues related to transparency.

Quality Improvement

Originally, the AHRQ QIs were designed as an internal quality improvement tool to assist hospitals in identifying and targeting potential areas for intervention. The ability to track quality of care for a wide range of patients is an important consideration for quality improvement. Hospitals, health care systems, and hospital associations use the AHRQ QIs for internal quality improvement, specifically to initiate case finding, root-cause analyses, and cluster identification, as well as to evaluate the impact of local interventions and to monitor performance over time.

Yet, as with any quality measures, these indicators must be used with care, because the administrative data on which they are based are collected for billing purposes, not for research or for measuring quality of care. While these data are relatively inexpensive to collect, convenient to use, and a rich source of valuable insights, they offer only one view of the multidimensional concept of quality. Our health care system currently uses a “hybrid” model that derives performance information from multiple sources, both electronic and paper records. Although administrative data should not be the only data source, they can, even on their own, be used by individual hospitals to launch investigations into the reasons for identified quality problems. Further study may:

  • Reveal real quality problems for which quality improvement programs can be initiated.
  • Uncover problems in data collection that can be remedied through stepped-up efforts to code more diligently.
  • Determine that additional clinical information is required to understand the quality issues, beyond what can be obtained through billing data alone.

Overall, the AHRQ QIs are a valuable tool that takes advantage of readily available data to identify quality-of-care problems. Hospitals may use existing data to identify indicators with higher-than-expected rates, flagging potential quality concerns. These areas of concern may be investigated further to identify the underlying cause of the poorer-than-expected performance. In some cases, incorrect coding practices may be identified; in other cases, closer examination of system-level factors may be in order. Interventions may be devised to improve performance, and hospitals may track their own performance over time to identify areas for improvement.

Public Reporting and Pay for Performance

The AHRQ QIs are currently being used in several public reporting and pay for performance (P4P) initiatives at the national, State, and regional levels. At the national level, for example, they are used in tracking the quality of health care in the United States in the National Healthcare Quality Report16 and National Healthcare Disparities Report17 produced annually by AHRQ. These reports focus on four dimensions of quality—effectiveness, safety, timeliness, and patient centeredness—and are available on the AHRQ Web site. Other uses of the QIs include surveillance of trends over time at the State and community level as well as assistance in tracking disparities across areas, when the data are available.

Several organizations have incorporated the AHRQ QIs into reports on quality that allow for comparisons of individual hospitals; the Colorado Health and Hospital Association, the Texas Health Care Information Council, the Niagara Health Quality Coalition, and Norton Healthcare are a few examples. Many of these reports are Web based and are routinely updated. For a more complete list of organizations that use the AHRQ QIs for public reporting, see Table 6.

Table 6. Organizations Using the AHRQ QIs for Public Reporting

Organizations such as the Centers for Medicare & Medicaid Services and Anthem Blue Cross and Blue Shield of Virginia have incorporated selected AHRQ QIs into P4P demonstration projects or similar initiatives. These projects reward providers for superior performance based on a combination of performance measures, including the AHRQ QIs. Results from the Centers for Medicare & Medicaid Services demonstration project indicate that tying payment to performance may provide some incentive to improve the quality of care.

There are a number of factors to be considered when using the AHRQ QIs for public reporting and payment purposes. Factors related to data source and measurements raise important issues such as:

  • Low or very low volume (small cell sizes) could compromise patient confidentiality and also limit the ability to reliably identify quality differences.
  • Measures may not be applicable to the majority of hospitals, or may be applicable only to hospitals with specific services (e.g., cardiac surgery, obstetrics).
  • Volume is a proxy measure and may be manipulated, leading to concerns about appropriate utilization.
  • Potential confounding bias, or the impact of skewed distributions, may not be completely eliminated by risk adjustment or carefully constructed operational definitions.
  • The appropriate benchmark, or “correct” rate, may not be clear.
  • Many procedures are now performed on an outpatient basis or under observation status.
  • The indicator may require data not present in all administrative datasets, or risk adjustment may be inadequate when based only on data available from ICD-9-CM codes.
  • Coding may vary across hospitals; some hospitals code more thoroughly than others, making fair comparisons across hospitals difficult.

However, even with these limitations, codes, coding systems, and coding practices are improving and are often subject to auditing or monitoring for accuracy. Coders are becoming more aware of the importance of properly coding the data and of how the data are used in relation to quality improvement, public reporting, P4P, and other initiatives.

Ideally, in public reporting and P4P initiatives, the results of the performance measures should be made available to those hospitals participating, along with information on averages for peer groups, for the State, and for the Nation.

When using the AHRQ QIs, or any measures used for purposes such as comparative reporting, purchasing, or payment, it is important to continually assess and evaluate them and to provide feedback to the measure developer for refinement and improvement. The process of measure development and maintenance is constant, and measure developers like AHRQ welcome input from users in an effort to continue to refine and enhance the measures.

Research

A number of the AHRQ QIs have been used in health care research projects. On the whole, researchers use the indicators because of the quality and level of detail of the AHRQ documentation of the QIs, as well as the fact that these measures capture important aspects of clinical care40 (p. v). The AHRQ QIs, their documentation, and the related software reside in the public domain and can be downloaded from the AHRQ Web site free of charge. The QIs can be used with readily available administrative data, to which researchers have ready access in the form of HCUP. Further, researchers appreciate that they can dissect indicator results and relate them back to individual records, which helps them understand the logic used in the measures and, in turn, distinguish data quality issues from actual quality problems40 (p. vii). Topics of studies using the AHRQ QIs include an analysis of the association between Joint Commission accreditation scores and the AHRQ IQIs and PSIs,41 the effect of resident physician work-hour limits on surgical patient safety,42 and whether persons with Alzheimer’s disease are at greater risk for in-hospital mortality than non-Alzheimer’s patients.43

Table 5, which is based on an environmental scan commissioned by AHRQ and completed by Hussey and colleagues,40 indicates that the AHRQ QIs are frequently used by researchers in their projects.

What Nurses Need To Know

Measuring performance is central to quality improvement because it provides information on current and past performance that can help guide future improvement efforts. In particular, performance measures can distinguish between good and substandard performance. Accordingly, the development and application of performance measurement is essential to improving the quality of care. It is one of the “first steps in the improvement process and involves the selection, definition, and application of performance indicators…”44 (p. 24). Performance measurement, while not the only influence, can act as a force to promote certain issues and agendas. Performance measurement conveys the message of importance. Specifically, what is important is measured, while what is not measured is considered less important.45 By focusing people and resources on a particular aspect of an industry, performance measurement can be a driver of change and reform.

Nurses are integral members of the health care team and are in a unique position to detect quality-of-care issues, often providing avenues for changing processes in ways that improve quality and safety in health care.46–48 The AHRQ QIs are one set of performance measures that provide information about the quality of care that nurses can use to plan and implement quality-improvement strategies. The climate for tracking, measuring, and reporting quality, and for linking payment to quality, has changed dramatically in the past several years. Efforts by governments, accrediting bodies, large purchasers, employer coalitions, and others to track quality at the national, State, and provider levels; publish comparative quality reports; launch quality improvement efforts; and use public and private purchasing power to reward better quality have accelerated. Nurses not only are members of the quality team but often lead and coordinate efforts at the local level that provide input into these initiatives. Leaders of these quality efforts often consider using administrative data because they are readily available and inexpensive relative to other data sources. Data gleaned from the AHRQ QIs can be used to track trends, identify gaps in data measurement, and assist in redesigning organizational and workflow processes. Data provide a focus for improving health care quality and can be used to make more informed decisions about policies within given facilities, communities, or regions. National and State benchmarks of the AHRQ QIs can be used to assess and compare an individual facility’s progress in a certain area. Nurses are well positioned to review performance data, interpret the results, provide additional followup as warranted, and design interventions to improve the quality of care within an organization.

There are significant challenges associated with applying administrative or clinical data sources, no matter how “good” the measure is, of which nurses should be aware. The measures selected, and the purposes for which they are used, should depend upon organizational or program needs. Implementation issues, including data availability and data quality, need to be addressed during the measure-selection process because the immediate goal is to produce usable information for quality improvement, public reporting, planning, and care redesign.

Data availability is an issue that must be addressed. Typical data sources include clinical data (e.g., medical record abstraction, laboratory data, pharmacy data, the electronic medical record), administrative data (UB-04, billing, or claims data), survey data (e.g., patient experience with care, employee satisfaction), and operational data (e.g., licensure, ownership, staffing levels, type of staff). Each data source has its strengths and limitations. While clinical data are usually preferred by providers, they require medical record abstraction, which is usually costly. The primary benefit of clinical data is the greater number of data elements that can be abstracted, resulting in enhanced measure definition, risk adjustment, and linkage to care processes. While there are efforts underway to expand and automate access to clinical data, automated data are not yet a reality.

Administrative data, on the other hand, are the most widely available source of information about hospital services, patient care, and patient outcomes. All hospitals generate administrative data as part of billing operations, and all payers have access to administrative data. These types of data have been shown to be useful in quality assessment and medical research, as well as for other measurement tasks such as screening for complications, identifying mortality rates, and tracking health system utilization. Like clinical data, administrative data also have limitations. Because administrative data are collected principally for billing and related administrative purposes, these data lack the depth of clinical detail that can be helpful in quality measurement; variations in coding practices may create challenges for quality evaluations; and there can be data validity issues. Since the concept of quality is multidimensional, a combination of measures derived from clinical and administrative data sources would offer a more complete picture of quality, at least in the immediate future.

Regardless of the data source, nurses involved in the quality improvement enterprise should be aware of several factors and consider them when tasked with designing and using a performance measurement system. The purpose of the measurement project should be clearly specified. Is it to drive quality improvement or public accountability? To inform consumer decisions? To pay for performance? Subsequent decisions will depend on the purpose of the measurement effort. Once the purpose has been established, the stakeholders of the project should be identified to assess their expectations and to determine to what extent the available data and measures can meet their interests. In the planning stages, providers who will be affected by the project or measured by it should be given the opportunity to understand its purpose, why certain measures were chosen, and what will be done with the results. There should also be an opportunity to understand the methodology, including measure definitions, any risk adjustment used, and the calculation of the measures.

Audits for quality or similar mechanisms should be in place to assure accuracy and completeness. Data explorations should be completed and should focus on overall data quality and content, beginning with simple frequency distributions on key variables. Nurses and other providers reviewing the data should ask questions such as the following: If the program includes the objective of evaluating access or outcomes by patient race, is that data element present for each case? Are data missing in a consistent manner? Is a selected procedure performed so infrequently in any single year that examining mortality rates would best be accomplished by combining data from several years? What comparative benchmarks are available? Benchmarks at the national, regional, and peer-group levels are available from sources such as the National Healthcare Quality Report16 and National Healthcare Disparities Report,17 HCUPnet, and other State-level or hospital-system efforts. Finally, an evaluation component should be included as part of the initiative, as it provides feedback that can inform future decisions about the measurement project.
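
As a rough illustration, the Python sketch below runs the kinds of checks described above (missing data elements, frequency distributions, and pooling years for an infrequent procedure) on a toy discharge file; all field names and values are invented.

    from collections import Counter

    discharges = [
        {"year": 2006, "race": "white", "procedure": "CABG", "died": 0},
        {"year": 2006, "race": None,    "procedure": "CABG", "died": 0},
        {"year": 2007, "race": "black", "procedure": "CABG", "died": 1},
    ]

    # Is the race element present for each case (needed for disparity analyses)?
    missing_race = sum(1 for d in discharges if d["race"] is None)
    print(f"Records missing race: {missing_race} of {len(discharges)}")

    # Simple frequency distribution on a key variable.
    print("Discharges by year:", Counter(d["year"] for d in discharges))

    # If a procedure is too infrequent in any single year for a stable
    # mortality rate, combine several years before computing it.
    cabg = [d for d in discharges if d["procedure"] == "CABG"]
    pooled_mortality = sum(d["died"] for d in cabg) / len(cabg)
    print(f"Pooled CABG mortality (2006-2007): {pooled_mortality:.2f}")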

Nurses are important members of the quality team and often provide continuity from one phase of a project to another. Many nurses serve on leadership teams within their organizations and can provide valuable input into designing measurement strategies and quality improvement programs that improve the overall quality of care. Additionally, nurses are well positioned not only to analyze measure data but also to design and implement strategies that affect care delivery. Many nurses coordinate activities among multidisciplinary teams and organize interventions across departments, work that can ultimately result in improved quality of care for patients.

Enhancing the AHRQ Quality Indicators

Recently, the AHRQ QIs have undergone some changes based upon newly reported research, validation testing, the NQF endorsement process, and input from several professional societies and the QI user community. Based upon these activities, AHRQ has revised the ICD-9-CM codes; incorporated the present on admission (POA) data element as a requirement for the calculation of selected measures; and added the ability to stratify certain measures, such as separating emergent from non-emergent cases for Abdominal Aortic Aneurysm (AAA) Repair Mortality. In addition, AHRQ has worked with other organizations to harmonize measures that are similar to the QIs, and as a result of these discussions, various coding changes have been incorporated into the numerators and denominators of selected measures.
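
As a rough illustration of how the POA element changes a calculation, the sketch below excludes complications coded as present on admission from a safety-indicator numerator. The field names and logic are simplified assumptions, not the official QI software specification.

    import pandas as pd

    # Hypothetical discharge records with a coded complication and a POA flag
    records = pd.DataFrame({
        "discharge_id": [1, 2, 3, 4],
        "complication_dx": [True, True, False, True],
        "poa": ["Y", "N", None, "N"],  # "Y" = condition was present on admission
    })

    # Without POA, every coded complication would enter the numerator.
    naive_numerator = records["complication_dx"].sum()

    # With POA, only complications NOT present on admission are counted as
    # potentially hospital-acquired events.
    poa_numerator = (records["complication_dx"] & (records["poa"] == "N")).sum()

    print(naive_numerator, poa_numerator)  # 3 vs. 2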

The AHRQ also convened several expert panels to develop composite measures of the QIs. These discussions resulted in five composite measures: the PQI composite (PQI 17); Mortality for Selected Conditions (IQI 36); Mortality for Selected Procedures (IQI 35); Patient Safety for Selected Indicators (PSI 28); and Pediatric Patient Safety for Selected Indicators (PDI 19). The final report for each of these composites can be accessed from the AHRQ website.
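
One common way to form such a composite is as a weighted average of component observed-to-expected ratios, sketched below with invented weights and values; the actual AHRQ composite methodology is documented in the final reports noted above.

    # Hypothetical component results for two indicators
    observed = {"PSI A": 12, "PSI B": 4}      # observed adverse events
    expected = {"PSI A": 10.0, "PSI B": 5.0}  # risk-adjusted expected events
    weights = {"PSI A": 0.6, "PSI B": 0.4}    # component weights summing to 1

    # Composite as a weighted average of observed/expected ratios
    composite = sum(weights[k] * observed[k] / expected[k] for k in weights)
    print(f"Composite O/E ratio: {composite:.2f}")  # 1.04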

AHRQ has also developed evidence-based reporting templates for the QIs. These templates were designed to report comparative performance data generated by the QIs, primarily to consumers, although they can be useful to other stakeholders in health care. The templates were tested by several focus groups consisting of consumers, purchasers, providers, and others. The first template uses the composite measures developed by AHRQ to report performance, while the second groups the measures into health topics. Both templates, along with a sponsor guide, are available on the AHRQ website.

Conclusion

The AHRQ QIs are one measure set, based on administrative data, that can be used to evaluate the quality of clinical services. Most of the QIs focus on health care outcomes rather than on the processes of care followed. The measures, their extensive documentation, and the associated SAS® and Windows® software reside in the public domain and are available for download at no cost to the user. Furthermore, the QIs are maintained by AHRQ, which continues to refine and enhance them. The modules are updated yearly, with releases routinely issued in the first quarter of the year. AHRQ also provides technical support to users on a wide range of issues, including questions about the software package, clarifications of indicator definitions, theoretical questions on the indicators, and interpretation of performance results. The QI support team can be reached via e-mail at support@qualityindicators.ahrq.gov.

Future enhancements to the AHRQ QIs are underway, including the development of indicators specific to neonates and of additional indicators in areas such as hospital outpatient care, day surgery, diagnostic procedures, and emergency department care. Other planned improvements include incorporating additional clinical data elements, such as laboratory values and a do-not-resuscitate order flag. Additional research is needed to develop evidence-based outcome measures that are sensitive to nursing practice.

“Quality of care is highly variable and delivered by a system that is too often poorly coordinated, driving up costs, and putting patients at risk.”49 (p. 1) Improving the access to and the performance of our health care system is a matter of national urgency.50 Yet defining quality in health care is not easy. Quality is a complex, multidimensional concept that means different things to different people.47, 51 Consequently, competing views of quality should be balanced among patients, purchasers, managers, and health care professionals. A widely used definition of quality in health care is “the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge.”6

Regardless of how quality is defined, the only way to know whether the quality of health care is improving is to measure the performance of those who deliver it. Performance measures and performance measurement systems provide a tool for determining whether quality care is being delivered. Currently there is a proliferation of performance measures, and the development of new measures shows no sign of abating. Yet while there is a plethora of measures in areas such as cardiac care, there is a dearth of measures in other areas, such as mental health and cancer care. Better coordination among measure developers is key to reducing the measurement burden on health care organizations. With the adoption of the electronic health record, performance measurement has the potential to become a by-product of care rather than a distinct data-gathering activity.

The AHRQ QIs are one set of performance measures that cover a broad array of conditions and use an inexpensive, readily available data source. While the QIs have certain limitations, they have been, and continue to be, used in a variety of initiatives that have contributed to improved quality of care in the United States.

Acknowledgments

As with any large, complex endeavor, the AHRQ Quality Indicators are supported by a team of researchers, clinicians, and others who work to constantly refine and improve the measures. Many individuals who work on the QIs and contribute to their development and testing deserve acknowledgment here; I would like to specifically acknowledge Jeffrey Geppert, J.D., of Battelle Memorial Institute; Patrick Romano, M.D., M.P.H., of the University of California; and Sheryl M. Davies, M.A., and Kathryn M. McDonald, M.M., of Stanford University for their diligence and continued support of the program. I would also like to thank my colleagues at AHRQ for their suggestions and words of encouragement, and in particular Mamatha Pancholi, the QI Project Officer, for her support and phenomenal focus over the past two years. I am honored and privileged to be part of an incredible team.

Footnotes

*

These are external causes of injury and poisoning that capture how the injury or poisoning happened, the intent, and the place where the event occurred.

†

These are supplementary classification codes that document factors influencing health status and contact with health services, including such areas as health hazards related to communicable diseases, the need for isolation due to other potential health hazards and prophylactic measures, and persons with conditions influencing their health status.

References

1.
Agency for Healthcare Research and Quality. Guide to the Prevention Quality Indicators. Rockville, MD: AHRQ; 2006. [Accessed December 2007]. http://www.qualityindicators.ahrq.gov/downloads/pqi/pqi_guide_v31.pdf.
2.
Agency for Healthcare Research and Quality. Guide to Inpatient Quality Indicators. Rockville, MD: AHRQ; 2006. [Accessed September 2006]. http://www.qualityindicators.ahrq.gov/downloads/iqi/iqi_guide_v30.pdf.
3.
Agency for Healthcare Research and Quality. Guide to the Patient Safety Indicators. Rockville, MD: AHRQ; 2006. [Accessed December 2007]. http://www.qualityindicators.ahrq.gov/downloads/psi/psi_guide_v31.pdf.
4.
Agency for Healthcare Research and Quality. Measures of Pediatric Health Care Quality Based on Hospital Administrative Data: The Pediatric Quality Indicators. Rockville, MD: AHRQ; 2006. [Accessed December 2007]. http://www.qualityindicators.ahrq.gov/downloads/pdi/pdi_measures_v31.pdf.
5.
Agency for Healthcare Research and Quality. Refinement of the HCUP quality indicators. Rockville, MD: AHRQ; Technical Review No. 4 (AHRQ Publication No. 01-0035); 2001. [Accessed September 2006]. http://www.qualityindicators.ahrq.gov/documentation.htm.
6.
Institute of Medicine. Medicare: a strategy for quality assurance. Vol. 1. Washington, DC: National Academy Press; 1990.
7.
Iezzoni LI, Shwartz M, Ash AS, Hughes JS, Daley J, Mackiernan YD. Using severity-adjusted stroke mortality rates to judge hospitals. Int J Qual Health Care. 1995;7(2):81–94. [PubMed: 7655814]
8.
Iezzoni LI, Shwartz M, Ash AS, Hughes JS, Mackiernan YD. Severity measurement methods and judging hospital death rates for pneumonia. Med Care. 1996;34(1):11–28. [PubMed: 8551809]
9.
Iezzoni LI, Ash AS, Shwartz M, Landon BE, Mackiernan YD. Predicting in-hospital deaths from coronary artery bypass graft surgery: Do different severity measures give different predictions? Med Care. 1998;36(1):28–39. [PubMed: 9431329]
10.
Fitch K, Bernstein SJ, Aguilar MS, et al. The RAND/UCLA Appropriateness Method User's Manual. Santa Monica, CA: RAND Corp; 2001. [Accessed March 2008]. http://www.rand.org/health/surveys_tools/appropriateness.html.
11.
Donaldson MS, Lohr KN, editors. Health data in the information age: use, disclosure, and privacy. Washington, DC: National Academy Press; 1994. [PubMed: 25144051]
12.
Iezzoni LI. Data sources and implications: Administrative databases. In: Iezzoni LI, editor. Risk adjustment for measuring healthcare outcomes. 2nd ed. Chicago: Health Administration Press; 1997. pp. 169–242.
13.
Agency for Healthcare Research and Quality. Measures of patient safety based on hospital administrative data—The Patient Safety Indicators. Rockville, MD: AHRQ; 2002. [Accessed September 2006]. http://www.qualityindicators.ahrq.gov/documentation.html. [PubMed: 20734521]
14.
Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care. 1998;36(1):1–35. [PubMed: 9431328]
15.
Remus D, Fraser I. Guidance for using the AHRQ quality indicators for hospital-level public reporting or payment (AHRQ Publication No. 04-0086-EF). Rockville, MD: Department of Health and Human Services, Agency for Healthcare Research and Quality; 2004. [Accessed August 2006]. http://www.qualityindicators.ahrq.gov/downloads/technical/qi_guidance.pdf.
16.
Agency for Healthcare Research and Quality. National Healthcare Quality Report. Rockville, MD: Department of Health and Human Services; 2005. [Accessed August 2006]. http://www.ahrq.gov/qual/nhqr05/nhqr05.htm.
17.
Agency for Healthcare Research and Quality. National Healthcare Disparities Report. Rockville, MD: Department of Health and Human Services; 2005. [Accessed August 2006]. http://www.ahrq.gov/qual/nhdr05/nhdr05.htm.
18.
AFSCME Council 31. The high price of growth at Resurrection Health Care: corporatization and the decline of quality of care. Nov 2005. [Accessed January 2006]. http://www.afscme31.org/cmaextras/qualityofcare.pdf.
19.
California Office of Statewide Health Planning and Development. Consumer information on quality of care. [Accessed September 2006]. http://www.oshpd.ca.gov/oshpdKEY/qualityofcare.htm.
20.
City of Chicago. Community health profiles. [Accessed September 2006]. http://www.cchsd.org/cahealthprof.html.
21.
Colorado Health and Hospital Association. [Accessed November 2005]. http://www.hospitalquality.org/index.php.
22.
Office of Health Care Access Databook. Preventable hospitalizations in Connecticut: assessing access to community health services, FY 2000–2004. [Accessed November 2005]. http://www.ct.gov/ohca/lib/ohca/publications/acsc_databook00-04.pdf.
23.
24.
25.
Florida State Center for Health Statistics. [Accessed September 2006]. http://www.floridacomparecare.gov/
26.
Georgia Partnership for Health & Accountability. The state of the health of Georgia, 2004: ambulatory care sensitive conditions. [Accessed November 2005]. http://www.gha.org/pha/publications/stateofthehealth/2004/ACS112704.pdf.
27.
Massachusetts Department of Health and Human Services. [Accessed September 2006]. www.mass.gov/healthcareqc.
28.
Missouri Department of Health and Senior Services. [Accessed September 2006]. http://www.dhss.mo.gov/HospitalSurgeryVolume/index.html.
29.
Niagara Health Quality Coalition and Alliance for Quality Health Care. [Accessed September 2006]. http://www.myhealthfinder.com/
30.
Norton Healthcare. [Accessed September 2006]. http://www.nortonhealthcare.com/about/qualityreport/index.aspx.
31.
Ohio Department of Health. [Accessed February 2008]. http://www.odh.ohio.gov/healthstats/hlthserv/hospitaldata/datahosp.aspx.
32.
Oregon Hospital Quality Indicators (IQIs). [Accessed February 2008]. http://egov.oregon.gov/DAS/OHPPR/HQ/HospReports.shtml.
33.
Oregon Health Policy and Research (PQIs). [Accessed February 2008]. http://www.oregon.gov/OHPPR/SNAC/docs/Edlund_6_21_05.ppt.
34.
Williams KA, Buechner JS. Health by numbers. Feb 2004. [Accessed December 2005]. http://www.health.ri.gov/chic/statistics/hbn_feb2004.pdf.
35.
Texas Health Care Information Collection. [Accessed September 2006]. http://www.dshs.state.tx.us/THCIC.
36.
Quality Counts. [Accessed September 2006]. http://www.qualitycounts.org/
37.
Utah Department of Public Health (PQIs). [Accessed September 2006]. http://ibis.health.utah.gov/indicator/index/alphabetical.html.
38.
Utah Department of Public Health (IQIs). [Accessed February 2008]. http://health.utah.gov/hda/AHRQ2005.pdf.
39.
Vermont Department of Banking, Insurance, Securities & Health Care Administration. [Accessed September 2006]. http://www.bishca.state.vt.us/HcaDiv/HRAP_Act53/HRC_BISHCAcomparison_2006/BISHCA_HRC_compar_menu_2006.htm.
40.
Hussey PS, Mattke S, Morse L, Ridgely MS. Evaluation of the use of the AHRQ and other quality indicators. Prepared for the Agency for Healthcare Research and Quality. Santa Monica, CA: RAND Health; 2006. (RAND Report WR-426-HS)
41.
Miller MR, Pronovost P, Donithan M, et al. Relationship between performance measurement and accreditation: Implications for quality of care and patient safety. Am J Med Qual. 2005;20(5):239–52. [PubMed: 16221832]
42.
Poulose BK, Ray WA, Arbogast PG, et al. Resident work hour limits and patient safety. Ann Surg. 2005;241:164–177. [PMC free article: PMC1357165] [PubMed: 15912034]
43.
Laditka JN, Laditka SB, Cornman CB. Evaluating hospital care for individuals with Alzheimer's disease using inpatient quality indicators. Am J Alzheimers Dis Other Demen. 2005;20(1):27–36. [PubMed: 15751451]
44.
Fine T, Snyder L. What is the difference between performance measurement and benchmarking? Pub Manag. 1999;81(1):24–5.
45.
Waggoner DB, Neely AD, Kennerley MP. The forces that shape organizational performance measurement systems: An interdisciplinary review. Int J Prod Econ. 1999;60/61:53–60.
46.
Aiken LH, et al. Hospital nurse staffing and patient mortality, nurse burnout, and job dissatisfaction. JAMA. 2002;288:1987–93. [PubMed: 12387650]
47.
Clarke SP, Aiken LH. Failure to rescue: needless deaths are prime examples of the need for more nurses at the bedside. Am J Nurs. 2003;103:42–7. [PubMed: 12544057]
48.
Needleman J, et al. Nurse-staffing levels and the quality of care in hospitals. N Engl J Med. 2002;346:1715–22. [PubMed: 12037152]
49.
The Commonwealth Fund, Commission on a High Performance Health System. Why not the best? Results from a national scorecard on U.S. health system performance. Sep 2006. [Accessed February 3, 2008]. http://www.commonwealthfund.org/publications/publications_show.htm?doc_id=401577.
50.
Lee JS. CRS report for Congress: Quality of care issues in Medicare reform. Washington, DC: Congressional Research Service; 1996.
51.
McGlynn EA. Six challenges in measuring the quality of health care. Health Aff. 1997;16(3):7–21. [PubMed: 9141316]
