Institute of Medicine (US) Committee on Quality Assurance and Accreditation Guidelines for Managed Behavioral Health Care; Edmunds M, Frank R, Hogan M, et al., editors. Managing Managed Care: Quality Improvement in Behavioral Health. Washington (DC): National Academies Press (US); 1997.

6 Process

In general terms, measurement of the quality of health care is driven by different forces in the private and public sectors. In the private sector, quality measurement is a reflection of the requirements of the accreditation process and, increasingly, is also a response to the demands of employers and other purchasers through contracting, report cards, and other means. In the public sector, performance measurement is the primary tool of accountability for spending public funds on health care (DHHS, 1995; IOM, 1989a).

This chapter begins with a general discussion of quality and accountability in the private sector, an overview of methods of quality improvement, and a comparison of current quality improvement methods in managed behavioral health care. Next is a discussion of performance measurement, model standards, and related developments in the public sector. The chapter then provides an overview of the accreditation process, including the development of standards and descriptions of five organizations currently in the accreditation industry. The chapter concludes with a discussion of the role of government in quality assurance.

QUALITY AND ACCOUNTABILITY

Background

Health care purchasers are caught in a dilemma created over the past 50 years and for which there is no easy resolution. Following World War II, the U.S. economy was strong and its industry dominated the world marketplace. Jobs were plentiful, and employers competed for skilled workers. The American ethic of a benevolent employer was firmly reinforced by years of unions' struggles with management and by a healthy economy under which employers could afford to offer generous health benefits.

For years, the health insurance contract offered ever-increasing benefits, freedom of choice, and first-dollar coverage (few copayments or deductibles). Employers trusted their employees and providers. Although consumers and providers struggled for many years to develop more adequate mental health and substance abuse benefits, most people were happy with the health care system. Furthermore, the U.S. Congress initiated the Community Mental Health Centers Act, Medicare, Medicaid, Hill-Burton, and other programs (see Chapter 3), which contributed greatly to the growth of the health care industry. With these investments, the public and private sectors created health care access and resources that were unparalleled in world history.

Fueled by scientific prowess and expanding financial commitments, the health care system appeared to have no limits in its potential capacities to provide health care. However, unlimited growth could not continue. With the rising costs of health care services threatening the financial stability of their budgets, private and public payers increasingly turned to methods that make health care accountable and affordable and that prevent cutbacks in previously reimbursed health benefits. The widespread initiation of utilization management, health maintenance organizations (HMOs), and other managed care methods during the past quarter century has emphasized cost accountability (IOM, 1989a).

These programs have cumulatively evolved into an industry and have become a strong force in the health care system. Consumers and providers who believe that health resource decisions should be made autonomously, on the basis of tradition and the health care contract, are consequently in conflict with such policies. The tensions over cost controls have increasingly focused concerns about cost-containment efforts on quality issues such as the following:

  • qualifications of and consumers' geographic access to a comprehensive range of providers;
  • prevention of avoidable illness and provision of timely and focused treatment interventions;
  • availability of services, on the basis of urgency of need;
  • courtesy, convenience, and comfort of services;
  • compassion and kindness of care;
  • competence of providers to institute the most appropriate evaluations and treatments, resulting in services that pose the least risk to the patient and yield the best health status outcome; and
  • administrative efficiencies of health care services that promote quality through effective communications, consumer and provider education, decision support, quality management, treatment coordination, and other systems.

The interest in quality is reinforced by consumer demand and empowerment, professional ethics, legal and regulatory interpretation of citizens' rights, and attempts by businesses to satisfy and keep customers in a competitive health care marketplace. For public purchasers who are accountable for public funds, it is important to demonstrate that health care has good value and is worth the investment. The next section will give an overview of different methods for assessing quality.

Methods for Quality Assessment

Accreditation

One of the more traditional methods of quality assessment, accreditation of hospitals and managed care organizations, has evolved over the past 60 years to include highly specialized and involved accreditation of facilities, programs, and systems by numerous national accrediting entities, both voluntary and governmental. In addition, many managed behavioral health care organizations have developed “certification” methods based on various quality parameters and sources to establish the qualifications of various institutional and professional providers that are contracted into their networks. Managed care accreditation has become increasingly popular for public- and private-sector health programs because it is viewed as the best current system for creating accountability and quality, even though there is limited evidence to support the relationship between adherence to quality standards and improvements in patients' health status. Accreditation will be discussed more fully in a later section of this chapter.

Professional Review of Care

Review of care by peers or other qualified health professionals has been practiced extensively, especially in professional case conferences and for granting credentials and privileges. Peer review has become more institutionalized, detailed, and systematically applied in recent years with the evolution of the utilization management and quality assurance movements. Concerns by payers, courts, and facilities about the medical appropriateness of care have led to broader applications of professional review to prospectively, concurrently, and retrospectively validate clinical decisions made by clinicians for individual patient care and care for populations of patients.

Licensing

States have licensed physicians and nurses for much of the past 75 years through examinations and the recognition of professional training in accredited programs. Licensing has expanded substantially to other health care practitioners and has become more prescriptive regarding scope-of-practice limits in many jurisdictions. In addition, it has been tied in recent years to continuing education requirements, proof of competence, and both sanctions and supervision in instances in which impairment is established. Licensing of facilities has likewise become a major state function, involving monitoring of numerous and varying requirements established by state legislatures and regulatory agencies.

Credentialing and Privileging

Health care programs provide risk and quality management through a number of approaches. They and accreditation organizations have established standards of practitioner competence based on such factors as training in accredited health professional programs, possession of a current state license, professional certification, demonstration of specific technical skills under expert supervision, evidence of liability coverage and acceptable prior malpractice experience, and attestation to the existence of no current health conditions that would expose patients to risks. Programs now commonly have dedicated resources to establish primary source verification of practitioners' qualifications, to conduct initial and ongoing peer review of practitioners' skills, and to restrict a clinician's practice and to report defined infractions to various state agencies and national data banks.

The complexities and multiple requirements imposed on providers to account to many agencies, managed care organizations, and managed behavioral health care organizations have caused credentialing and privileging to become a costly and time-consuming enterprise for both organizations and individual practitioners. The evolution of integrated credentialing systems could substantially reduce these burdens while maintaining protection for the public.

Physicians who have contracts with multiple organizations tell us that they can have as many as 20 or 30 reviews in a year, each of which looks at similar but just a little bit different criteria.

Linda Bresolin

American Medical Association

Public Workshop, April 18, 1996, Washington, DC

Auditing

A number of quality-focused activities have evolved from purchasers' needs to account for costs and regulators' needs to account for risks. The Health Care Financing Administration (HCFA) regularly conducts audits of the Medicare and Medicaid programs using both staff financial auditors and professional reviewers, including evaluations from state peer review organizations. Explicit survey standards and procedures are followed in these evaluations of agencies' and providers' statutory responsibility to provide services that are of acceptable cost, quality, and risk. Other agencies are substantially involved in developing standards affecting quality of care (e.g., the Substance Abuse and Mental Health Services Administration [SAMHSA] and the Agency for Health Care Policy and Research [AHCPR]) and in inspecting health care providers for compliance with quality-related requirements (e.g., the Occupational Safety and Health Administration).

In the private sector, a number of health benefits consulting firms have hired clinicians, including mental health professionals, to develop clinical services standards, auditing instruments and methods, and quality improvement programs for their customers, which include purchasers and provider organizations. Collectively, these consulting firms have had one of the most profound and least publicized impacts on managed care of any group of institutions. Their influence over the managed care purchasing decisions of health plans, through the promotion of their performance requirements, selection of managed care organization and managed behavioral health care organization vendors, and auditing of managed care operations, has been a major contributor to the development of monitoring standards and systems embraced by other organizations (e.g., American Managed Behavioral Healthcare Association [AMBHA] and the National Committee for Quality Assurance [NCQA]).

Courts

The legal system, guided by tort principles and case law, provides an uneven but sometimes effective means of regulation in situations in which a lack of attention to quality of care can result in risk or harm to patients. Legal mechanisms serve as an arbitrator and financial compensator in situations in which grievances or harm are established to be the result of neglect or malpractice by the health care provider. The substantial growth of risk management programs in health care plans, initially propelled by the need for liability control, has also been accentuated by their incorporation into quality improvement activities.

Clinical Practice Standards and Guidelines

The opportunities for high-quality clinical care are enhanced when providers follow steps in evaluation and treatment that have evolved over years through scientific research and clinical experience. Clinical texts by authoritative specialists and published articles in reputable peer-reviewed journals represent a traditional source of clinical standards. In recent years, expert consensus panels have proliferated to guide clinicians toward optimal decisions through their promulgation of specialized standards for a variety of conditions and medical technologies.

A variety of published and unpublished standards, criteria, guidelines, indicators, and protocols have flooded the landscape of health care, resulting in sometimes differing views about medical appropriateness by various expert panels. Nevertheless, empirically and experientially based clinical standards constitute an essential method by which clinical decisions can be independently evaluated through professional review and indicator-based measurements.

Consumer Satisfaction

Until recent years, concern by providers or regulators about the satisfaction of patients and patients' families with health services was uncommon. The growing power of consumers in a competitive market economy has migrated from other areas of business to health care, underscoring the essential importance of routinely assessing what consumers think and feel about their health benefits and services. Health services research has shown that patient satisfaction is one of the most relevant markers for quality, even if it is not always a sensitive indicator. Significant resources are being allocated to refine specific methods of assessing quality through consumer evaluation and to systematically seek customers' opinions in designing clinical services and improving their quality.

National and local newspapers and magazines provide consumers with information by comparing different health plans, including the results of consumer satisfaction surveys and other data available from report cards. The media also cover stories about provider “gag rules,” denials of services, problems with care, HMO profits, and other information that have unmeasured effects on disenrollment or other indications of dissatisfaction.

QUALITY MANAGEMENT IN BEHAVIORAL HEALTH CARE

Quality management activities in behavioral health care services have evolved over the past 30 years. They originated with the academic and professional bases of medical quality assurance (Mattson, 1992; Rodriguez, 1988), and have blended with traditional local practice (e.g., clinical privileging), state regulatory (e.g., licensing), and tort interventions to provide implicit and explicit oversight of health care quality.

One of the major initiatives in the accountability of behavioral health care quality was instituted by the U.S. Department of Defense in 1975 to provide explicit oversight of residential psychiatric treatment services for children and adolescents under the Civilian Health and Medical Program of the Uniformed Services (CHAMPUS). As noted in Chapter 4, this national initiative was the first by a national payer to establish specialized program standards and admission-treatment criteria for mental health services. Its evolution into a national peer review program (Rodriguez, 1985) for inpatient psychiatric and outpatient psychiatric and psychological services became the foundation for private health plans' rapid embrace of commercial utilization and quality management programs in the early 1980s.

In the latter part of the 1980s, indemnity health plan administrators realized that utilization management approaches such as retrospective and concurrent review had limited impacts on both costs and quality. Utilization management and employee assistance program vendors were encouraged to develop contracted networks of mental health providers to allow for mixed reimbursements and capitation of services and to better promote network-based quality management. In less than 10 years this phenomenon grew to the point that now more than 120 million people with insured or entitled behavioral health care benefits receive care in one of these managed care arrangements (HIAA, 1996).

Employers as Purchasers of Behavioral Health Care

Managed behavioral health care organizations have encouraged the documentation of efforts to account for quality of care and services. Xerox, IBM, GTE, and Digital Equipment Corporation have led the way in establishing quality specifications for their managed behavioral health care organization vendors. Through the imposition of contract guarantees, corporate purchasers reward quality and penalize poor service.

Employers are increasingly concerned about the quality of care that's being provided to their employees, and they want to gather more data on it. It's not self-reported through the health plans, so they need to look to groups that have the market power as well as the relationships with the health plans to gather that data collaboratively and in an audited format.

Catherine Brown

Pacific Business Group on Health

Public Workshop, May 17, 1996, Irvine, CA

Contract-based performance standards have become the basis for industry and voluntary accreditation organization standards, notably those developed by AMBHA (1995) and NCQA (1996a, b). Some payers have developed their own explicit requirements for HMOs that provide care under their health benefits plans in such areas as member services and satisfaction, administrative services, organizational structure and philosophy, provider credentialing and performance monitoring, clinical services management, clinical delivery support systems, and confidentiality. Digital Equipment Corporation, for example, has specific requirements for the behavioral health services that it purchases:

  • benefit design,
  • access,
  • triage,
  • treatment approach,
  • case management,
  • alternative treatment settings,
  • outcomes measurement,
  • quality management, and
  • prevention and early intervention.

Table 6.1 compares some of the more widely used behavioral health care standards.

TABLE 6.1. Cross-Comparison of Managed Behavioral Health Care Performance Indicators.


Trends in Quality Standards in the Private Sector

Because purchasers' efforts to prescribe methods and outcome goals for quality accountability are in the early stages and because population-based measurement systems are not yet refined, quality management in behavioral health and other clinical services remains immature but is evolving rapidly. As with most evolutions, an experimental phase precedes consensus about what constitutes the best approach.

In addition to the standards listed in Table 6.1, numerous employer coalitions, both local and national, have embarked on efforts to establish performance requirements for managed care. Examples include the Managed Health Care Association, the Employer Consortium, the National HMO Purchasing Coalition, the Minnesota Buyers Healthcare Action Group, and the Pacific Business Group on Health. Many of these coalitions have significant participation by health services consumers and their representatives, such as unions, advocacy organizations, and insurance commissions.

The Foundation for Accountability (FACCT), representing a broad coalition of public and private purchasers and others, has begun to develop and test tools that will allow documentation of population-specific functioning, quality of life, satisfaction with services, and risk reduction for a number of medical conditions commonly seen in health plans, such as diabetes, asthma, breast cancer, coronary artery disease, and low back pain (FACCT, 1995). Mood and anxiety disorders are of similar concern because of their prevalence, their direct and indirect costs, and problems in the quality of their evaluation and treatment.

FACCT evolved during 1995 out of the interest of several major private and public purchasers, as well as consumer groups, in public measurement systems that account for quality-related outcomes such as patient satisfaction, health-related quality of life, and functional status. Their wish to expedite the development of outcome measures, methodologies, and systems was spurred by Paul Ellwood's long-standing promotion of the national goal of a patient-centered and integrated outcomes management system. To date, FACCT has released measurement methods for a number of conditions (e.g., diabetes, asthma, and breast cancer) and is planning the development of similar tools and pilot programs for behavioral health conditions such as depression. The application and evolution of FACCT methods will be influenced by the amount of funding that is available and by how meaningful the collected information proves to be to consumers and purchasers.

Many purchasers are now prodding, and a few are requiring, their contracted managed care organizations to collect and publicly report their Health Plan Employer Data and Information Set (HEDIS) results. HEDIS and other public report cards constitute a major trend in health care and are being actively supported by consumers who want meaningful data on which to base personal health care and health plan selection decisions. Managed care organizations are concerned about the risk adjustment problems with some measures, the cost of collecting data, the high-stakes business risks that can follow questionable performance, and the plethora of reporting requirements that are being imposed under multiple reporting systems. The evolution of other potentially large systems, such as those of the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), NCQA, the Utilization Review Accreditation Commission (URAC), and the Council on Accreditation of Services for Families and Children (COA), adds to their concerns about their ability to simultaneously meet the market's demands for improved accountability and lower premiums.

From these early efforts to establish quality standards and tools that can be used to measure quality, several views are emerging:

  • Standards and measures for quality-related components of structure (e.g., state licensure and national accreditation), process (e.g., provider adherence to clinical policies), and outcomes (e.g., level of functioning and patient satisfaction with clinical care) are relevant to conclusions about quality.
  • Routine and consistent measurement of specific health conditions and illnesses should be conducted for individuals in a health plan and for the population.
  • Risk adjustments, based on individual and population variables, are critical in reaching conclusions about the process and outcomes of health care (a minimal illustration follows this list).
  • Health status (physical functioning, role capacities, and objective and subjective well-being) should be consistently evaluated in determining the effectiveness of health care interventions.
  • Generic population and disease-specific health measures are relevant to the management of a population's health.
  • Plans need to evaluate systematically individual and population health risk behaviors in developing targeted interventions that could reduce avoidable health costs and increase the likelihood of positive health status over time.
  • Accreditation, licensing, quality auditing, performance monitoring, and other accountability mechanisms have limited impacts when they are instituted in a piecemeal or uncoordinated fashion. For example, external oversight processes may adequately monitor the overall quality of care, but oversight tends only to identify problems rather than to help solve them, especially when the solutions may involve changes in the internal procedures of an organization.
  • Quality of care requires cooperative commitments to quality-related goals by payers, practitioners, consumers, regulators, and managed care organizations, as well as a common and practical system for measuring and analyzing quality-related information.
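To make the role of risk adjustment in the list above more concrete, the sketch below compares an observed readmission count with the count expected from each enrollee's case-mix category (indirect standardization). It is a minimal illustration in Python; the category names, expected rates, and plan data are hypothetical and are not drawn from any measurement system discussed in this chapter.

```python
# Minimal sketch of indirect standardization for risk adjustment.
# All category names, expected rates, and plan data are hypothetical.

from collections import Counter

# Expected readmission rates by (hypothetical) case-mix category,
# e.g., estimated from a large reference population.
EXPECTED_RATE = {
    "low_severity": 0.05,
    "moderate_severity": 0.12,
    "high_severity": 0.30,
}

def risk_adjusted_ratio(enrollees):
    """enrollees: list of dicts with 'category' and 'readmitted' keys.

    Returns the observed-to-expected (O/E) readmission ratio; values above
    1.0 suggest more readmissions than the case mix alone would predict.
    """
    observed = sum(1 for e in enrollees if e["readmitted"])
    expected = sum(EXPECTED_RATE[e["category"]] for e in enrollees)
    return observed / expected if expected else float("nan")

if __name__ == "__main__":
    plan_a = [
        {"category": "low_severity", "readmitted": False},
        {"category": "high_severity", "readmitted": True},
        {"category": "moderate_severity", "readmitted": False},
        {"category": "high_severity", "readmitted": True},
    ]
    print(f"Observed/expected ratio: {risk_adjusted_ratio(plan_a):.2f}")
    print("Case mix:", Counter(e["category"] for e in plan_a))
```

Two plans with identical raw readmission rates can produce very different observed-to-expected ratios if one serves a sicker population, which is why risk adjustment is treated as critical before conclusions are drawn about process or outcomes.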

Purchasers share with responsible managed care organizations and consumers a unifying goal of creating a more responsive health care delivery system, that is, one that is both more efficient and more effective. Over time it is probable that a best practices system will emerge that monitors, measures, and reports on the relevant information needed to determine effectiveness in sensitive, reliable, specific, and valid terms. The process of developing best practices will be facilitated if purchasers and managed care organizations include a variety of stakeholders in the discussions, including practitioners, administrators, researchers, accreditation organizations, public agencies, and the general public.

The quest for best practices and affordable systems is one of the current megatrends in health care, spawning a new industry that may provide the tools that stakeholders in the U.S. health care system need to make quality-based decisions. During the next few years of systems experimentation and consensus development, quality-related accountability will continue to develop in a variety of ways and will require leadership (Ellwood, 1988). It is now unclear by what means payers, providers, consumers, managed care organizations, and other stakeholders will come together to develop consensus about the systems that promote gains in personal health and the public good. Leadership will be needed to guide each step in this development and consensus-building process.

PERFORMANCE MEASUREMENT IN THE PUBLIC SECTOR

Performance measures are used to monitor progress made by agencies in reaching public health goals. Information from performance measures sometimes is used by agency administrators to justify the use of public funds. In addition, the information infrastructure needs to be developed to support state and local performance measurement systems and to standardize information across agencies, making it easier to aggregate and to analyze trends at the state level and then at the national level.

Standardization of information is a current priority of the U.S. Department of Health and Human Services. In addition to this report, the DHHS is sponsoring two other studies relevant to performance measurement. One study is being conducted by the National Research Council at the request of the Office of the Assistant Secretary for Health, to examine the technical issues involved in adopting performance measures in mental health and substance abuse, as well as other areas (human immunodeficiency virus infection, sexually transmitted diseases, tuberculosis, chronic disease, immunization, prevention of disabilities among children, rape prevention, and emergency medical services). That study will make recommendations for the specific performance measures that should be used over the next 3 to 5 years. DHHS also asked the Institute of Medicine (IOM) to convene the Committee on Using Performance Monitoring to Improve Community Health. That committee will develop prototypical sets of indicators for use by communities in monitoring the performance of public health agencies and personal health care services. One of the indicator sets addresses depression. The committees performing both studies are scheduled to issue reports in 1997.

The present committee is aware that many other efforts to develop performance measures are being undertaken by the states and cannot be addressed in this report. The section that follows will describe key efforts and lead agencies in the development of performance measures at the federal level.

Healthy People 2000

In 1979, the U.S. Public Health Service (PHS) initiated a process of setting objectives and measurable targets for health promotion and disease prevention. Now known as Healthy People 2000, this U.S. Public Health Service process adapted the private-sector “management by objectives” approach and set objectives for improvements in health status, risk reduction, public and professional awareness of prevention, health services, and protective measures. There are now 300 objectives in 22 priority areas addressing health promotion, health protection, preventive services, and data systems (DHHS, 1995).

The development of Healthy People 2000 was directed by the Office of Disease Prevention and Health Promotion, under the leadership of J. Michael McGinnis, in response to a Congressional mandate. The national objectives were developed by 22 expert working groups, a consortium of more than 300 national membership organizations, 56 state and territorial health departments, the IOM, and public review and comment involving more than 10,000 individuals (DHHS, 1990, 1992).

Dissemination of these objectives has been widespread in the public health community through coordination with the American Public Health Association, the Association of State and Territorial Health Officials, the National Association of County and City Health Officials, and other groups. Regular reports describing the national progress toward meeting the objectives are issued, and there is general agreement that the objectives encourage the systematic measurement of needs, the setting of targets, the monitoring of the progress that has been made, and the evaluation of outcomes in public health (IOM, 1989a).

Substance Abuse and Mental Health Services Administration

Managed Care Initiative

SAMHSA is the lead federal agency for behavioral health care. SAMHSA's three centers (Center for Substance Abuse Treatment, Center for Mental Health Services, and Center for Substance Abuse Prevention) work in partnership with the states to improve prevention, treatment, and rehabilitation services for individuals with mental illness and substance abuse disorders.

In April 1995, SAMHSA began a managed care initiative to assist administrators and providers in adapting to the national shift to managed care delivery systems. Through its three centers, SAMHSA is supporting the development of activities that will help to consolidate information on managed care with regard to individuals with serious mental illness and chronic substance abuse disorders. SAMHSA also provides technical assistance to states and local providers concerning managed behavioral health care systems, including the development of performance-monitoring systems that include consumers and families, and the negotiation and management of contracts with managed behavioral health care organizations.

Mental Health Statistics Improvement Program

Public mental health programs have recently experienced several transformations concurrently with the development of managed behavioral health care systems. One of the most important is the emergence of the mental health consumer movement, with an increasing emphasis on consumer satisfaction with the quality of care and on the assessment of quality of life and other outcomes of mental health treatment.

In 1993, the Mental Health Statistics Improvement Program of the Center for Mental Health Services initiated the first phase of its report card project. The effort was a reflection of the emphasis on consumer choice during the Clinton Administration's health care reform proposals, and it converged with the growing mental health consumer movement to develop a collaborative effort with many stakeholders, including mental health consumers, state agency directors, and researchers.

The first phase of the report card project identified the domains and general concerns of stakeholders. The second phase involved the development of indicators for these domains, and in the spring of 1996, a report describing measurable indicators for the domains of access, appropriateness, outcomes, satisfaction, and prevention was released. In the summer of 1996, the Center for Mental Health Services announced that it will provide funding for field testing in up to 25 states, in which a combination of administrative data, clinical information such as medical records, and consumer self-reports will be used to prepare the report cards (CMHS, 1995; Cody, 1996).

The report card is a consumer-oriented prototype. It was designed to help mental health consumers, advocates, health care purchasers, providers, and state mental health agencies compare and evaluate mental health services in the areas of access, appropriateness, outcomes, and prevention. It is unique in the field because of the involvement of consumers in every stage of its development, the focus on serious mental illness, and the emphasis on outcomes in mental health treatment.

Health Care Financing Administration

HCFA, which administers the Medicare and Medicaid programs, is the largest single purchaser of managed care in the United States and currently provides direct or financial support to 15.5 million people (Valdez, 1996). As of June 30, 1996, about 11.5 million Medicaid beneficiaries were enrolled in managed care programs, representing about a 140 percent increase in managed care enrollment since 1993. Currently, 10 percent of the Medicare population is enrolled in about 278 managed health care plans across the country, representing about a 67 percent increase since 1993 (Valdez, 1996).

In the area of quality assurance (referring to plan structure and processes), in 1991 HCFA began its Quality Assurance Reform Initiative, which was designed to monitor and improve the quality of managed care for Medicaid recipients. The initiative has developed a guide that includes specific criteria for the design of internal quality assurance programs by managed care plans. Over the long term, HCFA plans to move toward a single set of quality assurance standards for both Medicaid and Medicare beneficiaries within managed care environments (Valdez, 1996).

In the area of performance measures, HCFA worked with NCQA in the development of the Medicaid version of HEDIS and has included Medicaid HEDIS in its Quality Improvement Primer, which has been developed for state Medicaid agencies. In addition, HCFA has worked with NCQA to develop HEDIS 3.0 and by 1997 will require health plans serving the Medicare population to use some of the HEDIS measures. HCFA also is working with FACCT to develop outcome measures for plan performance.

Agency for Health Care Policy and Research

The Agency for Health Care Policy and Research (AHCPR), a U.S. Department of Health and Human Services agency, was established in 1989 with a Congressional mandate to generate and disseminate information that would be useful to consumers, practitioners, and other audiences. The majority of AHCPR's activities are aimed at improving the quality of health care. Accordingly, the agency works with several organizations, including the Foundation for Accountability, the Joint Commission on Accreditation of Healthcare Organizations, the National Committee for Quality Assurance, and the American Medical Association, to help provide a science base for quality measurement and to assist in translating research findings into quality measures.

Through its Center for Quality Measurement and Improvement, AHCPR conducts and supports research on the measurement and improvement of the quality of health care, including consumer surveys and satisfaction with health care services and systems. The agency has produced and disseminated clinical practice guidelines in a variety of formats to meet the needs of health care practitioners, the scientific community, educators, and consumers. A clinical practice guideline on the detection, diagnosis, and treatment of depression was released in 1993.

AHCPR sponsors a Computerized Needs-Oriented Quality Measurement Evaluation System (CONQUEST), a prototype system for collecting and evaluating clinical performance measures. The system includes two linked data bases, one on conditions and one on measures, to help clinicians, providers, managed care organizations, and purchasers find clinical performance measures that match their needs. Information is included on approximately 1,200 measures developed by public- and private-sector organizations. Among these are measures for the following conditions: affective disorder, alcohol abuse, bipolar disorder, depression, drug abuse, dysthymic disorder, panic disorder, suicidal ideation, and suicide (mortality).
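The two linked databases described above, one on conditions and one on measures, can be pictured with a small sketch. The structure and entries below are hypothetical stand-ins intended only to show how a condition-to-measure lookup of this kind might work; they do not reproduce CONQUEST's actual schema or content.

```python
# Hypothetical sketch of a condition/measure lookup in the style of a
# two-database system such as CONQUEST; names and entries are invented.

conditions = {
    "depression": {"measure_ids": ["M001", "M002"]},
    "panic_disorder": {"measure_ids": ["M003"]},
}

measures = {
    "M001": {"name": "Follow-up visit within 30 days of hospital discharge",
             "data_source": "administrative claims"},
    "M002": {"name": "Patient-reported symptom improvement at 6 months",
             "data_source": "consumer survey"},
    "M003": {"name": "Use of a recommended first-line treatment",
             "data_source": "medical record"},
}

def measures_for_condition(condition):
    """Return the measure records linked to a condition, if any."""
    ids = conditions.get(condition, {}).get("measure_ids", [])
    return [measures[i] for i in ids]

for m in measures_for_condition("depression"):
    print(m["name"], "-", m["data_source"])
```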

AHCPR has supported research on the implementation of guidelines in a variety of settings, including HMOs and group practice, and examining a variety of strategies, including incentives and individualized feedback. Other areas of AHCPR-supported research include factors that affect costs, premiums, and choice of health plans; clinical and effectiveness research in HMOs; and managed care in rural areas.

ACCREDITATION

Ideally, accreditation is a process that surveys health care delivery organizations to determine whether the services provided have met a set of recognized standards for that domain. During the last ten years, accreditation has become an important vehicle for reviewing and monitoring the inner structure of organizations that deliver health care. As discussed in other parts of this report, a growing trend among many state licensure and certification boards is to require accreditation from JCAHO, NCQA, the Rehabilitation Accreditation Commission (CARF), COA, or URAC before an organization can become licensed or certified in that state. The domains of the various accreditation agencies are different but sometimes overlap.

Accreditation Organizations

The committee reviewed accreditation materials from five organizations that accredit behavioral health plans, programs, and services: CARF, COA, JCAHO, NCQA, and URAC. Representatives of these organizations were invited to make presentations at the committee's two public workshops. This section briefly describes each of the organizations, which are further compared in Table 6.2.

TABLE 6.2. Cross-Comparison of Selected Accreditation.


The Rehabilitation Accreditation Commission (formerly the Commission on Accreditation of Rehabilitation Facilities) (CARF)

CARF accredits programs that serve individuals with disabilities and others who need rehabilitation. The organization was developed in 1966 through the efforts of the American Rehabilitation Association and the Association of Sheltered Workshops. In CARF's first 2 years, it received administrative support from JCAHO, and the two organizations are now developing a “recognition initiative” under which each will eventually recognize the other's accreditation standards, thus eliminating the need for dual accreditation.

CARF currently accredits more than 11,000 programs in the United States and Canada, including alcohol and drug programs, mental health programs, and community-based rehabilitation programs that are primarily designed for the chronically and persistently mentally ill. CARF has a consumer-centered philosophy that actively encourages consumer involvement in assessing community needs, planning services, participating in governance activities, and collaborating in the development of individual treatment plans. CARF also requires that programs have a plan to reduce barriers to care, including cultural, architectural, attitudinal, and other barriers (Slaven, 1996).

Council on Accreditation of Services for Families and Children (COA)

COA was founded in 1977 and currently accredits about 1,000 behavioral health programs and 3,000 social service programs in the United States and Canada. COA has developed standards for more than 50 services, including outpatient mental health and substance abuse services, day treatment, foster care and day care for children, services for persons with developmental disabilities, services for victims of domestic violence, adoption services, vocational and employment services, and others.

COA has developed a set of core standards that apply to all of the organizations it accredits, covering areas such as financial management, quality assurance, and record keeping, as well as service-specific standards for foster care, residential care, and other services. COA's behavioral health accreditation standards overlap somewhat with those of CARF and JCAHO, but most of the other services are not addressed by any other accreditation organization. Also, in contrast to the other accreditation organizations, the programs accredited by COA are largely community-based programs more closely related to a social services model than to a medical model of treatment.

Joint Commission on Accreditation of Healthcare Organizations (JCAHO)

JCAHO is the oldest and largest of the accreditation organizations. In 1951, the Joint Commission on Accreditation of Hospitals (JCAH) was formed in cooperation with the American College of Surgeons, the American College of Physicians, the American Medical Association, and the Canadian Medical Association. The new organization formalized hospital standards that had been under development since the 1920s and 1930s by the American College of Surgeons. In the 1970s, JCAH began to develop additional accreditation programs for psychiatric facilities, substance abuse programs, community mental health programs, and ambulatory care facilities. In 1987 the name was changed to JCAHO to reflect the new activities and to anticipate a new activity, accreditation of managed care organizations (SAIC, 1995).

Accreditation of HMOs is now a relatively small proportion of JCAHO's accreditation activities. JCAHO, however, has accreditation guidelines for networks, including independent practice associations, integrated health care delivery systems, HMOs, managed care organizations, physician-hospital organizations, preferred provider organizations, provider-sponsored networks, and specialty service systems (JCAHO, 1996). Another set of accreditation guidelines addresses mental health, chemical dependency, and mental retardation/developmental disabilities services.

National Committee for Quality Assurance (NCQA)

NCQA was formed in 1979 by two managed care associations, the Group Health Association of America and the American Managed Care and Review Association (now merged and renamed the American Association of Health Plans). The original purpose of NCQA was to perform quality care reviews for a former federal agency, the Office of Health Maintenance Organizations. From the beginning, NCQA established collaborative relationships with industry, including large employers such as Xerox and GTE, insurers, such as Prudential, and managed care plans, such as Harvard Community Health Plan (now Harvard Pilgrim Health Plan) and Kaiser Permanente.

In 1989, with a grant from the Robert Wood Johnson Foundation, NCQA began to develop a performance monitoring system now known as HEDIS. The first version, known as HEDIS 1.0, was released in 1991, HEDIS 2.0 was released in 1993, and HEDIS 3.0 was released in the summer of 1996. NCQA has worked in collaboration with HCFA to develop a Medicaid version of HEDIS, which was released in July 1995, and in the spring of 1996 NCQA released a set of behavioral health performance measures for testing, based in part on the performance measurement system (PERMS) developed by the American Managed Behavioral Healthcare Association and on other sources.

Now in its third evolution, HEDIS 3.0 is a voluntary reporting set of managed care quality measures that have evolved over the past 5 years under the aegis of NCQA but with input from a broad range of experts from a variety of public and private organizations. Although only one specific behavioral health measure has been part of earlier reporting sets (ambulatory follow-up after hospitalization for major affective disorder), a number of other measures have recently been proposed as a test set that will promote the refinement of these measures over time and the possible evolution of some measures toward the next HEDIS reporting set. Although HEDIS data collection is not required for NCQA accreditation, managed care organizations regularly institute HEDIS measures.
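The one behavioral health measure mentioned above, ambulatory follow-up after hospitalization, suggests how such rates are typically computed from administrative data. The sketch below uses simplified discharge and visit records with an assumed 30-day window; the field names, dates, and window are illustrative assumptions and do not represent the HEDIS specification itself.

```python
# Illustrative calculation of an ambulatory follow-up rate from
# simplified discharge and visit records; not the HEDIS specification.

from datetime import date, timedelta

FOLLOW_UP_WINDOW = timedelta(days=30)  # assumed window for illustration

discharges = [
    {"member": "A", "discharge_date": date(1996, 3, 1)},
    {"member": "B", "discharge_date": date(1996, 3, 10)},
]

ambulatory_visits = [
    {"member": "A", "visit_date": date(1996, 3, 15)},
    {"member": "B", "visit_date": date(1996, 5, 2)},  # outside the window
]

def follow_up_rate(discharges, visits, window=FOLLOW_UP_WINDOW):
    """Share of discharges followed by an ambulatory visit within the window."""
    met = 0
    for d in discharges:
        met += any(
            v["member"] == d["member"]
            and d["discharge_date"] <= v["visit_date"] <= d["discharge_date"] + window
            for v in visits
        )
    return met / len(discharges) if discharges else float("nan")

print(f"Follow-up rate: {follow_up_rate(discharges, ambulatory_visits):.0%}")
```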

Utilization Review Accreditation Commission

URAC was formed in 1990 after a series of meetings with the American Managed Care and Review Association and utilization review industry representatives indicated that there was a need for utilization review standards and an independent accreditation organization. URAC currently accredits the utilization and quality management systems of 150 managed care programs that provide services for more than 120 million individuals. URAC also works closely with state regulators to address managed care regulatory issues, and nine states recognize URAC accreditation in lieu of licensure. URAC has implemented a Network Accreditation Program and will be implementing a Workers' Compensation Utilization Management Accreditation program.

Changing Environment of Accreditation

There has been a proliferation and growth of accreditation organizations to match the structural changes in the industry. As described above, new accreditation organizations form to review any structure devised in managed care. Some organizations are unique, whereas others overlap in their accreditation domains but have a slightly different focus. For example, JCAHO and NCQA both accredit HMOs. JCAHO's accreditation process focuses on a staff model delivery system, whereas NCQA's process is focused on the HMO structure. Also, NCQA standards tend to focus at the highest level of an organization, whereas JCAHO, CARF, and COA are geared more toward particular programs or facilities that may be a division of a larger organization or may be free-standing or independent.

Currently, major purchasers of care may require accreditation for a health care delivery system to be eligible for contracts. In addition, many state insurance boards and employer groups have mandated that HMOs have NCQA or URAC accreditation to operate in their state or to be offered to employees, respectively. However, because of the complexities of health care structures, mandatory accreditation can impose a tremendous burden. Accreditation requirements often overlap in national managed care companies or health care delivery systems that perform multiple functions (e.g., a staff model HMO that is also a provider network). Many times, organizations must obtain more than one type of accreditation to satisfy employers, states, and other stakeholders. A behavioral health care carve-out company, for example, may operate in a state that requires URAC accreditation, whereas a multi-state employer group operating within that state may require NCQA accreditation.

The costs of achieving accreditation are also burdensome. The actual cost of the accreditation survey is only part of the burden; the personnel costs and time involved in preparing for accreditation can also be extensive and may be prohibitive for smaller organizations. For community-based organizations, COA sets its accreditation fees on a graduated sliding scale based on total agency revenue (COA, 1996a, b).

Cost certainly makes accreditation prohibitive for many small organizations; it also makes the issue of multiple accreditations unrealistic, despite the demands of states and employer groups. Thus, questions are raised about the utility and validity of accreditation. The accreditation industry is faced with pressure to focus its standards on the relevant issues, collaborate with similar organizations, and consolidate the multitude of accreditation standards to reduce overlap and redundancy.

The Accreditation Process

The accreditation process entails generating standards and then comparing the actual delivery of care with the standards. There are at least seven distinct steps:

1. Measures of performance, also known as parameters, are identified and recommended as standards.
2. A process of review leads to acceptance of the standards.
3. The standards are generally tested internally (“alpha” tested) and then tested at an external site (“beta” tested).
4. After testing, the standards are incorporated into a review process.
5. Organizations desiring to be accredited apply to be surveyed.
6. A site review is performed by peer surveyors who examine the inner workings of the organization against the standards.
7. Finally, a process of scoring is developed to determine the organization's degree of compliance with each standard and whether the aggregated results reach the threshold for granting accreditation.

These steps are described in the following section.

A standard, according to Donabedian (1982), is a professionally developed expression of the range of acceptable variation from the norm. A standard has also been defined as the desirable and achievable (rather than the observed) performance or value with regard to a given parameter (Slee, 1974).

A parameter is an objective, definable, and measurable characteristic of the process or the outcome of care (e.g., access to behavioral health care within 5 days of a request in a nonurgent situation). Each parameter has a scale of possible values. For example, a geographic access parameter might require outpatient mental health services to be available within 30 minutes of a consumer's home or workplace. Variables would include, for example, traffic patterns in a busy urban setting where traveling 5 miles could take 1 hour during rush hour. Another variation might be in a rural setting, where there is a scarcity of consumers and services and travel time may be longer because of distance.
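A parameter of this kind can also be expressed programmatically. The sketch below encodes the two access examples used in this section (a nonurgent appointment within 5 days and outpatient services within 30 minutes of home or work) and checks hypothetical observed values against them; the data structure and sample values are assumptions for illustration only.

```python
# Sketch of representing access parameters and checking observed values
# against them. Thresholds mirror the examples in the text; the record
# format and observed values are hypothetical.

from dataclasses import dataclass

@dataclass
class Parameter:
    name: str
    threshold: float  # acceptable upper bound
    unit: str

PARAMETERS = [
    Parameter("nonurgent appointment wait", 5, "days"),
    Parameter("travel time to outpatient services", 30, "minutes"),
]

# Observed values for one hypothetical provider network.
observed = {
    "nonurgent appointment wait": 4,            # days
    "travel time to outpatient services": 45,   # minutes (e.g., a rural area)
}

for p in PARAMETERS:
    value = observed.get(p.name)
    meets = value is not None and value <= p.threshold
    status = "meets" if meets else "does not meet"
    print(f"{p.name}: {value} {p.unit} {status} the standard of {p.threshold} {p.unit}")
```

The urban and rural variations described above would enter such a scheme as different thresholds or documented exceptions rather than as changes to the parameter itself.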

The development of the current accreditation standards is based on professional consensus. The extent and diversity of the opinions incorporated into the consensus process vary from agency to agency, as well as from standard to standard. Some agencies use a wide range of experts and elicit public participation, whereas others may use a closed panel of experts and a board review-editing procedure to develop a standard. The scope and relevance of the standards developed by this process depend on input and consensus from all of the affected parties. The process of reconciling different views among participants in the development of a standard is not clearly outlined in the public information provided by CARF, COA, JCAHO, NCQA, or URAC.

Unless the accreditation process incorporates principles of quality in establishing standards and the survey process, there is a danger of inconsistency, variance, and unreliability. There are many opportunities in the accreditation process for variance in measures, interpretations, and dispositions, leading to disparate outcomes. The process of accreditation is heavily dependent on the strength of the standard as described, the surveyor's interpretation of the standard, and the applicability of the standard to a real situation.

Accreditation standards written with admirable intentions may not lead to consistent interpretation or applicability in the real world. For example, COA has included a standard to define the scope of an agency's mission. It states that the primary purpose of an agency is to provide services to meet the needs of the community for protection, maintenance, strengthening, or enhancement of individual and family life and social and psychological functioning (COA, 1996a). This standard demonstrates responsible intentions but is subject to much variation in interpretation by reviewers and variability in what is considered to be the supporting evidence. During the training of surveyors, it would be important to outline the different variables in this standard. Reviewers should be familiar with these variations so that, during a review, they are able to assess whether an agency's particular practices fall within the boundaries of the standard.

Therefore, the accreditation label is only as good as the process of accreditation, from the development of the standards through the process of scoring. It is important in the accreditation process that:

1. Standards are developed through a rigorous process of extensive peer consensus, a review of scientific evidence when applicable, and reevaluations of normative data to determine the true range of acceptable variations.
2. Standards are objective, measurable parameters specific enough to minimize variations in interpretation by reviewers and the public.
3. Standards are reviewed for their relevance and importance to the goal of accreditation and the integrative needs of the public.
4. The validity and reliability of a standard are known and reflected in the scoring, such that standards with much variability are given less weight than those for which there is stronger consensus (a minimal sketch follows this list).
5. The implementation process is updated frequently, and there is a clear and recurrent process for establishing inter-rater reliability among reviewers.
6. The final accreditation dispositions are compared with (trended against) acceptable parameters, that is, informed public perception, as a strong indicator of the competency of the accreditation process.
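The weighting called for in point 4, combined with the aggregation against a threshold described in the accreditation steps earlier in this section, can be sketched as a simple weighted score. The standards, reliability weights, compliance scores, and pass mark below are all hypothetical.

```python
# Hypothetical sketch of aggregating standard-level compliance scores,
# weighting each standard by the reliability of its measurement.

standards = [
    # (name, compliance score 0-1 from the site survey, reliability weight)
    ("credentialing files complete",     0.95, 1.0),
    ("follow-up after hospitalization",  0.80, 0.9),
    ("mission statement scope",          0.60, 0.4),  # high inter-rater variability
]

ACCREDITATION_THRESHOLD = 0.80  # assumed pass mark for illustration

def weighted_score(items):
    """Reliability-weighted mean of compliance scores."""
    total_weight = sum(w for _, _, w in items)
    return sum(s * w for _, s, w in items) / total_weight

score = weighted_score(standards)
decision = "accredit" if score >= ACCREDITATION_THRESHOLD else "defer"
print(f"Weighted compliance score: {score:.2f} -> {decision}")
```

Standards whose interpretation varies widely among reviewers, such as the mission-scope example discussed above, would carry low weights until their reliability improved.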

INFORMATION INFRASTRUCTURE FOR QUALITY MEASUREMENT

Administrative data sets are frequently a basis for quality-of-care assessments and are used in systems such as HEDIS 3.0 (NCQA, 1996a) and Performance-Based Measures for Managed Behavioral Healthcare Program (AMBHA, 1995). The data sets include claims data, records on visits and procedures, and, with the introduction of computerization, medical records. These information systems generally include relatively large pools of individuals and therefore permit analyses of specific practitioners and facilities (profiling), examinations of selected conditions and diagnoses, and assessments of changes in patient status over time. Because the data are collected for ongoing management functions (e.g., billing), they provide a relatively inexpensive source of information.

Unfortunately, the value of the data sets for assessments of quality is limited because they are designed for management functions like billing and claims payment and may not include sufficient detail to facilitate analyses of quality of care (Garnick et al., 1994). Garnick et al. (1994) have noted that quality-of-care assessments require information on the utilization of care (e.g., visits, services, procedures, site of service, diagnoses, and outcomes), patient characteristics (e.g., age, gender, race, and employment status), and health plan descriptors (e.g., benefit structure and copayments). Many systems, however, do not include all utilization information and may not contain detail on the services provided. Plans with high deductibles and/or copayments may not record service utilization if it does not exceed the deductible, and high copayments may discourage individuals from seeking care. Plans may also fail to record the use of services when utilization exceeds the maximum benefit, either because the individual seeks services outside the plan or because the plan does not track self-paid services. Claims data are often insufficient to identify specific service dates (especially when multiple services are provided within a short period of time), procedure codes may not reflect the actual services provided, and diagnostic codes may be inaccurate or incomplete. Finally, commercial data sets often include limited information on patient characteristics and may not provide accurate information on the numbers of individuals enrolled at a point in time. Public-sector data sets often have more patient information because public policy requires the tracking of the services provided by age, race, and gender.

These limitations are particularly problematic in the assessment of the quality of behavioral health care. Out-of-plan utilization is a major source of potential bias. Benefits for the substance abuse and mental health care and services provided within a plan are often limited, and individuals with such problems frequently require more care than the benefits cover and turn to publicly funded programs for additional care. Thus, a review of the services provided to an individual may suggest that he or she received one short episode of care from the health plan with no readmissions. If it were known, however, that the individual had received additional public services, the assessment of that plan's quality might change substantially. Moreover, procedure codes for ambulatory services may not differentiate mental health and substance abuse care (NCQA, 1996a, b). For example, new admissions for mental health problems can be misinterpreted as readmissions for substance abuse if there had been an earlier substance abuse treatment episode. Finally, the lack of data on patient characteristics means that case mix adjustments may not be feasible and makes it difficult to assess biases in patterns of care and the need for culturally specific and gender-specific services.
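The readmission misclassification and out-of-plan problems described above can be illustrated with a brief sketch. When only a plan's own claims are examined, a new mental health admission following an earlier substance abuse episode may be counted as a readmission, and care delivered by public programs is invisible. The records and the coarse service category below are hypothetical.

```python
# Hypothetical sketch of how limited claims detail and missing out-of-plan
# data can distort a simple readmission count.

plan_claims = [
    # (member, admit_date, service_category) -- category may be coarse
    ("A", "1996-01-05", "behavioral health"),   # actually substance abuse
    ("A", "1996-04-20", "behavioral health"),   # actually a new mental health admission
]

public_program_records = [
    ("A", "1996-02-10", "residential substance abuse treatment"),
]

def naive_readmissions(claims):
    """Count any second behavioral health admission as a readmission."""
    seen = set()
    readmissions = 0
    for member, _, category in sorted(claims):
        if category == "behavioral health":
            if member in seen:
                readmissions += 1
            seen.add(member)
    return readmissions

print("Plan-only readmission count:", naive_readmissions(plan_claims))
# With the public-sector record added, the picture changes: the member
# received out-of-plan care between episodes, and the second admission
# may represent a new condition rather than a relapse of the first.
```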

Despite these limitations, administrative data sets are an efficient and important source of information for assessing the quality of services. Program managers, program evaluators, and consumers, however, must be aware of the potential problems and biases and should weigh a data set's limitations in any analysis of services and any conclusion about quality. It is also critical to assess the potential for combining information from commercial and public administrative data systems so that the nature and extent of out-of-plan utilization can be determined and factored into the evaluation of the quality of care, as sketched below.
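
The following sketch suggests one simple form such a linkage could take. The identifiers, field names, and episode counts are hypothetical, and any real linkage of commercial and public records would require agreed-upon identifiers and confidentiality protections.

    # Hypothetical linkage of commercial plan claims with public-sector
    # records to surface out-of-plan utilization.
    plan_claims = {"A1": 1, "B2": 2}   # member -> episodes paid by the plan
    public_records = {"A1": 3}         # member -> episodes in public programs

    def combined_episode_counts(plan_claims, public_records):
        """Total episodes per member, separating in-plan and out-of-plan care."""
        combined = {}
        for member, in_plan in plan_claims.items():
            out_of_plan = public_records.get(member, 0)
            combined[member] = {
                "in_plan": in_plan,
                "out_of_plan": out_of_plan,
                "total": in_plan + out_of_plan,
            }
        return combined

    for member, counts in combined_episode_counts(plan_claims, public_records).items():
        print(member, counts)
    # Member A1 looks like a single-episode success in plan data alone, but the
    # linked view shows three additional publicly funded episodes.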

If you give information to providers and you work with information systems with the goal of providing information in real time, then quality assurance initiatives can be transformed from an external administrative burden into a powerful tool for improving clinical practice and increasing efficiency.

Geoffrey Reed

American Psychological Association

Public Workshop, April 18, 1996, Washington, DC

ROLE OF GOVERNMENT IN QUALITY ASSURANCE

Historically, the federal government's involvement in quality review and accreditation has been indirect. For example, in the area of hospital accreditation, the federal government has typically given an accreditation organization such as JCAHO deemed status. This means that the federal government makes use of the information collected by JCAHO and relies on JCAHO's judgments regarding the quality of hospitals in setting eligibility rules for reimbursement by Medicare.

States also are beginning to review and update traditional regulatory and contracting practices and to develop arrangements for deemed status. For example, COA holds deemed status in 22 states that recognize the COA accreditation process in lieu of Medicaid certification, state monitoring, or licensing (COA, 1996c).

Similar sets of arrangements already are in force in other markets. For example, the American Society for Testing and Materials holds deemed status in judging the quality of building materials. Table 6.3 displays a variety of consumer protection models for comparison.

TABLE 6.3. Selected Regulatory and Consumer Protection Models.

Deeming can be a powerful tool, especially when the market for accreditation and measurement is competitive. The federal or state government would grant deemed status to any organization that meets its standards for measurement and standard setting; for example, it might grant deemed status to any group whose measures and standards cover a specified range of domains. This would create an incentive for health plans and other organizations to develop quality measures and for accreditation organizations to measure a range of domains that extends beyond what any subset of interest groups might propose.

Achieving deemed status could also require that measurements be uniformly defined and collected by third parties. Under such arrangements, the federal government's influence would stem primarily from its role as a major purchaser through Medicare, the Federal Employees Health Benefits Program, CHAMPUS, and other programs. The federal government would not be regulating the quality measurement and accreditation industry, nor would it be choosing among competing technologies, thereby allowing innovations to continue to emerge. As states continue to re-evaluate their contracting and regulatory mechanisms, more of them may develop deemed status arrangements.

Under these conditions, government purchasing power would be used to promote approaches to measurement and accreditation that are consistent with concepts of efficient markets for insurance as well as consumer protection. Making use of deemed status in this manner may be particularly important in the behavioral health care arena if the market failures outlined above are significant.

This discussion suggests that the interests of enrollees and consumers of health care may be underrepresented in existing measurement and accreditation processes. The federal government could also act to strengthen consumer input. First, existing governmental efforts, such as SAMHSA's sponsorship of a consumer-oriented report card, can be used to increase consumer input into the development of health plan rating systems; this has already been done with some success under the SAMHSA report card project.

A second approach would be for the government to make use of consumer groups' ratings of the raters. That is, organizations such as the American Association of Retired Persons, the National Seniors Health Cooperative, Consumers Union, and the National Alliance for the Mentally Ill could be asked to rate accreditation and quality measurement systems, and their ratings could be incorporated into the federal government's decisions about granting accreditation organizations deemed status. Again, this approach uses the federal government's purchasing power to advance the representation of interests to which the market may fail to give adequate weight.

SUMMARY

As discussed in Chapters 1, 2, and 3 of this report, quality measurement is complex and evolving rapidly. This chapter has reviewed the existing means of quality assessment and has suggested some trends that may continue to develop in the future.

A previous IOM committee evaluated quality measurement activities for Medicare and developed a list of desirable attributes for a quality assurance program (IOM, 1990, p. 49). The present committee believes that the list is still appropriate and is a fitting closing for this chapter (see Table 6.4).

TABLE 6.4. Desirable Attributes of a Quality Assurance Program.

REFERENCES

  • AMBHA (American Managed Behavioral Healthcare Association, Quality Improvement and Clinical Services Committee). 1995. Performance Measures for Managed Behavioral Healthcare Programs. Washington, DC: American Managed Behavioral Healthcare Association.
  • Board of Governors of the Federal Reserve System. 1994. The Federal Reserve System: Purposes and Functions. Washington, DC: Board of Governors of the Federal Reserve System.
  • CARF (The Rehabilitation Accreditation Commission). 1996. Standards Manual and Interpretive Guidelines for Behavioral Health. Tucson, AZ: The Rehabilitation Accreditation Commission.
  • CMHS (Center for Mental Health Services). 1995. MHSIP Consumer-Oriented Mental Health Report Card: Phase II Task Force. Washington, DC: Center for Mental Health Services.
  • COA (Council on Accreditation of Services for Families and Children). 1996a. Standards for Agency Management and Service Delivery. New York: Council on Accreditation of Services for Families and Children.
  • COA. 1996b. Council on Accreditation Profile. New York: Council on Accreditation of Services for Families and Children.
  • COA. 1996c. Council on Accreditation Recognition Report. New York: Council on Accreditation of Services for Families and Children.
  • Cody P. 1996. CMHS offers states grants to test performance indicators. Mental Health Report 20(13): 114.
  • Council of Better Business Bureaus, Inc. 1996. The Better Business Bureaus World Wide Web Homepage. [ http://www.igc.org/cbbb ]. September.
  • DHHS (U.S. Department of Health and Human Services). 1990. Healthy People 2000. Washington, DC: Public Health Service, U.S. Department of Health and Human Services.
  • DHHS. 1992. Prevention 91/92. Washington, DC: Public Health Service, U.S. Department of Health and Human Services.
  • DHHS. 1995. Healthy People 2000: Midcourse Review. Washington, DC: Public Health Service, U.S. Department of Health and Human Services.
  • Digital Equipment Corporation. 1995. HMO Performance Standards. Maynard, MA: Digital Equipment Corporation.
  • Donabedian A. 1982. Explorations in Quality Assessment and Monitoring: The Criteria and Standards of Quality. Vol. 2. Ann Arbor, MI: Health Administration Press.
  • Ellwood PM. 1988. Outcomes management: A technology of patient experience. The New England Journal of Medicine 318(23):1549-1556. [PubMed: 3367968]
  • FAA (Federal Aviation Administration). 1996. The Federal Aviation Administration World Wide Web Homepage. [ http://www.faa.gov ]. September.
  • FACCT (Foundation for Accountability). 1995. Guidebook for Performance Measurement Prototype. Portland, OR: Foundation for Accountability.
  • FASB (Financial Accounting Standards Board). 1996. The Financial Accounting Standards Board World Wide Web Homepage. [ http://www.rutgers.edu/Accounting/raw/fasb/home.htm ]. September.
  • FDA (Food and Drug Administration). 1995. The Food and Drug Administration World Wide Web Homepage. [ http://www.fda.gov ]. September.
  • FDIC (Federal Deposit Insurance Corporation). 1996. The Federal Deposit Insurance Corporation World Wide Web Homepage. [ http://www.fdic.gov ]. September.
  • FTC (Federal Trade Commission). 1996. The Federal Trade Commission World Wide Web Homepage. [ http://www.ftc.gov ]. September.
  • Garnick DW, Hendricks AM, Comstock CB. 1994. Measuring quality of care: Fundamental information from administrative datasets. International Journal for Quality in Health Care 6:163-177. [PubMed: 7953215]
  • GPO (U.S. Government Printing Office). 1994. United States Code. Washington, DC: U.S. Government Printing Office.
  • HIAA (Health Insurance Association of America). 1996. Sourcebook of Health Insurance Data, 1995. Washington, DC: Health Insurance Association of America.
  • IOM (Institute of Medicine). 1989a. Controlling Costs and Changing Patient Care? The Role of Managed Care. Washington, DC: National Academy Press.
  • IOM. 1989b. The Future of Public Health. Washington, DC: National Academy Press.
  • IOM. 1990. Medicare: A Strategy for Quality Assurance. Vol. 1. Washington, DC: National Academy Press.
  • JCAHO (Joint Commission on Accreditation of Healthcare Organizations). 1995. Accreditation Manual for Mental Health, Chemical Dependency and Mental Retardation/Developmental Disabilities Services—Standards. Vol. 1. Oakbrook Terrace, IL: Joint Commission on Accreditation of Healthcare Organizations.
  • JCAHO. 1996. Comprehensive Accreditation Manual for Health Care Networks. Oakbrook Terrace, IL: Joint Commission on Accreditation of Healthcare Organizations.
  • Mattson MR, ed. 1992. Manual of Psychiatric Quality Assurance. Washington, DC: American Psychiatric Association.
  • NAIC (National Association of Insurance Commissioners). 1995. A Tradition of Consumer Protection. Washington, DC: National Association of Insurance Commissioners.
  • NAIC. 1996a. Health Care Professional Credentialing Verification Model Act. Washington, DC: National Association of Insurance Commissioners. Adopted June 1996.
  • NAIC. 1996b. Quality Assessment and Improvement Model Act. Washington, DC: National Association of Insurance Commissioners. Adopted June 1996.
  • NCQA (National Committee for Quality Assurance). 1996a. HEDIS 3.0 Draft for Public Comment. Washington, DC: National Committee for Quality Assurance.
  • NCQA. 1996b. Accreditation Standards for Managed Behavioral Healthcare Organizations. Washington, DC: National Committee for Quality Assurance.
  • OSHA (Occupational Safety and Health Administration). 1996. The Occupational Safety and Health Administration World Wide Web Homepage. [ http://www.osha.gov ]. September.
  • Rodriguez AR. 1985. The CHAMPUS Psychiatric and Psychological Review Project. In: Psychiatric Peer Review: Prelude and Promise. Washington, DC: American Psychiatric Press.
  • Rodriguez AR. 1988. An introduction to quality assurance in mental health. In: Stricker G, Rodriguez AR, eds. Handbook of Quality Assurance in Mental Health. New York: Plenum Press.
  • SAIC (Science Applications International Corporation). 1995. A Comparison of JCAHO and NCQA Quality Oversight Programs. National Quality Monitoring Project, Task 1b, Submitted to the Office of the Assistant Secretary of Defense, Health Affairs. Beaverton, OR: Science Applications International Corporation.
  • SAMHSA (Substance Abuse and Mental Health Services Administration). 1996. Mental Health Measures in Medicaid HEDIS. Washington, DC: Center for Mental Health Services, U.S. Department of Health and Human Services.
  • SEC (Securities and Exchange Commission). 1996. The Securities and Exchange Commission World Wide Web Homepage. [ http://www.sec.gov ]. September.
  • Slaven T. 1996. Personal communication to the Committee on Quality Assurance and Accreditation Guidelines for Managed Behavioral Health Care. Rehabilitation Accreditation Commission. May.
  • Slee V. 1974. PSRO and the hospital's quality control. Annals of Internal Medicine 81:97-106. [PubMed: 4600878]
  • URAC (Utilization Review Accreditation Commission). 1996. National Network Accreditation Standards. Washington, DC: Utilization Review Accreditation Commission.
  • Valdez RO. 1996. Presentation at the Public Workshop of the Committee on Quality Assurance and Accreditation Guidelines for Managed Behavioral Health Care. Washington, DC. April 18.
Copyright 1997 by the National Academy of Sciences. All rights reserved.