Chapter 6. Performance Improvement and Outcomes Monitoring

A program executive director was talking with a member of her clinical staff one afternoon. In the course of the conversation, she asked the supervising counselor, “So tell me, how's your group doing?” The supervising counselor replied, “Oh, they're doing really great.” The director then asked, “How do you know?” This question stumped both the director and counselor because, as in many programs, they relied on intuitive assessments of clinical performance—they had no objective way of monitoring performance.

Without objective indicators of performance, it is difficult to know how effective a treatment program is or whether its performance is improving or worsening. This chapter examines approaches for measuring and improving the performance of outpatient treatment (OT) and intensive outpatient treatment (IOT) programs using objective performance data.

The term “performance improvement” is used in this chapter to include similar approaches, such as “quality improvement,” “continuous quality improvement,” “quality assurance,” “total quality management,” and “human performance technology.”

Performance improvement, which is a set of processes used to improve a clinic's outcomes, need not be complex or expensive. Providers need to consider how they can integrate commonsense performance improvement into their daily treatment activities. Some providers may not realize that they probably are collecting data already that can be used to conduct performance improvement.

Performance improvement and outcomes monitoring are becoming required elements in health service delivery. Outcomes monitoring has long been important to industry and health care because it provides an excellent and efficient mechanism for improving productivity and care (Mecca 1998). Performance improvement can increase revenues by improving service delivery, reducing costs, and increasing client satisfaction (Deming 1986).

An emphasis on performance improvement ought not to be considered a burden. The viability of the substance abuse treatment field depends on establishing the effectiveness of its services. Performance improvement has a critical mission: to use objective information to improve outcomes continually by

  • Identifying opportunities for improvement
  • Testing innovations
  • Reporting the results to the relevant stakeholders

Program and management staffs should consider making performance improvement a central element of their program's administrative plan.

This chapter focuses on

  • Types of instruments and measures that are useful for providers in improving treatment outcomes
  • How to establish an ongoing performance improvement program for staff and the clinic as a whole
  • How to involve program staff in a collaborative and positive way as an outcomes improvement plan is being designed and implemented
  • Positive actions that can be taken in response to performance evaluation findings

Increasing Importance of Outcomes Measures in Today's Funding Environment

As financial support from Federal and State sources, insurance companies, and managed care organizations (MCOs) has diminished, funding sources and taxpayers increasingly are demanding that money be spent only on the most effective programs. Although there have been many discussions about outcomes over the last 20 years, several forces are now at work that will make performance outcomes monitoring and improvement priorities for providers and payers.

Today, licensing and credentialing bodies and payers have prioritized performance improvement initiatives. The two major accreditation bodies in the addictions field—the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) and the Commission on Accreditation of Rehabilitation Facilities (CARF)—have made performance and quality improvement initiatives a cornerstone of their accreditation processes. CARF expects agencies to measure efficiency, effectiveness, and client satisfaction. Both bodies have published manuals on performance improvement for behavioral programs (Joint Commission on Accreditation of Healthcare Organizations 1998; Wilkerson et al. 1998).

Some States include outcomes monitoring as part of their licensing procedures, and a few States—California, Connecticut, Delaware, Illinois, and New York—are establishing permanent statewide outcomes monitoring systems. Some funding sources, such as MCOs, consider performance monitoring an important part of contract oversight and are finding ways to provide incentives for improving performance outcomes. Other payers are developing incentives and sanctions that are dependent on outcomes; private organizations are developing recommendations for performance improvement in behavioral health organizations (McCorry et al. 2000).

States are required by the Government Performance and Results Act of 1993 to implement procedures for funding public programs based on their performance. The Federal Government and the States are developing the indicators and procedures for a performance-based system.

Since the mid-1990s, the U.S. Department of Health and Human Services, particularly through the Center for Substance Abuse Treatment (CSAT), has sponsored and funded major initiatives and pilot studies designed to help States and the field develop substance abuse treatment performance indicators, databases, and information systems that can be used in outcomes monitoring and performance improvement. Some of these Federal initiatives include the Methadone Treatment Quality Assurance System (Phillips et al. 1995), National Treatment Outcomes Monitoring System (NTOMS), and Drug Evaluation Network System (Carise et al. 1999).

The Institute of Medicine has published reports on performance measures in behavioral health (Institute of Medicine 1997a, 1997b). A National Research Council report, based on extensive research and regional meetings with representatives of the treatment field, recommends that activation of State performance-based compensation systems in behavioral health care be delayed until appropriate performance indicators are developed (Perrin and Kostel 1997). Nevertheless, performance-based compensation is likely to be adopted as a central part of how treatment is funded.

Improving Performance and Monitoring Outcomes

The relationship between performance improvement and outcomes monitoring is illustrated best with an example. The general public is interested in reducing the number of highway deaths that occur each year. To achieve this end, it is necessary to conduct performance improvement and outcomes monitoring. To monitor improvement, it is necessary to track the number of highway deaths that occur annually. However, simply monitoring the increases and decreases in the number of yearly deaths tells little about whether specific initiatives undertaken by States, Federal agencies, car manufacturers, and drivers are reducing the rate of highway mortality.

Because the goal is to reduce the number of highway deaths, it is critical to assess specific initiatives aimed at eliminating them. To determine which initiatives are most effective, and therefore worthy of replication, it is necessary to isolate a specific initiative and observe its effect on outcomes. For example, suppose public service announcements encouraging seatbelt use air nationally on TV and radio for 6 months. At the end of this period, outcomes monitoring determines whether the ad campaign affected the number of highway deaths. Performance improvement is the process of developing and testing the effectiveness of specific initiatives (e.g., promoting increased seatbelt use) that are designed to achieve the desired outcomes (e.g., lowering the number of highway deaths).

Trying new solutions to problems and monitoring the results of the new approaches are the essence of performance improvement. The desired outcome for most substance abuse treatment programs is to increase the number of individuals who achieve abstinent, productive, and healthy lives. This broad outcome needs to be broken into components, such as reductions in HIV-risk behavior, increases in employment, and reductions in arrests. To achieve these outcomes, programs need to test whether their clinical and administrative initiatives are effective in producing improvements on key performance indicators. Performance indicators are criteria that can be measured to indicate whether the desired outcomes have been achieved. Performance indicators include client persistence in treatment (engagement), client ratings of therapeutic alliance, client attendance rates, success of transfer from intensive outpatient treatment to outpatient treatment, satisfaction of referral sources, client satisfaction, and client self-reported abstinence.

Programs can measure their efforts at helping clients achieve the desired outcomes by identifying specific performance indicators, measuring them regularly, and testing whether specific initiatives lead to improvements in those performance indicators.

Measuring Performance and Outcomes

Currently, no objective national standards exist for average rates of engagement, retention, and abstinence in different types of treatment programs. The Federal Government is attempting to set up national databases that will help establish standards and ranges of acceptable outcome rates. NTOMS is a CSAT initiative to provide periodic reporting on access to and effectiveness of drug abuse treatment, using a nationally representative sample of clients. NTOMS plans to collect information from 84,000 clients and 250 treatment facilities; data should be available after 2006.

Types of Outcome and Performance Measures

This section describes selected performance indicators, how they are calculated, and special considerations associated with each. A discussion of how executives can use these data to help manage their clinics follows later in the chapter.

Engagement rate

A critical performance measure is a clinic's rate of engagement for new clients. About 50 percent of clients in outpatient treatment drop out within the first month (Fishman et al. 1999), and many clients are no longer in treatment by their third, fourth, or fifth scheduled appointment. For both financial and clinical reasons, many programs monitor how effective the program and its clinicians are in engaging clients.

Because treatment programs expend disproportionate resources during the first few sessions, clients who drop out after only one to three sessions adversely affect the financial status of a program. In addition, early client dropout represents a lost opportunity to help a client and to affect the incidence of substance abuse in a community. Retention in treatment has been linked to better long-term outcomes (Fishman et al. 1999). At the first session, the client expresses an interest (regardless of the source of motivation) in participating in treatment. The client's “disappearance” before session four or five raises important questions: Why didn't the client return? What changed? How can the program reduce the incidence of such premature disengagement from treatment?

Clinicwide engagement rate. The engagement rate at the clinic level is a simple calculation: the total number of clients who attend the third (or fourth or fifth) treatment session, divided by the total number of clients who were admitted into treatment at an initial intake evaluation. This calculation can be completed weekly, monthly, quarterly, or annually.

Clinic engagement rate = Total number of clinic clients attending their third scheduled session / Total number of clients admitted into the clinic

The more frequently a program measures its engagement rate, the better able staff and management will be to track the performance measure. Any trend toward increased dropouts over time should lead management and staff members to examine carefully the contributing factors. If a program determines the engagement rate only annually, it will not be able to identify or respond to trends. However, if a provider does not have a management information system designed to collect and calculate engagement rates automatically, measuring this outcome on a weekly basis may be a burden. Monthly monitoring of the engagement rate may be a reasonable balance between burden and benefit.
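
As a minimal sketch of this calculation, assuming a simple list of admission records with a per-client session count (the field names, data, and three-session threshold below are illustrative, not part of any standard reporting format):

```python
# Hypothetical admission records for one month: one entry per admitted client,
# with a count of treatment sessions attended since intake.
admissions = [
    {"client_id": 101, "sessions_attended": 5},
    {"client_id": 102, "sessions_attended": 1},
    {"client_id": 103, "sessions_attended": 3},
    {"client_id": 104, "sessions_attended": 0},
]

def clinic_engagement_rate(admissions, threshold=3):
    """Share of admitted clients who reached the threshold session."""
    if not admissions:
        return 0.0
    engaged = sum(1 for a in admissions if a["sessions_attended"] >= threshold)
    return engaged / len(admissions)

print(f"Engagement rate: {clinic_engagement_rate(admissions):.0%}")  # 50%
```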

Counselor-specific engagement rates. It is even more useful to look at how engagement rates differ among clinicians. Monitoring and reporting engagement rates to clinicians enable them to determine whether changes in their approach result in improved retention rates.

The calculation is the total number of clients initially seen by a clinician who attended the third (or fourth or fifth) treatment session, divided by the total number of newly admitted clients assigned to that clinician.

Counselor-specific engagement rate = Total number of clients attending third scheduled session with counselor X / Total number of clients admitted into counselor X's caseload
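
The same approach can be grouped by counselor. The sketch below, again assuming illustrative record and field names, tallies admissions and engaged clients per counselor; as discussed next, such rates should be read in light of each counselor's case mix.

```python
from collections import defaultdict

# Hypothetical intake records: the assigned counselor and whether the client
# reached the third scheduled session.
intakes = [
    {"counselor": "A", "reached_third_session": True},
    {"counselor": "A", "reached_third_session": False},
    {"counselor": "B", "reached_third_session": True},
    {"counselor": "B", "reached_third_session": True},
]

def counselor_engagement_rates(intakes):
    """Engagement rate per counselor: engaged clients / admitted clients."""
    admitted = defaultdict(int)
    engaged = defaultdict(int)
    for rec in intakes:
        admitted[rec["counselor"]] += 1
        engaged[rec["counselor"]] += rec["reached_third_session"]
    return {c: engaged[c] / admitted[c] for c in admitted}

print(counselor_engagement_rates(intakes))  # {'A': 0.5, 'B': 1.0}
```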

It would be misleading to compare engagement rates for one clinic or clinician with those of another without adjusting for the case mix. Dramatic differences will be found between clinics and between clinicians, and these differences may be due largely to client characteristics rather than differences in clinic quality or clinician skill levels. For example, engagement rates (and other performance indicators and outcomes) vary depending on client characteristics and other factors:

  • The clients' living arrangements (e.g., clients living in shelters tend to do less well than those with stable housing)
  • Employment status
  • Co-occurring conditions (e.g., clients with significant mental disorders in addition to their substance use disorder tend to do less well)
  • The substance abused

By providing clinicians with information they can use to guide their improvement efforts, executives empower their staff members to improve their own performance.

Attendance rate

The clinic's attendance rate consists of the total number of treatment program sessions that are attended by clients in a given period, divided by the total number of treatment sessions that were scheduled for those clients in that period.

Attendance rate = Total number of sessions attended / Total number of sessions scheduled

As an example, if the clinic had 300 client encounters scheduled over a 1-month period, but clients actually attended only 90 treatment encounters, the attendance rate would be 90 divided by 300, or 30 percent.
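
A minimal sketch of the same arithmetic, using the numbers from the example above (the function name is an illustrative choice):

```python
def attendance_rate(attended, scheduled):
    """Attendance rate = sessions attended / sessions scheduled."""
    return attended / scheduled if scheduled else 0.0

# The worked example from the text: 90 of 300 scheduled encounters attended.
print(f"{attendance_rate(90, 300):.0%}")  # 30%
```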

Administrators may find it useful to monitor attendance rates for individual counselors as well as for the clinic as a whole. Observations of who is achieving the highest and lowest attendance rates may identify treatment strategies and clinical styles that are more or less effective, counselors who need closer clinical supervision or additional training, and case mixes that are inequitable. This information should be used to help all clinicians improve, regardless of their attendance rates.

Retention rate

A retention rate indicates how long, on average, clients remain in treatment.

Retention rate = Total number of weeks clients remained in treatment / Total number of clients admitted
Step 1. For each client who entered treatment during the period under consideration (e.g., the first quarter of the year), determine how long that client remained active in treatment. It is important to select an objective definition of “active in treatment,” such as “attended at least one treatment session within the past 2 weeks.” For each client admitted during the period under study, calculate the total number of weeks in treatment.
Step 2. Add the total number of weeks clients remained in treatment and divide by the total number of clients admitted.

A clinic might be interested in knowing what percentage of clients stay in treatment for 3 months. This measure might identify a program that is successful at achieving a high engagement rate but is losing many clients after the fifth or later treatment session. For example, if 100 clients entered treatment between January 1 and March 31, the clinic would add the total number of weeks that each client remained in treatment during that period and divide that number by 100 (the total number of clients).
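
The sketch below works through both views of retention on assumed data: the average number of weeks retained (the formula above) and the percentage of clients retained at least 3 months (approximated here as 12 weeks):

```python
# Hypothetical weeks-in-treatment totals for each client admitted in a quarter.
weeks_in_treatment = [1, 2, 12, 16, 3, 20, 8, 14]

# Step 2 from the text: average weeks retained per admitted client.
average_weeks = sum(weeks_in_treatment) / len(weeks_in_treatment)

# Complementary view: share of clients retained at least 12 weeks (~3 months).
retained_3_months = sum(1 for w in weeks_in_treatment if w >= 12) / len(weeks_in_treatment)

print(f"Average retention: {average_weeks:.1f} weeks")  # 9.5 weeks
print(f"Retained 3+ months: {retained_3_months:.0%}")   # 50%
```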

It may be valuable to know the differences in retention rates among groups of particular interest, such as clients referred by the criminal justice system versus all other referred clients, women versus men, or payer X versus payer Y. Such studies can have important implications for resource allocation, funding, and staffing.

Abstinence rate

Rigorous monitoring of abstinence depends on reliable tests, such as the urine drug screen, Breathalyzer™ test, or saliva test. Each test has costs associated with it, which may preclude its use. It is important for clinics to track the abstinence rate because a clinic could have extremely high engagement or retention rates while its clients are still using substances. Abstinence can be measured easily, so long as objective measures, such as urine drug screens, are being used with some frequency.

The abstinence rate is calculated by dividing the total number of negative test results obtained during a specified period by the total number of tests administered in that period. If during January a clinic administered 200 urine drug screens and 88 tested negative, the abstinence rate would be 88 divided by 200, or 44 percent.

Clinic abstinence rate = Total number of negative test results / Total number of tests administered

In monitoring abstinence rates, it is best to apply a specific timeframe to the client group being assessed. For example, a clinic may wish to calculate abstinence rates only for clients who have been in treatment at least 2 weeks. It may take that long for clients to begin achieving abstinence and for most drugs to clear from their systems. (Marijuana is eliminated from the body slowly, so clients who have been abstinent and in treatment for less than 1 month could still test positive.) Together clinic staff and management need to develop abstinence rate timeframes appropriate for their facility. One approach is to compare the abstinence rates of clients who have been in treatment for 2 weeks with those who have been in treatment for 6 weeks or longer. Abstinence rates should increase with more time in treatment.
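
The following sketch applies such a timeframe to an abstinence-rate calculation; the record layout and the 2-week eligibility cutoff are illustrative assumptions:

```python
# Hypothetical drug-screen results: how many weeks the client has been in
# treatment and whether the screen came back negative.
screens = [
    {"weeks_in_treatment": 1, "negative": False},
    {"weeks_in_treatment": 3, "negative": True},
    {"weeks_in_treatment": 6, "negative": True},
    {"weeks_in_treatment": 8, "negative": False},
    {"weeks_in_treatment": 10, "negative": True},
]

def abstinence_rate(screens, min_weeks=2):
    """Negative results / tests administered, limited to clients past min_weeks."""
    eligible = [s for s in screens if s["weeks_in_treatment"] >= min_weeks]
    if not eligible:
        return 0.0
    return sum(s["negative"] for s in eligible) / len(eligible)

print(f"Abstinence rate: {abstinence_rate(screens):.0%}")  # 75%
```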

Drug screens should be administered consistently to all eligible clients. For example, if a clinic gives drug screens only to clients who are doing poorly, the clinic will have abstinence rates that reflect the performance of its most challenged (and challenging) clients. If drug screens are given to all clients equally, the abstinence rate obtained will reflect more accurately the clinicwide abstinence rate. (For more information on drug screens, see appendix B in TIP 47, Substance Abuse: Clinical Issues in Intensive Outpatient Treatment [CSAT 2006b].)

For some clinics, the costs of administering drug screens may be prohibitive. Although drug screens objectively measure abstinence, self-reported abstinence can be useful under certain conditions. The accuracy of self-reported data varies depending on the consequences associated with reporting current substance use (Harrison 1997). For example, a client will underreport drug use if use can result in being returned to jail, losing custody of a child, or being terminated from employment. Although self-reported abstinence alone is a less than ideal measure of abstinence, for many treatment programs it may be the only basis available for an abstinence outcome measure.

For long-term rates, self-reported abstinence may be determined during a followup telephone interview, perhaps 6 months after discharge. If the followup call is made by the client's former counselor, the client may be more reluctant to admit use than if the call is made by a staff member or researcher with whom the client has no history. Clients often do not wish to disappoint their former counselor by acknowledging that they are having difficulty and have relapsed.

Quality-of-life indicators

Problem-specific monitoring. It is important to know whether treatment has not only influenced clients' substance use problems but also positively affected other areas of their lives.

Problem-specific monitoring may be particularly important if the mission or funding of the clinic is associated with behavioral domains. For example, a treatment facility connected with Treatment Accountability for Safer Communities or a drug court might be interested in the extent to which its program is reducing clients' criminal activities. A treatment program also might be concerned with whether its interventions are reducing behaviors that put clients at high risk for contracting infectious diseases. In either case, assessments might be administered at different points during the treatment process and after discharge to see how well clients are functioning and to track changes in behavior or status.

A program might track information needed by its funding or referral sources (e.g., drug court). Improvements in clients' employment, education, and family relationships can be important to funders and the public. The more a program is able to document the positive effect of its efforts, the better it will be able to justify its funding and argue for additional funding. It is most impressive if a program is able to establish that treatment still is having an effect several months after a client's discharge. But the followup monitoring required to obtain these data is more expensive and difficult to do than monitoring while the client is in treatment.

Support group participation. Involvement in support groups, such as 12-Step programs and other mutual-help groups, is another way of measuring continued sobriety and a client's determination to remain abstinent. Followup calls may include questions about the number of support group meetings a client has attended in the previous week or month, whether the client has spoken with his or her sponsor in the previous month, and whether the client has a home group. By assessing the rate of self-reported meeting attendance quarterly or biannually, programs can monitor whether their efforts lead to improvements in these important performance indicators.

Other quality-of-life indicators. The following quality-of-life indicators often are included in existing statewide databases and can be monitored with varying degrees of difficulty:

  • Reductions in arrests, convictions, and incarcerations
  • Reductions in hospitalization for mental illness
  • Increased participation in afterschool programs
  • Decreases in school dropout rates
  • Reductions in use of welfare benefits and food stamps and in open child welfare cases
  • Reductions in emergency room visits and other hospitalizations
  • Increases in employability; increases in wages and number of days worked
  • Reductions in social costs caused by intoxicated drivers and lost workdays
  • Increases in the rates of birth of healthy, drug-free babies
  • Improvements in school participation

These indicators usually are not generated on an individual clinic level but are of interest to many stakeholders.

Client satisfaction

For decades, businesses and industry have focused on measuring customer satisfaction, and this information can be valuable for OT programs. Client satisfaction provides information about the performance of both individual staff members and the clinic as a whole. For example, increasing client satisfaction may be a way to increase treatment engagement, attendance, and retention. In addition, health service providers, including treatment providers, increasingly are called on to monitor client satisfaction. Client satisfaction data point to possible causes and solutions for substandard performance. Surveys showing that clients are dissatisfied may help staff members and managers understand why retention or even abstinence rates have decreased. No nationally recognized client satisfaction survey currently exists for substance abuse treatment providers. Appendix 6-A presents a client satisfaction form that has been designed specifically for use with IOT clients. Client satisfaction forms usually are divided into three sections:

  • Client satisfaction with clinic services, such as client education materials, counseling groups and educational sessions, adequacy of the facility, individual attention, and overall benefit of treatment. Clinic administrators and staff members should decide which items to include, choosing those perceived as most important to the quality of care and those of greatest concern in their service delivery.
  • Client satisfaction with the counselors, which asks the client to identify his or her counselors by name and provide feedback on their effectiveness. Areas to explore include each counselor's warmth, empathy, insight, knowledge and competence, attentiveness, and responsiveness. Staff members and clients should help select and word the criteria by which counselors will be rated.
  • Confidential descriptive (demographic) information about the respondent, including age, gender, ethnicity, and sexual orientation. The information from this section can help a clinic determine such factors as whether women are more satisfied with a program than are men or whether clients with a disability are as satisfied with treatment as are other clients (see the sketch after this list). The survey should be administered to a specific client population. Programs can use the findings to guide changes in training, staffing, or programming.
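
As an illustration of the gender comparison mentioned in the list above, the sketch below averages one survey item within each demographic group; the response records and field names are assumptions, and the 0-to-4 scale follows the form in appendix 6-A:

```python
from collections import defaultdict

# Hypothetical survey responses: a 0-4 overall-benefit rating plus the
# respondent's self-reported gender from the demographic section.
responses = [
    {"gender": "female", "overall_benefit": 2},
    {"gender": "female", "overall_benefit": 1},
    {"gender": "male", "overall_benefit": 4},
    {"gender": "male", "overall_benefit": 3},
]

def mean_rating_by_group(responses, group_key, rating_key):
    """Average a rating item within each demographic group."""
    groups = defaultdict(list)
    for r in responses:
        groups[r[group_key]].append(r[rating_key])
    return {g: sum(v) / len(v) for g, v in groups.items()}

print(mean_rating_by_group(responses, "gender", "overall_benefit"))
# {'female': 1.5, 'male': 3.5} -- a gap worth investigating
```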

Satisfaction of referral sources

Conducting a structured telephone interview with the program's key referral and funding sources at 3-month intervals can elicit considerable information about how the program and staff are viewed. Such calls can be a check on whether the program is providing each referral source with the information the agency needs in a timely, helpful fashion. The interviews can identify areas of complaint or potential friction before difficulties or misunderstandings escalate into problems. These telephone calls also can be used to explore new opportunities for expanding or refining services. (See appendix 6-B for a sample form.)

Success of client transfer

An important measure of a program's effectiveness is the percentage of clients who have transferred successfully to and been retained in long-term, low-intensity outpatient services following completion of an IOT program.

Client dropout rate

Another valuable approach to performance improvement is to conduct studies of clients who have dropped out of treatment. Because early treatment sessions are the most expensive, clients who drop out of treatment represent, in many ways, the greatest loss to a program. A study designed to understand better who drops out of treatment and why can help guide changes in the program that ultimately yield great benefit.

To conduct such a study, the program can conduct telephone interviews of the last 50 or 100 clients who dropped out of treatment. The interviews should be done by an independent (noncounseling) staff member, such as a student intern or an assistant. The caller states that the purpose of the call is to

  • Determine what factors led the client to leave treatment
  • Gather information for a program evaluation
  • Help the program be more effective and responsive to the needs of its clients

One result of open-ended interviews is that patterns of comments often emerge. A preponderance of similar responses can indicate that changes are needed. For example, a program whose client population was overwhelmingly male conducted a study of women who had dropped out. The study confirmed that the women had dropped out because group sessions were dominated by male viewpoints, and the women felt their concerns were not being addressed.

When conducting a dropout study, the caller should include an invitation to each client to return to treatment. The invitation may be all that is needed to reengage a client in the recovery process.

Important Considerations When Measuring Performance

Programs might examine any performance measures that will provide meaningful and helpful information about how the clinic, individual clinicians, and clients are doing. Outcomes can be calculated based on drug of choice, referral source, funding source, housing status, gender, co-occurring conditions, or other factors. Exhibit 6-1 describes two evaluation resources.

Exhibit 6-1. Evaluation Resources

Demystifying Evaluation: A Manual for Evaluating Your Substance Abuse Treatment Program, Volume 1 (CSAT 1997a), is a CSAT publication designed to help administrators understand and undertake the evaluation process. It includes useful examples of surveys and evaluation instruments, as well as a general discussion of the evaluation process. The book is set up in self-study modules that cover
• Evaluation strategies. Models of outcome evaluation based on differing levels of available resources
• Strategies for measuring effort. Techniques for describing and quantifying treatment services
• Ways to understand substance abuse in the community. Strategies for assessing the extent to which clients' substance use problems are typical of the community's substance use problems
• Resources available. Evaluation aids that are available locally and nationally
Measuring and Improving Cost, Cost-Effectiveness, and Cost-Benefit for Substance Abuse Treatment Programs: A Manual (Yates 1999) is a National Institute on Drug Abuse publication that includes step-by-step instructions, exercises, and worksheets to guide executives through collection and analysis of data. The manual is designed to be used by people from a variety of educational and professional backgrounds who have little or no training in accounting. It explores several ways to determine cost-effectiveness, from educated estimates to sophisticated computer models.

It is also important to consider the timeframe over which the program will measure outcomes. Attendance and engagement measures might be obtained monthly because they have a major effect on a clinic's revenues.

No matter which performance criteria the program chooses to track, it is not wise to begin by focusing on all measures simultaneously. Performance measures should be phased in, starting with monitoring engagement, followed by other measures selected by the clinical team.

Outcomes Measuring Instruments

Different measurement instruments are needed for special populations, general treatment populations, and treatment services.

Special Population Measures

Program and client outcome indicators will be different for different treatment groups. Clients with co-occurring disorders may have a different threshold for attendance than clients without these disorders. Other meaningful outcomes for this group include medication compliance, decrease or increase in psychiatric symptoms, and rehospitalization. Similarly, special outcomes indicators may be appropriate for pregnant women (e.g., delivery complications and birth outcomes).

General Treatment Population Measures

Addiction Severity Index

The Addiction Severity Index (ASI) is a useful outcome measurement tool (McLellan et al. 1992b) that helps assess a client's treatment needs. Free copies of the ASI and guidelines for using it can be downloaded from www.tresearch.org. The ASI is a standardized instrument that has good reliability and validity and can be used to collect information for comparison across sites and at different points in time.

Trained staff members should administer the ASI

  • On client intake to provide baseline status and aid in treatment planning
  • At meaningful intervals during treatment to measure progress (e.g., after 3 months)
  • At discharge to assess outcomes in seven key areas of a client's life, including recent substance use and legal, occupational, medical, family/social, and psychiatric statuses
  • At 3 months after discharge to measure long-term change in the client's status and behavior

The ASI provides programs with

  • A subjective client evaluation and an interviewer evaluation of problem severity; both can provide indications of client treatment priorities
  • An easy-to-use monitoring tool for clinical supervision; severity scores in each problem domain can be checked quickly to ensure adequate treatment planning
  • The capability to monitor outcomes of discharged clients through followup; clients can be reevaluated using the ASI followup interview
  • A research tool to derive objective and subjective measures of need and improvement
  • A rich database through which issues such as the relationship between client characteristics and outcome data can be examined
  • A standardized assessment instrument that permits comparisons across programs and levels of care

Risk Assessment Battery

A frequently used measure for risk of infectious disease is the Risk Assessment Battery (RAB), which is self-administered. Monitoring of risk reduction for infectious diseases might involve administering the RAB at intake, after 2 months of treatment, at discharge, and then 1 to 3 months after discharge. (Visit the Treatment Research Institute Web site at www.tresearch.org to download this instrument.)

Treatment Services Measures

The Treatment Services Review (TSR), developed to complement the ASI, corresponds to ASI categories (McLellan et al. 1992a). This instrument is a 5- to 10-minute structured interview designed to provide information on the number and frequency of services received in each area. It yields a rating of the services delivered. Other important measures of client-level service delivery include the number of individual counseling sessions, number of group counseling sessions, number of urine tests and Breathalyzer checks, and length of stay.

Program Monitoring for Special Purposes

Monitoring New Treatment Interventions or Program Services

Program management and staff may be particularly interested in monitoring performance before and shortly after implementing new components, approaches, or initiatives. Program administrators may use the ASI, RAB, TSR, or specialized measures designed to capture the effect of program innovations. For example, if a program is developing a 24-hour on-call service, the administrators may want to know whether the service increases the number of new clients. The study might track intakes for 3 months before implementation of the new service and at 3 to 6 months after implementation and compare results.

The rates for attendance, engagement, and abstinence are appropriate measures to apply to new services. Similarly, feedback from referral sources and clients who dropped out can be valuable for assessing a new service.

Monitoring in Response to Program Difficulties

When managers notice a problem (e.g., an increase in client complaints) or are made aware of the occurrence of even a single adverse event (e.g., a client complaint of sexual harassment), they might begin monitoring key indicators. These adverse events are sometimes referred to as “sentinel events.” For example, if a woman reports that she finds the treatment environment “hostile toward women,” a clinic might begin evaluating client satisfaction by gender weekly or monthly. Similarly, programs might monitor engagement and attendance rates after a clinic has moved or there has been high staff turnover. These changes are likely to disrupt operations, so monitoring might be particularly helpful at these times.

Once staff members and managers have collected data, they can analyze them objectively, develop solutions to problems, and refine policies and practices.

Accreditation Issues

Some States have adopted performance outcomes monitoring programs, and treatment programs in those States presumably already are aware of State requirements. Programs accredited by CARF or JCAHO will need to meet specific requirements. However, because both accrediting organizations are emphasizing quality assurance or performance improvement activities, staff and management may wish to visit the Web sites of these organizations to learn about their specific requirements (www.carf.org and www.jcaho.org).

Implementation of the ASI, RAB, or TSR will help a provider fulfill the requirements of the accreditation bodies.

Working With Staff on Performance and Outcomes Improvement

Before initiating performance outcomes and improvement processes, program administrators should meet with staff members to discuss the importance of monitoring. The rationale for performance monitoring should be clear. Collecting and analyzing performance data have a practical benefit for the program and will improve service to clients.

All staff members should know that performance monitoring can identify needs for additional training, resources, policy changes, and staff support—improvements the organization needs to make as a system. It is important for staff members to understand that the objective measures are being implemented to improve treatment outcomes and, wherever possible, to make it easier for staff members to work efficiently and effectively. Management should make clear that the results of the monitoring will not be used to punish employees: The program is initiating monitoring to receive feedback that will enable staff members and managers to improve.

Case Mix Effect

Performance outcomes may vary from clinic to clinic and from counselor to counselor—and for the same clinic and counselor over time. For example, one clinic may work primarily with employed clients who have stable families and a low incidence of co-occurring mental disorders. Another clinic may serve clients who are homeless, are dependent on crack, and have co-occurring mental disorder diagnoses. These two clinics likely will have different outcome rates on most dimensions. It should not be assumed that the clinic working with employed clients is better even though its objective outcomes are superior. The differences may be due exclusively to the clinic's case mix. Likewise, case mix differences between counselors can result in very different outcomes even for clinicians with comparable skills and experience.

Performance outcomes data should be used to improve the performance of all staff members—including managers and administrative support personnel. Staff members need to be confident that the administration understands the effects of different case mixes and other influences on performance. It is essential that an atmosphere of trust and partnership be created. A critical step in creating such an atmosphere is to ensure that staff members know why data will be collected and what will be done with them. This communication should take place before data collection begins; staff should be informed orally and in writing during an orientation session.

When data collection is complete, it is extremely important that data be handled with sensitivity, particularly considering differences in the case mix from therapist to therapist. When administrators acknowledge the effects of case mix, it is possible to present data about performance to therapists. Because the data are objective, they are often superior to the subjective performance monitoring measures that supervisors traditionally have used.

Avoiding Premature Actions

An administrator conducting performance improvement studies may be tempted to act prematurely based on initial results. Depending on the indicator (e.g., attendance), it is wise to wait several months before drawing any conclusions. If initial data are to be shared with staff, the administrator needs to emphasize that these data are preliminary and advise staff that the data themselves are not important but the process of collecting, discussing, and working to improve them is. The act of collecting and sharing outcome data with staff members improves performance without other interventions by management.

Dissemination of Study Findings

When introducing a performance improvement system, managers should create a team consisting of clinical, administrative, and support staff. In small organizations, all staff members are on the team. Large organizations can form a performance improvement team or quality council with staff, management, board, and payers. Program alumni representatives can be a valuable part of a performance improvement team. This team will identify the performance indicators that will be studied and will review and interpret the results. This group may recommend systemwide actions to improve outcomes.

Handling data in a confidential and sensitive manner

It is important to show sensitivity toward staff by handling data confidentially. This usually is done by presenting only clinicwide data—not data on individual performance—to the staff and the public at staff meetings or in reports to funders. For example, an administrator might discuss changes in risk-reduction measures at the level of the clinic, not for individual therapists.

Types of comparisons

It is natural and, under some conditions, beneficial for staff members to compare their performance with that of other staff members. However, counselors achieving the highest performance rates may be scoring well because of experience, training, case mix, random fluctuation, or unique talent. The goal is to help every counselor in the clinic improve over time. In other words, a counselor whose engagement rate has been 30 percent should be acknowledged for increasing the rate to 50 percent (even though the average rate in the clinic is 60 percent). Comparing a counselor who has a low engagement rate to the clinic average can lead to discouragement and even poorer performance (Kluger and DeNisi 1996). Such comparisons should be avoided. Administrators should focus on clinicwide data and improvement initiatives. Counselor-specific data should be released confidentially to individual counselors.

Strategies to encourage staff members' improvement

Effective strategies to improve performance include

  • Providing individualized confidential feedback about performance
  • Tracking changes in performance over time

Kluger and DeNisi (1996) reviewed more than 2,500 papers and 500 technical reports on feedback intervention conducted in a variety of settings. They noted that performance feedback interventions are most effective if the feedback is provided in an objective manner and focuses on the tasks to be improved. Feedback should address only things that are under counselors' control. Interventions that make the feedback recipients compare themselves with others can result in worse performance. (Data on the individual performance of counselors should be confidential and secured; these data can be presented as counselor A, B, C, etc.)

Feedback data can be used to encourage staff members who have shown exceptional improvement. Identifying a staff member of the month can be an incentive for achievement. The key is to recognize improvement publicly, based on objective data. This kind of recognition encourages new staff members to learn from their high-performing colleagues. Those who are performing consistently at the highest levels (known as positive outliers) can be acknowledged formally. These high achievers can be invited to give presentations, provide training, or recommend ways to improve the organization's performance. Under certain circumstances, arrangements can be made for counselors to observe productive counseling sessions. This kind of recognition should not be made too quickly; it should be based on at least 3 months to a year of performance data. Case mix could account for a counselor's consistently superior performance (e.g., the counselor treats the highest functioning clients).

Sharing the performance results

Once a performance improvement system is in place, the findings provide important information for three groups:

  • For the clinic, information about overall performance
  • For the counselor, information about individual performance
  • For the program's funding sources and other groups in the community, information about overall clinic performance

What should a program do if the monitoring system identifies less than satisfactory performance? This can be interpreted as good news in many ways. Knowing a problem exists is the first step in solving it. Clinic management can introduce interventions to improve performance, and the data collected will allow the effectiveness of the new interventions to be monitored. Moreover, because many programs do not have a rigorous performance improvement system, a program can distinguish itself by having these data—a potential advantage in securing funding.

Performance monitoring can reduce the frequency of disciplinary job actions (e.g., terminating someone for poor performance) because it focuses on objective measures. Performance monitoring results can lead to program changes, reallocation of resources, targeted training, and skills development. More important, when objective data are made available to all staff members, improvements may occur without any additional intervention—simply because people generally want to perform well (Deming 1986).

Taking Action To Improve Performance

Once data have been collected and shared with staff as a whole, decisions can be made about how to improve. All staff members can be involved in identifying strategies or interventions for improvement. Staff should focus on

  • Resource allocation. A reallocation of resources or funds could make a difference. For example, providing tokens for public transportation could help clients attend more sessions. Evaluations should include an examination of administrative issues. Adequate resources should be allocated to administrative needs.
  • Conditions causing differences in therapist outcomes. When a significant range of client outcomes among therapists exists, it is reasonable to look for explanations. If the conditions causing the variation can be identified, then ameliorating action can be taken. For example, monitoring the size of treatment groups may reveal that early dropouts usually occur when a group exceeds a certain size. Conditions to explore include client case mix, size of caseloads, adequacy of resources provided to counselors, and sufficiency of training available to counselors.
  • Factors that indicate program success. Measurement of the effectiveness of the program should be based on such variables as the length of client retention, the level of client participation, and the frequency and patterns of attendance. Measurement also includes monitoring client information, such as discharge status and program completion, relapse, and return to treatment.
  • Improvements in program structure. A sudden decrease in performance outcomes is a warning sign that suggests structural change needs to be—and can be—made. The program itself may be setting up obstacles to client retention. The performance improvement team might brainstorm to find strategies for systemwide improvement. Although the client is the most obvious customer, referral sources, funding sources, ancillary care providers, and clients' employers are customers as well. Soliciting feedback from these groups can provide useful information about strengths and weaknesses and recommendations for improvement.
  • A retrospective study. When a problem is identified, a retrospective study is an excellent and inexpensive method for exploring its causes and uncovering solutions to it. (The client dropout study mentioned earlier is an example of a retrospective study.) It can provide both qualitative and quantitative data and provides the clients' perspective. Exhibit 6-2 describes a retrospective study. Programs can gather data to help determine which variables accurately predict whether a client will stay in treatment and progress therapeutically. For example, programs can compare the baseline assessments of clients who complete treatment with the assessments of clients who drop out. All program participants, including those who have dropped out, should be assessed. In a State system with many types of treatment programs, cross-program analyses can be performed. When this is done, covariates such as case mix should be considered.

Exhibit 6-2. An IOT Program Outcomes Study: Findings and Actions

Rationale for the study. An urban IOT program was disturbed to find a low rate of retention and high rate of initial dropouts for all clinic therapists.
Study methods. A local university graduate student was hired to telephone 100 former clients who had dropped out of the program before their fourth session. Using a protocol, the graduate student conducted 5-minute interviews to find out why the clients had changed their minds about treatment and dropped out before completion.
Significant findings. Data analysis indicated the following:
• Objection to rigid policy requirements. The program rigidly required that all clients come to four sessions; some clients who were referred for driving while intoxicated (DWI) felt that they did not need that many sessions.
• Some clients inappropriately assigned to an IOT program. Many clients who dropped out had been referred to the clinic to fulfill requirements of a DWI violation and felt out of place among clients with long-term addictions.
• Early dissatisfaction among clients with severe substance use disorders. Clients with serious drug use problems (those with heroin and crack dependence) were the most likely to drop out. These clients strongly objected to the first admission session that was devoted to completing extensive paperwork needed to meet financial and admission requirements.
Significant actions taken. The program made the following changes, which increased the clinic's retention rate by 30 percent. Subsequently, 87 percent of clients admitted to the program completed their treatment—an extraordinary improvement.
• Rigid attendance requirements were eliminated. The four-session per week requirement was dropped; some clients did not need this level of care.
• A separate DWI program was established. Another program was set up for those needing only a brief substance abuse awareness program rather than IOT; these clients no longer participated in the more extensive program and were more likely to complete treatment.
• A special admission session was instituted. The new session for clients likely to drop out focused solely on the clients' needs. New clients meeting the high dropout profile (those injecting opioids and using crack cocaine) were offered a free 1½-hour session. No paperwork was done except to obtain legal informed consent. The session was designed specifically to identify and meet the clients' needs. This format increased the retention rate for this group.

Costs and Funding of Outcomes Improvement

Funds allocated to performance monitoring are an excellent investment because monitoring can lead to better treatment for clients, improved attendance and retention rates, increased revenue, and decreased program costs. Simple performance improvement studies can be conducted at little or no cost. Administrators might consider including client monitoring efforts in counselor job descriptions and providing time in counselors' schedules for followup calls to collect data. More ambitious assessments can be conducted inexpensively if the right staff members are recruited.

Programs may find assistance in conducting studies from local colleges or universities. Graduate students, people applying to graduate programs, or established researchers may be interested in working with programs. Faculty members at universities often know of graduate students who are competent to conduct such studies.

If the study is conducted well and proper consents and approvals are obtained, data from the study may be worthy of publication. For academics, the opportunity to conduct publishable research is often an inducement for conducting a study. However, a modest cash offer may increase graduate students' interest and guarantee that the study will be conducted professionally and run to completion. (It is advisable to withhold part of the payment until the study is completed.) A program in Philadelphia found that about $2,000 was sufficient incentive for a doctoral candidate to conduct a simple series of yearlong monitoring studies.

The use of independent researchers also can help assuage concerns about staff bias and conflicts of interest. When performance outcomes data have been analyzed by an independent body (such as a university researcher), the findings may be viewed as more objective and credible than if the data were analyzed in house. This is important for an organization that is using the data to demonstrate to funding sources that the program is achieving positive outcomes.

Finally, using independent researchers to study performance improvements may help bridge the gap that exists between academics and practitioners. Involving researchers in the improvement of treatment programs allows both groups to benefit from each other's expertise.

Using Performance Data To Promote the Program

In addition to providing information for making needed changes in the program, results of a performance improvement study can be used both as a fundraising and as a public relations tool. As Yates (1999) states,

Having solid reports of the effectiveness and cost-effectiveness of your program will assure donors that their contributions will have the maximum impact possible. . . . If your program saves substantially more money than it consumes, it will be easier to defend as a form of social investment that may deserve more attention and additional funds. (pp. 1–2)

One treatment provider invited the program's 50 primary funding and referral sources to a presentation of performance improvement study findings. The first year, some of the data were not encouraging, suggesting areas for improvement. However, the administrator was committed to using the results to improve program performance and continuing an objective, open evaluation process. The somewhat negative results demonstrated commitment to accurate presentations. The funding sources stayed committed. The next year the study results were compiled by an academic resource—an objective, respected researcher in the treatment field. Over the next 8 years, the performance measures steadily improved, demonstrating improved client outcomes.

Appendix 6-A. Satisfaction Form for Clients

Confidential—Please do not write your name on this form. This survey is designed to give you a chance to tell us what you think about the care you are receiving. After you have completed this form, please return it to a staff member. Thank you.

How satisfied have you been with . . . (0 = Not at all, 1 = Slightly, 2 = Moderately, 3 = Considerably, 4 = Extremely)
1. The individual attention you are receiving from your counselor? 0 1 2 3 4
2. The information you are receiving about recovery? 0 1 2 3 4
3. The encouragement you are receiving from your counselor? 0 1 2 3 4
4. The support you are receiving from your counselor? 0 1 2 3 4
5. The services you are receiving from your counselor? 0 1 2 3 4
6. The way you are being treated by your counselor? 0 1 2 3 4
7. The written materials you are being given? 0 1 2 3 4
Your counselor is . . . (same 0-4 scale)
1. Warm, caring, and respectful. 0 1 2 3 4
2. Knowledgeable about recovery. 0 1 2 3 4
3. Helpful to you. 0 1 2 3 4

In your own words, tell us what you think would improve our program. Use the other side of the page if you need more space to write your answer.

What do you like least about our program? __________________________________________

What do you like best about our program?___________________________________________

____________________________________________________________________________

About how many sessions have you attended here? ________
Today's date: _____/_____/_____
Your counselor's name: _____________________
We want to know whether people are receiving different treatment because of their race, gender, or sexual orientation. If you are uncomfortable with any of these questions, please feel free to skip them.
Are you:___ Male ___ Female ___ White ___ African-American ___ Hispanic ___ Other
___ Heterosexual ___ Gay ___ Lesbian ___ Bisexual

Appendix 6-B. Satisfaction Form for Referral Sources

Name: ___________________________ Referral Source Contacted: __________________________

Phone: (________) ________-___________ Date Contacted: ______________________________

  1. How would you rate our oral communications (e.g., telephone calls, face-to-face interactions)?
    excellent very good average below average poor
    Comments: ________________________________________________________________
    ________________________________________________________________
  2. How would you rate our written communications?
    excellent very good average below average poor
    Comments: ________________________________________________________________
    ________________________________________________________________
  3. How would you rate our admissions process?
    excellent very good average below average poor
    Comments: ________________________________________________________________
    ________________________________________________________________
  4. How would you rate the professionalism and helpfulness of the program staff with whom you interacted?
    excellent very good average below average poor
    Comments: ________________________________________________________________
    ________________________________________________________________
  5. How would you rate our treatment program compared with other treatment programs you have used?
    much better somewhat better about the same somewhat worse much worse
    Comments: ________________________________________________________________
