NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.
Institute of Medicine (US) Committee on Data Standards for Patient Safety; Aspden P, Corrigan JM, Wolcott J, et al., editors. Patient Safety: Achieving a New Standard for Care. Washington (DC): National Academies Press (US); 2004.
Trivial events in nontrivial systems should not go unremarked (Perrow, 1984).
CHAPTER SUMMARY
Although near-miss events are much more common than adverse events—as much as 7–100 times more frequent—reporting systems for such events are much less common. As the airline industry has realized, analysis of near-miss data provides an opportunity to design systems that can prevent adverse events. Near-miss data for the health care domain should be analyzed more extensively than is currently the case. The data provide two types of information relevant to patient safety—on weaknesses in the health care system and, equally important, on recovery processes. The latter data are an underutilized source of valuable patient safety information. This chapter examines the functional requirements of near-miss systems and the implications for data standards.
With some exceptions, near-miss data (and adverse event data) should be examined in the aggregate to determine priorities for health care improvement. The analysis of aggregate event data requires the use of standardized taxonomies to describe the root causes of failure, recovery processes, and situational contexts uniformly. Since near misses and adverse events are thought to be part of the same causal continuum, there should be identical taxonomies for failure root causes and context variables for both types of events.
The development of near-miss systems works best when the systems are initially established and designed for the benefit of those delivering care, for example, a hospital department. Data from this level can be aggregated for higher-level purposes—reports for hospital-wide systems and domain-specific nationwide systems. However, uses of the data require that the same data standards be applicable across all domains and at all levels of aggregation. Near-miss systems should be an integral part of clinical care and quality management information systems. To foster data reuse across all health care applications, the same data standards should be used for all applications.
In safety management literature, a near miss is defined in various ways. According to one definition, a near miss is an occurrence with potentially important safety-related effects which, in the end, was prevented from developing into actual consequences (Van der Schaaf, 1992). Near misses are also synonymous with “potential adverse events” (Bates et al., 1995b) and “close calls” (Department of Veterans Affairs, 2002). In this report, a near miss is defined as an act of commission or omission that could have harmed the patient but did not cause harm as a result of chance, prevention, or mitigation. In most cases, definitions of a near miss imply a model such as the incident causation model (see Figure 7-1), consisting of the following components or phases (Van der Schaaf, 1992):
- Initial failures—some instigating failure process (triggered by a human error, a technical or organizational failure, or a combination of the two).
- Dangerous situation—a state of temporarily increased risk resulting from an initial failure but still without actual consequences.
- Inadequate defenses—a failure of the official barriers (such as double-check procedures, automatic compensation by standby equipment, or problem-solving teams) built into the system to deal with this risk.
- Recovery—a second informal set of (mainly human-based) barriers by which a developing risky situation is detected, understood, and corrected in time, thus limiting the sequence of events to a near-miss outcome instead of letting it develop further into an adverse event or worse.
According to the incident causation model, near misses are the immediate precursors to later possible adverse events. Examining near misses provides two types of information relevant for patient safety: (1) that on weaknesses in the health care system (errors and failures, as well as inadequate system defenses) and (2) that on the strengths of the health care system (unplanned, informal recovery actions) which compensate for those weaknesses on a daily basis, often making the essential difference between harm and no harm to a patient. Informal recovery actions are similar to the characteristic strengths of a highly reliable organization or a culture of safety, as identified by Roberts (2002).
Health care is an example of a low-reliability system, where frequently all that stands between an adverse event and quality health care is the health care provider. Health care professionals are continually detecting, arresting, and deflecting potential adverse events, sometimes even subconsciously. Data on recovery processes represent valuable patient safety information, a fact that often goes unrecognized.
The remainder of this chapter makes the case for the importance of near-miss reporting and analysis. The next two sections outline, respectively, the fundamental aspects and functional requirements of near-miss systems. Implementation and operational considerations are then reviewed. Next, a general framework is presented for processing near-miss reports, and gaps between ideal and current systems are briefly addressed. The final section describes the implications of the preceding discussion for data standards.
THE IMPORTANCE OF NEAR-MISS REPORTING AND ANALYSIS
The committee believes near-miss data should be analyzed more extensively than they currently are. Such analysis provides opportunities for learning about both weaknesses in the health care delivery system and ways in which the system is able to recover from dangerous or risky situations.
Three Goals for Near-Miss Systems
In an overview of near-miss systems in the industrial and transportation domains, Van der Schaaf et al. (1991) distinguish three different goals of near-miss reporting and analysis:
- Modeling—to gain a qualitative insight into how (small) failures or errors develop into near misses and sometimes into adverse events. Eventually this insight should make it possible to identify the set of factors leading to the initial failures, as well as those enabling/promoting timely and successful recovery. As compared with adverse events, the added advantage of the recovery component should enable a more balanced view of how patient safety can be improved, focused not only on preventative measures to address the failure factors identified but also on means of building in or strengthening the recovery factors that come into play once errors have occurred.
- Trending—to gain a quantitative insight into the relative distribution of failure and recovery factors by building a database of underlying root causes of a large number of near misses. This database allows trending of the relative frequency of the various factors over time and thus provides a way to prioritize the most prominent factors as possible targets for error-reduction or recovery promotion interventions. Near misses, being 7–100 times more frequent than adverse events (Bates et al., 1995a; Bird and Loftus, 1976; Heinrich, 1931; Skiba, 1985), allow for a much faster buildup of such databases, even at the lowest levels of a national reporting system (e.g., a single hospital department, a primary care provider's practice). Although To Err Is Human (Institute of Medicine, 2000) estimates the numbers of adverse events and associated fatalities to be very large nationwide, they are still infrequent at the lowest levels of the health care system and thus offer little insight into fundamental, frequently recurring underlying system factors on which to base the most efficacious safety improvements.
- Mindfulness (Kaplan, 2002)/alertness—to maintain a certain level of alertness to danger, especially when the rates of actual injuries are already low within an organization. For those employed in work environments with a mature safety culture, it eventually becomes difficult to maintain a minimum level of risk awareness in the absence of clearly visible adverse events. A weekly or monthly reminder in the form of a near miss in that same work situation may serve to reinforce awareness of specific safety risks that continue to exist, as well as demonstrate informal recovery defenses in action. It may be necessary to publicize the details of such near misses to ensure that all front-line workers are alerted to the continuing risks.
The Causal Continuum Assumption
Since the 1930s (Heinrich, 1931), most safety experts have assumed (based on anecdotal evidence) or claimed that the causal factors of consequential accidents are similar to those of nonconsequential incidents or near misses. Yet this so-called causal continuum assumption has not yet been firmly established as a scientific fact in health care. To date, this relationship has been documented only in recent transportation safety research (Wright, 2002). The pattern of failure factors for near misses in the railway sector was, by and large, not statistically different from that for train accidents involving injuries and damages. The claim in the health care domain that addressing the causes of near misses will also aid in preventing actual adverse events and fatalities will have to be based on more than anecdotal evidence if that claim is to be widely accepted and therefore worth acting upon.
Currently available databases could be used to test the causal continuum assumption in health care. In fact, in one study that evaluated this assumption in health care, the characteristics of near misses were found to be somewhat different from those of errors that resulted in harm (Bates et al., 1995a). In particular, for medication errors, those involving a modest overdose were more likely to result in harm than those involving massive overdoses, since the former were more likely to be carried out (massive overdoses being more often intercepted as near misses). However, the underlying causes of near misses and adverse events (a lack of medication knowledge) were similar.
The Dual Pathway
One aspect of near-miss versus adverse event reporting that is relatively unknown but highly valued in practice is that near-miss reporting provides a dual pathway to improved system performance:
- The direct, analytical pathway, which near-miss and adverse event systems have in common, is based on collecting incident data; analyzing root causes; and acting upon the most important causes, thereby gradually improving the system and achieving better (safety) performance.
- In addition, near-miss systems appear to offer a second, indirect, cultural pathway to better performance: when reporters increasingly learn to trust the near-miss system as a means for communicating about and gradually improving patient safety, each voluntary decision on their part to report another near miss (instead of keeping it to themselves) helps change their attitude and ultimately their behavior as well, again leading to better performance. This slower, less visible, but fundamental and long-lasting cultural pathway is even regarded by some health care managers as more valuable in the long term than the straightforward analytical path (Joustra, 2003).
The Role of the Patient
As stated above, the dependence of near-miss systems on (voluntary) reporting by health care staff affects staff attitudes much more profoundly than is the case with systems not dependent on such personal commitment. However, playing an active role in detecting risks to patient safety is not necessarily limited to staff; patients themselves may be put in a position to contribute, for example, by being encouraged to ask questions about their care. In some cases, patients may help monitor their daily medications or medical treatment procedures, provided this information is supplied to them in an accessible format.
In this sense, patients (and by extension their family and friends) may be viewed as an extra, highly motivated line of defense. At the same time, involving patients in monitoring their own care clearly must be approached with caution and must be additional to, not a substitute for, the monitoring provided by systems and individual caregivers. Where patients provide an additional layer of monitoring, there could be a tendency in rushed circumstances to place total reliance on this mechanism. Moreover, many patients may be unable to contribute anything toward monitoring their own care because they lack the required information or have impaired cognitive or sensory skills.
In general, however, patients (and their family and friends) are a vastly underutilized resource for identifying things that go wrong in health care. Where possible, they should be encouraged to report incidents, especially those in which they averted potentially harmful consequences (e.g., by refusing to accept pills that differed in appearance or meals that did not conform to their dietary requirements).
FUNDAMENTAL ASPECTS OF NEAR-MISS SYSTEMS
To fulfill the goals outlined above, near-miss systems should be integrated into complete systems capable of capturing, analyzing, and disseminating information about patient safety. They should be able to support management decisions on how and where to invest in safety-oriented system improvements. They should describe the failure and recovery mechanisms behind the reported incidents; analyze the root causes of failures; and recommend specific actions, based on the root causes most prominent in the database, within a prioritization strategy. A complete system also entails covering the entire range of consequences, from very minor, easily corrected near misses to catastrophic adverse events and fatalities.
Learning from Databases, Not Just from Single Incidents
One of the consequences of the traditional focus on incidents in which patients were actually harmed, in the belief that such incidents can yield more fundamental lessons, is a lack of data at lower levels of the health care system. Rarely (if ever) do errors or failures end up causing severe damage to a patient in any single hospital department or primary care practice. When they do occur, however, such events inevitably receive a great deal of media attention. The result is often massive investments designed to prevent such (possibly very rare) mishaps from recurring, at least in part because of the attention they attract and the desire of hospital managers to be seen as acting swiftly. Because of the salience of the outcome, the analysis is subject to hindsight bias.
An incident-by-incident learning mode is reactive, based on specific characteristics of single events, and in most organizations consumes a major portion, if not the entirety, of the budget available for improving the system. An alternative proactive learning approach (Reason, 1990), at least with regard to adverse events and fatalities, is to collect data on large numbers of events; analyze the root causes; build a database of these causes; and then act upon the underlying patterns of causes, which are much more likely than single events to point to systemic or latent (Reason, 1990) problems. Indeed, some systemic or latent causes that can be uncovered through aggregate databases can be identified not at all, or not as efficiently, by analysis of single incidents. Given that the majority of adverse events occur infrequently, large incident databases may be necessary to provide sufficient examples for purposes of analyzing rare events such as gas embolism or anaphylaxis.
Need for Root-Cause Taxonomies
If one wants to rise above the level of single events and their causes and base interventions on the most frequent and important root causes found in large databases, a root-cause taxonomy is needed. The causal factors fed into these databases should be made comparable at a general, abstract level so that they are quantifiable. Various aspects of the event will require different (sub)taxonomies:
- Failure root causes require a generic, fixed taxonomy, which should be identical over all medical/health care domains so that the system can be optimized overall, rather than within each domain. This taxonomy should also acknowledge that patients themselves sometimes contribute to near misses and adverse events.
- Recovery root causes require a similar taxonomy. This taxonomy is likely to overlap somewhat with the categories of the failure taxonomy but will differ in some respects because of the more complex recovery phases of detection, diagnosis, and correction, each with their specific enablers (Van der Schaaf and Kanse, 2000).
- Context variables, although not causal, provide additional useful background information, such as the who, what, when, where, and consequences of an event. Context variables may well be largely domain specific, allowing analysis tailored to a specific reporting system. There is considerable overlap in the context variables collected for near-miss and adverse event analysis.
- Free text encompasses the reporters' narratives on which the event analysis was originally based. These narratives should be stored with the analysis results, with consideration of requirements for deidentification, to allow for later, off-line analysis, especially by external researchers.
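To make the data-standard implications of these four (sub)taxonomies concrete, the record structure they imply can be sketched as a single coded event. This is purely illustrative: the field names and classification codes below are hypothetical, not drawn from any published taxonomy.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class NearMissRecord:
    """One analyzed near-miss event; all field names and codes are illustrative."""
    event_id: str
    failure_root_causes: List[str]   # codes from a fixed, domain-independent taxonomy
    recovery_root_causes: List[str]  # codes for detection, diagnosis, and correction enablers
    context: Dict[str, str]          # largely domain-specific who/what/when/where variables
    narrative: str                   # deidentified free-text report, kept for later analysis

record = NearMissRecord(
    event_id="NM-0001",
    failure_root_causes=["HK1"],     # hypothetical code for a knowledge-based human error
    recovery_root_causes=["RD2"],    # hypothetical code for detection via a double check
    context={"department": "pharmacy", "consequence": "none"},
    narrative="Wrong-strength vial selected; noticed during the bedside double check.",
)
```

Because the failure and recovery codes sit in fixed fields with fixed vocabularies, records coded this way can be pooled and counted across departments, while the free text remains available for later qualitative review.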
FUNCTIONAL REQUIREMENTS OF NEAR-MISS SYSTEMS
General Functional Specifications
Van der Schaaf (1992) outlines four essential characteristics of near-miss systems:
- Integration with other systems—Not only should a near-miss system contribute to and benefit from adverse event reporting systems, it should also be integrated, wherever possible, with other approaches used to measure, understand, and improve the performance of health care systems, such as audits of employee safety conducted by the National Institute for Occupational Safety and Health, total quality programs, environmental protection programs, maintenance optimization efforts, and logistics cost reduction programs.
- Comprehensive coverage (in a qualitative sense) of possible inputs and outputs—The system should be able to handle not only safety-related near misses but also events with actual adverse consequences and with a range of different types of consequences (i.e., quality-, environment-, reliability-, and cost-related). It should cover not only negative deviations from normal system performance (errors, failures, faults) but also positive deviations (successful recoveries). Finally, it should focus not only on human errors or technical failures as factors contributing to a near miss but also on underlying latent organizational/managerial causes.
- Model-based analysis—To the extent possible, a system model of health care work situations, including a suitable description of individual behaviors in a complex technical and organizational environment, should be the basis for the design of the information processing portion of the near-miss system. Effective handling of the data encompasses (1) the required input data elements (taken from free-text near-miss reports), (2) methods for analyzing a report to identify root causes, and (3) methods for interpreting the resulting database to generate suggestions to management for specific countermeasures.
- Organizational learning as the system's only focus, that is, the development of progressively better insight into system functioning—As discussed in Chapter 6, except for clear instances of willful criminal acts (which are unlikely to be managed through such channels), the output of a near-miss system should never lead to assigning blame to or punishing individual employees or even be used to evaluate them. Rather, the emphasis should be on learning how to continuously improve patient safety by building feedback loops into the near-miss system. At the individual level, this organizational learning can be reinforced through staff education.
Types and Levels
In designing a near-miss system, two important dimensions are the medical domain it will cover and the level (from local hospital department or primary care practice, to hospital, to nationwide) at which it will function. An example is shown in Table 7-1. The four cells in this table can be divided into three levels of complexity of a near-miss system:
- The basic level of the local, one-domain system (I)
- The intermediate level of the hospital-wide or the domain-specific nationwide system (II and III)
- The upper level of the nationwide system covering all domains (IV)
Ideally, the design of a near-miss system should progress from the lowest to the highest level of complexity. Doing so will ensure a continuous flow of voluntary reports, which can be expected to be produced mainly by the cell I systems; to be passed on to the aggregate intermediate-level systems; and finally to reach the highest, comprehensive level of cell IV. Continued willingness to provide such input will depend greatly on its direct effects on those reporting, that is, insight into their work situation with regard to patient safety, specifically for their single-domain department. Considering the need for root-cause taxonomies cited earlier, this approach to designing a near-miss system means that:
- To the extent possible, all of these types and levels should have identical causal taxonomies (for both failure and recovery factors) and identical free-text structures (for the original input narratives).
- Some basic context variables (e.g., those for type of patient, type of consequences) should also be fixed across levels and domains, while other specific context variables will vary with domain (e.g., type of treatment, medication, diagnosis) and/or level (e.g., codes for (sub)departments, protocols) to ensure enough specificity to provide useful and therefore motivational feedback.
As long as standard terminologies and taxonomies are used, data can be reported and acted upon at different levels of granularity. Coarser classification is necessary with the smaller collections available at the local level, but much finer granularity is possible when analyzing data from a large number of institutions. The strength of large-scale collections is that rare events can be well characterized.
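One way to picture reporting at different levels of granularity is to use hierarchical cause codes and truncate them to fewer levels for small local databases. The codes below are invented for illustration; the point is only that a shared taxonomy permits both coarse and fine tallies from the same underlying data.

```python
from collections import Counter

# Hypothetical hierarchical root-cause codes, e.g. "H.K.1" =
# human factor -> knowledge-based -> subtype 1.
events = ["H.K.1", "H.K.2", "H.R.1", "T.E.1", "H.K.1", "O.P.3"]

def tally(codes, depth):
    """Count root causes truncated to the first `depth` hierarchy levels."""
    return Counter(".".join(code.split(".")[:depth]) for code in codes)

local_view = tally(events, depth=1)     # coarse classes for a small local database
national_view = tally(events, depth=3)  # full codes for a pooled national database
```

A single department might trust only the top-level split (human vs. technical vs. organizational factors), while a national aggregate has enough volume to characterize rare, fully specified codes.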
IMPLEMENTATION AND OPERATIONAL CONSIDERATIONS
An overview of systems for the collection of human performance data in industry (Lucas, 1987) identifies five practical aspects that contribute significantly to such a system's success or failure and must be addressed when defining data standards:
- The nature of the information collected—It is obvious from arguments presented earlier in this chapter that descriptive reports are not sufficient; a causal analysis should be possible as well. A free-text description of an event will always be provided, sometimes guided by a standard set of questions (e.g., what the reporter was doing at the time, whether he/she was alone or with colleagues, what happened next, how the reporter reacted, whether there was a full recovery, what improvements the reporter would suggest).
- The use of information in the database—There should be regular and appropriate feedback to personnel at all levels. It should be easy to generate summary statistics and clear examples from the database and to identify specific error reduction and recovery promotion strategies that can be proposed to management.
- The level of help provided for collecting and analyzing the data—Analyst aids should be provided in the form of interview questions, flow charts, software, and the like.
- The nature of the organization of the reporting scheme—A local reporting system maintains close ties with reporters of events, but a central system may be more efficient in certain situations, for example, if there is widespread trust in the operation of the near-miss system. Probably for all near-miss programs, voluntary reporting is to be preferred over mandatory. Only in the case of certain well-defined, near-catastrophic events should there be a legal obligation to report.
- Whether the scheme is acceptable to all personnel—All of the above considerations should lead to a feeling of shared ownership. Whether the data are best gathered by a well-known colleague (most commonly in a local system) or by an unknown outsider (usually in a more central system) again depends on the specific situation. Everyone involved should at least be familiarized with the purpose and background of the reporting scheme.
Problems of Data Collection
The following specific problems involved in data collection (Lucas, 1987) must be addressed to achieve a successful near-miss system:
- Action oriented—a tendency to focus on what rather than why.
- Event focused—analyzing individual incidents rather than looking for general patterns of causes in a large database. The result is anecdotal reporting systems.
- Consequence driven—making the amount of attention and the resources devoted to investigation directly proportional to the severity of the outcome.
- Technical myopia—a bias toward hardware rather than human failures.
- Variable quality—both within and between reporting systems, leading to incomparable investigation methods and results.
Key Issues: Willingness to Report, Trust, and Acceptance
Although the above points stem from experiences in (high-tech) industries and date from 1987, by and large they still hold today, and for health care as well. Here we focus on those aspects most relevant to the key issues in near-miss systems for health care—willingness to report, trust, and acceptance:
- Optimal input, in terms of both quantity and quality, may be facilitated by providing multiple channels for reporting, including forms, computer linkup, and telephone; at multiple locations, including the nurses' station, the doctors' meeting room, the patient's bedside, and home; by multiple groups, not just medical staff but also lab technicians, administrative employees, patients themselves, and their relatives/visitors; and at all times during the day/shift.
- The reporting threshold (i.e., the difficulty and effort involved in making a near-miss report) should be minimal. A simple form should be used with just a few questions (who is reporting, how he/she can be reached for further information, what happened, why the reporter thinks it happened this way, how bad the outcome could have been if recovery had not occurred), taking not more than a few minutes to complete.
- The opportunity, importance, and procedures of contributing to patient safety by voluntary reporting should be well known to all target groups. To this end, substantial investments must be made in publicizing, explaining, and discussing these issues before the formal launching of the near-miss system (i.e., opening of the reporting channels).
- Especially important is clear, continued, visible support by top management. Managers should be open and consistent in their communication about the importance, use, and accessibility of the data and their commitment to actually using the recommendations from the database analysis to choose, justify, and implement focused actions aimed at improving local performance on patient safety.
- Optimum investments in system change depend not only on the scientific aspects of the root-cause analysis method and other tools employed but also on the more practical aspects of their usability and clarity and the training and support provided to the staff designated to carry out these analyses. Variability among individual analysts in identifying and then assigning classification codes to root causes should be checked at regular intervals using interrater reliability trials (Wright, 2002).
- All of the above preparations and aspects should culminate in an optimal stream of frequent, meaningful, convincing, and therefore motivating feedback to all levels of staff and patients. Within 24 hours of a report being made, an acknowledgment of its receipt should be sent to the reporter, thanking him/her for the contribution and stating when (within days) a request for further information required for a complete analysis might be expected. If prioritization requires a full root-cause analysis, the descriptive portion of the analysis (not the classifications themselves) should be fed back to the reporter for validation. After prioritization and analysis at the database level (e.g., every 2 or 3 months), the resulting insights and suggestions for focused action should be fed back, combined with the justified choice by management of where and how to concentrate resources for improvement. These visible changes in the system will serve as a major motivator, as will evaluation of their effects in a later phase.
Keeping It Manageable
Instituting and running a near-miss system should not burden an organization unduly. As noted in Chapter 6, automated surveillance systems, augmented by other detection methods, will increase the number of detected adverse events that might warrant further analysis. Since near misses occur much more frequently than adverse events (Bates et al., 1995b), an organization could become overwhelmed by the number of near misses that might warrant further analysis. Once a near-miss system has been functioning for a while, it is crucial to establish selection criteria that can identify a manageable number of reported events with enough learning potential to warrant full root-cause analyses.
In addition to the criteria mentioned in Chapter 6, likely candidates would include the novelty or surprise factor—new elements not seen before, even considered impossible. Another criterion could be potential fatal consequences or the realization that this event must have been latent in the organization for a long time, passing through many barriers that should have caught it earlier. Also, when an event is one that should have been prevented by a recent focused intervention, one would like to know why it still occurred.
Finally, an organization may have selected a certain type of medical event (such as wrong-side surgery or switching of patients' identities) as a topic of special concern for a limited period; in that case, it might prefer to select all such reports for full analysis until the end of the project.
Integration with Adverse Event Systems
Near misses are regarded as being on the same continuum as adverse events in terms of failure factors, but they differ in the additional information they provide on recovery factors and in their significantly higher frequency of occurrence. The causal continuum assumption implies that the causes of near misses do not differ from those of adverse events, which supports the claim that near misses are truly precursors to later potential adverse events and are therefore valuable to report. The primary focus for improving patient safety is on identifying and eliminating the system faults that can lead to adverse events. This objective can be approached by analyzing both adverse events and near misses to identify the system faults involved.
A direct causal comparison between near misses and adverse events requires shared taxonomies for sets of events in terms of both root causes and context variables. After enough adverse events or other serious medical mishaps have been reported and analyzed to build a statistically sound database for a health care organization, the amount of overlap between the causes of near misses and adverse events should be examined. Doing so will not only clarify the relationship between these two sets of events but also demonstrate clearly and convincingly to all potential reporters the importance of near-miss systems. In some cases, adverse event descriptions also encompass recovery actions that were obviously too late, too weak, or of the wrong type to have been successful. In these cases, such failed opportunities at recovery, or at least damage limitation, can be classified using the taxonomy for near-miss recovery factors and compared with successful recoveries to understand the predictors of success.
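Once near misses and adverse events are coded with the same root-cause taxonomy, the amount of overlap between their cause distributions can be quantified directly. The sketch below uses hypothetical cause codes and counts, and a simple total variation distance as the overlap measure; in practice a formal statistical test (e.g., chi-square) would be applied.

```python
from collections import Counter

# Hypothetical root-cause counts coded with a shared taxonomy.
near_misses = Counter({"HK": 40, "OC": 30, "TE": 20, "HR": 10})
adverse_events = Counter({"HK": 8, "OC": 7, "TE": 3, "HR": 2})

def proportions(counts):
    """Convert raw cause counts to relative frequencies."""
    total = sum(counts.values())
    return {cause: n / total for cause, n in counts.items()}

p_nm = proportions(near_misses)
p_ae = proportions(adverse_events)

# Total variation distance between the two cause distributions:
# 0 means identical distributions, 1 means completely disjoint ones.
causes = set(p_nm) | set(p_ae)
tvd = 0.5 * sum(abs(p_nm.get(c, 0.0) - p_ae.get(c, 0.0)) for c in causes)
```

A small distance for a given organization would be consistent with the causal continuum assumption; a large one would suggest that near misses and adverse events are driven by different factors there.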
GENERAL FRAMEWORK FOR PROCESSING NEAR-MISS REPORTS
Summarizing the main points for designing, implementing, and operating a near-miss system, Table 7-2 uses a seven-module framework to describe what is required in each step of the processing of near-miss reports (Van der Schaaf et al., 1991):
- 1.
Detection—This module contains the registration mechanism, aiming at easy entry of complete (or at least unbiased), valid reports2 of all near-miss situations detectable by employees, patients, and others.
- 2.
Selection—A mature near-miss system will probably generate many reports that duplicate earlier ones, leaving the safety staff to cope with a sizable workload. To maximize the learning process using limited resources, a selection procedure is necessary to filter out the most interesting reports for further analysis in the subsequent modules.
- 3.
Description—Any report selected for further processing should lead to a detailed, complete, neutral description of the course of events and situations resulting in the reported near miss, with appropriate deidentification. These causal elements should be shown in their logical order (what caused what) as well as their chronological sequence (e.g., using causal-tree techniques).
- 4.
Classification—As the most fundamental of causal elements, root causes should each be classified according to a suitable taxonomy. In this way, the fact that every incident usually has multiple causes is fully recognized, and each analyzed near miss thus adds a set of root causes to the database. Severity should also be assessed.
- 5.
Computation—In exceptional cases only (e.g., on first discovering a technical design fault or a new side effect of a drug), immediate action is required. Generally, however, the database is allowed to build up gradually over a certain period, after which a periodic statistical analysis of the entire database, or of its most recent portion, is performed with the aim of identifying patterns of root causes rather than unique, nonrecurring symptoms.
- 6.
Interpretation and implementation—Once the most dominant causes have been identified, a mechanism should be in place that suggests types of interventions that may influence these causes by preventing them in the case of failure factors or promoting them in the case of recovery factors. Management can then select one or more focus areas on the basis of these model-based options for intervention and other dimensions, such as time to effect, cost, and regulatory requirements. The associated interventions can then be implemented.
- 7.
Evaluation—Once the selected interventions have had some time to take effect, they should be monitored for their effectiveness in bringing about the expected change. Subsequent periodic database analyses should be used for this purpose by checking for decreased (for failure factors) or increased (for recovery factors) presence in the near-miss reports generated after implementation. Such system feedback is essential for establishing a learning cycle.
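As one way to make the Description and Classification modules concrete, the following Python sketch represents a causal tree (logical cause-and-effect order) and extracts its leaves as the set of root causes to be classified. The event details and taxonomy codes ("TD", "OM") are hypothetical illustrations, not an accepted scheme.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CausalNode:
    """One element of a causal tree; 'causes' lists what led to this element."""
    description: str
    taxonomy_code: Optional[str] = None          # assigned only to root causes
    causes: list["CausalNode"] = field(default_factory=list)

def root_causes(node):
    """Leaves of the causal tree are the root causes to be classified."""
    if not node.causes:
        return [node]
    return [leaf for child in node.causes for leaf in root_causes(child)]

# Hypothetical near miss: wrong drug almost administered, caught in time.
tree = CausalNode("wrong drug selected", causes=[
    CausalNode("look-alike packaging", taxonomy_code="TD"),      # technical design
    CausalNode("verification step skipped", causes=[
        CausalNode("chronic understaffing", taxonomy_code="OM"), # org. management
    ]),
])
codes = [leaf.taxonomy_code for leaf in root_causes(tree)]
assert codes == ["TD", "OM"]  # each incident adds a *set* of root causes
```

Traversing the tree in this way is what allows each analyzed near miss to add its full set of classified root causes to the database, rather than a single "primary" cause.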
GAPS BETWEEN IDEAL AND CURRENT SYSTEMS
A comparison of the requirements for designing and implementing near-miss systems (as summarized in the seven-module framework presented in Table 7-2) and the actual operational experience with the few existing near-miss systems reveals a number of gaps between ideal and current systems. Given that near-miss reporting and analysis is a new and evolving area, pilot testing of the principles set forth in this chapter is essential. In addition, a solid research program should be undertaken to quantify the benefits and costs of near-miss reporting and analysis. Chapter 5 details the committee's proposals for a research program.
IMPLICATIONS FOR DATA STANDARDS
The following subsections summarize the implications of the above discussion for the development of standards for data related to patient safety (excluding the research outlined in Chapter 5).
Definitions and Models
Clear, workable definitions and models should be formulated for all system and data elements necessary for collecting, analyzing, and learning from near-miss events, as well as for sharing these data and analysis results within and among all levels and domains of the health care system. Care should be taken to ensure maximum overlap between such near-miss standards and those for adverse events. Where possible, tested definitions and models from both within and outside the medical field should be preferred. These definitions and models should reflect the various possible goals of near-miss systems, as well as the potential roles of patients and their relatives.
Taxonomies
Classification systems for root causes (both failure and recovery) and context variables are essential for aggregating and comparing near-miss data. Failure taxonomies should allow a balanced, unbiased analysis of the human, technical, and organizational causes involved in an event. Recovery taxonomies will need further development but should at least distinguish among the detection, diagnosis, and correction phases of the recovery process. Context variables should be shared across domains and levels where possible, yet remain specific enough within a given domain or level to furnish the detail needed for useful feedback and lessons for local improvement.
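These taxonomy requirements might be sketched as simple enumerations; the coding below is illustrative only, following the distinctions named in the text rather than any accepted classification scheme.

```python
from enum import Enum

class FailureDomain(Enum):
    """Failure taxonomy: balanced across the three cause domains."""
    HUMAN = "human"
    TECHNICAL = "technical"
    ORGANIZATIONAL = "organizational"

class RecoveryPhase(Enum):
    """Recovery taxonomy: the three phases of the recovery process."""
    DETECTION = "detection"    # the deviation is noticed
    DIAGNOSIS = "diagnosis"    # its nature and risk are understood
    CORRECTION = "correction"  # action prevents or limits harm

# A classified near miss pairs failure causes with the recovery that
# intercepted them, plus shared context variables (values hypothetical).
classified = {
    "failure_causes": [FailureDomain.TECHNICAL, FailureDomain.ORGANIZATIONAL],
    "recovery_phases": [RecoveryPhase.DETECTION, RecoveryPhase.CORRECTION],
    "context": {"unit": "ICU", "shift": "night"},
}
```

Keeping the failure and context codes identical to those of the adverse event system, with the recovery codes as a near-miss-only extension, is what makes the causal comparison described earlier possible.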
Design and Operation of System Components
Following the seven-module framework outlined above, system design and operation standards should address the following issues:
- Detection—Reporting should be as easy and quick as possible, through multiple channels, for medical staff, patients, and others present. Despite this low reporting threshold, a report cannot be anonymous in this first module, as additional information may be required from the reporter. The report should, however, be strictly confidential at this phase.
- Selection—Predictably large numbers of incoming reports should be evaluated for their learning potential to determine whether root-cause analysis will be worthwhile; criteria for selection are essential to prevent the near-miss system from being flooded and should be specific for the local and national levels.
- Description—A concise description of all relevant elements, from root causes to the reported event, in their chronological and logical (i.e., cause–effect) order demands tree-like techniques. For near-miss data, these techniques should be adapted to describe recovery elements as well as failure elements. After any additional information needed to complete and validate the event description has been furnished, the reporter's name is no longer needed and should be deleted, along with other possible identifiers.
- Classification—Identified failure and recovery factors and context variables require a set of transferable, learnable (and therefore relatively simple) taxonomies based on accepted safety management models and local/domain needs.
- Computation—At higher levels especially, near-miss database structures should allow for large numbers of coded events, easy queries, data mining, and state-of-the-art statistical analysis. At lower local levels, ease of use and preprogrammed recurring analyses for feedback to the reporting community are essential as well.
- Interpretation and implementation—Targeted (dominant) root causes should be linked to suggestions for methods of addressing them; this cause–intervention matrix should be based on accepted safety management models. Management should be supplied with this advice in a form that supports optimal decision making on the allocation of resources to patient safety improvement actions, and implementation of the chosen improvement programs should then be monitored.
- Evaluation—It is essential that the effects of implemented programs be monitored. Monitoring not only allows for the establishment of a learning cycle (whether the right action was taken on that problem) but also provides highly motivating feedback to all (potential) reporters, who can then see for themselves how their contributions to the database help increase patient safety.
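The Computation and Evaluation requirements above can be sketched in a few lines of Python. The report encoding (each report as a set of root-cause codes) and the codes themselves are hypothetical; the point is the shape of the periodic analysis, not a production implementation.

```python
from collections import Counter

def dominant_causes(reports, top_n=3):
    """Computation: most frequent root-cause codes in a reporting window."""
    counts = Counter(code for report in reports for code in report)
    return counts.most_common(top_n)

def evaluate_intervention(before, after, code):
    """Evaluation: per-report rate of a targeted code before and after
    an intervention is implemented."""
    rate = lambda rs: sum(code in r for r in rs) / len(rs)
    return rate(before), rate(after)

# Hypothetical coded reports before/after an intervention targeting "HRC".
before = [{"HRC", "TD"}, {"HRC"}, {"OP"}, {"HRC", "OM"}]
after = [{"TD"}, {"HRC"}, {"OP", "OM"}, {"OM"}]
pre, post = evaluate_intervention(before, after, "HRC")
assert pre > post  # for a failure factor, a decrease is the expected effect
```

For a recovery factor the evaluation criterion reverses: an effective intervention should increase, not decrease, the factor's presence in subsequent reports.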
REFERENCES
- Bates, D. W., and A. A. Gawande. 2003. Improving safety with information technology. N Engl J Med 348 (25):2526–2534. [PubMed: 12815139]
- Bates, D. W., D. L. Boyle, M. B. Vander Vliet, J. Schneider, and L. Leape. 1995a. Relationship between medication errors and adverse drug events. J Gen Intern Med 10(4):199–205. [PubMed: 7790981]
- Bates, D. W., D. J. Cullen, N. Laird, L. A. Petersen, S. D. Small, D. Servi, G. Laffel, B. J. Sweitzer, B. F. Shea, R. Hallisey, M. Vander Vliet, R. Nemeskal, and L. L. Leape. 1995b. Incidence of adverse drug events and potential adverse drug events: Implications for prevention. JAMA 274(1):29–34. [PubMed: 7791255]
- Bates, D. W., L. L. Leape, D. J. Cullen, N. Laird, L. A. Petersen, J. M. Teich, E. Burdick, M. Hickey, S. Kleefield, B. Shea, M. Vander Vliet, and D. L. Seger. 1998. Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. JAMA 280 (15):1311–1316. [PubMed: 9794308]
- Bird, F. E., and R. G. Loftus. 1976. Loss Control Management. Loganville, GA: Institute Press.
- Department of Veterans Affairs. 2002. Veterans Health Administration (VHA) National Patient Safety Improvement Handbook. Washington, DC: U.S. Department of Veterans Affairs.
- Heinrich, H. W. 1931. Industrial Accident Prevention. New York, NY: McGraw-Hill.
- Institute of Medicine. 2000. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press. [PubMed: 25077248]
- Jha, A. K., G. J. Kuperman, J. M. Teich, L. Leape, B. Shea, E. Rittenberg, E. Burdick, D. L. Seger, M. Vander Vliet, and D. W. Bates. 1998. Identifying adverse drug events: Development of a computer-based monitor and comparison with chart review and stimulated voluntary report. J Am Med Inform Assoc 5 (3):305–314. [PMC free article: PMC61304] [PubMed: 9609500]
- Joustra, A. C. 2003. Concept of Dual Pathways . Personal communication to Institute of Medicine's Committee on Data Standards for Patient Safety.
- Kaplan, H. 2002. Alertness to Danger When Rates of Injury Are Low . Personal communication to Institute of Medicine's Committee on Data Standards for Patient Safety.
- Lucas, D. A. 1987. Human performance data collection in industrial systems. In: Human Reliability in Nuclear Power. London: IBC Technical Services.
- Perrow, C. 1984. Normal Accidents: Living with High-Risk Technologies. New York, NY: Basic Books.
- Reason, J. 1990. Human Error. Cambridge, UK: Cambridge University Press.
- Roberts, K. H. 2002. Highly Reliable Systems. Presentation to IOM Committee on Data Standards for Patient Safety on September 23, 2002. Online. Available: http://www.iom.edu/includes/DBFile.asp?id=10916 [accessed February 6, 2004].
- Skiba, R. 1985. Taschenbuch Arbeitssicherheit (Occupational Safety Pocket Book). Bielefeld, Germany: Erich Schmid Verlag.
- Van der Schaaf, T. W. 1992. Near Miss Reporting in the Chemical Process Industry . Eindhoven, Netherlands: Technische Universiteit Eindhoven.
- Van der Schaaf, T. W., and L. Kanse. 2000. Errors and error recovery. Pp. 27–38 in Human Error in System Design and Management (Lecture Notes in Control and Information Sciences, 253), eds. P. F. Elzer, R. H. Kluwe, and B. Boussoffara. London, England: Springer Verlag.
- Van der Schaaf, T. W., D. A. Lucas, and A. R. Hale. 1991. Near Miss Reporting as a Safety Tool. Oxford: Butterworth-Heinemann.
- Wright, L. B. 2002. The Analysis of UK Railway Accidents and Incidents: A Comparison of Their Causal Patterns. Glasgow: University of Strathclyde.
Footnotes
- 1
The first two goals, modeling and trending, are also applicable to adverse event systems (see Chapter 6).
- 2
Computerized detection using a signal approach has not been as effective for detecting near misses as for detecting adverse events (Jha et al., 1998). Increasingly, however, new technologies such as computerized order entry (Bates et al., 1998) and “smart” intravenous pumps that record exactly what an operator tried to do (Bates and Gawande, 2003) will become useful as sources of data on large numbers of near misses.