
Henriksen K, Battles JB, Marks ES, et al., editors. Advances in Patient Safety: From Research to Implementation (Volume 2: Concepts and Methodology). Rockville (MD): Agency for Healthcare Research and Quality (US); 2005 Feb.


Diagnostic Failure: A Cognitive and Affective Approach


Abstract

Diagnosis is the foundation of medicine. Effective treatment cannot begin until an accurate diagnosis has been made. Diagnostic reasoning is a critical aspect of clinical performance. It is vulnerable to a variety of failings, the most prevalent arising through cognitive and affective influences. The impact of diagnostic failure on patient safety does not appear to have been fully recognized. Ideally, all information used in diagnostic reasoning is objective and all thinking is logical and valid, but these conditions are not always met. Two major phenomena that may undermine objectivity and rational thinking are cognitive dispositions to respond (CDRs) and affective dispositions to respond (ADRs) toward the patient. In this report, the determinants and characteristics of the major CDRs and ADRs are reviewed, as are a variety of de-biasing strategies that may mitigate their influence. A retrospective analytical process, the cognitive and affective autopsy, is also described. The purpose of this report is to provide insight into cognitive and affective influences that have resulted in delayed or missed diagnoses.

Background

The turn of the 20th century was a watershed in medical education reform. Students came to be regarded less as “memorizers” and more as “thinkers,” 1 and this signaled the beginning of cognitive science's influence on medical training. That influence has continued to grow. Medicine has assimilated many of the key constructs from the cognitive revolution that occurred in psychology over the last 30 years of the 20th century. Some are incorporated into the most recent of medical education models, the Clinical Presentation (CP) Curriculum, developed at the University of Calgary in the early 1990s. 2 This hierarchically structured schematic approach provides an overarching cognitive strategy for both learning and organizing knowledge, as well as a framework for establishing a differential diagnosis.

With disease, injury, or illness, the diagnosis is often obvious and may involve no more than simple pattern recognition. Where there is uncertainty, however, there is a need for clinical reasoning and decisionmaking; both of these processes show considerable vulnerability to error. Benchmark studies on medical error found diagnostic failure more common in the three disciplines in which diagnostic uncertainty appears to be the highest: internal, family, and emergency medicine. 3–5 Inevitably, these shortcomings are reflected in litigation. Two-thirds of completed claims against family practitioners in the United Kingdom were attributed to delays in diagnosis and treatment, 6 and “failure to diagnose” accounted for approximately half of all closed claims in U.S. emergency rooms. 7 The true incidence of diagnostic failure is difficult to assess for a variety of reasons: (1) illness subsides or is resolved with time, (2) patients seek care elsewhere, (3) patients fail to follow up, and (4) patients die. Unfortunately, the coroner's office does not routinely provide feedback on cases that it sees. 8 The gold standard of diagnosis—the clinical autopsy—has consistently yielded an antemortem misdiagnosis rate of about 40 percent over the past 65 years. 9–11 In approximately one-third of cases, the autopsy would not have taken place had the correct diagnosis been known and appropriate treatment given.

Categorization of diagnostic error

Historically, diagnostic error was seen at an individual level as a failure of the physician. Good diagnostic skills in physicians were equated with clinical acumen and expertise. As Nuland notes, “It is every doctor's measure of his own abilities; it is the most important ingredient in his professional self-image.” 12 Two books published in 1991 placed their major focus on the cognitive “calibration” of the physician as diagnostician. The first, an insightful and underappreciated work by Riegelman, anticipated many of the major themes in the current debate on medical error. 13 He described two basic error types: errors of ignorance and errors of implementation. Errors of ignorance are due to inadequate knowledge, whereas errors of implementation occur during application of knowledge. Riegelman's emphasis was on the latter, and he detailed how multiple steps in this process may be adulterated to cause error. The second, Kassirer and Kopelman's Learning Clinical Reasoning, 14 also drew heavily on the work of the cognitive scientists, applying the new cognitive theory within a rich context of case material and describing a number of the more common cognitive biases in a clinical domain. It was a pivotal work in the development of a “clinical cognition” approach and in the understanding of the cognitive behavior of physicians.

More so perhaps than any other document, the Institute of Medicine's (IOM's) 2000 report, To Err Is Human: Building a Safer Health System, 15 successfully shifted the emphasis from the individual to the system, thus opening the way for a more comprehensive taxonomy of diagnostic error. Graber et al. 16 proposed categorizing diagnostic error into three major groupings: no-fault, systemic, and cognitive. No-fault diagnostic errors occur when the disease is silent, appears in an atypical fashion, or mimics another, more common disease. Systemic errors are those that could be attributed to system failures—and are the most prevalent. It could be argued that virtually all diagnostic error is due to system failure, if one accepts that the training and development of physicians is a systems problem, or that human-factors engineering issues often contribute. 17 However, these arguments detract from the individual characteristics of human performance that clearly influence cognitive and affective behavior and decisionmaking. Affective error may have a widespread and far-reaching impact on decisionmaking and has, thus far, received insufficient attention. Cognitive error includes both errors of ignorance and errors of implementation. Physicians' knowledge deficits are relatively uncommon; it is rare that a physician would be unaware of a particular diagnostic entity except in the most esoteric of cases. Unhappily, it is less rare that a physician will fail to consider a particular diagnosis.

Cognitive dispositions to respond (CDRs)

In many cases, diagnoses are clear and unambiguous (e.g., sprain, fracture, dislocation, otitis, cellulitis, foreign body, and others). Under conditions of uncertainty, however, the manner in which a physician orients to the presenting complaints, symptoms, and signs of a particular patient is determined by a variety of factors (Figure 1). The summation of their effects leads to at least one, but more often an aggregate of, cognitive dispositions to respond (CDRs), 18, 19 reflected in a final common cognitive pathway and action. Many—perhaps the majority—of CDRs will be appropriate and lead to the successful attainment of a diagnosis, but some will not.

Figure 1. CDR determining factors and actions.

The term CDR was developed to describe a mental state that embraced a variety of terms, often with a negative connotation (e.g., heuristics, biases, sanctions, fallacies, and errors), that had been described in the psychology and medicine literature over the past 40 years. Heuristics are mental shortcuts used in clinical decisionmaking that often serve us well, but occasionally fail. Consider, for example, a 60-year-old male arriving in the emergency room (ER) with flank pain and hematuria. The fast and frugal approach to his complaint involves primarily a pattern recognition heuristic that leads to a diagnosis of renal colic. This may well be correct. Occasionally, however, it will be a dissecting abdominal aortic aneurysm, and the heuristic will have catastrophically failed.
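
To make the failure mode concrete, the example above can be reduced to a base-rate calculation. The toy simulation below is my own illustration, not part of the original chapter; the 3 percent prior for aneurysm among such presentations is a hypothetical figure chosen purely for demonstration. It shows how a heuristic that always calls the common pattern scores very well overall while missing every case of the rare, dangerous mimic:

```python
# Toy simulation of a fast-and-frugal pattern-recognition heuristic.
# The 3% prior for dissecting AAA among flank-pain-plus-hematuria
# presentations is a hypothetical value for illustration only.
import random

random.seed(42)
P_AAA = 0.03
N = 100_000

true_dx = ["AAA" if random.random() < P_AAA else "renal colic" for _ in range(N)]
heuristic_dx = ["renal colic"] * N  # always call the common pattern

correct = sum(h == t for h, t in zip(heuristic_dx, true_dx))
aaa_cases = true_dx.count("AAA")
aaa_caught = sum(1 for h, t in zip(heuristic_dx, true_dx)
                 if t == "AAA" and h == "AAA")

print(f"Overall accuracy: {correct / N:.1%}")         # ~97.0%
print(f"AAA cases caught: {aaa_caught}/{aaa_cases}")  # 0 of ~3,000
```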

The word “bias” originally was used in the game of bowls to describe the deliberate weighting on one side of a bowl to give it a tendency to deviate from a straight line. Its use in the context of medical error carries more the connotation of a negative influence or prejudice (e.g., gender bias). Similarly, the term “error” for psychologists is a description of a piece of behavior that can be studied for its own sake in a nonjudgmental fashion, whereas others view it as a negative outcome for which someone should be blamed. “Fallacies” are arguments or beliefs that are misleading, or have deceptive appearances that may give rise to delusion; again, there is the notion that perception has failed.

To describe a clinician's heuristic as having failed, or to attribute bias, fallacy, and even error to their decisionmaking, is negative and often counterproductive. In contrast, the term CDR is not emotionally loaded. Its purpose is not to sanitize error; it is instead a neutral term that encourages a more analytical approach as to why an individual reached a particular decision for a given patient under certain conditions. More than 30 CDRs that may compromise patient safety are described in separate literature. 5 This compendium does not include additional categories of bias that typically arise in the course of diagnostic test evaluation: verification or work-up bias, diagnostic review bias, test review bias, and incorporation bias. 20

Generally, the action that results from CDRs takes one of three forms. The first, Flesh and Blood decisionmaking, occurs when “the cognitive reality departs from the formalized ideal.” James Reason used the expression to describe what often happens in the course of practical decisionmaking. 21 Clinicians do not take to reclining armchairs to cogitate and consider their options at length, but instead respond to omnipresent time pressures and resource availability with expeditious decision and action. To make a Flesh and Blood decision is to think on one's feet and go with clinical intuition. Those who can make good Flesh and Blood decisions use health care resources sparingly.

The Casablanca Strategy 6 is a form of temporizing. The usual suspects are rounded up in the form of bloodwork or other tests, and additional time is gained for events to mature, decline, or otherwise declare themselves. Many illnesses are short-lived and can be reasonably treated with the “tincture of time” that often served our predecessors well. In the acute setting of an ER, the strategy serves a similar purpose; some diagnostic hypotheses will have been formed at the initial assessment, but the level of uncertainty is high and the degree of acuity virtually unknown.

In contrast, in the Formal Work-Up, both the hypothesis context and acuity are clearer. The patient may “look sick” or there may be features of the presentation that mandate a search for significant or serious illness. The first response from the clinician may require an active intervention to stabilize the patient, but there follows a formal investigation with iterative hypothesis testing and refinement toward a definitive diagnosis. 14 The overall decision by the physician regarding which of the three decision modes to use ultimately reflects the quality of calibration. For expedient and safe care, the right decision style must be matched to the appropriate clinical situation. This progression from Flesh and Blood decisionmaking through Formal Work-Up corresponds to the “dynamic task” theory of Hammond. 22 He proposed that cognitive activity lies on a continuum from intuition to calculation, and that the nature of the task should determine the type of cognitive activity. Good calibration of judgment occurs when the task characteristics are appropriately matched to the cognitive activity.
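
Hammond's matching idea can be sketched schematically. The snippet below is my reading of the three modes described above, not a validated triage rule; the numeric thresholds are hypothetical placeholders:

```python
# Schematic sketch of calibration: matching the decision mode to the
# task characteristics. Thresholds are hypothetical, for illustration.
def decision_mode(uncertainty: float, acuity: float) -> str:
    """Both inputs on a 0-1 scale; returns one of the three modes."""
    if uncertainty < 0.3:
        return "Flesh and Blood"  # clear pattern: think on one's feet
    if acuity < 0.5:
        return "Casablanca"       # temporize: round up the usual suspects
    return "Formal Work-Up"       # sick patient: iterative hypothesis testing

print(decision_mode(uncertainty=0.2, acuity=0.3))  # Flesh and Blood
print(decision_mode(uncertainty=0.8, acuity=0.2))  # Casablanca
print(decision_mode(uncertainty=0.8, acuity=0.9))  # Formal Work-Up
```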

Determinants of CDRs

There are a number of determinants of CDRs (Figure 1). They encompass most of the factors that lead to a particular CDR or, as is more likely, an aggregate of CDRs in clinical decisionmaking.

Ambient conditions

To view clinical decisionmaking through the eyes of the readers of medical journals is to see the medical setting as controlled, calm, objective, and predictable. Yet, clinical decisions are rarely made in such a context; the cognitive reality of the clinical workplace inevitably departs significantly from these formal, theoretical ideals. Nor does the legal process take into account the ambient conditions under which critical decisions might have been made. Instead, decisions are dissected in arbitrary, dispassionate isolation from their natural setting. It would be largely irrelevant for an emergency physician to plead extenuation on the basis that mayhem prevailed in the ER at the time the adverse event occurred. Yet this may well have been the case.

In industrial settings, SATO phenomena refer to the well-known trade-off (TO) between speed (S) and accuracy (A): 23 the faster the production line goes, the poorer the product quality. In medicine, the term RACQITO has been used to describe a similar trade-off, between resource availability (RA) and continuous quality improvement (CQI). 24 Resource availability depends upon the clinical setting. In the ER it refers to bed availability, adequate staff, acceptable turnaround times for reports, and many other factors, including the cognitive resources available to maintain quality in decisionmaking.
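
The shape of such a trade-off can be sketched quantitatively. The toy model below is my own illustration; the exponential form, the 2 percent error floor, the 30 percent excess-error ceiling, and the 10-minute time constant are assumptions, not measured values. It captures only the qualitative SATO/RACQITO claim that error rates climb as the time, and hence the cognitive resources, available per patient shrink:

```python
# Toy SATO/RACQITO curve: error rate decays toward a floor as more time
# (a proxy for available cognitive resources) is spent per patient.
# All parameters are hypothetical, chosen for illustration only.
import math

def error_rate(minutes_per_patient: float,
               floor: float = 0.02,      # hypothetical irreducible error rate
               excess: float = 0.30,     # hypothetical max excess under pressure
               scale: float = 10.0) -> float:  # hypothetical time constant (min)
    """Error rate as a decreasing function of time spent per patient."""
    return floor + excess * math.exp(-minutes_per_patient / scale)

for m in (2, 5, 10, 20, 40):
    print(f"{m:>2} min/patient -> error rate ~ {error_rate(m):.1%}")
```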

Past experience

Experience is an important determinant of the caliber of decisionmaking. In general, accumulated experience brings with it expertise. It is estimated that becoming an expert in any field takes approximately 10 years. 25 Past experience with a particular clinical problem may also determine the clinician's reaction to it. The availability heuristic 26 predicts that recent experience with a particular diagnosis increases the likelihood that it will be made again, whereas if it has not been seen for some time (“out of sight, out of mind” 21 ), the likelihood decreases. An exception may occur if a physician experienced a particularly bad outcome with a missed diagnosis. The emotional valence that charges this experience may ensure that the diagnosis will never be forgotten.
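
One simple way to model this prediction is a recency-weighted frequency estimate, in which recent encounters inflate a diagnosis's judged likelihood well above its true base rate. This sketch is mine, not a model proposed by the chapter or by Tversky and Kahneman; the decay weight and case history are invented for illustration:

```python
# Sketch of the availability heuristic as a recency-weighted frequency
# estimate. The decay constant and case history are hypothetical.
def availability_estimate(history: list, target: str,
                          decay: float = 0.7) -> float:
    """Recency-weighted judged frequency of `target` in a case history,
    most recent case last. `decay` < 1 privileges recent cases."""
    n = len(history)
    weights = [decay ** (n - 1 - i) for i in range(n)]
    hits = sum(w for w, dx in zip(weights, history) if dx == target)
    return hits / sum(weights)

history = ["renal colic"] * 9 + ["AAA"]       # an AAA seen just yesterday
print(availability_estimate(history, "AAA"))  # ~0.31, vs. a 3% base rate
```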

Impact of affective state

Physicians and caregivers are just as vulnerable to mood alterations as anyone else, yet the impact of affective state on decisionmaking has gained little attention to date. The full range of affective disorders would be expected, as would various emotional dysregulatory influences that might uniquely affect a caregiving role. These various influences may be collectively referred to as affective dispositions to respond (ADRs). They may compromise cognitive control and therefore the efficacy of clinical decisionmaking (Table 1).

Table 1. Sources of affective dispositions to respond (ADRs)†.

Table 1

Sources of affective dispositions to respond (ADRs)†.

Affect can be influenced by a variety of ambient, chronobiological, and other variables. Changing conditions and interpersonal conflict in the workplace may lead to temporary or ongoing changes in affective state. Stress and fatigue are well known to produce irritability, intolerance, and other mood changes that will also exert an influence on judgment. Temperament, activity level, motivation, and other variables that may affect clinical performance are influenced by the diurnal phase in some individuals more than others. 27, 28 Premenopausal women may be subject to infradian variations in mood. 29 Seasonal influences, such as the absence of sunlight, may exert a negative influence on affective state. 30 The circadian dys-synchronicity that results from shiftwork, common among health care providers, is responsible for a variety of negative effects, 31 and “shiftwork intolerance syndrome” is associated with depression-like symptoms. 32 Disruptions in social life, such as family problems, marital discord, divorce, loss of a loved one, ill health, and others would all be expected to result in temporary or prolonged disturbances of affect.

Affective influences on decisionmaking

It is important to note, at the outset, that affect is inseparable from thinking. 33 Emotional intelligence is an integral part of our ability to process information meaningfully and make good decisions. 34 Correspondingly, there are several affective “biases” and factors that influence decisionmaking. Some originate from general human dispositions, e.g., the desire for good outcomes and the avoidance of misery; others are more specific and are determined by an individual patient, or by the characteristics of groups of patients as occur in stereotyping.

The chagrin factor, 35 outcome bias, 36 and value bias. 37, 38 These refer to ways in which physicians' preferences, such as the anticipated “goodness” of the outcome (what they hope will happen), or anticipated failure (what they fear might happen), may all influence the particular decision that is made.

Patient factors. One of the most powerful patient factors that might influence a physician's decisionmaking is countertransference, a type of affective bias. It originally referred to the feelings a therapist might develop toward a patient, but now is used in a much broader context, extending to social cognition. 39 Other patient behaviors may elicit positive or negative responses from a physician that need not necessarily have their basis in countertransference, 40, 41 and the particular response may itself be influenced by the gender of the physician. 41 A variety of patient characteristics, such as gender, 42 race, 43 and others, 41 also have been demonstrated to influence clinical decisionmaking and care.

Team factors. In recent years, an increasing emphasis has been placed on team approaches to health care. While responsibility for the final medical diagnosis usually rests with the physician, there is a greater awareness of the various contributions to the process that may be made by other members of the team (e.g., nurses, EMTs). This can present significant advantages if the team is working well together; judicious delegation of the cognitive workload may reduce the cognitive burden on particular individuals. The MedTeams™ approach transferred lessons learned in aviation Crew Resource Management (CRM) to emergency medicine to show that many teamwork errors, including diagnostic errors, could be mitigated through appropriate teamwork training. 44 This may result in a distributed “shared” cognition that might be more effective than that of an individual working in isolation.

A downside, however, is that if the team is not working well, an already vulnerable process may be exacerbated. Coercive pressures may drive a particular agenda that is not congruent with good decisionmaking. For example, team pressures to lower door-to-needle times for thrombolytic therapy in the context of suspected acute myocardial infarct may inadvertently lower the threshold of safety if other diagnoses (aortic dissection, pericarditis) have not received full consideration. Other factors—low morale and normalization of deviance, 45 the accumulated tolerance of unsafe conditions that develops over time and ultimately compromises patient safety—may reduce team performance. Finally, the phenomenon of “groupthink” can lead to a variety of behaviors in team members that are counterproductive to good decisionmaking. 46

Violation-producing factors. Following the publication of the IOM's report, 15 there was a palpable shift away from blaming individuals for errors toward looking at the system in which the individuals worked. It seems, in the process, that virtually all attention has been diverted from human behavior. Yet, human behavior is infinitely variable, and it is clear that if all systemic variables were held constant, different decisions would be made by different individuals. Work in other domains shows that, aside from willful negligence, drug addiction, and malevolent or other egregious acts, some individuals may engage in a variety of violation-producing behaviors; some determinants and behaviors are listed in Table 2.

Table 2. Violation-producing factors and behaviors*.

Gender and psychological differences may underlie some safety violations; males are more likely to break safety rules 47 and engage in more risk-taking behavior than females. 50 Risk attitudes appear to correlate with critical aspects of decisionmaking in the ER. 51 “Normalization of deviance” 45 and authority gradients 52 —both within and between professional groups—may have adverse effects on team function and lead to poor outcomes. Human characteristics, such as personality variables and others that may be hard-wired, 19, 53 can also influence clinical decisionmaking and patient outcomes. It is likely that such violation-producing characteristics of physicians and nurses may be exacerbated under RACQITO conditions.

Fatigue, sleep deprivation, and sleep debt. The brain functions at its best when it is well rested. Fatigue may occur independently of sleep deprivation and sleep debt, but these invariably lead to fatigue. Optimal perception, attention, vigilance, memory, and reasoning all depend on being well rested and having an adequate amount of sleep. 54 Yet, long hours of work, sleep deprivation, and an accumulated sleep debt are common in the medical workplace. Generally, the longer people stay awake, the sleepier they become, and the more their cognitive and psychomotor performance is impaired. 54 Clinical decisionmaking reaches its nadir at about 3–4 a.m.; cognitive performance at this time is equivalent to being legally intoxicated. 55 Landrigan et al. recently described the association between sleep deprivation and medical error; 56 not surprisingly, sleep deprivation was found to have a far greater impact on diagnostic errors than on other types of error usually associated with less knowledge-based thinking (medication error, procedural error, and others). Finally, the disruption of circadian rhythms that results from shiftwork may lead to mood disturbances, which, in turn, may further affect the calibration of decisionmaking. Many of these effects are exacerbated by aging. 57
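
The scale of this impairment can be put in rough numbers. Dawson and Reid's finding (reference 55) is commonly summarized as roughly 17 hours of sustained wakefulness producing impairment comparable to a 0.05 percent blood alcohol concentration (BAC), and 24 hours comparable to 0.10 percent. The linear interpolation below is my simplification of that summary, not their published model:

```python
# Hedged worked example: BAC-equivalent impairment interpolated between
# the commonly cited anchors from Dawson & Reid (17 h -> 0.05% BAC,
# 24 h -> 0.10% BAC). The linear form is my simplification.
def bac_equivalent(hours_awake: float) -> float:
    """Rough BAC-equivalent impairment for a given time awake."""
    if hours_awake <= 17:
        return max(0.0, 0.05 * hours_awake / 17)  # crude scaling below 17 h
    return 0.05 + (hours_awake - 17) * (0.10 - 0.05) / (24 - 17)

# A clinician awake since 7 a.m. is ~20 h into wakefulness at 3 a.m.:
print(f"~{bac_equivalent(20.0):.3f}% BAC-equivalent")  # ~0.071%
```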

Strategies to overcome CDRs

Any purposeful interaction with a patient will lead a physician toward some aggregate cognitive disposition to respond, and usually toward the appropriate diagnosis; the emphasis here is on those dispositions that do not. It should follow that if we are able to delineate the principal factors that lead to the CDRs that may cause diagnostic failure, there should be strategies to avoid them. Historically, there has been considerable resistance to the idea that such biases can be overcome. 19, 58 Why such resistance has prevailed is unclear. There seems to be a sense that once a particular cognitive disposition has been established, it cannot be easily undone. Yet there are many examples in everyday life of people overcoming such biases and redirecting their thinking. Indeed, the development of expertise in medicine is, in part, associated with an increasing ability to avoid the pitfalls of the past. Clearly, we do learn from experience, and there is abundant evidence of the plasticity of reasoning skills. Table 3 lists a variety of strategies that have been proposed to overcome CDRs associated with adverse outcomes.

Table 3. Cognitive and affective de-biasing strategies to reduce diagnostic error.

The top half of the table includes particular cognitive strategies that can undo biases—the “cognitive pills for cognitive ills.” 59 Principal among them is the strategy of metacognition. This involves being able to step back from the immediate pull of the situation to momentarily reflect on what is going on. In human development, it is a feature of mental maturation, 60 and once adulthood is attained, it is the ability to disengage, reflect, and reconsider before action. The metacognitive step allows the application of specific cognitive and affective forcing strategies to avoid failure; 19, 24 simulation is a powerful technique for such cognitive training. The lower half of the table includes specific strategies to minimize or eliminate cognitive and affective error by changing systemic features: decreased reliance on memory, minimization of time pressure, and task simplification all reduce cognitive loading, while clear accountability and improved feedback improve the calibration of cognition.

Discussion

The cognitive and affective autopsy

When diagnoses fail, a physician's initial reaction is often one of surprise or shock, coupled with concern about the patient's well-being. This may then be followed by guilt, soul-searching, self-recrimination, and, in some cases, despair. The physician may become the second victim. 61 These reactions are unpropitious and maladaptive in those for whom a major goal is lifelong learning. An alternative approach is to perform a cognitive and affective autopsy, a form of cognitive and affective root cause analysis, as soon as possible after the event (Table 4). The physician should perform this autopsy when well rested and after having an adequate amount of sleep. There is usually a rapid decay of detail—especially when the event has been an unpleasant experience—and, therefore, it is important to go through a process of active recall of every possible aspect of the case, however trivial it might appear. It is also important not to distort one's own perception of events or to deny one's feelings. Therefore, one's own recollections should be recorded first, followed by those of any others involved. Attention should be paid to ambient conditions and any other significant extraneous factors. It is important to review and understand the CDRs and ADRs, and to examine and apply them in the context of the particular case.

Table 4. Cognitive and affective autopsy.

The process appears to work best when conducted as close to the occurrence as possible, while the case is still “hot” and before hindsight bias and selective reminiscence have gained momentum. The objective is to complete an informative account, subjectively rich in emotions and impressions, but also with as much objective detail as possible. 62 However, cold cases may also be autopsied, provided there is sufficient and accurate detail. 63 The principal advantage of the exercise is to provide meaningful, realistic self-feedback so that insight will develop, learning will occur, and clinical cognition will change. This should reduce the likelihood that the event will be repeated.

A further opportunity is provided by the morbidity and mortality (M&M) conference. Originally designed “to provide a forum for physicians to confess their mistakes and help their colleagues avoid making similar ones,” these sessions may instead show group parallels to the individual response to failure—avoidance, blaming, and denial. 64 In a recent study, only about a quarter of sessions actually dealt directly with failure, 65 and it is rare to see insightful analysis of cognitive style—and even more rare to hear any reference to the clinician's affective state—at M&M conferences. This should not be surprising, as, historically, physicians have not received specific training in this regard. Yet, there are very few cases in which cognition and affect are not important factors in diagnosis and management, so this needs to be more explicitly recognized. M&M conferences should be restructured to promote discussion and analysis of thinking failures in a cognitive autopsy-style approach.

The cognitive and affective autopsy also has potential as a teaching tool 62 and may be an important adjunct to clinical bedside teaching. It should be introduced into undergraduate and postgraduate curricula. In the past 3 years, both the undergraduate and postgraduate programs at Dalhousie University have introduced new courses on patient safety that use cognitive autopsy of case material, along with instruction in the use of cognitive de-biasing and cognitive forcing strategies; the approach appears to be a popular and effective innovation with both undergraduates and postgraduates. 66 Simulation techniques have been developed to train residents to recognize CDRs that lead to diagnostic failure and to apply cognitive forcing strategies to avoid them. 67

Conclusions

Although cognitive science has exerted a significant influence on academic medicine over the past 20–30 years, its full clinical impact has yet to be appreciated. Two major initiatives are required. First, practicing clinicians and clinical educators need to be aware of the impact of cognitive and affective influences on diagnostic reasoning. It is important that they understand the main CDRs and ADRs and their determining factors. A thorough knowledge of them will provide a language for describing both thinking and affective failures, and promote insight into those cases in which the diagnostic process has failed. Second, a working familiarity with all CDRs and ADRs must be established and incorporated into clinical teaching. Good diagnosticians are those who have learned, often to their chagrin, how to avoid these cognitive and affective pitfalls.

Acknowledgments

The administrative and secretarial assistance of Sherri Lamont at the Dartmouth General Hospital is gratefully acknowledged. The work was supported by a grant (#P20HS11592-02) from the Agency for Healthcare Research and Quality, and by a Clinical Research Fellowship from Dalhousie University, Halifax, Nova Scotia, Canada.

References

1. Papa FJ, Harasym PH. Medical curriculum reform in North America, 1765 to the present: a cognitive science perspective. Acad Med. 1999;74:154–64. [PubMed: 10065057]
2. Mandin H, Harasym PH, Eagle C, et al. Developing a “clinical presentation” curriculum at the University of Calgary. Acad Med. 1995;70:186–93. [PubMed: 7873005]
3. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients: results of the Harvard Medical Practice Study 1. N Engl J Med. 1991;324:370–6. [PubMed: 1987460]
4. Wilson RM, Runciman WB, Gibberd RW, et al. The quality in Australian health care study. Med J Aust. 1995;163:458–71. [PubMed: 7476634]
5. Thomas EJ, Studdert DM, Burstin HR, et al. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care. 2000;38:261–71. [PubMed: 10718351]
6. Green S, Lee R, Moss J. Problems in general practice: delays in diagnosis. Manchester, UK: Medical Defence Union; 1998.
7. Data from the U.S. General Accounting Office, the Ohio Hospital Association, and the St. Paul Insurance Company. http://hookman.com/mp9807.htm. St. Paul, MN: 1998.
8. Croskerry P. The feedback sanction. Acad Emerg Med. 2000;7:1232–8. [PubMed: 11073471]
9. Burton EC, Troxclair DA, Newman WP. Autopsy diagnoses and malignant neoplasms: how often are clinical diagnoses incorrect? JAMA. 1998;280:1245–8. [PubMed: 9786374]
10. Nichols L, Aronica P, Babe C. Are autopsies obsolete? Am J Clin Pathol. 1998;110:210–8. [PubMed: 9704620]
11. Zarbo RJ, Baker PB, Howanitz PJ. The autopsy as a performance measurement tool. Arch Pathol Lab Med. 1999;123:191–8. [PubMed: 10086506]
12. Nuland SB. How we die: reflections on life's final chapter. New York: Alfred A. Knopf; 1994.
13. Riegelman RK. Minimizing medical mistakes: the art of medical decisionmaking. Boston: Little, Brown and Company; 1991.
14. Kassirer JP, Kopelman RI. Learning clinical reasoning. Baltimore: Williams and Wilkins; 1991.
15. Kohn LT, Corrigan JM, Donaldson MS, editors. To err is human: building a safer health system. A report of the Committee on Quality of Health Care in America, Institute of Medicine. Washington, DC: National Academy Press; 2000. [PubMed: 25077248]
16. Graber M, Gordon R, Franklin N. Reducing diagnostic errors in medicine: what's the goal? Acad Med. 2002;77:981–92. [PubMed: 12377672]
17. Vicente K. The human factor: revolutionizing the way people live with technology. Toronto: Alfred A. Knopf Canada; 2003.
18. Croskerry P. Achieving quality in clinical decisionmaking: cognitive strategies and detection of bias. Acad Emerg Med. 2002;9:1184–204. [PubMed: 12414468]
19. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78:775–80. [PubMed: 12915363]
20. Dawson NV. Physician judgment in clinical settings: methodological influences and cognitive performance. Clin Chem. 1993;39:1468–80. [PubMed: 8330409]
21. Reason J. Human error. Cambridge, UK: Cambridge University Press; 1990.
22. Hammond KR. Judgment and decision making in dynamic tasks. Information and Decision Technologies. 1988;14:3–14.
23. Foley P, Murray N. Sensation, perception, and systems design. In: Salvendy G, editor. Handbook of human factors and ergonomics. New York: John Wiley & Sons; 1987.
24. Croskerry P. The cognitive imperative: thinking about how we think. Acad Emerg Med. 2000;7:1223–31. [PubMed: 11073470]
25. Klein G. Sources of power: how people make decisions. Cambridge, MA: The MIT Press; 1998.
26. Tversky A, Kahneman D. Availability: a heuristic for judging frequency and probability. Cognitive Psychology. 1973;5:207–32.
27. Torsvall L, Akerstedt T. A diurnal type scale: construction, consistency, and validation in shift work. Scand J Work Environ Health. 1980;6:283–90. [PubMed: 7195066]
28. Horne JA, Ostberg O. A self-assessment questionnaire to determine morningness-eveningness in human circadian rhythms. International Journal of Chronobiology. 1976;4:97–110. [PubMed: 1027738]
29. Lamberg L. Bodyrhythms: chronobiology and peak performance. New York: William Morrow and Co.; 1994.
30. Kasper S, Wehr TA, Bartko JJ, et al. Epidemiological findings of seasonal changes in mood and behavior. Arch Gen Psychiatry. 1989;46:823–33. [PubMed: 2789026]
31. Gordon N, Cleary PD, Parker CE, et al. The prevalence and health impact of shiftwork. Am J Public Health. 1986;76:1225–8. [PMC free article: PMC1646676] [PubMed: 3752325]
32. Moore-Ede MC, Richardson GS. Medical implications of shift-work. Annu Rev Med. 1985;36:607–17. [PubMed: 3888066]
33. Damasio AR. Descartes' error: emotion, reason and the human brain. New York: G. P. Putnam's Sons; 1994.
34. Salovey P, Mayer JD. Emotional intelligence. Imagination, Cognition, and Personality. 1990;9:185–211.
35. Feinstein AR. The “chagrin factor” and qualitative decision analysis. Arch Intern Med. 1985;145:1257–9. [PubMed: 4015276]
36. Baron J, Hershey JC. Outcome bias in decision evaluation. J Pers Soc Psychol. 1988;54:569–79. [PubMed: 3367280]
37. Wallsten TS. Physician and medical student bias in evaluating diagnostic information. Med Decis Making. 1981;1:145–64. [PubMed: 7052407]
38. Poses RM, Cebul RD, Collins M, et al. The accuracy of experienced physicians' probability estimates for patients with sore throats. JAMA. 1985;254:925–9. [PubMed: 3894705]
39. Glassman NS, Andersen SM. Transference in social cognition: persistence and exacerbation of significant-other-based inferences over time. Cognitive Therapy and Research. 1999;23:75–91.
40. Groves JE. Taking care of the hateful patient. N Engl J Med. 1978;298:883–7. [PubMed: 634331]
41. Walling A, Montello M, Moser SE, et al. Which patients are most challenging for second-year medical students? Fam Med. 2004;36:710–4. [PubMed: 15531985]
42. Raine R. Bias measuring bias. J Health Serv Res Policy. 2002;7:65–7. [PubMed: 11822263]
43. Schulman KA, Berlin JA, Harless W, et al. The effect of race and sex on physicians' recommendations for cardiac catheterization. N Engl J Med. 1999;340:618–26. [PubMed: 10029647]
44. Risser DT, Simon R, Rice MM, et al. A structured teamwork system to reduce clinical errors. In: Spath PL, editor. Error reduction in health care. San Francisco: Jossey-Bass; 1999. pp. 235–78.
45. Vaughan D. The Challenger launch decision: risky technology, culture and deviance. Chicago: University of Chicago Press; 1996.
46. Janis IL, Mann L. Decision making: a psychological analysis of conflict, choice, and commitment. New York: Free Press; 1977.
47. Williams JC. Assessing and reducing the likelihood of violation behaviour—a preliminary investigation. Proceedings of an International Conference on the Commercial & Operational Benefits of Probabilistic Safety Assessments. Institute of Nuclear Engineers; October 1997; Edinburgh, Scotland.
48. Reason J. Managing the risks of organizational accidents. Brookfield, VT: Ashgate Publishing; 1997.
49. Croskerry P, Wears RL. Safety errors in emergency medicine. In: Markovchick VJ, Pons PT, editors. Emergency medicine secrets. 3rd ed. Philadelphia: Hanley and Belfus; 2003.
50. Byrnes JP, Miller DC, Schafer WD. Gender differences in risk taking: a meta-analysis. Psychological Bulletin. 1999;125:367–83.
51. Pearson SD, Goldman L, Orav EJ, et al. Triage decisions for emergency department patients with chest pain: do physicians' risk attitudes make the difference? J Gen Intern Med. 1995;10:557–64. [PubMed: 8576772]
52. Cosby KS, Croskerry P. Authority gradients in medical error. Acad Emerg Med. 2004;11:1341–5. [PubMed: 15576526]
53. Croskerry P. When diagnoses fail: new insights, old thinking. Can J CME. 2003 Nov:51–7.
54. Bonnet MH. Sleep deprivation. In: Kryger M, Roth T, Dement WC, editors. Principles and practice of sleep medicine. Philadelphia: Saunders; 2000. pp. 53–71.
55. Dawson D, Reid K. Fatigue, alcohol, and performance impairment. Nature. 1997;388:235. [PubMed: 9230429]
56. Landrigan CP, Rothschild JM, Cronin JW, et al. Effect of reducing interns' work hours on serious medical errors in intensive care units. N Engl J Med. 2004;351:1838–48. [PubMed: 15509817]
57. Oginska EI, Pokorski J, Oginska A. Gender, ageing, and shiftwork tolerance. Ergonomics. 1993;36(1–3):161–8. [PubMed: 8440214]
58. Croskerry P. Cognitive forcing strategies in clinical decision making. Ann Emerg Med. 2003;41:110–20. [PubMed: 12514691]
59. Keren G. Cognitive aids and debiasing methods: can cognitive pills cure cognitive ills? In: Caverni JP, Fabre JM, Gonzales M, editors. Cognitive biases. New York: Elsevier; 1990. pp. 523–52.
60. Gleitman H. Some trends in the study of cognition. In: Koch S, Leary DE, editors. A century of psychology as a science. New York: McGraw-Hill; 1985.
61. Wu AW. Medical error: the second victim. BMJ. 2000;320:726–7. [PMC free article: PMC1117748] [PubMed: 10720336]
62. Chisholm CD, Croskerry P. A case study in medical error: the use of the portfolio entry. Acad Emerg Med. 2004;11:388–92. [PubMed: 15064214]
63. Croskerry P. Achilles heels of the ED: delayed or missed diagnoses. ED Legal Letter. 2003;14:109–20.
64. Wachter RM, Shojania KG. Internal bleeding: the truth behind America's terrifying epidemic of medical mistakes. New York: Rugged Land; 2004.
65. Pierluissi E, Fischer MA, Campbell AR, et al. Are medical errors discussed in morbidity and mortality conferences? JAMA. 2003;290:2838–42. [PubMed: 14657068]
66. Croskerry P. Cognitive forcing strategies in emergency medicine. Emerg Med J. 2002;19(Suppl 1):A9.
67. Bond WF, Deitrick LM, Arnold DC, et al. Using simulation to instruct emergency medicine residents in cognitive forcing strategies. Acad Med. 2004;79:438–46. [PubMed: 15107283]

Footnotes

5

Literature is available upon request from the author. See “Author affiliations.”

6

The term is taken from the dialogue in the closing scene of the movie “Casablanca,” when the chief of police gives an order to “round up the usual suspects.” It was first used by the author and staff at the Emergency Department of Dartmouth General Hospital to describe the bloodwork required for various work-ups: Casablanca 1 referred to a small panel, and Casablanca 2 to a large panel of usual suspects.
