
Henriksen K, Battles JB, Marks ES, et al., editors. Advances in Patient Safety: From Research to Implementation (Volume 2: Concepts and Methodology). Rockville (MD): Agency for Healthcare Research and Quality (US); 2005 Feb.


Diagnosing Diagnosis Errors: Lessons from a Multi-institutional Collaborative Project


Abstract

Background: Diagnosis errors are frequent and important, but they represent an underemphasized and understudied area of patient safety. Diagnosis errors are challenging to detect and dissect. It is often difficult to agree whether an error has occurred, and even harder to determine with certainty its causes and consequences. The authors applied four safety paradigms: (1) diagnosis as part of a system, (2) less reliance on human memory, (3) the need for “breathing space” to reflect and discuss, and (4) multidisciplinary perspectives and collaboration.

Methods: The authors reviewed the literature on diagnosis errors and developed a taxonomy delineating the stages of the diagnostic process: (1) access and presentation, (2) history taking/collection, (3) the physical exam, (4) testing, (5) assessment, (6) referral, and (7) followup. The taxonomy identifies where in the diagnostic process failures occur. The authors used this approach to analyze diagnosis errors collected over a 3-year period of weekly case conferences and through a survey of physicians.

Results: The authors summarize the challenges encountered in their review of diagnosis error cases, presenting lessons learned through four prototypical cases. A recurring issue is sorting out the relationships among errors in the diagnostic process, delay and misdiagnosis, and adverse patient outcomes. To help understand these relationships, the authors present a model that identifies four key challenges in assessing potential diagnosis error cases: (1) uncertainties about diagnosis and findings, (2) the relationship between diagnosis failure and adverse outcomes, (3) challenges in reconstructing clinician assessment of the patient and clinician actions, and (4) global assessment of improvement opportunities.

Conclusions and recommendations: Finally, the authors catalogue a series of ideas for change. These include: reengineering the followup of abnormal test results; standardizing protocols for reading x-rays and lab tests, particularly in training programs and after hours; identifying “red flag” and “don't miss” diagnoses and situations, and using manual and automated checklists; engaging patients on multiple levels to become “coproducers” of safer medical diagnosis practices; and weaving “safety nets” to mitigate harm from uncertainties and errors in diagnosis. These change ideas need to be tested and implemented to achieve more timely and error-free diagnoses.

Introduction

Diagnosis errors are frequent and important, but they represent an underemphasized and understudied area of patient safety. 1–8 This belief led us to embark on a 3-year project, funded by the Agency for Healthcare Research and Quality (AHRQ), to better understand where and how diagnosis fails and to explore ways to target interventions that might prevent such failures. Diagnosis errors are also challenging to detect and dissect: it is often difficult even to agree whether or not a diagnosis error has occurred.

In this article we describe how we have applied patient safety paradigms (blame-free reporting/reviewing/learning, attention to process and systems, an emphasis on communication and information technology) to better understand diagnosis error. 2, 7–9

We review evidence about the types and importance of diagnosis errors and summarize challenges we have encountered in our review of more than 300 cases of diagnosis error. In the second half of the article, we present lessons learned through analysis of four prototypical cases. We conclude with suggested “change ideas”—interventions for improvement, testing, and future research.

Although much of the patient safety spotlight has focused on medication errors, two recent studies of malpractice claims revealed that diagnosis errors far outnumber medication errors as a cause of claims lodged (26 percent versus 12 percent in one study 10 and 32 percent versus 8 percent in another 11). A Harris poll commissioned by the National Patient Safety Foundation found that one in six people had personally experienced a medical error related to misdiagnosis. 12 Most medical error studies find that 10–30 percent (range = 0.6–56.8 percent) of errors are errors in diagnosis (Table 1). 1–3, 5, 11, 13–21 A recent review of 53 autopsy studies found an average rate of 23.5 percent major missed diagnoses (range = 4.1–49.8 percent). Selected disease-specific studies (Table 2) 6, 22–32 also show that substantial percentages of patients (range = 2.1–61 percent) experienced missed or delayed diagnoses. Thus, while these studies view the problem from varying vantage points using heterogeneous methodologies (some nonsystematic and lacking in standardized definitions), what emerges is compelling evidence for the frequency and impact of diagnosis error and delay.

Table 1. General medical error studies that reported errors in diagnosis.

Table 2. Illustrative disease-specific studies of diagnosis errors.

Of the 93 safety projects funded by AHRQ, only 1 is focused on diagnosis error, and none of the 20 evidence-based AHRQ Patient Safety Indicators directly measures failure to diagnose. 33 Nonetheless, for each of AHRQ's 26 “sentinel complications” (e.g., decubitus ulcer, iatrogenic pneumothorax, postoperative septicemia, accidental puncture/laceration), timely diagnosis can be decisive in determining whether patients experience major adverse outcomes. Hence, while diagnosis error remains more in the shadows than in the spotlight of patient safety, this aspect of clinical medicine is clearly vulnerable to well-documented failures and warrants an examination through the lens of modern patient safety and quality improvement principles.

Traditional and innovative approaches to learning from diagnosis error

Traditionally, the study of missed or incorrect diagnoses had a central role in medical education, research, and quality assurance in the form of the autopsy. 34–36 Other traditional methods of learning about misdiagnosed cases include malpractice litigation, morbidity and mortality (M&M) conferences, and unsystematic feedback from patients, from other providers, or simply from patients' illnesses as they evolved over time. 3, 14, 15, 32 Beyond the negative aspects of being grounded in patients' deaths or malpractice accusations, these historical approaches have other limitations, including:

  • Lack of systematic approaches to surveillance, reporting, and learning from errors, with a nonrandom sample of cases subjected to such review 37, 39
  • Lack of timeliness, with cases often reviewed months or years after the event 38
  • Examinations that rarely dig to the root of problems: not focused on the “Five Whys” 40
  • Postmortems that seldom go beyond the case-at-hand, with minimal linkages to formal quality improvement activities 41
  • Atrophy of the value of even these suboptimal approaches, with autopsy rates in the single digits (in many hospitals, zero), many malpractice experiences sealed by nondisclosure agreements, and shorter hospitalizations limiting opportunities for followup to ultimate diagnosis 34, 41, 42

What is needed to overcome these limitations is not only a more systematic method for examining cases of diagnosis failure, but also a fresh approach. Therefore, our team approached diagnosis error with the following perspectives:

Diagnosis as part of a system. Diagnostic accuracy should be viewed as a system property rather than simply what happens between the doctor's two ears. 2, 43–45 While cognitive issues figure heavily in the diagnostic process, a quote from Don Berwick 46 summarizes a much-needed but often lacking perspective: “Genius diagnosticians make great stories, but they don't make great health care. The idea is to make accuracy reliable, not heroic.”

Less reliance on human memory. Relying on clinicians' memory—to trigger consideration of a particular diagnosis, recall a disease's signs/symptoms/pattern from a textbook or experience—or simply to remember to check on a patient's lab result—is an invitation to variations and failures. This lesson from other error research resonates powerfully with clinicians, who are losing the battle to keep up to date. 9, 45, 47

Need for “space” to allow open reflection and discussion. Transforming an adversarial atmosphere into one conducive to honest reflection is an essential first step. 48, 49 However, an equally important and difficult challenge is creating venues that allow clinicians (and patients) to discuss concerns in an efficient and productive manner. 37 Cases need to be reviewed in sufficient detail to make them “real.” Firsthand clinical information often radically changes our understanding from what the more superficial “first story” suggested. As complex clinical circumstances are better understood, new light is often shed on what at first appeared to be indefensible diagnostic decisions and actions. Unsuspected additional errors also emerge. Equally important is not to get mired in details or in passing judgment (e.g., whether to label a case as a diagnosis error). Instead, it is more valuable to focus on generalizable lessons about how to ensure better care for similar patients in the future. 16

Adopting multidisciplinary perspectives and collaboration. A broad range of skills and vantage points is valuable in understanding the complex diagnostic problems we encountered. We considered input from specialists and primary care physicians to be essential. In addition, specialists in emergency medicine (where many patients first present) offered a vital perspective, both for their diagnostic expertise and for their pivotal interface with system constraints (resource limits mean that not every patient with a confusing diagnosis can be hospitalized). Even more valuable has been the role of non-MDs, including nursing quality specialists, information scientists, and social scientists (a cognitive psychologist and a decision theory specialist), in forging a team to broadly examine diagnosis errors.

Innovative screening approaches. Developing new ways to uncover errors is a priority; we cannot afford to wait for a death, lawsuit, or manual review. Approaches we have been exploring include electronic screening that links pharmacy and lab data (e.g., to screen for abnormal results, such as an elevated thyroid stimulating hormone [TSH] unaddressed by thyroxine therapy), trajectory studies (retrospectively probing delays in a series of cases with a particular diagnosis), and screening for discrepancies between admitting and discharge diagnoses. A related approach is to survey specialists (who are poised to see diagnoses missed in referred patients), primary care physicians (about their own missed diagnoses), or patients themselves (who frequently have stories to share about incorrect diagnoses), in addition to various ad hoc queries and self-reports.
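
As a concrete illustration of the lab–pharmacy linkage described above, the sketch below flags patients with a markedly elevated TSH and no subsequent thyroid-replacement prescription. The field names, TSH threshold, and followup window are illustrative assumptions rather than the project's actual specification, and flagged cases would still require manual chart review.

```python
# Minimal sketch of electronic screening that links lab and pharmacy data:
# flag markedly elevated TSH results with no thyroid replacement dispensed afterward.
# Threshold, window, and field names are illustrative assumptions.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class LabResult:
    patient_id: str
    test: str        # e.g., "TSH"
    value: float     # mIU/L for TSH
    drawn: date

@dataclass
class Prescription:
    patient_id: str
    drug: str        # e.g., "levothyroxine"
    dispensed: date

TSH_THRESHOLD = 20.0                 # "markedly elevated" cutoff (assumed)
FOLLOWUP_WINDOW = timedelta(days=90) # time allowed for therapy to appear (assumed)

def unaddressed_tsh(labs, scripts):
    """Return lab results suggesting hypothyroidism with no thyroid therapy
    dispensed within the window: candidates for chart review, not a verdict."""
    flagged = []
    for lab in labs:
        if lab.test != "TSH" or lab.value < TSH_THRESHOLD:
            continue
        treated = any(
            rx.patient_id == lab.patient_id
            and rx.drug.lower() in ("levothyroxine", "thyroxine")
            and lab.drawn <= rx.dispensed <= lab.drawn + FOLLOWUP_WINDOW
            for rx in scripts
        )
        if not treated:
            flagged.append(lab)
    return flagged
```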

Where does the diagnostic process fail?

One of the most powerful heuristics in medication safety has been delineation of the steps in the medication-use process (prescribing, transcribing, dispensing, administering, and monitoring) to help localize where an error has occurred. Diagnosis, while more difficult to neatly classify (because compared to medications, stages are more concurrent, recurrent, and complex), nonetheless can be divided into seven stages: (1) access/presentation, (2) history taking/collection, (3) the physical exam, (4) testing, (5) assessment, (6) referral, and (7) followup. We have found this framework helpful for organizing discussions, aggregating cases, and targeting areas for improvement and research. It identifies what went wrong, and situates where in the diagnostic process the failure occurred (Table 3). We have used it for a preliminary analysis of several hundred diagnosis error cases we collected by surveying physicians.

Table 3. Taxonomy of where and what errors occurred.

This taxonomy for categorizing diagnostic “assessment” draws on work of Kassirer and others, 53 highlighting the two key steps of (a) hypothesis generation, and (b) differential diagnosis or hypothesis weighing/prioritization. We add another aspect of diagnostic assessment, one that connects to other medical and iatrogenic error work—the need to recognize the urgency of diagnoses and complications. This addition underscores the fact that failure to make the exact diagnosis is often less important than correctly assessing the urgency of the patient's illness. We divide the “testing” stage into three components—ordering, performing, and clinician processing (similar but not identical to the laboratory literature classification of the phases of lab testing as preanalytic, analytic, and postanalytic). 50, 51 For each broad category, we specified the types of problems we observed.
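
To show how such a taxonomy might be operationalized, the sketch below tags reviewed cases with the stage(s) at which the diagnostic process failed and aggregates the counts. The stage names follow the taxonomy described in the text; the case data and the example coding are illustrative assumptions, not the project's actual database or classifications.

```python
# Minimal sketch: tag cases by the stage(s) of the diagnostic process that failed,
# then count where failures cluster. Stage names follow the taxonomy in the text.
from collections import Counter
from enum import Enum

class DxStage(Enum):
    ACCESS_PRESENTATION = "access/presentation"
    HISTORY = "history taking/collection"
    PHYSICAL_EXAM = "physical exam"
    TESTING_ORDERING = "testing: ordering"
    TESTING_PERFORMANCE = "testing: performing"
    TESTING_PROCESSING = "testing: clinician processing"
    ASSESSMENT_HYPOTHESIS = "assessment: hypothesis generation"
    ASSESSMENT_WEIGHING = "assessment: hypothesis weighing/prioritization"
    ASSESSMENT_URGENCY = "assessment: recognizing urgency"
    REFERRAL = "referral"
    FOLLOWUP = "followup"

def stage_profile(cases):
    """cases: iterable of (case_id, [DxStage, ...]); a case may have several failures.
    Returns counts of where in the diagnostic process failures occurred."""
    counts = Counter()
    for _case_id, stages in cases:
        counts.update(stages)
    return counts

# Hypothetical coding: the pericardial-effusion vignette might be tagged as
# failures in test ordering (echocardiogram) and in assessing urgency.
example = [("case-2", [DxStage.TESTING_ORDERING, DxStage.ASSESSMENT_URGENCY])]
print(stage_profile(example))
```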

A recurring theme running through our reviews of potential diagnosis error cases pertains to the relationship between errors in the diagnostic process, delay and misdiagnosis, and adverse patient outcomes. Bates 52 has promulgated a useful model for depicting the relationships between medication errors and outcomes. Similarly, we find that most errors in the diagnostic process do not adversely impact patient outcomes. Conversely, many adverse outcomes associated with misdiagnosis or delay do not result from any error in the diagnostic process—the cancer may simply be undiagnosable at that stage, or the illness presentation too atypical, rare, or unlikely for even the best of clinicians to diagnose early. These situations are often referred to as “no fault” or “forgivable” errors—terms best avoided because they imply fault or blame for preventable errors (Figure 1). 7, 53

Figure 1. Relationships between diagnostic process errors, misdiagnosis, and adverse events.

While deceptively simple, the model raises a series of extremely challenging questions—questions we found ourselves repeatedly returning to in our weekly discussions. We hope these questions can provide insights into recurring themes and challenges we faced, and perhaps even serve as a checklist for others to structure their own patient care reviews. While humbled by our own inability to provide more conclusive answers to these questions, we believe researchers and practitioners will be forced to grapple with them before we can make significant progress.

Questions for consideration by diagnosis error evaluation and research (DEER) investigators in assessing cases

Uncertainties about diagnosis and findings

1. What is the correct diagnosis? How much certainty do we have, even now, about what the correct diagnosis is?

2. What were the findings at the various points in time when the patient was being seen? How much certainty do we have that a particular finding and diagnosis was actually present at the time(s) we are positing an error?

Relationship between diagnosis failure and adverse outcomes

1. What is the probability that the error resulted in the adverse outcome? How treatable is the condition, and how critical are timely diagnosis and treatment in determining the outcome—both in general and in this case?

2. How did the error in the diagnostic process contribute to making the wrong diagnosis and giving the wrong treatment?

Clinician assessment and actions

1. What was the physician's diagnostic assessment? How much consideration was given to the correct diagnosis? (This is usually difficult to reconstruct because the differential diagnosis often is not well documented.)

2. How good or bad was the diagnostic assessment, based on the evidence clinicians had on hand at the time (should the diagnosis have been obvious from the available data, or could no one have suspected it)?

3. How erroneous was the diagnostic assessment, given the difficulty of making the diagnosis at that point? (Was there a difficult “signal-to-noise” situation, a rare low-probability diagnosis, or an atypical presentation?)

4. How justifiable was the failure to obtain additional information (e.g., history, tests) at a particular point in time? How can this be analyzed both absolutely and relative to the difficulties and constraints in obtaining the missing data? (Did the patient withhold or refuse to give accurate/additional history? Were there backlogs and delays that made it impossible to obtain the desired test?)

5. Was there a problem in assessing the severity of the illness, with a resulting failure to observe or follow up the patient more closely? (Again, both absolutely and relative to constraints.)

Global assessment of improvement opportunities

1. To what extent did the clinicians' actions deviate from the standard of care (i.e., was there negligent care, with failure to follow accepted diagnostic guidelines and expected practices or to pursue an abnormal finding that should never be ignored)?

2. How preventable was the error? How ameliorable or amenable to change are the factors and problems that contributed to the error? How much would such changes, designed to prevent this error in the future, cost?

3. What should we do better the next time we encounter a similar patient or situation? Is there a general rule, or are there measures that can be implemented to ensure this is reliably done each time?

Diagnosis error case vignettes

Case 1

A 25-year-old woman presents with crampy abdominal pain, vaginal bleeding, and amenorrhea for 6 weeks. Her serum human chorionic gonadotropin (hCG) level is markedly elevated. A pelvic ultrasound is read by the on-call radiology chief resident and the obstetrics (OB) attending physician as showing an empty uterus, suggesting ectopic pregnancy. The patient is informed of the findings and treated with methotrexate. The following morning the radiology attending reviews the ultrasound and amends the report, officially reading it as “normal intrauterine pregnancy.”

Case 2

A 49-year-old, previously healthy man presents to the emergency department (ED) with nonproductive cough, “chest congestion,” and dyspnea lasting 2 weeks; he has a history of smoking. The patient is afebrile, with pulse = 105, respiration rate (RR) = 22, and white blood count (WBC) = 6.4. Chest x-ray shows “marked cardiomegaly, diffuse interstitial and reticulonodular densities with blunting of the right costophrenic angle; impression—congestive heart failure (CHF)/pneumonia. Rule out (R/O) cardiomyopathy, valve disease or pericardial effusion.” The patient is sent home with the diagnosis of pneumonia, with an oral antibiotic.

The patient returns 1 week later with worsening symptoms. He is found to have pulsus paradoxus, and an emergency echocardiogram shows a massive pericardial effusion. Pericardiocentesis obtains 350 cc of fluid, with cytology positive for adenocarcinoma. Computed tomography of the chest suggests “lymphangitic carcinomatosis.”

Case 3

A 50-year-old woman with frequent ED visits for asthma (four visits in the preceding month) presents to the ED with a chief complaint of dyspnea and new back pain. She is treated for asthma exacerbation and discharged with nonsteroidal anti-inflammatory drugs (NSAID) for back pain.

She returns 2 days later with acutely worsening back pain, which started when reaching for something in her cupboard. A chest x-ray shows a “tortuous and slightly ectatic aorta,” and the radiologist's impression concludes, “If aortic dissection is suspected, further evaluation with chest CT with intravenous (IV) contrast is recommended.” The ED resident proceeds to order a chest CT, which concludes “no evidence of aneurysm or dissection.” The patient is discharged.

She returns to the ED 3 days later, again complaining of worsening asthma and back pain. While waiting to be seen, she collapses in the waiting room and is unable to be resuscitated. Autopsy shows a ruptured aneurysm of the ascending aorta.

Case 4

A 50-year-old woman with a past history of diabetes and of alcohol and IV drug abuse presents with abdominal pain and vomiting and is diagnosed as having “acute chronic pancreatitis.” Her amylase and lipase levels are normal. She is admitted and treated with IV fluids and analgesics. On hospital day 2 she begins having spiking fevers, and antibiotics are administered. The next day, blood cultures are growing gram-negative organisms.

At this point, the service is clueless about the patient's correct diagnosis. It becomes evident only the following day, when (a) review of laboratory data over the past year shows that the patient had four prior blood cultures, each positive with a different gram-negative organism; (b) a nurse reports that the patient was “behaving suspiciously,” rummaging through the supply room where syringes were kept; and (c) a medical student looks up posthospital outpatient records from 4 months earlier and finds several notes stating that “the patient has probable Munchausen syndrome rather than pancreatitis.” Upon discovery of these findings, the patient's IVs are discontinued, and sensitive, appropriate followup primary and psychiatric care is arranged.

A postscript to this admission: 3 months later, the patient was again readmitted to the same hospital for “pancreatitis” and an unusual “massive leg abscess.” The physicians caring for her were unaware of her past diagnoses and never suspected or discovered the likely etiology of her abscess (self-induced from unsterile injections).

Lessons and issues raised by the diagnosis error cases

Difficulties in sorting out “don't miss” diagnoses

Before starting our project, we compiled a list of “don't miss” diagnoses (available from the authors). These are diagnoses that are considered critical but often difficult to make—critical because timely diagnosis and treatment can have a major impact (for the patient, the public's health, or both), and difficult because they are either rare or pose diagnostic challenges. Diagnoses such as spinal epidural abscess (where paraplegia can result from delayed diagnosis) or active pulmonary tuberculosis (TB) (where preventing spread of infection is critical) are examples of “don't miss” diagnoses. While the evidence base for definitively compiling and prioritizing such a list is scant, three of our cases—ectopic pregnancy, dissecting aortic aneurysm, and pericardial effusion with tamponade—are diagnoses that would unquestionably be considered life-threatening and that ought not be delayed or missed. 25

Although numerous issues concerning diagnosis error are raised by these cases, they also illustrate problems relating to uncertainties, lack of gold standards (for both testing and standard of care), and difficulties reaching consensus about best ways to prevent future errors and harmful delays. Below we briefly discuss some of these issues and controversies.

Diagnostic criteria and strategies for diagnosing ectopic pregnancy are controversial, and our patient's findings were particularly confusing. Even after careful review of all aspects of the case, we were still not certain who was “right”—the physicians who read the initial images and interpreted them as consistent with ectopic pregnancy, or the attending physician who reread the films the next day as normal. The literature is unclear about criteria for establishing this diagnosis. 54–57 In addition, there is a lack of standards for the performance and interpretation of ultrasound exams, plus controversies about the timing of interventions. Thus this “obvious” error is, on closer inspection, more complex, highlighting a problem-prone clinical situation.

The patient who was found to have a malignant pericardial effusion illustrates problems in determining the appropriate course of action for patients with unexplained cardiomegaly, which the ED physicians failed to address on his first presentation: Was he hemodynamically stable at that time? If so, did he require an urgent echocardiogram? What criteria should have mandated that this be done immediately? How did his cardiomegaly get “lost,” with his diagnosis prematurely “closed” on pneumonia, when the case was handed off from the day to the night ED shift? How should one assess the empiric treatment for pneumonia given his abnormal chest x-ray? Was this patient with metastatic lung cancer harmed by the 1-week delay?

The diagnosis of aneurysms (e.g., aortic, intracranial) arises repeatedly in discussions of misdiagnosis. Every physician seems to recall a case of a missed aneurysm with catastrophic outcomes where, in retrospect, warnings may have been overlooked. A Wall Street Journal article recently won a Pulitzer Prize for publicizing such aneurysm cases. 58 Our patient's back pain was initially dismissed. Because of her frequent visits, she had been labeled a “frequent flyer”—and back pain is an extremely common and nonspecific symptom. A review of the literature on the frequency of dissecting aortic aneurysm reveals that it is surprisingly rare, perhaps fewer than 1 in 50,000 ED visits for chest pain, and likely an equally rare cause of back pain. 59–62 She did ultimately undergo the recommended imaging study after a suspicious plain chest x-ray; however, it was read as “negative.”

Thus, in each case, seemingly egregious and unequivocal errors were found to be more complex and uncertain.

Issues related to limitations of diagnostic testing

Even during our 3 years of diagnosis case reviews, clinicians have been confronted with rapid changes in diagnostic testing. New imaging modalities, lab tests, and testing recommendations have been introduced, often leaving clinicians unsure which tests to order and how to interpret results that are at times confusing or even contradictory (from one radiologist to the next). 63

If diagnosis errors are to be avoided, clinicians must be aware of the limitations of the diagnostic tests they are using. It is well known that a normal mammogram in a woman with a breast lump does not rule out the diagnosis of breast cancer, because the sensitivity of the test is only 70 to 85 percent. 13, 26, 64, 65 A recurring theme in our cases is failure to appreciate the pitfalls of weighing test results in the context of the patient's pretest disease probabilities. Local factors, such as variation in the quality of test performance and readings, combined with communication failures between radiology/laboratory and ordering physicians (either no direct communication, or interactions in which complex interpretations get reduced to “positive” or “negative,” overlooking subtleties and limitations), provide further sources of error. 66
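
The arithmetic behind weighing a test result against pretest probability can be made concrete with Bayes' rule. In the sketch below, the sensitivity reflects the 70 to 85 percent range cited for mammography, while the specificity and pretest probability are purely illustrative assumptions chosen to show the calculation.

```python
# Worked example of post-test probability after a negative result (Bayes' rule).
# Sensitivity drawn from the 70-85 percent range cited in the text; specificity
# and pretest probability are illustrative assumptions only.
def post_test_prob_after_negative(pretest: float, sensitivity: float, specificity: float) -> float:
    """Probability that disease is present despite a negative test."""
    false_neg = pretest * (1 - sensitivity)      # diseased but test negative
    true_neg = (1 - pretest) * specificity       # healthy and test negative
    return false_neg / (false_neg + true_neg)

# Suppose a palpable breast lump carries an assumed pretest cancer probability of 10 percent.
p = post_test_prob_after_negative(pretest=0.10, sensitivity=0.80, specificity=0.90)
print(f"Probability of cancer despite a negative mammogram: {p:.1%}")  # about 2.4%
```

Even under these favorable assumptions, a negative study leaves a residual cancer probability of roughly 2 to 3 percent for a palpable lump, far too high to consider the diagnosis excluded.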

The woman with the suspected ectopic pregnancy, whose emergency ultrasound was initially interpreted as being “positive,” illustrates the pitfalls of taking irreversible therapeutic actions without carefully weighing test reading limitations. Perhaps an impending rupture of an ectopic pregnancy warrants urgent action. However, it is also imperative for decisions of this sort that institutions have fail-safe protocols that anticipate such emergencies and associated test limitations.

The patient with a dissecting aortic aneurysm clearly had a missed diagnosis, as confirmed by autopsy. This diagnosis was suspected premortem but considered to be “ruled out” by a CT scan that did not show a dissection. Studies of the role of chest CT, particularly when earlier CT scanning technology was used, show a sensitivity for dissecting aneurysm of only 83 percent. 67 When we reexamined the old films for this patient, several radiologists questioned the adequacy of the study (the quality of the contrast infusion plus a question of motion artifact). Newer, faster scanners reportedly are less prone to these errors, but experience is variable. We identified another patient in whom a “known” artifact on a spiral CT nearly led to unnecessary aneurysm surgery; surgery was avoided only because she was a Jehovah's Witness whose religious beliefs precluded transfusions, making the operation too risky. 62

The role of information transfer and the communication of critical laboratory information

Failure of diagnosis because of missing information is another theme in our weekly case reviews and in the medical literature. Critical information can be missed because of failures in history-taking, lack of access to medical records, failures in the transmission of diagnostic test results, or faulty records organization (either paper or electronic) that makes it difficult to quickly find or review needed information.

For the patient with the self-induced illness, all of the “missing” information was available online. Ironically, although patients with a diagnosis of Munchausen syndrome often go to great lengths to conceal information (e.g., giving false names, using multiple hospitals), in our case there was so much data in the computer from previous admissions and outpatient visits that the condition was “lost” in a sea of information overload—a problem certain to grow as more and more clinical information is stored online. While this patient is an unusual example of the general problems related to information transfer, the case illustrates important principles related to the need for conscientious review, synthesis of information, and continuity (of both physicians and information) to avoid errors.

Simply creating and maintaining a patient problem list can help prevent diagnosis errors. It can ensure that each active problem is being addressed, helping all caregivers remain aware of diagnoses, allergies, and unexplained findings. Had our patient with “unexplained cardiomegaly” been discharged with this listed as one of his problems, instead of only “pneumonia,” perhaps it would not have been overlooked. However, making this seemingly simple documentation tool operational has been unsuccessful in most institutions, even those with advanced electronic information systems, and thus it represents as much a challenge as a panacea.

The followup of abnormal laboratory test results represents an important example of this information transfer paradigm in diagnostic patient safety. 68–73 We identified failure rates of more than 1 in 50 both for followup of abnormal thyroid tests (the diagnosis of hypothyroidism was missed in 23 of 982, or 2.3 percent, of patients with markedly elevated TSH results) and, in our earlier study, for failure to act on elevated potassium levels (674 of 32,563, or 2.0 percent, of potassium prescriptions were written for hyperkalemic patients). 74 Issues of communication, teamwork, systems design, and information technology stand out as areas for improvement. 75–77 Recognizing this, the Massachusetts Coalition for the Prevention of Medical Error has launched a statewide initiative on Communicating Critical Test Results. 78

Physician time, test availability, and other system constraints

Our project was based in two busy urban hospitals, including a public hospital with serious constraints on bed availability and access to certain diagnostic tests. An important recurring theme in our case discussions (and in health care generally) is the interaction between diagnostic imperatives and these resource limitations.

To what extent is failure to obtain an echocardiogram, or even a more thorough history or physical exam, understandable and justified by the circumstances under which physicians find themselves practicing? Certainly our patient with the massive cardiomegaly needed an echocardiogram at some time. Was it reasonable for the ED to defer the test (meaning a wait of perhaps several months in the clinic), or would a more “just-in-time” approach be more efficient, as well as safer in minimizing diagnosis error and delay? 79 Since, by definition, we expect ED physicians to triage and treat emergencies, not thoroughly work up every problem patients have, we find complex trade-offs operating at multiple levels.

Similar trade-offs affect whether a physician has time to review all of the past records of our factitious-illness patient (only the medical student did), or how much radiology expertise is available around the clock to read the ultrasound or CT exams needed to diagnose ectopic pregnancy or aortic dissection. This is perhaps the most profound and poorly explored aspect of diagnosis error and delay, but one that will increasingly be front and center in health care.

Cognitive issues in diagnosis error

We briefly conclude where most diagnosis error discussions begin, with cognitive errors. 45, 53, 80–84

Hindsight bias and the difficulty of weighing prior probabilities of the possible diagnoses bedeviled our efforts to assess decisions and actions retrospectively. Many “don't miss” diagnoses are rare; it would be an error to pursue each one for every patient. We struggled to delineate guidelines that would accurately identify high-risk patients and to design strategies to prevent missing these diagnoses.

Our case reviews and firsthand interviews often found that each physician had his or her own individual way of approaching patients and their problems. Such differences made for lively conference discussions, but have disturbing implications for developing more standardized approaches to diagnosis.

The putative dichotomy between “cognitive” and “process” errors is in many ways an artificial distinction. 7, 8 If a physician is interrupted while talking to the patient or thinking about a diagnosis and forgets to ask a critical question or consider a critical diagnosis, is this a process or cognitive error?

Conclusion

Because of their complexity, there are no quick fixes for diagnosis errors. As we reviewed what we learned from a variety of approaches and cases, certain areas stood out as ripe for improvement—both small-scale improvements that can be tested locally and larger improvements that need more rigorous, formal research. Table 4 summarizes these change ideas, which harvest the lessons of our 3-year project.

Table 4. Change ideas for preventing and minimizing diagnostic error.

As outlined in the table, there needs to be a commitment to build learning organizations, in which feedback to earlier providers who may have failed to make a correct diagnosis becomes routine, so that institutions can learn from this aggregated feedback data. To better protect patients, we will need to conceptualize and construct safety nets to mitigate harm from uncertainties and errors in diagnosis. The followup of abnormal test results is a prime candidate for reengineering, to ensure low “defect” rates that are comparable to those achieved in other fields. More standardized and reliable protocols for reading x-rays and laboratory tests (such as pathology specimens), particularly in residency training programs and “after hours,” could minimize the errors we observed. In addition, we need to better delineate “red flag” and “don't miss” diagnoses and situations, based on better understanding and data regarding pitfalls in diagnosis and ways to avoid them.
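
As one small illustration of the “red flag” and checklist ideas above, the sketch below maps presenting complaints to “don't miss” diagnoses that a clinician would be prompted to document as considered before discharge. The mappings are illustrative examples drawn from the cases discussed in this article, not a validated or complete rule set.

```python
# Minimal sketch of a "don't miss"/red-flag prompt. The complaint-to-diagnosis
# mappings are illustrative examples from the vignettes, not a validated rule set.
DONT_MISS = {
    "back pain": ["aortic dissection/aneurysm", "spinal epidural abscess"],
    "dyspnea with cardiomegaly": ["pericardial effusion/tamponade"],
    "first-trimester bleeding/pain": ["ectopic pregnancy"],
}

def red_flag_prompts(chief_complaint: str) -> list[str]:
    """Return the 'don't miss' diagnoses to document as considered or excluded."""
    return DONT_MISS.get(chief_complaint.lower(), [])

for dx in red_flag_prompts("Back pain"):
    print(f"Before discharge, document whether {dx} was considered.")
```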

To achieve many of these advances, automated (and manual) checklists and reminders will be needed to overcome the current reliance on human memory. But information technology must also be deployed and reengineered to overcome growing problems associated with information overload. Finally, and most importantly, patients will have to be engaged on multiple levels to become “coproducers” in a safer practice of medical diagnosis. 85 It is our hope that these change ideas can be tested and implemented to ensure safer treatment based on better diagnoses—diagnoses with fewer delays, mistakes, and process errors.

Acknowledgments

This work was supported by AHRQ Patient Safety Grant #11552, the Cook County-Rush Developmental Center for Research in Patient Safety (DCERPS) Diagnostic Error Evaluation and Research (DEER) Project.

References

1.
Baker GR, Norton P. Patient safety and healthcare error in the Canadian healthcare system. Ottawa, Canada: Health Canada; 2002. pp. 1–167. (http://www.hc-sc.gc.ca/english/care/report. Accessed 12/24/04.)
2.
Leape L, Brennan T, Laird N. et al. The nature of adverse events in hospitalized patients: Results of the Harvard medical practice study II. NEJM. 1991;324:377–84. [PubMed: 1824793]
3.
Medical Malpractice Lawyers and Attorneys Online. Failure to diagnose. http://www.medical-malpractice-attorneys-lawsuits.com/pages/failure-to-diagnose.html. 2004.
4.
Ramsay A. Errors in histopathology reporting: detection and avoidance. Histopathology. 1999;34:481–90. [PubMed: 10383691]
5.
JCAHO Sentinel event alert advisory group. Sentinel event alert. http://www.jcaho.org/about+us/news+letters/sentinel+event+alert/sea_26.htm. Jun 17, 2002. [PubMed: 12092445]
6.
Edelman D. Outpatient diagnostic errors: unrecognized hyperglycemia. Eff Clin Pract. 2002;5:11–6. [PubMed: 11874191]
7.
Graber M, Gordon R, Franklin N. Reducing diagnostic errors in medicine: what's the goal? Acad Med. 2002;77:981–92. [PubMed: 12377672]
8.
Kuhn G. Diagnostic errors. Acad Emerg Med. 2002;9:740–50. [PubMed: 12093717]
9.
Kohn LT, Corrigan JM, Donaldson MS, editors. To err is human: building a safer health system. A report of the Committee on Quality of Health Care in America, Institute of Medicine. Washington, DC: National Academy Press; 2000. [PubMed: 25077248]
10.
Sato L. Evidence-based patient safety and risk management technology. J Qual Improv. 2001;27:435.
11.
Phillips R, Bartholomew L, Dovey S. et al. Learning from malpractice claims about negligent, adverse events in primary care in the United States. Qual Saf Health Care. 2004;13:121–6. [PMC free article: PMC1743812] [PubMed: 15069219]
12.
Golodner L. How the public perceives patient safety. Newsletter of the National Patient Safety Foundation. 2004;1997:1–6.
13.
Bhasale A, Miller G, Reid S. et al. Analyzing potential harm in Australian general practice: an incident-monitoring study. Med J Aust. 1998;169:73–9. [PubMed: 9700340]
14.
Kravitz R, Rolph J, McGuigan K. Malpractice claims data as a quality improvement tool, I: Epidemiology of error in four specialties. JAMA. 1991;266:2087–92. [PubMed: 1920696]
15.
Flannery F. Utilizing malpractice claims data for quality improvement. Legal Medicine. 1992;92:1–3.
16.
Neale G, Woloshynowych M, Vincent C. Exploring the causes of adverse events in NHS hospital practice. J Royal Soc Med. 2001;94:322–30. [PMC free article: PMC1281594] [PubMed: 11418700]
17.
Bogner MS, editor. Human error in medicine. Hillsdale, NJ: Lawrence Erlbaum Assoc.; 1994.
18.
Makeham M, Dovey S, County M. et al. An international taxonomy for errors in general practice: a pilot study. Med J Aust. 2002;177:68–72. [PubMed: 12098341]
19.
Wilson R, Harrison B, Gibberd R. et al. An analysis of the causes of adverse events from the quality in Australian health care study. Med J Aust. 1999;170:411–5. [PubMed: 10341771]
20.
Weingart S, Ship A, Aronson M. Confidential clinician-reported surveillance of adverse events among medical inpatients. J Gen Intern Med. 2000;15:470–7. [PMC free article: PMC1495482] [PubMed: 10940133]
21.
Chaudhry S, Olofinboda K, Krumholz H. Detection of errors by attending physicians on a general medicine service. J Gen Intern Med. 2003;18:595–600. [PMC free article: PMC1494901] [PubMed: 12911640]
22.
Kowalski R, Claassen J, Kreiter K. et al. Initial misdiagnosis and outcome after subarachnoid hemorrhage. JAMA. 2004;291:866–9. [PubMed: 14970066]
23.
Mayer P, Awad I, Todor R. et al. Misdiagnosis of symptomatic cerebral aneurysm. Prevalence and correlation with outcome at four institutions. Stroke. 1996;27:1558–63. [PubMed: 8784130]
24.
Craven ER. Risk management issues in glaucoma diagnosis and treatment. Surv Opthalmol. 1996 May-Jun;40(6):459–62. [PubMed: 8724638]
25.
Lederle F, Parenti C, Chute E. Ruptured abdominal aortic aneurysm: the internist as diagnostician. Am J Med. 1994;96:163–7. [PubMed: 8109601]
26.
Goodson W, Moore D. Causes of physician delay in the diagnosis of breast cancer. Arch Intern Med. 2002;162:1343–8. [PubMed: 12076232]
27.
Clark S. Spinal infections go undetected. Lancet. 1998;351:1108.
28.
Williams VJ. Second-opinion consultations assist community physicians in diagnosing brain and spinal cord biopsies. http://www3.mdanderson.org/~oncolog/brunder.html. Oncol 1997. 11-29-0001.
29.
Arbiser Z, Folpe A, Weiss S. Consultative (expert) second opinions in soft tissue pathology. Amer J Clin Pathol. 2001;116:473–6. [PubMed: 11601130]
30.
Pope J, Aufderheide T, Ruthazer R. et al. Missed diagnoses of acute cardiac ischemia in the emergency department. NEJM. 2000;342:1163–70. [PubMed: 10770981]
31.
Steere A, Taylor E, McHugh G. et al. The over-diagnosis of lyme disease. JAMA. 1993;269:1812–6. [PubMed: 8459513]
32.
American Society for Gastrointestinal Endoscopy. Medical malpractice claims and risk management in gastroenterology and gastrointestinal endoscopy. http://www.asge.org/gui/resources/manual/gea_risk_update_on_endoscopic.asp, Jan 8, 2002.
33.
Zhan C, Miller MR. Administrative data based patient safety research: a critical review. Qual Saf Health Care. 2003;12(Suppl 2):ii58–ii63. [PMC free article: PMC1765777] [PubMed: 14645897]
34.
Shojania K, Burton E, McDonald K. et al. Changes in rates of autopsy-detected diagnostic errors over time: a systematic review. JAMA. 2003;289:2849–56. [PubMed: 12783916]
35.
Gruver R, Fries E. A study of diagnostic errors. Ann Intern Med. 1957;47:108–20. [PubMed: 13435683]
36.
Kirch W, Schafii C. Misdiagnosis at a university hospital in 4 medical eras. Medicine. 1996;75:29–40. [PubMed: 8569468]
37.
Thomas E, Petersen L. Measuring errors and adverse events in health care. J Gen Intern Med. 2003;18:61–7. [PMC free article: PMC1494808] [PubMed: 12534766]
38.
Orlander J, Barber T, Fincke B. The morbidity and mortality conference: the delicate nature of learning from error. Acad Med. 2002;77:1001–6. [PubMed: 12377674]
39.
Bagian J, Gosbee J, Lee C. et al. The Veterans Affairs root cause analysis system in action. J Qual Improv. 2002;28:531–45. [PubMed: 12369156]
40.
Spear S, Bowen H. Decoding the DNA of the Toyota production system. Harvard Bus Rev 1999;106.
41.
Anderson R, Hill R. The current status of the autopsy in academic medical centers in the United States. Am J Clin Pathol. 1989;92:S31–S37. [PubMed: 2801621]
42.
The Royal College of Pathologists of Australasia Autopsy Working Party. The decline of the hospital autopsy: a safety and quality issue for health care in Australia. Med J Aust. 2004;180:281–5. [PubMed: 15012566]
43.
Plsek P. Section 1: evidence-based quality improvement, principles, and perspectives. Pediatrics. 1999;103:203–14. [PubMed: 9917464]
44.
Leape L. Error in medicine. JAMA. 1994;272:1851–7. [PubMed: 7503827]
45.
Reason J. Human error. New York, NY: Cambridge University Press; 1990.
46.
Gaither C. What your doctor doesn't know could kill you. The Boston Globe. Jul 14, 2002.
47.
Lambert B, Chang K, Lin S. Effect of orthographic and phonological similarity on false recognition of drug names. Soc Sci Med. 2001;52:1843–57. [PubMed: 11352410]
48.
Woloshynowych M, Neale G, Vincent C. Care record review of adverse events: a new approach. Qual Saf Health Care. 2003;12:411–5. [PMC free article: PMC1758034] [PubMed: 14645755]
49.
Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78:775–80. [PubMed: 12915363]
50.
Pansini N, Di Serio F, Tampoia M. Total testing process: appropriateness in laboratory medicine. Clin Chim Acta. 2003;333:141–5. [PubMed: 12849897]
51.
Stroobants A, Goldschmidt H, Plebani M. Error budget calculations in laboratory medicine: linking the concepts of biological variation and allowable medical errors. Clin Chim Acta. 2003;333:169–76. [PubMed: 12849900]
52.
Bates D. Medication errors. How common are they and what can be done to prevent them? Drug Saf. 1996;15:303–10. [PubMed: 8941492]
53.
Kassirer J, Kopelman R. Learning clinical reasoning. Baltimore, MD: Williams & Wilkins; 1991.
54.
Pisarska M, Carson S, Buster J. Ectopic pregnancy. Lancet. 1998;351:1115–20. [PubMed: 9660597]
55.
Soderstrom RM. Obstetrics/gynecology: ectopic pregnancy remains a malpractice dilemma. http://www.thedoctors.com/risk/specialty/obgyn/J4209.asp. 1999. Apr 26, 2004.
56.
Tay J, Moore J, Walker J. Ectopic pregnancy. BMJ. 2000;320:916–9. [PMC free article: PMC1117838] [PubMed: 10742003]
57.
Gracia C, Barnhart K. Diagnosing ectopic pregnancy: decision analysis comparing six strategies. Obstet Gynecol. 2001;97:464–70. [PubMed: 11239658]
58.
Helliker K, Burton TM. Medical ignorance contributes to toll from aortic illness. Wall Street Journal; Nov 4, 2003.
59.
Hagan P, Nienaber C, Isselbacher E. et al. The international registry of acute aortic dissection (IRAD): new insights into an old disease. JAMA. 2000;283:897–903. [PubMed: 10685714]
60.
Kodolitsch Y, Schwartz A, Nienaber C. Clinical prediction of acute aortic dissection. Arch Intern Med. 2000;160:2977–82. [PubMed: 11041906]
61.
Spittell P, Spittell J, Joyce J. et al. Clinical features and differential diagnosis of aortic dissection: experience with 236 cases (1980 through 1990). Mayo Clin Proc. 1993;68:642–51. [PubMed: 8350637]
62.
Loubeyre P, Angelie E, Grozel F. et al. Spiral CT artifact that simulates aortic dissection: image reconstruction with use of 180 degrees and 360 degrees linear-interpolation algorithms. Radiology. 1997;205:153–7. [PubMed: 9314977]
63.
Cabot R. Diagnostic pitfalls identified during a study of 3000 autopsies. JAMA. 1912;59:2295–8.
64.
Conveney E, Geraghty J, O'Laoide R. et al. Reasons underlying negative mammography in patients with palpable breast cancer. Clin Radiol. 1994;49:123–5. [PubMed: 8124890]
65.
Burstein HJ. Highlights of the 2nd European breast cancer conference. http://www.medscape.com/viewarticle/408459. 2000.
66.
Berlin L. Malpractice issues in radiology: defending the “missed” radiographic diagnosis. Amer J Roent. 2001;176:317–22. [PubMed: 11159064]
67.
Cigarroa J, Isselbacher E, DeSanctis R. et al. Diagnostic imaging in the evaluation of suspected aortic dissection—old standards and new directions. NEJM. 1993;328:35–43. [PubMed: 8416269]
68.
McCarthy B, Yood M, Boohaker E. et al. Inadequate follow-up of abnormal mammograms. Am J Prev Med. 1996;12:282–8. [PubMed: 8874693]
69.
McCarthy B, Yood M, Janz N. et al. Evaluation of factors potentially associated with inadequate follow-up of mammographic abnormalities. Cancer. 1996;77:2070–6. [PubMed: 8640672]
70.
Boohaker E, Ward R, Uman J. et al. Patient notification and follow-up of abnormal test results: a physician survey. Arch Intern Med. 1996;156:327–31. [PubMed: 8572844]
71.
Murff H, Gandhi T, Karson A. et al. Primary care physician attitudes concerning follow-up of abnormal test results and ambulatory decision support systems. Int J Med Inf. 2003;71:137–49. [PubMed: 14519406]
72.
Iordache S, Orso D, Zelingher J. A comprehensive computerized critical laboratory results alerting system for ambulatory and hospitalized patients. Medinfo. 2001;10:469–73. [PubMed: 11604784]
73.
Kuperman G, Boyle D, Jha A. et al. How promptly are inpatients treated for critical laboratory results? JAMIA. 1998;5:112–9. [PMC free article: PMC61280] [PubMed: 9452990]
74.
Schiff G, Kim S, Wisniewski M. et al. Every system is perfectly designed to: missed diagnosis of hypothyroidism uncovered by linking lab and pharmacy data. J Gen Intern Med. 2003;18(suppl 1):295.
75.
Kuperman G, Teich J, Tanasijevic M. et al. Improving response to critical laboratory results with automation: results of a randomized controlled trial. JAMIA. 1999;6:512–22. [PMC free article: PMC61393] [PubMed: 10579608]
76.
Poon E, Kuperman G, Fiskio J. et al. Real-time notification of laboratory data requested by users through alphanumeric pagers. J Am Med Inform Assoc. 2002;9:217–22. [PMC free article: PMC344581] [PubMed: 11971882]
77.
Schiff GD, Klass D, Peterson J. et al. Linking laboratory and pharmacy: opportunities for reducing errors and improving care. Arch Intern Med. 2003;163:893–900. [PubMed: 12719197]
78.
Massachusetts Coalition for the Prevention of Medical Error. Communicating critical test results safe practice. JCAHO J Qual Saf 2005;31(2) [In press] [PubMed: 15791766]
79.
Berwick D. Eleven worthy aims for clinical leadership of health system reform. JAMA. 1994;272:797–802. [PubMed: 8078145]
80.
Dawson N, Arkes H. Systematic errors in medical decision making: judgment limitations. J Gen Intern Med. 1987;2:183–7. [PubMed: 3295150]
81.
Bartlett E. Physicians' cognitive errors and their liability consequences. J Health Care Risk Manag. 1998;18:62–9. [PubMed: 10537844]
82.
McDonald J. Computer reminders, the quality of care and the nonperfectability of man. NEJM. 1976;295:1351–5. [PubMed: 988482]
83.
Smetzer J, Cohen M. Lesson from the Denver medication error/criminal negligence case: look beyond blaming individuals. Hosp Pharm. 1998;33:640–57.
84.
Fischhoff B. Hindsight ≠ foresight: the effect of outcome knowledge on judgment under uncertainty. J Exp Psychol: Hum Perc Perform. 1975;1:288–99. [PMC free article: PMC1743746] [PubMed: 12897366]
85.
Hart JT. Cochrane Lecture 1997. What evidence do we need for evidence based medicine? J Epidemiol Community Health. 1997 Dec;51(6):623–9. [PMC free article: PMC1060558] [PubMed: 9519124]
