Background: Diagnostic errors probably cause 40,000-80,000 preventable deaths annually in US hospitals alone, and these estimates fail to account for mortality from ambulatory misdiagnosis and non-lethal morbidity due to diagnostic error. Despite their major public health impact, diagnostic errors have received relatively little scientific attention. We sought to further characterize the health outcomes and economic consequences of misdiagnosis in the US through analysis of closed malpractice claims.
Methods: We analyzed misdiagnosis-related claims occurring during a 20-year period (1986-2005) from the National Practitioner Data Bank. We describe error type, outcomes, payments, sources, and geographic distribution. All payment values are reported in 2010 dollars after adjustment for inflation using medical care Consumer Price Index conversion factors (US Bureau of Labor Statistics). Average claims per physician were calculated using American Medical Association physician census data available for selected years (1990, 2000).
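The two adjustments described above are simple arithmetic; a minimal sketch, using hypothetical CPI conversion factors and claim counts rather than the actual BLS or AMA figures:

```python
# Illustrative conversion factors to 2010 dollars; these placeholders
# are NOT the actual BLS medical-care CPI values.
MEDICAL_CPI_TO_2010 = {1990: 2.10, 2000: 1.40}

def to_2010_dollars(payment, year):
    """Convert a nominal payment to 2010 dollars via a CPI conversion factor."""
    return payment * MEDICAL_CPI_TO_2010[year]

def claims_per_1000_physicians(n_claims, n_physicians):
    """Average claims per 1000 physicians, with the AMA census as denominator."""
    return 1000 * n_claims / n_physicians

# Hypothetical counts for illustration:
adjusted = to_2010_dollars(100_000, 1990)
rate = claims_per_1000_physicians(5_000, 600_000)
```

The per-physician rate normalizes raw claim counts so that the 1990 and 2000 figures are comparable despite growth in the physician workforce.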
Results: Among 11 core malpractice allegation types, diagnostic errors were the leading type (29.1%; n=91,082) and accounted for the highest proportion of total payments (35.6%). The mean number of claims per 1000 US physicians was 8.3 (1990) and 6.3 (2000). The most frequent classes of diagnostic error were failure to diagnose (54%), delay in diagnosis (19%), and wrong diagnosis (10%). The most frequent outcomes were death (41%), significant permanent injury (17%), major permanent injury (13%), and minor permanent injury (10%). The inflation-adjusted, 20-year sum of misdiagnosis-related payments was $34.5 billion (mean, $378,858; median, $208,650; interquartile range, $72,250-$472,000). Claims were paid primarily by malpractice insurers (85.9%). The highest total payment sum was in the state of New York ($5.4 billion), and 50% of the total payments over the 20-year period occurred in six states (NY, PA, FL, IL, CA, TX). Non-lethal injuries accounted for 60% of total payments. Per-claim payments for permanent morbidity that was “major” (mean, $568,414; median, $354,000) or “significant” (mean, $417,214; median, $271,950) exceeded those where the outcome was death (mean, $387,899; median, $252,350). The highest per-claim payments (mean, $825,505; median, $574,200) were for the outcome “quadriplegic, brain damage, lifelong care” (4.3% of cases, 9.2% of total payouts).
Conclusion: Our results indicate that, among malpractice claims, diagnostic errors are the most frequent and most costly of all medical mistakes. We found roughly equal numbers of lethal and non-lethal errors in our analysis, suggesting that the public health impact of diagnostic errors may be far greater than previously imagined. Stakeholders including patients, insurers, hospital systems, and federal agencies have a vested interest in making diagnostic error reduction a patient safety priority.
Background: A substantial proportion of diagnostic errors in medicine can be attributed to faults in physicians’ cognitive processes. Research on medical expertise suggests that physicians’ reasoning may be susceptible to bias, such as confirmation bias (i.e., the tendency to seek information to support rather than refute a hypothesis). The present study explores confirmatory tendencies in medical diagnosis. It is hypothesized that physicians tend to focus on, and therefore report, more clinical features that support a diagnosis that has been suggested to them, while ignoring features that speak against the suggested diagnosis.
Methods: Thirty-eight internal medicine residents accepted or rejected suggested diagnoses on 4 written, randomly presented clinical cases. For each participant, 2 of the suggested diagnoses, which were also randomized among the cases, were correct and 2 were incorrect. After evaluating each suggestion, participants reported the features they would mention to a supervisor while discussing the patient. It was hypothesized that the reported features would be corroborative of the suggested diagnoses.
Results: Participants had more trouble rejecting incorrect suggestions (correct evaluation score on cases with incorrect suggestions: M = .63, SD = .59) than accepting correct suggestions (M = 1.13, SD = .70), t(37) = 2.84, p < .05, d = .77. Irrespective of the correctness of the suggested diagnosis, a significantly higher percentage of features supporting the suggested diagnoses (M = 39.05, SD = 12.70) was reported than features supportive of the alternative diagnosis (M = 25.25, SD = 13.33), t(37) = 4.40, p < .05, d = 1.06. This pattern held irrespective of the participants’ diagnostic decisions. Furthermore, when incorrect suggestions were accepted, more features supportive of these suggestions were reported than when they were rejected, t(49) = 1.88, p < .05, d = .53, which may indicate a confirmation bias.
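The effect sizes reported above appear consistent with Cohen's d computed from the two group means using an average-variance pooled standard deviation; a quick check (a sketch assuming that formulation, not a claim about the authors' exact analysis code):

```python
import math

def cohens_d(m1, sd1, m2, sd2):
    """Cohen's d with the average-variance pooled SD:
    d = (m1 - m2) / sqrt((sd1^2 + sd2^2) / 2)."""
    pooled_sd = math.sqrt((sd1**2 + sd2**2) / 2)
    return (m1 - m2) / pooled_sd

# Plugging in the means/SDs reported in the abstract:
d_evaluation = cohens_d(1.13, 0.70, 0.63, 0.59)   # correct vs. incorrect suggestions
d_features = cohens_d(39.05, 12.70, 25.25, 13.33) # supportive vs. alternative features
```

Both values round to the reported d = .77 and d = 1.06, respectively.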
Conclusion: These findings demonstrate that the tendency to confirm suggested diagnoses is mediated by a focus on features that support these suggestions. This tendency may lead to diagnostic errors when the suggestion is incorrect.
Background: Expert reasoning strategies for identifying and correctly using the right information and knowledge are currently developed over the course of many years of practice. A formal process for capturing and disseminating expert reasoning could significantly shorten the learning curve and change the diagnostic process. We have begun to apply such a process, called Thought Process Optimization® (TPO), which was previously applied in the financial and engineering domains, to medical reasoning.
Methods: TPO captures and disseminates the reasoning strategies of domain experts, referred to as their Thought Process Models™ (TPMs). Through a series of interviews, information is obtained about how an expert thinks. Following each interview the TPO consultant identifies and adds the expert’s reasoning to the TPM, refines the TPM based on the expert’s feedback and establishes common semantics using a “One-Term One-Meaning” process. Disseminating the expert’s reasoning to learners takes a few hours to a few days. In this feasibility study, the investigators introspectively performed the TPO process on two of their own members: an experienced emergency physician (focusing on initial assessment of patients presenting to the ED) and an experienced internist (focusing on assessment of challenging inpatient presentations).
Results: In the first case study, the emergency medicine TPM was elicited, and was reported to be easily transferred, understood and adopted by medical students and residents who informally reported positive changes in their diagnostic process and immediate improvements to their clinical reasoning. In the second case study, after the internal medicine TPM was developed, the expert engaged in a member verification process and confirmed that the TPM not only accurately characterized his thinking, but also provided insights into his own reasoning processes and how they differ from those of other physicians and the residents he trains. The TPM highlighted the important role of rapid automatic thinking in this expert’s diagnostic reasoning. It emphasized the importance of comparing patient and physician perceptions about the nature and severity of the illness in order to identify cases in which diagnostic errors may be more likely.
Conclusion: Methods developed to capture and disseminate reasoning in other fields can be applied to clinical reasoning in medicine to reduce diagnostic errors. Additional research will replicate these findings more formally. Future development will store each physician’s TPM in a cloud software application where reasoning strategies can be examined, replicated, updated and shared with others regardless of geographical location.
Background: Checklists have been useful in improving safety and reducing error in many arenas. Recent literature suggests that a diagnostic time-out and a diagnostic checklist may reduce diagnostic error. High-fidelity simulation has been used to teach critical thinking skills and cognitive forcing strategies. I hypothesize that by educating learners about diagnostic error, teaching them to take a diagnostic time-out, use a diagnostic checklist (a novel mnemonic, “Investigators THINK”), and practice these skills in a simulated setting, learners will recognize when to question a diagnosis and apply the novel mnemonic to prevent cognitive errors and improve diagnostic accuracy.
Methods: A pilot session with 15 interns and third-year medical students was conducted as a one-hour teaching session. The format had three components: a case-based discussion illustrating premature closure of diagnosis, a didactic session on diagnostic errors, and a high-fidelity simulated scenario. The didactic session also introduced the concepts of a diagnostic time-out and a diagnostic checklist, the novel mnemonic “Investigators THINK”, developed from John Murtagh’s chapter “A Safe Diagnostic Strategy”. Participants were given a cognitive aid for the diagnostic time-out and asked to use the “Investigators THINK” checklist. Four of 15 participants completed all components of the pilot, which used a pre-experimental, one-group pre- and post-test design.
Results: Evaluative measures included surveys before and after the session and quantitative measures of participants’ use of the checklist and their diagnostic accuracy. The 4 participants who completed all components demonstrated a trend toward an increased ability to describe situations that predispose clinicians to making a diagnostic error (p=0.08) and to describe common errors in arriving at a clinical diagnosis (p=0.09). Participants did not achieve diagnostic accuracy in the scenario, but they initially used the “Investigators THINK” mnemonic to expand their differential diagnosis. Participants gave qualitative feedback in the debriefing and post-surveys, and they reported that the session time was too short.
Conclusion: Initial limited pilot data suggest that a diagnostic time-out and diagnostic checklist, the novel mnemonic “Investigators THINK”, may be useful in raising awareness of cognitive error. Next steps include a qualitative analysis of "think-out-loud" video footage from this and subsequent simulated scenarios to look at themes reflective of critical thinking skills. Subsequent sessions will have longer session times, pre-session simulations, and 3-month post-surveys to assess practical use of the THINK checklist. Future work will include a randomized controlled trial to assess the effect on diagnostic accuracy.
Statement of problem: Prompt, appropriate follow-up of cancer-related abnormal test results is essential. Yet critical diagnostic lab and imaging test results do not always receive timely follow-up even when provider notification occurs through electronic health records (EHRs). In order to prevent diagnostic delays in cancer, better functionality is needed in the EHR to support tracking and reminding for follow-up actions.
Description of the intervention or program: We developed a 3-part functional software prototype that works with the VA’s EHR to prompt and track follow-up actions taken in response to certain critical test result alerts: 1) A Follow-up Action Tracker that monitors electronic documentation to see if critical test result alerts for four cancer-related tests (abnormal chest x-rays, PSAs, FOBTs, and mammograms) have received follow-up. The tracker suggests order sets of appropriate follow-up actions in a separate pop-up window, taking care to fit with provider workflow and minimize disruptions. For example, follow-up actions suggested for an abnormal chest X-ray may include patient notification (letter or call), ordering another imaging test (chest CT), consulting a subspecialist (pulmonologist), hospitalization, or an option indicating no further action is required (e.g., patient already in hospice care). 2) A Critical Alert Monitor that automatically identifies the total number of critical test alerts generated and their status, i.e. acknowledged or acted upon. 3) A Critical Alert Reporting Engine that allows clinic administrators and individual providers to visualize detailed information collected by the other two components.
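The three components described above can be summarized as a small tracking data model; the following is a hypothetical sketch of the monitoring logic, with all names and structures illustrative rather than the VA prototype's actual design:

```python
# Hypothetical sketch of critical-alert tracking; not the actual VA software.
from dataclasses import dataclass, field

# The four cancer-related tests named in the abstract.
TRACKED_TESTS = {"chest x-ray", "PSA", "FOBT", "mammogram"}

@dataclass
class CriticalAlert:
    patient_id: str
    test: str
    acknowledged: bool = False
    # Documented follow-up actions, e.g. "patient notified", "chest CT ordered".
    followup_actions: list = field(default_factory=list)

    @property
    def acted_upon(self) -> bool:
        return bool(self.followup_actions)

def alerts_needing_followup(alerts):
    """Critical Alert Monitor role: flag tracked-test alerts that have no
    documented follow-up action, even if they were acknowledged."""
    return [a for a in alerts if a.test in TRACKED_TESTS and not a.acted_upon]

alerts = [
    CriticalAlert("p1", "chest x-ray", acknowledged=True),
    CriticalAlert("p2", "PSA", acknowledged=True,
                  followup_actions=["patient notified by letter"]),
]
overdue = alerts_needing_followup(alerts)  # p1's alert: acknowledged but not acted upon
```

The sketch captures the distinction the abstract draws between an alert that is merely acknowledged and one that is acted upon, which is what lets the Reporting Engine surface patients at risk of missed follow-up.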
Findings to date: System development used a socio-technical approach that involved identification of organizational, workflow, and technical constraints related to test result follow-up. Iterative reviews were conducted with various stakeholders including primary care providers, trainees, safety managers, and IT personnel, informing design revisions. Usability testing with 24 providers showed that the ordering and documentation options window in the Follow-up Action Tracker was easy to use. Providers responded favorably and quickly realized how the new functionality could reduce missed follow-up in the outpatient setting. To ensure the efficacy of the Monitor and Reporting Engine, we conducted usability testing with 9 organizational stakeholders including clinical IT staff, resulting in design improvements.
Lessons learned: High-reliability test result tracking systems are needed to overcome the limitations of current EHRs in ensuring safety of test result follow-up. Such systems need to be developed using a multifaceted socio-technical approach that accounts for technology, workflow, personnel and organization. Once implemented, these systems have the potential to rapidly and efficiently identify patients at risk of harm from diagnostic delays.
Statement of problem: The test menu in the clinical laboratory now includes thousands of assays, and inappropriate selection of coagulation tests is a frequent source of error.
Description of the intervention or program: Beginning July 2010, our hospital instituted a coagulation diagnostic management team (DMT) to provide 1) expert-driven, patient-specific interpretations of test results in the context of the clinical situation and 2) panel testing for comprehensive evaluation of specific disorders resulting in bleeding or thrombosis. To assess impact on patient care, a one-week prospective clinical audit monitored all testing in the esoteric coagulation laboratory. All communications, including corrections of test selection, were recorded. After 30 days, a retrospective review of the medical records was performed for assay results, interpretation, and impact on patient care.
Findings to date: Interpretative reports submitted for 53 patient cases, including more than 250 separate coagulation assays, were reviewed in the first several weeks of DMT activity and before testing panels were widely used by clinicians. Seventy percent of cases were being managed by specialists outside the field of hematology. The most common clinical indications for testing included evaluation for hereditary causes of thrombosis (30.2%), followed by antiphospholipid antibodies (28.3%), von Willebrand disease (17.0%), and disorders of platelet function (9.4%). Thirty-three errors were identified in 31 cases (63%), while 22 cases had no detectable error. Errors in test selection (34%) were most common and were usually attributable to omission of a single assay from a panel. Errors in patient follow-up were also common. At the 30-day follow-up time point, 23% of cases had abnormal results for which the patient’s medical record contained no comment on the interpretation or plan for follow-up, or for which an evidence-based action recommended in the interpretation had not been performed. Sample collection errors also occurred (7.5%). Of the errors noted before the performance of assays (21 of 33), 16 were recognized and corrected by the DMT prior to release of results (76%), including 10 cases of test selection error. In two of these cases, the omitted assay was critical for diagnosis.
Lessons learned: Expert interpretive reports through diagnostic management teams can reduce errors and increase patient safety. The error rate of test selection is high and can be reduced via laboratory monitoring and panel based testing.