
Monday, 24 October 2005 - 1:30 PM

AUTOMATION BIAS IN MEDICAL DECISION MAKING: A STUDY WITH INCORRECT COMPUTER PROMPTING IN BREAST CANCER SCREENING

Eugenio Alberdi, PhD, Andrey Povyakalo, PhD, Lorenzo Strigini, MEng, and Peter Ayton, PhD. City University, London, United Kingdom

PURPOSE: To investigate the effects of inappropriate computer advice on the decisions of clinicians. The investigation concerns Computer Aided Detection (CAD) for mammography. To reduce the likelihood of oversights by mammogram readers, CAD tools place marks ("prompts") on features of mammograms that are potential indicators of cancer. The study focused on cases in which the computer failed to detect cancers, either placing prompts elsewhere in the mammogram or placing no prompt at all.

METHOD: 39 experienced mammogram readers (radiologists, radiographers and breast clinicians) were asked to examine the mammograms of 60 patients and to decide whether each patient should be recalled for further investigation. In the experimental condition, 20 of the participants examined the mammograms with computer aid; in the control condition, the remaining 19 participants saw the same cases but without computer aid. The mammograms of 30 of the patients contained signs of cancer and the other 30 were “normal cases” (no signs of cancer). The test set was designed to contain a large proportion (50%) of cancers for which the computer had generated incorrect output.

RESULTS: The average proportion of cancers recalled by the participants in the experimental condition (using CAD) was significantly lower than the average proportion recalled by the control group (52% vs. 68%; ANOVA, p<0.001). The difference was most marked for cancer cases for which the computer generated no prompts (46% vs. 88%; ANOVA, p<0.000001).
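With only two groups (CAD vs. control), the ANOVA comparison reported above reduces to a one-way F test whose statistic equals the square of the two-sample t statistic. The sketch below, in pure Python, shows that calculation on invented per-reader recall proportions; the numbers are hypothetical placeholders for illustration only, not the study's data.

```python
def one_way_anova_f(group_a, group_b):
    """F statistic for a two-group one-way ANOVA (df = 1, n_a + n_b - 2)."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    grand = (sum(group_a) + sum(group_b)) / (n_a + n_b)
    # Between-group sum of squares (1 degree of freedom for two groups).
    ss_between = n_a * (mean_a - grand) ** 2 + n_b * (mean_b - grand) ** 2
    # Within-group sum of squares, pooled over both groups.
    ss_within = (sum((x - mean_a) ** 2 for x in group_a)
                 + sum((x - mean_b) ** 2 for x in group_b))
    return (ss_between / 1) / (ss_within / (n_a + n_b - 2))

# Hypothetical per-reader cancer-recall proportions (values invented for
# illustration; the abstract reports group means of 52% and 68%).
cad_readers = [0.50, 0.47, 0.55, 0.52, 0.56]
control_readers = [0.70, 0.65, 0.66, 0.71, 0.68]
f_stat = one_way_anova_f(cad_readers, control_readers)
# The p-value would come from an F(1, n_a + n_b - 2) distribution.
```

In practice the p-value is obtained from the F distribution (e.g. via a statistics package); the pure-Python version here only computes the statistic itself.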

CONCLUSIONS: Our results strongly suggest that the readers using CAD were biased by the incorrect computer output. The effects are similar to those reported in the human factors literature as "automation bias": situations in which, because an automated tool fails to flag an event, a person also fails to take appropriate action. While these effects are typically reported in studies conducted with students in laboratory settings, our participants were experts working in a relatively realistic setting relevant to their area of expertise. In our study, the participants using CAD appeared to use the absence of computer prompts on ambiguous cases as support for "no-cancer" decisions. Arguably, this is a rational strategy for decisions about normal cases, since unprompted cases are nearly always normal. However, it can damage decisions for unprompted, difficult-to-detect cancers, with potentially serious consequences for patients.


Oral Concurrent Session M - Technology Assessment
The 27th Annual Meeting of the Society for Medical Decision Making (October 21-24, 2005)