A QUALITATIVE REVIEW OF DIFFERENTIAL DIAGNOSIS GENERATORS

Monday, October 25, 2010
Vide Lobby (Sheraton Centre Toronto Hotel)
William F. Bond1, Linda M. Schwartz1, Kevin R. Weaver1, Donald Levick1, Michael Giuliano2 and Mark L. Graber3, (1)Lehigh Valley Health Network, Allentown, PA, (2)Hackensack University Medical Center, Hackensack, NJ, (3)VA Medical Center, Northport, NY

Background:  Differential diagnosis (DDX) generators have existed for some time, but they have not been widely adopted in practice.  We identified currently available DDX generators and described their features.

Methods:  We performed a Google search and a literature search using Medical Subject Headings (MeSH) and keywords to identify programs that qualify as differential diagnosis generators.  Through consensus, the author group identified four factors critical for a differential diagnosis generator to be useful.  First, the program needed to present a list of potential diagnoses rather than text or article references.  Second, the program needed to rank diagnoses or indicate critical diagnoses that must be considered or eliminated.  Third, the program needed to accept at least two signs, symptoms, or disease characteristics.  Finally, the program needed to allow comparison of the clinical presentations of the diagnoses it generated.  The study was limited to programs that provide diagnoses in general medicine and that were developed for use by healthcare professionals (HCPs), not patients or consumers; programs focused on a single disease process or clinical specialty were excluded.  Qualitative evaluation criteria were agreed upon by consensus before the programs were evaluated.

Results:  Eleven programs were excluded because of a specialty-specific focus.  Another seven programs were excluded after initial review because they could not compare diagnoses, accept at least two symptoms or characteristics, or rank diagnoses, or because they were simply static tree structures with cross-linking of internal reference points.  Five programs were reviewed fully against the following evaluation criteria: cost model; electronic health record (EHR) integration; input method; mobile access; filtering and refinement; screen display (output); lab values, medications, and geography as diagnostic factors; evidence-based medicine (EBM) content; outcome measures; references; drug information content source; updating frequency; usage tracking ability; and availability of continuing medical education credits for use.  When information was not available to the end user, the company producing the software was queried for clarification.

Conclusion:  The programs were useful in presenting and ranking possible diagnoses.  Links to both EBM and non-EBM content were plentiful.  Our ability to test EHR integration was limited.  The DDX generators should prove to be helpful teaching tools.  Use in practice will depend on EHR integration and the number of false alarms generated.