AUTOMATING THE QUALITY ASSURANCE REVIEW PROCESS

Monday, October 24, 2011
Poster Board # 1
Innovations in Practice Management

Philip D. Anderson, MD, Marie-France Petchy, MD, Jonathan A. Edlow, MD and Lawrence Mottley, MD, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA

Statement of problem: Quality Assurance (QA) review of patient visits to the emergency department is essential for ensuring that the quality of care meets established standards and is an important mechanism for detecting errors or conditions that predispose to error.  QA review involves several steps: 1) identifying cases for review; 2) distributing cases to appropriate reviewers; 3) reviewing the case; 4) communicating review results back to departmental leadership for discussion and action; and 5) archiving results for retrospective analysis and reporting.  Carrying out these steps manually is labor intensive and, given fixed administrative resources, limits the number of cases that can be reviewed systematically.

Description of the intervention or program: We developed a secure, web-based software interface to automate the QA case review process.  Using predefined search criteria, the application performs nightly queries of the previous day's ED visit log to identify cases for review, which are then randomly assigned to reviewers, who receive email notification of a pending case in their queue.  Reviewers log into the application remotely, review scanned case documentation and OMR records, and score the case on the likelihood of error or adverse events.  Cases with any likelihood of error or adverse events are then discussed at the department's bimonthly QA meeting, where the entire reviewer panel makes a consensus decision on whether errors or adverse events occurred.  All findings are then recorded in the application database for retrospective review and reporting.
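The case-identification and assignment steps described above can be sketched in a few lines of Python.  The flagging criteria and field names below are hypothetical placeholders (the abstract does not specify the actual predefined search criteria), so this is only a minimal illustration of the nightly query-and-assign logic, not the application's implementation:

```python
import random

def flag_for_review(visit):
    """Return True if a visit matches any QA search criterion.

    These criteria are illustrative assumptions; the real application
    uses its own predefined search criteria against the ED visit log.
    """
    return (
        visit.get("returned_within_72h", False)
        or visit.get("admitted_after_discharge", False)
        or visit.get("died_in_ed", False)
    )

def assign_cases(visits, reviewers, rng=None):
    """Randomly assign flagged visits to reviewers.

    Returns a dict mapping each reviewer to a list of assigned visit IDs,
    mirroring the per-reviewer queues described in the abstract.
    """
    rng = rng or random.Random()
    queues = {reviewer: [] for reviewer in reviewers}
    for visit in visits:
        if flag_for_review(visit):
            queues[rng.choice(reviewers)].append(visit["id"])
    return queues
```

In the production workflow, each assignment would additionally trigger the email notification placing the case in the reviewer's queue.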

Findings to date: Since implementation in July 2008, a total of 1447 cases have been reviewed.  Implementation of the automated QA review system has resulted in greater standardization of QA review processes, whereby all cases receive formal scoring on the likelihood of error and adverse events by an individual reviewer and then a final determination by the consensus panel.  The tasks performed by the application are estimated to have saved 8 hours of work per week by administrative support staff alone, and they have increased the efficiency of reviewer activities by allowing remote electronic reviews instead of paper-based reviews.  The electronic storage of all review findings is building a research database that will provide additional benefits as the number of cases reviewed increases.

Lessons learned: Many QA review processes can be successfully automated, thereby increasing the standardization of reviews, using administrative resources more efficiently, and facilitating research into factors associated with errors.