
Saturday, 22 October 2005

DEVELOPMENT OF A GRAPHIC TOOL TO ASSESS THE INFLUENCE OF HISTORICAL DATA IN BAYESIAN ESTIMATION OF DIAGNOSTIC TEST PARAMETERS

Stephane Guillossou, DVM, Muriel Rabilloud, MD, PhD, and René Ecochard, MD, PhD. Hospices Civils de Lyon, Lyon, France

Purpose: Decision making often relies, at least in part, on the interpretation of diagnostic tests. Unfortunately, it is well established that no perfect reference test exists. The Bayesian approach to latent class analysis is one of several statistical methods proposed to estimate test parameters such as sensitivity and specificity. One critical step of this approach is the determination of informative priors. These have to be identified and built from historical data in order to avoid bias in estimation and inference. However, the weight of these priors cannot easily be assessed.

Methods: A prior distribution may be seen as the product of a non-informative prior distribution and the likelihood of the historical data. The power-prior approach consists of raising the likelihood of the historical data to a power varying from zero to one in order to vary the weight of the historical data in the overall prior distribution. Existing data from a comparative study of two diagnostic tests were used to study the influence of the power prior on the mean estimates and credibility intervals of the population prevalence as well as the sensitivities and specificities of the two tests. These two tests measure the titer of neutralizing antibody against rabies virus in vaccinated dogs and cats in order to identify which animals are protected. Simulations creating hypothetical data sets with different prevalences were also used.

Results: Posterior parameter distributions, described by their means and credibility intervals, were estimated as a function of different power priors. The resulting curves were used to determine the cutoff at which the power prior affects posterior estimates by more than ±5% compared with the values obtained with a power prior of zero. Cutoffs for the different parameters were compared. The available data set, confirmed by the simulations, shows that an unbalanced data set can make the estimate of one test parameter very sensitive to prior information while the others are not.

Conclusions: This original approach proves to be a useful and easy-to-handle tool for assessing the weight of historical data in the Bayesian estimation of diagnostic test performance. The tool is sensitive enough to highlight the parameters that would be moderately to strongly influenced by historical data. Critical informative prior distributions can then be easily identified and carefully chosen using literature review and expert knowledge.
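The power-prior construction described in the Methods can be written compactly as below. This is the standard formulation of a power prior; the symbols (theta for the model parameters, D_0 for the historical data, pi_0 for the non-informative initial prior, a_0 for the power) are shorthand introduced here, not notation taken from the abstract.

\[
\pi(\theta \mid D_0, a_0) \;\propto\; L(\theta \mid D_0)^{\,a_0}\, \pi_0(\theta), \qquad a_0 \in [0, 1],
\]

so that a_0 = 0 discards the historical data entirely and a_0 = 1 gives it the full weight of a conventional informative prior built from D_0.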
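As an illustration of the sensitivity analysis described in the Results, the sketch below sweeps the power parameter for a single proportion (for example, the prevalence) under a conjugate beta-binomial model and reports the smallest power at which the posterior mean shifts by more than 5% relative to the power-of-zero estimate. It is a minimal sketch with made-up counts, not the two-test latent class model actually fitted in the study.

import numpy as np

# Hypothetical historical and current data for a single proportion:
# these counts are illustrative assumptions, not the study's data.
x0, n0 = 30, 100   # historical positives / historical sample size
x, n = 12, 60      # current positives / current sample size

def posterior_mean(a0, x0, n0, x, n):
    """Posterior mean of a proportion under a Beta(1, 1) initial prior,
    a power prior that downweights the historical likelihood by a0,
    and a binomial likelihood for the current data (conjugate update)."""
    alpha = 1 + a0 * x0 + x
    beta = 1 + a0 * (n0 - x0) + (n - x)
    return alpha / (alpha + beta)

# Sweep the power from 0 (ignore historical data) to 1 (full weight).
a0_grid = np.linspace(0.0, 1.0, 101)
means = np.array([posterior_mean(a0, x0, n0, x, n) for a0 in a0_grid])

# Cutoff: smallest power at which the posterior mean shifts by more than
# 5% relative to the estimate obtained with a power prior of zero.
reference = means[0]
shifted = np.abs(means - reference) / reference > 0.05
cutoff = a0_grid[shifted][0] if shifted.any() else None
print(f"posterior mean at a0 = 0: {reference:.3f}")
print(f"cutoff power prior (>5% shift): {cutoff}")

In the full two-test latent class model the posterior is not available in closed form, so the same sweep would presumably be carried out by rerunning an MCMC fit at each value of the power and reading the cutoff off the resulting curves of means and credibility intervals.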
