Monday, 24 October 2005
TAMING TABLE 1: MAKING MODELING ASSUMPTIONS MORE TRANSPARENT THROUGH A SENSITIVITY ANALYSIS BASED ON STRENGTH OF EVIDENCE

R. Scott Braithwaite, MD, MSc, Yale University / VA Connecticut Healthcare System, West Haven, CT, Amy C. Justice, MD, PhD, Yale University / VA Connecticut Healthcare System, West Haven, CT, and Mark S. Roberts, MD, MPP, University of Pittsburgh, Pittsburgh, PA.

PURPOSE: Policy makers often have difficulty judging the validity of modeling assumptions and are therefore reluctant to translate model results into practice. We propose a method that makes explicit the potential tradeoff between strength of evidence and precision of results.

METHOD: For illustrative purposes, we created a simple 9-parameter Monte-Carlo simulation of an HIV adherence intervention based on hypothetical data. Data sources were included only if they met or exceeded validity criteria; otherwise, uniform distributions over plausible ranges were substituted. We varied the validity criteria using definitions concordant with the evidence-based medicine literature. We assumed that 5 parameters had excellent internal validity (randomized controlled data), 3 had good internal validity (observational data), and 0 had fair internal validity (expert opinion). We assumed that 5 parameters had excellent external validity (identical population or pooled estimates from similar populations), 1 had good external validity (single estimate from a similar population), and 2 had fair external validity (expert opinion or estimate from a dissimilar population).
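The core of the method is the substitution rule: parameters meeting the current validity criterion keep their data-derived estimate, while the rest are drawn uniformly over plausible ranges on each simulation run. A minimal sketch of that rule follows; the parameter names, values, and validity grades here are illustrative assumptions, not the authors' actual model inputs.

```python
import random

# Hypothetical parameters for illustration only. Each carries an
# internal/external validity grade and a plausible range used when it
# fails the validity criterion.
PARAMS = {
    "adherence_gain":    {"estimate": 0.15,   "range": (0.0, 0.3),
                          "internal": "excellent", "external": "excellent"},
    "baseline_qaly":     {"estimate": 6.0,    "range": (4.0, 8.0),
                          "internal": "good",      "external": "fair"},
    "intervention_cost": {"estimate": 1200.0, "range": (500.0, 3000.0),
                          "internal": "good",      "external": "good"},
}

GRADES = {"fair": 0, "good": 1, "excellent": 2}

def draw(params, min_grade, rng):
    """Draw one Monte-Carlo realization: a parameter whose internal AND
    external validity meet the criterion keeps its point estimate; any
    other parameter is drawn uniformly over its plausible range."""
    out = {}
    for name, p in params.items():
        meets = (GRADES[p["internal"]] >= GRADES[min_grade]
                 and GRADES[p["external"]] >= GRADES[min_grade])
        out[name] = p["estimate"] if meets else rng.uniform(*p["range"])
    return out
```

Under the least stringent criterion ("fair"), every parameter keeps its estimate; under the most stringent ("excellent"), only the fully validated parameters do, so the simulated outcomes spread out accordingly.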

RESULTS: With the least stringent validity criteria (fair or better internal and external validity), 92.7% of simulations showed a positive effect, and 0%, 84.9%, and 85.4% of simulations were cost-effective at willingness-to-pay thresholds of $20,000/QALY, $50,000/QALY, and $100,000/QALY, respectively. With the most stringent validity criteria (excellent internal and external validity), the confidence ellipse widened (Figure), only 34.1% of simulations showed a positive effect, and only 26.3%, 27.3%, and 31.4% of simulations were cost-effective at the same thresholds.

CONCLUSIONS: The accuracy and precision of model results may vary dramatically depending upon the stringency of validity criteria for model inputs. Making this uncertainty more explicit may help policy makers know whether to “trust the model.”


Poster Session III, The 27th Annual Meeting of the Society for Medical Decision Making (October 21-24, 2005)