
Tuesday, 17 October 2006 - 11:45 AM


Jeremy D. Goldhaber-Fiebert, AB1, Jane J. Kim, PhD2, Karen M. Kuntz, ScD2, Joshua A. Salomon, PhD3, Natasha K. Stout, PhD2, Henri Folse, AM2, and Sue J. Goldie, MD, MPH2. (1) Harvard University, Boston, MA, (2) Harvard School of Public Health, Boston, MA, (3) Harvard School of Public Health, Cambridge, MA

Purpose: To calibrate a US-based, natural history model of human papillomavirus (HPV) and cervical cancer.

Methods: We developed a microsimulation model of the natural history of HPV infection, multiple levels of precancerous lesions, and invasive cervical cancer. The model incorporated multiple HPV types characterized by prevalence, incidence, persistence, acquired immunity, and oncogenicity. Because of multiple uncertain or unobserved input parameters and sparse longitudinal data on outcomes, we reviewed published literature and study data and performed random-effects meta-analyses to generate combined estimates and confidence intervals for a variety of epidemiologic outcomes to be used as calibration targets. We randomly sampled input parameter sets from joint uniform distributions, used the model to predict outcomes based on these inputs, and compared the outcomes to our calibration targets. The goodness of fit (GOF) score of each input set was based on the distance of each output from its respective target's combined mean relative to the width of the 95% confidence interval. Using a chi-square test (p=0.05) to compare a given GOF score to the best GOF score, we identified multiple 'good-fitting' parameter combinations that were not statistically distinguishable from the best-fitting set. We repeated the calibration procedure multiple times to assess the impact of Monte Carlo noise on both the choice of good-fitting sets and key model-predicted outcomes.

Results: More than 50 sources provided data for the meta-analyses used to estimate combined targets and confidence intervals. Over 600,000 candidate input parameter sets were generated and simulated with 100,000 individuals each. Model results from each parameter set were compared to the appropriate targets. Approximately 500 good-fitting parameter sets were identified. We found that Monte Carlo noise had only a limited effect on which input sets were designated as good-fitting but had a greater effect on model-predicted rare events.

Conclusions: This calibration method demonstrates that it is possible to identify multiple parameter sets that produce outcomes with reasonable fidelity to observed data in the US. The multiple sets identified can be used to explore the effects of joint parameter uncertainty on outcomes in analyses of effectiveness and cost-effectiveness.

See more of Concurrent Abstracts H: Simulation and Modeling
See more of The 28th Annual Meeting of the Society for Medical Decision Making (October 15-18, 2006)