Tuesday, October 25, 2016: 10:30 AM - 12:00 PM
Bayshore Ballroom Salon F, Lobby Level (Westin Bayshore Vancouver)

Mark S. Roberts, MD, MPH
University of Pittsburgh School of Medicine

10:30 AM

Yao-Hsuan Chen, Ph.D.1, Daniel Brachey, B.S.2, Matthew Farkas, B.S.2, Shabbir Ahmed, Ph.D.3, Joel Sokol, Ph.D.2, Paul G. Farnham, Ph.D.1, Brian M. Gurbaxani, Ph.D.1 and Stephanie L. Sansom, Ph.D., MPP, MPH1, (1)Centers for Disease Control and Prevention, Atlanta, GA, (2)School of Industrial & Systems Engineering, Georgia Institute of Technology, Atlanta, GA, (3)School of Industrial & Systems Engineering, Atlanta, GA


Modelers can improve unsatisfactory model calibration results by changing the requirements for the model parameters or outcomes to be calibrated, but doing so can increase model uncertainty. In this study, we show that implementing the Optimization algorithm-based Calibration Approach (OCA) can significantly improve calibration results without increasing model uncertainty.


Using OCA, modelers first transform a calibration task into an optimization problem. In the problem, the objective function quantifies the calibration gap (distance between model outcomes and calibration targets), and constraint functions enforce calibration requirements, such as maintaining feasible bounds of calibration parameters. Modelers then choose an optimization algorithm to minimize the calibration gap, keeping all feasible calibration sets for uncertainty analysis, and report the optimal calibration set for future base-case analysis.
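The procedure above can be sketched as a constrained minimization. Everything in this sketch (the stand-in model, the three targets, the parameter bounds, and the choice of L-BFGS-B) is illustrative, not the authors' implementation:

```python
# Hypothetical OCA sketch: minimize a calibration gap subject to feasible
# parameter bounds. The model and targets below are illustrative stand-ins.
import numpy as np
from scipy.optimize import minimize

TARGETS = np.array([1.2e6, 4.5e4, 1.6e4])  # illustrative prevalence, incidence, deaths

def run_model(params):
    # Stand-in for the simulation model: maps calibrated parameters
    # to the model outcomes being fit to targets.
    return TARGETS * (1.0 + 0.1 * np.tanh(params - 0.5))

def calibration_gap(params):
    # Objective function: distance between model outcomes and calibration
    # targets, here a sum of squared relative deviations.
    outcomes = run_model(params)
    return float(np.sum(((outcomes - TARGETS) / TARGETS) ** 2))

# Calibration requirements reduce here to box bounds keeping parameters feasible.
bounds = [(0.0, 1.0)] * 3
x0 = np.full(3, 0.25)  # starting point for the search

result = minimize(calibration_gap, x0, method="L-BFGS-B", bounds=bounds)
```

In a real application the objective could weight outcomes unequally, and every feasible parameter set evaluated along the way would be stored for the uncertainty analysis the abstract describes.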

We followed this procedure to calibrate the HIV Optimization and Prevention Economics (HOPE) model, a compartmental model of HIV disease progression and transmission in the United States. We calibrated 123 of 870 parameters to fit three model outcomes (prevalence, incidence, and deaths) to their targets. We compared differences in calibration precision when the standard OCA procedure was altered by the choice of (1) the optimization algorithm—pattern search (PS) versus simulated annealing (SA); (2) the search starting point—a random point versus a good point (one with a low objective function value); and (3) the search strategy—a general search over all calibrated parameters versus a concentrated search over prioritized parameters. We used the best calibration solution from the Latin hypercube algorithm and one-way sensitivity analysis results around this solution to inform (2) and (3), respectively.
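As one illustration of choice (3), a concentrated search can restrict the optimization to the parameters flagged as influential by a one-way sensitivity analysis, holding the rest at their incumbent values. The toy gap function, weights, and incumbent solution below are hypothetical, and SciPy's `dual_annealing` stands in for the SA algorithm:

```python
# Hypothetical sketch of a concentrated search: optimize only the prioritized
# parameters (indices 0 and 1) while the rest stay at incumbent values.
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(0)
weights = np.array([5.0, 4.0, 0.1, 0.1, 0.1])  # influence of each parameter on the gap

def gap(params):
    # Toy calibration gap, dominated by the first two parameters.
    return float(np.sum(weights * (params - 0.5) ** 2))

incumbent = rng.uniform(0.0, 1.0, size=5)  # best solution from a prior LHS search

def concentrated_gap(sub):
    # Evaluate the gap with only the prioritized parameters varied.
    params = incumbent.copy()
    params[[0, 1]] = sub
    return gap(params)

# Simulated annealing over the reduced 2-dimensional search space,
# started from the incumbent values of the prioritized parameters.
res = dual_annealing(concentrated_gap, bounds=[(0.0, 1.0)] * 2,
                     x0=incumbent[[0, 1]], maxiter=200, seed=1)
```

The reduced dimensionality is what lets the search spend its fixed time budget where the one-way sensitivity analysis says it matters most.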


Table 1 summarizes the results of running each of the six OCA settings for no more than 48 hours to solve the calibration problem. Although the choice of optimization algorithm and starting point did not appear to significantly affect calibration precision, the concentrated search strategy, informed by the one-way sensitivity analysis of calibrated parameters, significantly improved calibration performance, decreasing the gaps between model outcomes and their corresponding targets relative to the standard OCA by 61% ((4.75-1.85)/4.75) for PS and 51% ((5.68-2.81)/5.68) for SA.


Modelers should explore how different OCA options can close the calibration gap before resorting to relaxing calibration requirements, such as altering the model outcome target values.

10:45 AM

Stavroula Chrysanthopoulou, PhD, University of Massachusetts Medical School, N. Worcester, MA

The purpose of this study is to discuss statistical methods for calibrating and assessing the predictive accuracy of continuous-time, dynamic microsimulation models (MSMs) used in medical decision making.


We apply two fundamental approaches, a Bayesian and an Empirical one, to calibrate the Microsimulation Lung Cancer (MILC) model, a streamlined MSM that describes the natural history of lung cancer and predicts important outcomes such as lung cancer incidence and mortality. We compare the two approaches in terms of their theoretical properties, the overlap in the resulting parameter values, and the validity of the predictions of the final calibrated model each one produces.
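A Bayesian calibration step of the kind described here can be sketched with rejection-style approximate Bayesian computation. The toy model, target value, and tolerance are illustrative stand-ins, not the MILC model:

```python
# Hypothetical sketch of Bayesian calibration by rejection ABC: draw parameters
# from a prior and keep those that reproduce the target within tolerance.
import numpy as np

rng = np.random.default_rng(3)
target = 60.0  # illustrative calibration target (e.g., incidence per 100,000)

# Prior: parameter drawn uniformly; toy model: outcome = 100*theta plus noise.
prior_draws = rng.uniform(0.0, 1.0, size=20_000)
outcomes = 100.0 * prior_draws + rng.normal(0.0, 2.0, size=prior_draws.size)

# Accepted draws form an approximate posterior over the calibrated parameter.
accepted = prior_draws[np.abs(outcomes - target) < 2.0]
posterior_mean = accepted.mean()
```

Unlike a single best-fitting point, the accepted draws carry the parameter uncertainty forward into the model's predictions, which is the usual argument for the Bayesian route when targets involve rare outcomes.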

Furthermore, we discuss statistical methods for an important yet rather overlooked aspect of MSMs, namely the assessment of the predictive accuracy of this type of model. In particular, we run a simulation study to compare the individual predictions of the calibrated MILC model with simulated outcomes using C-statistics, a group of methods widely used for assessing the predictive accuracy of survival models. We also compare the performance of C-statistics with that of other methods aimed at testing the deviations of the survival distributions predicted by the calibrated MSM from the simulated truth.
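Harrell's C-statistic, the most common member of this family, can be computed directly as the fraction of comparable pairs whose predicted risks are ordered consistently with their observed survival times. The data below are hypothetical:

```python
# Minimal sketch of Harrell's C-statistic (concordance index) for
# right-censored survival predictions; data are illustrative.
import numpy as np

def concordance_index(event_time, event_observed, predicted_risk):
    """Fraction of comparable pairs whose predicted risks agree with the
    ordering of observed survival times (ties in risk count 0.5)."""
    n = len(event_time)
    concordant, comparable = 0.0, 0
    for i in range(n):
        for j in range(n):
            # A pair is comparable if subject i has the earlier observed event.
            if event_observed[i] and event_time[i] < event_time[j]:
                comparable += 1
                if predicted_risk[i] > predicted_risk[j]:
                    concordant += 1.0
                elif predicted_risk[i] == predicted_risk[j]:
                    concordant += 0.5
    return concordant / comparable

times = np.array([2.0, 5.0, 7.0, 9.0])
events = np.array([1, 1, 0, 1])      # 0 marks a censored observation
risks = np.array([0.9, 0.6, 0.4, 0.1])  # shorter survival -> higher predicted risk
c = concordance_index(times, events, risks)  # perfectly ordered: C = 1.0
```

Because C depends only on the *ordering* of risks, it can miss systematic shifts in predicted survival times, which is consistent with the insensitivity reported below.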


While Empirical Calibration methods prove more efficient, Bayesian methods seem to perform better, especially when calibration targets involve rare outcomes. C-statistics are not very sensitive to deviations of the individual predictions from the simulated truth. Methods based on comparing predicted with observed survival distributions prove more effective for assessing the predictive accuracy of continuous-time MSMs.


An effective calibration procedure for an MSM should combine an Empirical approach, for more efficient specification of plausible values for the model parameters, with a Bayesian method that provides more accurate results by choosing appropriate starting values from the previously defined ranges. In addition, techniques based on comparing predicted with observed survival curves seem to outperform C-statistics with regard to assessing the accuracy of the individual predictions obtained from a continuous-time MSM.

11:00 AM

Christoph Zimmer, PhD, Reza Yaesoubi, PhD and Ted Cohen, DPH, MD, MPH, Yale School of Public Health, New Haven, CT


During the period of initial emergence of novel pathogens, accurate estimation of key epidemic parameters (such as the expected number of secondary cases) is challenging because observed metrics (e.g., the number of pathogen-associated hospitalizations) only partially reflect the true state of the epidemic. Stochastic transmission dynamic models are especially useful for guiding decisions during the emergence of novel pathogens, given the importance of chance events and of fluctuations in observations when the number of infectious individuals is small. Our goal is to develop and evaluate a method for real-time calibration of stochastic compartmental models using observed, but likely imperfect, epidemic data.
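The role of chance events when few individuals are infectious can be illustrated with a minimal Gillespie-style stochastic SIR simulation; all parameter values here are illustrative, not estimates for any real pathogen:

```python
# Minimal Gillespie simulation of a stochastic SIR outbreak: with the same
# R0, some runs die out early purely by chance. Parameters are illustrative.
import numpy as np

def gillespie_sir(beta, gamma, n, i0, t_max, rng):
    s, i, t = n - i0, i0, 0.0
    history = [(t, i)]
    while i > 0 and t < t_max:
        infection_rate = beta * s * i / n
        recovery_rate = gamma * i
        total = infection_rate + recovery_rate
        t += rng.exponential(1.0 / total)           # time to next event
        if rng.uniform() < infection_rate / total:  # which event occurs
            s, i = s - 1, i + 1
        else:
            i -= 1
        history.append((t, i))
    return history

rng = np.random.default_rng(42)
# With R0 = beta/gamma = 2 and only 2 initial cases, a sizable fraction of
# runs go extinct before taking off, while others produce large epidemics.
runs = [gillespie_sir(beta=0.4, gamma=0.2, n=1000, i0=2, t_max=100.0, rng=rng)
        for _ in range(200)]
extinct_early = sum(1 for h in runs if max(i for _, i in h) < 10)
```

This spread of outcomes from identical parameters is exactly why deterministic fits are unreliable early in an outbreak and a stochastic treatment of the observations is needed.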


We develop a calibration method, called Multiple Shooting for Stochastic systems (MSS), that seeks to maximize the likelihood of the epidemic observations. MSS applies a linear noise approximation to describe the size of the fluctuations and uses each new surveillance observation to update the belief about the true epidemic state. Using simulated novel viral pathogen outbreaks (Figure A), we evaluate our method's performance throughout epidemics of various magnitudes and host population sizes. In this analysis, we assume that the weekly number of newly diagnosed cases is available and serves as an imperfect proxy of disease incidence. We further compare the performance of MSS to that of three state-of-the-art and commonly used benchmark methods: Method A, a likelihood approximation with an assumption of independent Poisson observations; Method B, a particle filter method; and Method C, an ensemble Kalman filter method. We use the Wilcoxon signed-rank test to evaluate the hypothesis that the median of the relative errors for MSS is smaller than that of each benchmark method.
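The paired comparison in the final step can be sketched with SciPy's Wilcoxon signed-rank test; the relative-error values below are synthetic stand-ins, not the study's results:

```python
# Hypothetical sketch: one-sided Wilcoxon signed-rank test on paired relative
# errors from the same simulated scenarios. Values are synthetic.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(7)
scenarios = 30
# Paired relative errors: each scenario is evaluated by both methods.
errors_mss = rng.uniform(0.02, 0.10, size=scenarios)
errors_benchmark = errors_mss + rng.uniform(0.01, 0.08, size=scenarios)

# H1: the median of the paired differences (MSS minus benchmark) is negative,
# i.e., MSS errors are systematically smaller.
stat, p_value = wilcoxon(errors_mss, errors_benchmark, alternative="less")
```

The test is a natural choice here because the errors are paired by scenario and no normality assumption is needed.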


Our results (Figures B-D) show that MSS produces accurate estimates of the basic reproductive number R0, the effective R0, and the unobserved number of infectious individuals throughout epidemics. MSS also allows accurate prediction of the number and timing of future cases and of the overall attack rate (Figures E-F). The p-values displayed in Figures B-F confirm that, for the majority of the scenarios studied here, MSS statistically outperforms the three competing benchmark methods.


MSS improves on current approaches for model-based parameter estimation and prediction for epidemics and may thus allow policy makers to respond more effectively and use resources more efficiently in the face of emerging epidemic threats.