METHODS TO ANALYZE AND QUANTIFY UNCERTAINTY AND SENSITIVITY IN MATHEMATICAL MODELS TO AID DECISION MAKING IN HEALTH AND MEDICINE
Radboud J. Duintjer Tebbens, PhD1, Kimberly Thompson, ScD2, M.G. Myriam Hunink, PhD, MD3, Thomas A. Mazzuchi, PhD1, Daniel Lewandowski1, Dorota Kurowicka1, and Roger M. Cooke1. (1) Delft University of Technology, Delft, Netherlands, (2) Harvard University, Boston, MA, (3) Erasmus Medical Center, Rotterdam, Netherlands
Background: Mathematical models provide helpful tools to inform decision makers facing choices between public health interventions. However, model inputs typically remain uncertain, and for a model to be most informative, analysts should strive to factor in uncertainty rather than present single point estimates. Many methods exist to analyze and quantify uncertainty in model outputs and to identify the inputs that drive it. However, applying these methods requires an understanding that different methods may yield different insights, and that the way model input uncertainty is characterized may also influence the results. To demonstrate the important considerations in choosing a method, we review and discuss a number of useful methods for uncertainty and sensitivity analysis.

Methods: We apply several methods for uncertainty and sensitivity analysis to an existing dynamic economic evaluation model of a hypothetical vaccination program. These include partial derivatives, one-way and multi-way sensitivity analyses, design-of-experiments analyses (yielding main effects and interactions), Morris' method (yielding mean elementary effects), and probabilistic analyses (yielding different regression- and correlation-based measures of sensitivity).

Results: Of the 10 methods we applied to the model assuming independent uniform distributions for the inputs, only 3 yielded identical input importance rankings (i.e., the average effect, product moment correlations, and the correlation ratio), with 4 other methods yielding somewhat different rankings (i.e., one-way effects, main effects, mean elementary effects, and rank correlations) and partial derivative-based measures yielding completely different rankings. Changing the choice of input distributions or adding dependence between inputs also altered the rankings.
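To illustrate how such methods can produce different importance rankings, the sketch below computes three of the measures named above (one-way effects, Morris' mean elementary effects, and Monte Carlo correlation-based importance with independent uniform inputs) for a small toy model. The model function, input bounds, base-case values, and sample sizes are hypothetical stand-ins chosen for illustration; they are not the vaccination model analyzed in this work.

```python
import math
import random

def model(x1, x2, x3):
    # Hypothetical toy output with linear, nonlinear, and interaction terms;
    # NOT the abstract's actual dynamic vaccination model.
    return 3.0 * x1 + x2 ** 2 + 0.5 * x1 * x3

BASE = {"x1": 0.5, "x2": 0.5, "x3": 0.5}          # assumed base-case values
BOUNDS = {"x1": (0.0, 1.0), "x2": (0.0, 1.0), "x3": (0.0, 1.0)}

def one_way_effect(name):
    """Output swing as one input moves across its range, others at base case."""
    lo, hi = BOUNDS[name]
    args_lo = dict(BASE); args_lo[name] = lo
    args_hi = dict(BASE); args_hi[name] = hi
    return abs(model(**args_hi) - model(**args_lo))

def morris_mu(name, r=200, delta=0.1, seed=1):
    """Mean absolute elementary effect of one input over r random base points."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(r):
        pt = {k: rng.uniform(*BOUNDS[k]) for k in BASE}
        y0 = model(**pt)
        perturbed = dict(pt)
        # Step up by delta, or down if that would leave the input's range.
        if pt[name] + delta <= BOUNDS[name][1]:
            perturbed[name] = pt[name] + delta
        else:
            perturbed[name] = pt[name] - delta
        step = perturbed[name] - pt[name]
        total += abs((model(**perturbed) - y0) / step)
    return total / r

def pearson(xs, ys):
    """Product moment correlation between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n; my = sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / math.sqrt(vx * vy)

def probabilistic_importance(n=2000, seed=2):
    """Monte Carlo sample with independent uniform inputs; score each input
    by the absolute product moment correlation with the output."""
    rng = random.Random(seed)
    samples = {k: [] for k in BASE}
    outputs = []
    for _ in range(n):
        pt = {k: rng.uniform(*BOUNDS[k]) for k in BASE}
        for k in BASE:
            samples[k].append(pt[k])
        outputs.append(model(**pt))
    return {k: abs(pearson(samples[k], outputs)) for k in BASE}
```

For this simple toy model all three measures happen to rank x1 above x2 above x3; in a realistic model such as the one analyzed here, the rankings can diverge across methods, which is the point the Results illustrate.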
Conclusions: The choice of uncertainty and sensitivity analysis method may affect the insights gained from the analysis. Considerations and constraints in choosing an appropriate method include the desired type of insights (e.g., importance rankings of individual inputs, understanding of interactions and non-linearities in the model, and/or a sense of the model's robustness), the complexity of the model, and the number of uncertain inputs.