PS2-51 CHARACTERIZING MODEL UNCERTAINTY IN META-ANALYSIS USING A BAYESIAN METHOD

Monday, October 24, 2016
Bayshore Ballroom ABC, Lobby Level (Westin Bayshore Vancouver)
Poster Board # PS2-51

Chun-Po Fan, PhD, The Hospital for Sick Children, Toronto, ON, Canada, Jeffrey Hoch, PhD, Centre for Excellence in Economic Analysis Research, Li Ka Shing Knowledge Institute, St. Michael's Hospital, Toronto, ON, Canada and George Tomlinson, PhD, Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada

Purpose: The usual practice for characterizing model uncertainty is to present the results of alternative models. Such a practice amounts to a one-way sensitivity analysis, which at best provides a qualitative description of the impact of model uncertainty. We propose a Markov chain Monte Carlo (MCMC) algorithm, grounded in Bayesian statistics, to assess and quantify the risks associated with statistical model choices in meta-analysis.

Method: Following the framework for addressing structural uncertainty in decision models proposed by Jackson et al. (2011), model uncertainty is explicitly parameterized via the variance components of a random-effects model using zero-inflated folded t-distributions. We develop an MCMC algorithm based on Gibbs sampling with parameter expansion and consider a set of reference values for the parameters of the prior distributions. Two artificial datasets, one with twice the heterogeneity of the other, are used to demonstrate both the agreement between classical and Bayesian inference when model uncertainty is low and the advantages of the Bayesian method when model uncertainty is considerable. In this demonstration, we show how the algorithm can be used to calculate the Bayes factor (BF) for model selection and to conduct a Bayesian model averaging (BMA) analysis. Furthermore, when model uncertainty is considerable, we compare the results of BMA with those of Bayesian fixed-effect (BFE) and Bayesian random-effect (BRE) analyses.
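To illustrate the kind of sampler involved, the following is a minimal Gibbs sampler for a normal-normal random-effects meta-analysis. This is a simplified sketch, not the authors' algorithm: it places a conjugate inverse-gamma prior on the between-study variance rather than the zero-inflated folded t-distribution described above, and it omits parameter expansion; the data and prior settings are illustrative.

```python
import numpy as np

def gibbs_random_effects(y, s2, n_iter=5000, burn=1000, seed=1,
                         mu_prior_var=100.0, a=1.0, b=1.0):
    """Gibbs sampler for y_i ~ N(theta_i, s2_i), theta_i ~ N(mu, tau^2).

    y: study effect estimates; s2: their (known) within-study variances.
    Priors (illustrative): mu ~ N(0, mu_prior_var), tau^2 ~ InvGamma(a, b).
    """
    rng = np.random.default_rng(seed)
    y, s2 = np.asarray(y, float), np.asarray(s2, float)
    k = len(y)
    mu, tau2 = y.mean(), y.var() + 1e-6   # crude starting values
    mu_draws, tau2_draws = [], []
    for it in range(n_iter):
        # 1) study-specific effects theta_i | rest (conjugate normal update)
        prec = 1.0 / s2 + 1.0 / tau2
        theta = rng.normal((y / s2 + mu / tau2) / prec, np.sqrt(1.0 / prec))
        # 2) overall mean mu | rest (normal prior centered at 0)
        prec_mu = k / tau2 + 1.0 / mu_prior_var
        mu = rng.normal(theta.sum() / tau2 / prec_mu, np.sqrt(1.0 / prec_mu))
        # 3) between-study variance tau^2 | rest (inverse-gamma update)
        shape = a + k / 2.0
        scale = b + 0.5 * np.sum((theta - mu) ** 2)
        tau2 = 1.0 / rng.gamma(shape, 1.0 / scale)
        if it >= burn:
            mu_draws.append(mu)
            tau2_draws.append(tau2)
    return np.array(mu_draws), np.array(tau2_draws)
```

Because every full conditional is a standard distribution, each iteration is a sequence of direct draws; the zero-inflated component in the actual method is what lets the chain place posterior mass on the fixed-effect model (tau^2 = 0) as well.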

Result: In Data 1, in which the degree of heterogeneity is low (I² = 12.3%), the estimated BF is 3.8, suggesting substantial evidence for a lack of heterogeneity; the meta-analysis may proceed with a fixed-effect model. In Data 2 (I² = 51.7%), the evidence regarding heterogeneity is inconclusive (BF = 3.0), indicating considerable model uncertainty. In this case, the estimated between-treatment differences are -0.734 (95% CI = [-1.366, -0.207]), -0.714 (95% CI = [-1.072, -0.348]), and -0.792 (95% CI = [-1.841, 0.192]) under the BMA, BFE, and BRE analyses, respectively.
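As a back-of-the-envelope check (not the full posterior computation), the reported BMA point estimate for Data 2 can be recovered by weighting the BFE and BRE point estimates by the posterior model probabilities implied by the Bayes factor, assuming equal prior model probabilities and a BF of 3.0 in favor of the fixed-effect model:

```python
# Posterior probability of the fixed-effect model under equal prior odds
# (assumption: BF = 3.0 favors the fixed-effect model, as in Data 1).
bf = 3.0
w_fe = bf / (1.0 + bf)            # = 0.75

# Point estimates reported in the abstract for Data 2
est_bfe, est_bre = -0.714, -0.792

# Model-averaged point estimate: mixture of the two models' estimates
est_bma = w_fe * est_bfe + (1.0 - w_fe) * est_bre   # ≈ -0.734
```

The weighted combination reproduces the reported BMA estimate of -0.734, while the BMA interval falls between the narrow BFE and wide BRE intervals, reflecting the extra between-model uncertainty.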

Conclusion: It is not always desirable to choose the "best" model in the face of uncertainty; ignoring information from alternative models can lead to inaccurate results and thus erroneous conclusions. The proposed algorithm not only allows the MCMC iterations to move between alternative model specifications, thereby capturing uncertainty both within and between models, but also has full conditional distributions in closed analytic form, allowing quick implementation in standard statistical software.