F-6 USING SIMULATED DATA TO VALIDATE BAYESIAN MIXED TREATMENT COMPARISON META-ANALYSIS FOR DIFFERENT EVIDENCE NETWORK PATTERNS AND NUMBERS OF STUDIES

Thursday, October 18, 2012: 5:45 PM
Regency Ballroom D (Hyatt Regency)
Quantitative Methods and Theoretical Developments (MET)

Tania Wilkins, MS1, Daniel E. Jonas, MD, MPH1, Gerald Gartlehner, MD, MPH2 and Srikant Bangdiwala, PhD1, (1)University of North Carolina, Chapel Hill, NC, (2)Danube University, Vienna, Austria

Purpose: Bayesian mixed treatment comparison (MTC) meta-analysis is becoming a popular method in comparative effectiveness reviews when head-to-head data are limited. The aim of this research was to examine how the findings of Bayesian MTC meta-analyses compare across different numbers of available studies and different evidence network patterns.

Method: We used simulated data to examine the ability of the Bayesian MTC method to produce valid results for two data scenarios. Each data scenario included four drugs and was constructed by random draws from a binomial distribution, with pre-determined response rates for each drug in the evidence network. Within each data scenario, we sampled subsets of studies to create analysis datasets with varying numbers of studies, representing networks with one, two, three, five, or ten studies available for each drug comparison. These analysis datasets were created for four common network patterns: star, loop, one closed loop, and ladder. We compiled results from 40,000 analyses to generate a distribution of the probability of best treatment under each sample size and network pattern scenario, and compared these distributions to the pre-determined response rates to assess the validity of the findings.
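The abstract does not include the simulation code. The following Python sketch only illustrates the general setup for one scenario, a star network in which every drug is compared against a common reference, with hypothetical response rates, arm sizes, and function names that are our own assumptions. For brevity it computes the probability of best treatment from pooled conjugate Beta-Binomial posteriors rather than fitting the full Bayesian MTC model used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the sketch is reproducible

# Hypothetical pre-determined response rates for four drugs (A is the reference).
true_rates = {"A": 0.30, "B": 0.35, "C": 0.40, "D": 0.45}

def simulate_star_network(n_studies_per_comparison, n_per_arm=200):
    """Simulate two-arm trials for a star network: each of B, C, D vs. A."""
    trials = []
    for drug in ["B", "C", "D"]:
        for _ in range(n_studies_per_comparison):
            trials.append({
                "A": rng.binomial(n_per_arm, true_rates["A"]),   # events on reference arm
                drug: rng.binomial(n_per_arm, true_rates[drug]), # events on comparator arm
                "n": n_per_arm,
            })
    return trials

def prob_best(trials, n_draws=10_000):
    """Crude probability-of-best: pool events per drug across trials and sample
    conjugate Beta posteriors (uniform priors). The full MTC models study-level
    treatment effects; this pooled version is only meant to illustrate the idea."""
    events = {d: 0 for d in true_rates}
    totals = {d: 0 for d in true_rates}
    for t in trials:
        for d in true_rates:
            if d in t:
                events[d] += t[d]
                totals[d] += t["n"]
    draws = np.column_stack([
        rng.beta(1 + events[d], 1 + totals[d] - events[d], size=n_draws)
        for d in true_rates
    ])
    best = np.argmax(draws, axis=1)
    return {d: float(np.mean(best == i)) for i, d in enumerate(true_rates)}

# One replicate per sample-size scenario; the study repeated such analyses many
# times (40,000 in total) to build a distribution of the probability of best treatment.
for k in (1, 2, 3, 5, 10):
    print(k, prob_best(simulate_star_network(k)))
```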

Result: Our simulations supported the validity of Bayesian MTC methods for star and ladder network patterns but raised some concerns about one closed loop, and possibly loop, network patterns. Results were generally similar whether one study or more studies (two, three, five, or ten) were available for each comparison. In certain cases, however, small but statistically significant differences occurred between results based on a single study per comparison and those based on two or more studies.

Conclusion: Our findings raise some concerns about the validity of Bayesian MTC results for one closed loop, and possibly loop, network patterns; for star and ladder network patterns, our findings support validity. Analyses based on one study per comparison were usually similar to those based on two or more studies, supporting the use of Bayesian MTC meta-analysis even when data are relatively sparse. Additional simulations are needed to determine whether our findings are generalizable and to better understand the validity of Bayesian MTC methods under different scenarios.