* Candidate for the Lee B. Lusted Student Prize Competition
Purpose: Methods of modeling the effect of statins in simulation studies vary among published papers. In this abstract, we illustrate the impact of using different modeling methods on the optimal decision.
Purpose: Cost-effectiveness analyses (CEA) may be undertaken alongside cluster randomized trials (CRTs), where the unit of randomization is the cluster (e.g., hospital) rather than the patient. This paper compares, for the first time, statistical methods for CEA that use data from CRTs: multilevel models (MLMs), Generalised Estimating Equations (GEEs), and a 2-stage non-parametric Bootstrap that re-samples clusters and then cases within clusters.
Method: Bivariate GEEs and the Bootstrap are relatively simple to implement compared to bivariate MLMs but rely on asymptotic assumptions which, with few clusters, may not be satisfied. We initially compare the methods with data from a large (1,732 cases, 70 primary care clinics) CRT evaluating a primary care intervention for reducing post-natal depression. We then undertake an extensive simulation study that compares the relative performance (bias, mean squared error and confidence interval coverage) of each method for estimating mean incremental net benefits (INB). Methods were initially tested under relatively ‘ideal circumstances’: normally distributed costs and QALYs, a CRT with many (n=100) clusters, and balanced cluster size (50 cases in each cluster). We then test the methods in scenarios with more realistic characteristics, based on our systematic literature review of 62 published studies: a) few (<10) balanced clusters; b) many (40) imbalanced clusters; c) few (<10) imbalanced clusters; d) as above but assuming data are gamma distributed.
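To make the 2-stage bootstrap described above concrete, the sketch below resamples clusters with replacement and then cases within each sampled cluster, computing the mean incremental net benefit, INB = λ·ΔQALY − Δcost, on each replicate. It is a minimal sketch on simulated data: the function and variable names, the percentile interval, and the λ of £20,000 per QALY (the threshold used in the case study) are illustrative choices, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_stage_bootstrap_inb(arms, lam=20000.0, n_boot=2000):
    """2-stage non-parametric bootstrap of the mean incremental net benefit.

    arms: dict arm name -> list of clusters, each cluster an array of
          (cost, QALY) rows. Stage 1 resamples clusters with replacement;
          stage 2 resamples cases within each sampled cluster.
    INB = lam * (delta QALY) - (delta cost).
    """
    inbs = []
    for _ in range(n_boot):
        means = {}
        for arm, clusters in arms.items():
            idx = rng.integers(0, len(clusters), len(clusters))  # stage 1: clusters
            rows = []
            for i in idx:
                cl = clusters[i]
                jdx = rng.integers(0, len(cl), len(cl))          # stage 2: cases
                rows.append(cl[jdx])
            pooled = np.vstack(rows)
            means[arm] = pooled.mean(axis=0)   # (mean cost, mean QALY)
        d_cost = means["treatment"][0] - means["control"][0]
        d_qaly = means["treatment"][1] - means["control"][1]
        inbs.append(lam * d_qaly - d_cost)
    inbs = np.asarray(inbs)
    return inbs.mean(), np.percentile(inbs, [2.5, 97.5])

def simulate_arm(cost_mu, qaly_mu, n_clusters=10, n_cases=50):
    """Simulated CRT arm with normally distributed cluster effects (illustrative only)."""
    clusters = []
    for _ in range(n_clusters):
        u_c, u_q = rng.normal(0, 50), rng.normal(0, 0.01)   # cluster-level effects
        cost = rng.normal(cost_mu + u_c, 200, n_cases)
        qaly = rng.normal(qaly_mu + u_q, 0.05, n_cases)
        clusters.append(np.column_stack([cost, qaly]))
    return clusters

data = {"control": simulate_arm(1000, 0.70), "treatment": simulate_arm(1100, 0.72)}
mean_inb, ci = two_stage_bootstrap_inb(data)
print(f"Mean INB: {mean_inb:.0f}  95% CI: ({ci[0]:.0f}, {ci[1]:.0f})")
```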
Result: The case study found similar mean INBs (λ=£20,000 per QALY) across methods, but the 95% CIs were much wider for the Bootstrap (mean £135, 95% CI -£60 to £549) than for the MLM (£58, -£17 to £138) and the GEE (£98, -£24 to £220). The simulation study showed that under the ideal scenario all three methods performed relatively well. Under more realistic scenarios, the performance of all methods was seriously affected in terms of MSE and coverage, but not bias. When there were few (e.g. 6) balanced clusters, many imbalanced clusters, or few imbalanced clusters, the Bootstrap generally performed worst: for example, coverage levels were too conservative, and MSE was 25% higher than for the other methods. When cost data were simulated from a gamma distribution, the MLMs and GEEs continued to outperform the Bootstrap.
Conclusion: The 2-stage Bootstrap appears to perform worse than MLMs and GEEs across many circumstances faced by CEAs that use CRTs.
Purpose: The extent to which age impacts susceptibility to framing effects is unclear, particularly within the area of medical decision making. The current study used a think-aloud technique to determine whether the information used by participants in their decision making process varied by age and by frame.
Method: For Study 1, a think-aloud method was used to determine systematic patterns in the type of information used by older and younger adults when making hypothetical treatment choices for lung cancer. A within-subject design was used with 40 younger adults (M = 19.8, SD = 1.5) and 40 older adults (M = 77.4, SD = 5.9). Participants responded to both frames (survival and mortality). Logistic regression analyses were used to predict treatment choice (surgery or radiation). Qualitative data analysis software was used to analyze responses to the think-aloud portion of the study.
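As a concrete illustration of the analysis described above, the sketch below fits a logistic regression of treatment choice on frame, decisional strategy, and their interaction, the kind of model whose odds ratios are reported in the Results. The data are simulated and the variable names (frame, strategy, age, chose_surgery) are assumptions for illustration, not the study's actual data or code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 160

# Simulated, illustrative data: frame (0 = survival, 1 = mortality),
# strategy (0 = experience-based, 1 = data-based), age in years
df = pd.DataFrame({
    "frame": rng.integers(0, 2, n),
    "strategy": rng.integers(0, 2, n),
    "age": rng.choice([20, 77], n),
})
# Build in a framing effect that is larger for data-strategy users
lin = -0.2 + 0.3 * df["frame"] - 1.2 * df["frame"] * df["strategy"]
df["chose_surgery"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

# Logistic regression with a frame x strategy interaction
fit = smf.logit("chose_surgery ~ frame * strategy + age", data=df).fit()
print(fit.summary())
print(np.exp(fit.params))  # odds ratios
```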
Result: Qualitative analysis revealed that two major decisional strategies were used by all participants: a strategy based on the presented data and one based on personal/vicarious experience. Older age predicted decreased use of a data strategy (O.R. = 0.97, p < 0.001). Frame, education, and treatment choice did not predict strategy. Frame interacted with decisional strategy to predict treatment choice (O.R. = 0.19, p < 0.001). Those using a data strategy were significantly more likely to demonstrate framing effects than those using an experience strategy. Age did not modify this effect. Age and education did not independently predict treatment choice. These results were replicated in Study 2, which employed a between-subjects design with 61 older adults (M = 76.8, SD = 7.1) and 63 younger adults (M = 18.6, SD = 1.9). In Study 2, age was the only significant predictor of strategy (O.R. = 0.96, p < 0.001), with older adults less likely to use a data strategy. Similar to Study 1, frame interacted with decisional strategy to predict treatment choice (O.R. = 0.32, p < 0.05).
Conclusion: These results suggest that age is not directly related to framing effects. Instead, age appears to influence adoption of a decisional strategy, which then impacts susceptibility to framing effects. Older adults may be less susceptible to framing effects due to their increased reliance on experience-based information processing.
Purpose: Contemporary methods for cost-effectiveness analyses (CEAs) conducted alongside randomized clinical trials (RCTs) neglect existing external evidence that resides outside the realm of the clinical trial. We introduce the “vetted bootstrap”, a Bayesian framework for incorporating external evidence into RCT-based CEAs.
Methods: First, the conventional bootstrap method for characterizing uncertainty in RCT-based CEAs is shown to be analogous to sampling from the posterior ‘distribution of the distribution’ of the RCT data. This analogy can then be used to argue that external evidence imposes a prior distribution on the distribution of the data. The argument leads to an acceptance-rejection algorithm: the value of the statistics for which external evidence is available is calculated from the bootstrapped sample, and the likelihood of the statistics given the external evidence is used to probabilistically accept or reject that bootstrap sample (hence ‘vetting’ the bootstrap).
Results: Let X be the data of the RCT, and let θ be the statistics for which external evidence is available in the form of a (multivariate) probability density function Pθ. The vetted bootstrap can be formulated as follows, for i = 1, ..., M, where M is the number of simulations:
1. Generate Xb, a bootstrap sample of X, with bootstrapping performed within each arm of the trial.
2. Calculate θ' = θ(Xb), the statistics for which external evidence is available, from this sample.
3. Calculate ω = Pθ(θ'), the likelihood of the calculated statistics given its probability distribution.
4. Accept the bootstrap sample Xb with a probability proportional to ω; otherwise discard the sample and return to step 1.
5. Calculate costs and effectiveness outcomes from Xb.
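As an illustration, the sketch below implements the acceptance-rejection loop above for a single statistic θ (here, the control-arm response rate), vetted against a hypothetical external evidence distribution. The simulated data, the choice of θ, and the Beta form of Pθ are illustrative assumptions, not part of the original abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Illustrative within-trial data: per-patient cost, QALY and response by arm
n = 200
trial = {
    "control":   {"cost": rng.gamma(2.0, 500.0, n),
                  "qaly": rng.normal(0.70, 0.10, n),
                  "resp": rng.binomial(1, 0.35, n)},
    "treatment": {"cost": rng.gamma(2.0, 550.0, n),
                  "qaly": rng.normal(0.73, 0.10, n),
                  "resp": rng.binomial(1, 0.45, n)},
}

# Hypothetical external evidence on theta = control-arm response rate (P_theta)
p_theta = stats.beta(35, 65)
omega_max = p_theta.pdf(np.linspace(0.01, 0.99, 999)).max()  # scaling bound

def vetted_bootstrap(trial, M=2000):
    """Acceptance-rejection ('vetted') bootstrap over within-arm resamples."""
    accepted = []
    while len(accepted) < M:
        # Step 1: bootstrap patients within each arm
        boot = {}
        for arm, d in trial.items():
            m = len(d["cost"])
            idx = rng.integers(0, m, m)
            boot[arm] = {k: v[idx] for k, v in d.items()}
        # Step 2: statistic for which external evidence exists
        theta = boot["control"]["resp"].mean()
        # Step 3: likelihood of the statistic under the external evidence
        omega = p_theta.pdf(theta)
        # Step 4: accept with probability proportional to omega, else retry
        if rng.uniform() < omega / omega_max:
            # Step 5: cost and effectiveness outcomes from the accepted sample
            d_cost = boot["treatment"]["cost"].mean() - boot["control"]["cost"].mean()
            d_qaly = boot["treatment"]["qaly"].mean() - boot["control"]["qaly"].mean()
            accepted.append((d_cost, d_qaly))
    return np.asarray(accepted)

samples = vetted_bootstrap(trial)
print("Mean incremental cost:", round(samples[:, 0].mean(), 1))
print("Mean incremental QALY:", round(samples[:, 1].mean(), 4))
```

Scaling ω by an upper bound on the external density keeps the acceptance probability in [0, 1] while leaving acceptance proportional to ω, which is one simple way to realise step 4 in practice.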
Conclusion: Our method, which provides the ability to incorporate evidence on any aspect of the intervention within a non-parametric framework, is operationally simple and removes an important drawback of RCT-based CEAs. The accompanying paper will include a step-by-step demonstration of the analogy between this method and parametric Bayesian inference, together with an example application.
Purpose: To assess the ability of a new Bayesian methodological approach to adjust for bias due to confounding when combining evidence from randomised and non-randomised studies.
Method: This study used Bayesian hierarchical modelling to combine evidence from randomised and non-randomised studies and compared alternative approaches in terms of their ability to accommodate imbalances in patient characteristics within studies that could confound the results. In the new methodological approach, study estimates were adjusted for potential confounders using differences in patient characteristics (e.g., age) between study arms. We compared the results of the Bayesian hierarchical model adjusted for differences between study arms with two other Bayesian approaches: 1) adjusting for aggregate study-level values, and 2) downweighting the potentially biased non-randomised studies. A simulation study was used to examine the ability of the new and alternative models to account for imbalances and to assess the sensitivity of the results to changes in the relative number of studies of each type, the study sizes, the actual magnitude of the bias, and other sources of heterogeneity.
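To illustrate the kind of model described above, the sketch below specifies a Bayesian hierarchical (random-effects) evidence synthesis in which each study's effect estimate is regressed on the between-arm difference in a patient characteristic (here, mean age). It uses a PyMC (v5-style) API and simulated data purely for illustration; the priors, variable names, and data-generating assumptions are mine, not the authors' model.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(3)

# Simulated meta-analytic data: observed study effects y with standard errors se,
# plus the between-arm difference in mean age for each study. The first six
# studies have balanced arms (standing in for randomised studies); the rest
# are imbalanced (mimicking confounded non-randomised studies).
n_studies = 12
true_effect, bias_per_year = 0.5, 0.05
arm_age_diff = np.where(np.arange(n_studies) < 6, 0.0, rng.normal(5, 2, n_studies))
se = rng.uniform(0.1, 0.3, n_studies)
y = rng.normal(true_effect + bias_per_year * arm_age_diff, se)

with pm.Model() as model:
    mu = pm.Normal("mu", 0.0, 5.0)             # pooled treatment effect
    tau = pm.HalfNormal("tau", 1.0)            # between-study heterogeneity
    beta = pm.Normal("beta", 0.0, 5.0)         # confounding adjustment coefficient
    theta = pm.Normal("theta", mu=mu, sigma=tau, shape=n_studies)
    pm.Normal("y_obs", mu=theta + beta * arm_age_diff, sigma=se, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=3)

print(idata.posterior["mu"].mean().item())     # should recover roughly 0.5
```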
Result: For all scenarios considered, the Bayesian hierarchical model adjusted for differences within studies gave results that were closer to the “truth” compared to the other models.
Conclusion: Covariate adjustment using differences in patient characteristics between study arms provides a systematic way of adjusting for bias due to confounding that is robust to changes in the relative number of studies of each type, the study sizes, the magnitude of the bias, and other sources of heterogeneity. Where informed health care decision making requires the synthesis of evidence from randomised and non-randomised study designs, such an approach could facilitate the use of all available evidence.
Purpose: In response to the 2009 H1N1 influenza pandemic, millions in the US were vaccinated, with state-specific coverage ranging from 8.7% to 34.4% for adults and from 21.3% to 84.7% for children under 18. We study factors associated with higher vaccination coverage in a system where vaccine was in short supply.
Method: We used regression coupled with other statistical techniques to predict state-specific vaccination coverage of adults or children, using independent variables including demographics and area (US Census Bureau); past seasonal adult or childhood vaccination coverage (Behavioral Risk Factor Surveillance System, National Immunization Survey); Public Health Emergency Response Funds (CDC); physician counts (US Bureau of Labor Statistics); children’s health information (National Center for Health Statistics); H1N1-specific state and local data from the CDC (level of allocation control, type of allocation priority, participation of VFC providers, date of expansion beyond ACIP target groups, number of shipments, number of ship-to locations, lead time for allocation and ordering, peak week of influenza-like illness activity); and degree of local autonomy of the public health system.
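As a sketch of this kind of regression, the example below fits an ordinary least squares model of state-level coverage on a few of the predictors named above, using statsmodels on simulated state-level data. The variable names, coefficients, and data are illustrative assumptions; the authors' actual model specification and variable-selection procedure are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_states = 51  # 50 states + DC (illustrative)

# Simulated state-level predictors
df = pd.DataFrame({
    "past_seasonal_cov": rng.uniform(0.25, 0.55, n_states),  # past adult coverage
    "shipments_per_loc": rng.uniform(1.0, 6.0, n_states),
    "order_lag_days": rng.uniform(1.0, 14.0, n_states),
    "pct_under_18": rng.uniform(0.18, 0.30, n_states),
})
# Simulated outcome: 2009 H1N1 adult vaccination coverage
df["h1n1_cov"] = (0.05 + 0.4 * df["past_seasonal_cov"]
                  + 0.01 * df["shipments_per_loc"]
                  - 0.005 * df["order_lag_days"]
                  + rng.normal(0, 0.02, n_states))

X = sm.add_constant(df[["past_seasonal_cov", "shipments_per_loc",
                        "order_lag_days", "pct_under_18"]])
fit = sm.OLS(df["h1n1_cov"], X).fit()
print(fit.summary())
print("R-squared:", round(fit.rsquared, 2))
```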
Result: The best models, which included only statistically significant variables, explained over 70% of the variation in state-specific vaccination coverage of adults or children. Higher past seasonal influenza vaccination coverage of adults was associated with higher 2009 H1N1 vaccination coverage of both adults and children, and accounted for 30% of the variation. In terms of supply chain factors, vaccination coverage increased with the number of shipments per location and decreased with the time to order allocated doses. For children, the proportion of the state’s population under 18 years was negatively associated with vaccination coverage.
Conclusion: Strengthening routine influenza vaccination programs may help improve vaccination coverage during a pandemic or other emergency. Repeated distribution to the same locations could reflect underlying system differences in efficiency, or in monitoring of usage to redistribute doses to providers who were vaccinating quickly. Ordering lag may be a function of system structure or of efficiency. This analysis suggests factors that public health agencies might consider monitoring in an emergency vaccination program during a supply shortage, as well as aspects to consider when designing such systems. In addition, accounting for the relative size of a state’s child population when allocating vaccine could improve vaccination coverage of children in scenarios where children are targeted.