CONCURRENT ORAL SESSION G: THEORY AND METHODS IN HEALTH TECHNOLOGY ASSESSMENT

Tuesday, October 26, 2010: 10:15 AM
Grand Ballroom Centre (Sheraton Centre Toronto Hotel)

* Candidate for the Lee B. Lusted Student Prize Competition

Session Chairs:
Jean-Eric Tarride, PhD, MA and Susan Griffin, MSc, BSc
10:15 AM
* DO DIFFERENT METHODS OF MODELING STATIN EFFECTIVENESS INFLUENCE THE OPTIMAL DECISION?
Bob J.H. van Kempen, MSc1, Bart S. Ferket, MD1, Rogier L.G. Nijhuis, MD, PhD2, Sandra Spronk, PhD1 and M.G. Myriam Hunink, MD, PhD1, (1)Erasmus MC, Rotterdam, Netherlands, (2)ZGT Hengelo, Hengelo, Netherlands

Purpose: Methods of modeling the effect of statins in simulation studies vary among published papers. In this abstract, we illustrate the impact of using different modeling methods on the optimal decision.

Method: A previously developed and validated Monte Carlo-Markov model based on the Rotterdam Study, a cohort study of 6871 individuals aged 55 years and older with 7 years of follow-up, was used. Life courses of 3501 participants with complete risk profiles, on statin treatment vs. no statin treatment, were simulated using health states for well, coronary artery disease (CAD), stroke, both CAD and stroke, and death. Transition probabilities were based on 5-year risks predicted by Cox regression equations including, among others, total and HDL cholesterol as covariates. We used three different methods to model the effect of statins on the incidence of CAD: (1) statins lower total cholesterol and raise HDL, which through the covariates of the Cox regression equations lowers the incidence of CAD; (2) statins decrease the incidence of CAD directly through a relative risk reduction (RRR), assumed to be the same for each individual; (3) the RRR with statin therapy on the incidence of CAD is proportional to each individual's absolute reduction in LDL cholesterol. Each of the three statin modeling alternatives was compared with the no-statin strategy.
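The contrast between methods (2) and (3) can be sketched in a few lines. All numbers below (baseline risk, RRR, LDL slope) are illustrative placeholders, not values from the Rotterdam model:

```python
def risk_method2(baseline_risk, rrr=0.27):
    """Method 2: one fixed relative risk reduction applied to every individual.
    The RRR of 0.27 is an illustrative assumption, not a fitted value."""
    return baseline_risk * (1.0 - rrr)

def risk_method3(baseline_risk, ldl_reduction_mmol, rrr_per_mmol=0.22):
    """Method 3: RRR proportional to the individual's absolute LDL reduction.
    rrr_per_mmol is an assumed illustrative slope; the RRR is capped at 1."""
    rrr = min(rrr_per_mmol * ldl_reduction_mmol, 1.0)
    return baseline_risk * (1.0 - rrr)

# Under method 2 every individual gets the same proportional benefit;
# under method 3 an individual with a large LDL response benefits more.
uniform = risk_method2(0.10)            # 10% baseline 5-year CAD risk
tailored = risk_method3(0.10, 1.8)      # 1.8 mmol/L LDL reduction
```

Method (1), by contrast, feeds the changed cholesterol values back through the Cox regression covariates, so the benefit emerges from the risk equations rather than from an explicit RRR.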

Result: In the 3501 subjects (mean age 69 ± 8.47 years, 39% men), life-years simulated for each of the three methods were (1) 17.241, (2) 17.705, and (3) 17.709 years. At a willingness-to-pay of $50,000, net health benefits were (1) 9.67, (2) 9.87, and (3) 9.87. Figure 1 shows the probability that statin treatment is cost-effective for each of the three methods over varying willingness-to-pay thresholds.

Conclusion: The choice of method for modeling a drug's effectiveness in simulation studies can influence the optimal decision and the uncertainty associated with it.

10:30 AM
* A COMPARISON OF ALTERNATIVE STATISTICAL METHODS FOR COST-EFFECTIVENESS ANALYSES THAT USE DATA FROM CLUSTER RANDOMIZED TRIALS
Manuel Gomes, MSc, Edmond Ng, MSc, Richard Grieve, PhD and Richard Nixon, PhD, London School of Hygiene and Tropical Medicine, London, United Kingdom

Purpose: Cost-effectiveness analyses (CEAs) may be undertaken alongside cluster randomized trials (CRTs), where the unit of randomization is the cluster (e.g., hospital), not the patient. This paper compares, for the first time, statistical methods for CEAs that use data from CRTs: multilevel models (MLMs), generalised estimating equations (GEEs), and a 2-stage non-parametric bootstrap that re-samples clusters and then cases within clusters.

Method: Bivariate GEEs and the bootstrap are relatively simple to implement compared with bivariate MLMs, but rely on asymptotic assumptions that may not be satisfied with few clusters. We initially compare the methods with data from a large CRT (1732 cases, 70 primary care clinics) evaluating a primary care intervention for reducing post-natal depression. We then undertake an extensive simulation study comparing the relative performance (bias, mean squared error, and confidence interval coverage) of each method for estimating mean incremental net benefits (INB). Methods were initially tested under relatively ‘ideal circumstances’: normally distributed costs and QALYs, and a CRT with many (n=100) clusters of balanced size (50 cases in each cluster). We then test the methods in scenarios with more realistic characteristics, based on our systematic literature review of 62 published studies:
  a) few (<10) balanced clusters;
  b) many (40) imbalanced clusters;
  c) few (<10) imbalanced clusters;
  d) as above, but assuming data are gamma distributed.
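The 2-stage bootstrap under comparison can be sketched as follows. The data layout (clusters as lists of (cost, QALY) cases, one list of clusters per trial arm) and the £20,000 threshold are illustrative assumptions, not details from the paper:

```python
import random

WTP = 20000.0  # willingness-to-pay per QALY (illustrative)

def resample_arm(clusters):
    """One 2-stage bootstrap replicate within a trial arm: resample clusters
    with replacement (stage 1), then cases within each sampled cluster (stage 2)."""
    cases = []
    for cluster in random.choices(clusters, k=len(clusters)):
        cases.extend(random.choices(cluster, k=len(cluster)))
    return cases

def mean_net_benefit(cases):
    """Mean net benefit of a list of (cost, qaly) cases at threshold WTP."""
    return sum(WTP * qaly - cost for cost, qaly in cases) / len(cases)

def bootstrap_inb(control_clusters, treatment_clusters, reps=1000):
    """Bootstrap distribution of the incremental net benefit (INB)."""
    return [mean_net_benefit(resample_arm(treatment_clusters))
            - mean_net_benefit(resample_arm(control_clusters))
            for _ in range(reps)]
```

Resampling clusters first is what preserves the within-cluster correlation that an ordinary case-level bootstrap would ignore; with few clusters, however, stage 1 has little to resample from, which is the asymptotic concern examined here.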

Result: The case study found similar mean INBs (λ=£20,000 per QALY) across methods, but the 95% CIs were much wider for the bootstrap (mean £135, 95% CI -£60 to £549) than for the MLM (£58, -£17 to £138) and the GEE (£98, -£24 to £220). The simulation study showed that under the ideal scenario all three methods performed relatively well. Under more realistic scenarios, the performance of all the methods was seriously affected in terms of MSE and coverage, but not bias. When there were few (e.g. 6) balanced clusters, many imbalanced clusters, or few imbalanced clusters, the bootstrap generally performed worst: for example, coverage levels were too conservative, and MSE was 25% higher than for the other methods. When cost data were simulated from a gamma distribution, the MLMs and GEEs continued to outperform the bootstrap.

Conclusion: The 2-stage bootstrap appears to perform worse than MLMs and GEEs across many circumstances faced by CEAs that use CRTs.

10:45 AM
* AGE DIFFERENCES IN DECISIONAL STRATEGY INFLUENCE FRAMING EFFECTS IN MEDICAL DECISIONS
Erin L. Woodhead, PhD, VA Palo Alto GRECC, Palo Alto, CA, Elizabeth B. Lynch, PhD, Rush University Medical Center, Chicago, IL and Barry A. Edelstein, PhD, West Virginia University, Morgantown, WV

Purpose: The extent to which age impacts susceptibility to framing effects is unclear, particularly within the area of medical decision making. The current study used a think-aloud technique to determine whether the information used by participants in their decision making process varied by age and by frame.

Method: For Study 1, a think-aloud method was used to determine systematic patterns in type of information used by older and younger adults when making hypothetical treatment choices for lung cancer treatments. A within-subject design was used with 40 younger adults (M = 19.8, SD = 1.5) and 40 older adults (M = 77.4, SD = 5.9). Participants responded to both frames (survival and mortality). Logistic regression analyses were used to predict treatment choice (surgery or radiation). Qualitative data analysis software was used to analyze responses to the think-aloud portion of the study.

Result: Qualitative analysis revealed that two major decisional strategies were used by all participants: a strategy based on the presented data and one based on personal/vicarious experience. Older age predicted decreased use of a data strategy (O.R. = 0.97, p < 0.001). Frame, education, and treatment choice did not predict strategy. Frame interacted with decisional strategy to predict treatment choice (O.R. = 0.19, p < 0.001). Those using a data strategy were significantly more likely to demonstrate framing effects than those using an experience strategy. Age did not modify this effect. Age and education did not independently predict treatment choice. These results were replicated in Study 2, which employed a between-subjects design with 61 older adults (M = 76.8, SD = 7.1) and 63 younger adults (M = 18.6, SD = 1.9). In Study 2, age was the only significant predictor of strategy (O.R. = 0.96, p < 0.001), with older adults less likely to use a data strategy. Similar to Study 1, frame interacted with decisional strategy to predict treatment choice (O.R. = 0.32, p < 0.05).

Conclusion: These results suggest that age is not directly related to framing effects. Instead, age appears to influence adoption of a decisional strategy, which then impacts susceptibility to framing effects. Older adults may be less susceptible to framing effects due to their increased reliance on experience-based information processing.

11:00 AM
* COMPREHENSIVE EVIDENCE SYNTHESIS AND TRIAL-BASED COST-EFFECTIVENESS ANALYSIS: CAN THEY BE COMFORTABLE BED FELLOWS?
Mohsen Sadatsafavi, MD, MHSc1, Carlo A. Marra, PharmD, PhD1, Lawrence McCandless, PhD2 and Stirling Bryan, PhD1, (1)University of British Columbia, Vancouver, BC, Canada, (2)Simon Fraser University, Burnaby, BC, Canada

Purpose: Contemporary methods for cost-effectiveness analyses (CEAs) conducted alongside randomized clinical trials (RCTs) neglect external evidence that resides outside the trial. We introduce the “vetted bootstrap”, a Bayesian framework for incorporating external evidence into RCT-based CEAs.

Methods: First, the conventional bootstrap method for characterizing uncertainty in RCT-based CEAs is shown to be analogous to sampling from the posterior ‘distribution of the distribution’ of the RCT data. This analogy can then be used to argue that external evidence imposes a prior on the distribution of the data. The argument leads to an acceptance-rejection algorithm: the value of the statistic for which external evidence is available is calculated from the bootstrapped sample, and the likelihood of that statistic given the external evidence is used to probabilistically accept or reject the bootstrap sample (hence ‘vetting’ the bootstrap).

Results: Let X be the data of the RCT, and let θ be the statistic for which external evidence is available in the form of a (multivariate) probability density function Pθ. The vetted bootstrap can be formulated as follows, for i = 1, ..., M, where M is the number of simulations:

  1. Generate Xb, a bootstrap sample of X, with bootstrapping performed within each arm of the trial.
  2. Calculate θ’ = θ(Xb), the statistic for which external evidence is available, from this sample.
  3. Calculate ω = Pθ(θ'), the likelihood of the calculated statistic given its probability density.
  4. Accept the bootstrap sample Xb with probability proportional to ω; otherwise discard the sample and return to (1).
  5. Calculate costs and effectiveness outcomes from Xb.
The vectors of costs and effectiveness outcomes can be interpreted as random draws from their respective posterior distributions having observed the RCT data, with the external evidence as prior information. All downstream calculations remain unchanged. As an alternative to the vetted bootstrap, the weights ω can be incorporated directly into all downstream calculations (“weighted bootstrap”).
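A minimal sketch of steps (1)-(5) for the scalar case. The choices below are assumptions for illustration: θ is taken to be the difference in arm means, the external evidence is summarised as a normal density, and "probability proportional to ω" is implemented by scaling ω by the density's maximum so it lies in [0, 1]:

```python
import math
import random

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution, standing in for Ptheta."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def vetted_bootstrap(arm0, arm1, prior_mu, prior_sigma, draws=1000):
    """Accept/reject bootstrap samples against external evidence on theta,
    here the difference in arm means (a hypothetical choice of statistic)."""
    omega_max = normal_pdf(prior_mu, prior_mu, prior_sigma)
    accepted = []
    while len(accepted) < draws:
        b0 = random.choices(arm0, k=len(arm0))            # step 1: bootstrap within each arm
        b1 = random.choices(arm1, k=len(arm1))
        theta = sum(b1) / len(b1) - sum(b0) / len(b0)     # step 2: theta' = theta(Xb)
        omega = normal_pdf(theta, prior_mu, prior_sigma)  # step 3: likelihood under Ptheta
        if random.random() < omega / omega_max:           # step 4: accept with prob. prop. to omega
            accepted.append((b0, b1))                     # step 5: compute costs/effects from Xb
    return accepted
```

Bootstrap samples whose statistic sits in the tails of the external-evidence density are rarely retained, so the accepted samples are concentrated where trial data and external evidence agree.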

Conclusion: Our method, which provides the ability to incorporate evidence on any aspect of an intervention within a non-parametric framework, is operationally simple and removes an important drawback of RCT-based CEAs. The accompanying paper will include a step-by-step demonstration of the analogy between this method and parametric Bayesian inference, together with an illustrative application.

11:15 AM
* THE IMPORTANCE OF ADJUSTING FOR POTENTIAL CONFOUNDERS IN BAYESIAN HIERARCHICAL MODELS SYNTHESISING EVIDENCE FROM RANDOMISED AND NON-RANDOMISED STUDIES: A SIMULATION STUDY
C. Elizabeth McCarron, MA, MSc, Eleanor Pullenayegum, PhD, Lehana Thabane, PhD, Ron Goeree, MA and Jean-Eric Tarride, PhD, McMaster University, Hamilton, ON, Canada

Purpose: To assess the ability of a new Bayesian methodological approach to adjust for bias due to confounding when combining evidence from randomised and non-randomised studies.

Method: This study used Bayesian hierarchical modelling to combine evidence from randomised and non-randomised studies, and compared alternative approaches in terms of their ability to accommodate imbalances in patient characteristics within studies that could confound the results. In the new approach, study estimates were adjusted for potential confounders using differences in patient characteristics (e.g., age) between study arms. We compared the results of the Bayesian hierarchical model adjusted for differences between study arms with two other Bayesian approaches: 1) adjusting for aggregate study values, and 2) down-weighting the potentially biased non-randomised studies. A simulation study examined the ability of the new and alternative models to account for imbalances, and the sensitivity of the results to changes in the relative number of studies of each type, the study sizes, the actual magnitude of the bias, and other sources of heterogeneity.
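The core adjustment idea can be illustrated with a simple frequentist analogue: if non-randomised estimates are biased in proportion to the between-arm covariate difference, regressing the estimates on that difference and reading off the intercept recovers the effect at zero imbalance. All numbers are illustrative, and this sketch omits the hierarchical variance structure of the actual Bayesian model:

```python
import random

def simulate_studies(n_rand=5, n_obs=10, true_effect=0.5, bias_per_year=0.05):
    """Simulate study-level effect estimates. Randomised studies have balanced
    arms (age difference 0); non-randomised studies carry bias proportional to
    the between-arm difference in mean age. All parameters are illustrative."""
    studies = [(0.0, true_effect + random.gauss(0.0, 0.05)) for _ in range(n_rand)]
    for _ in range(n_obs):
        d_age = random.gauss(5.0, 2.0)  # arm imbalance in mean age (years)
        est = true_effect + bias_per_year * d_age + random.gauss(0.0, 0.05)
        studies.append((d_age, est))
    return studies

def adjusted_pooled_effect(studies):
    """Regress estimates on the arm-level age difference and read off the
    intercept: the pooled effect at zero imbalance. A simple frequentist
    analogue of the hierarchical adjustment, not the full Bayesian model."""
    n = len(studies)
    mx = sum(d for d, _ in studies) / n
    my = sum(y for _, y in studies) / n
    sxx = sum((d - mx) ** 2 for d, _ in studies)
    sxy = sum((d - mx) * (y - my) for d, y in studies)
    return my - (sxy / sxx) * mx  # intercept of the least-squares fit
```

A naive pooled mean of these simulated estimates is pulled away from the true effect by the biased non-randomised studies, whereas the intercept-based adjustment largely removes that pull.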

Result: For all scenarios considered, the Bayesian hierarchical model adjusted for differences within studies gave results that were closer to the “truth” compared to the other models.

Conclusion: Covariate adjustment using differences in patient characteristics between study arms provides a systematic way of adjusting for bias due to confounding that is robust to changes in the relative number of studies of each type, the study sizes, the magnitude of the bias, and other sources of heterogeneity.  Where informed health care decision making requires the synthesis of evidence from randomised and non-randomised study designs such an approach could facilitate the use of all available evidence.

11:30 AM
SUPPLY CHAIN AND SYSTEM FACTORS THAT EXPLAIN VARIATIONS IN STATE VACCINATION COVERAGE LEVELS OF THE NOVEL H1N1 VACCINE
Carlo S. Davila Payan, MS1, Pascale Wortley, MD2 and Julie L. Swann, Ph.D.1, (1)Centers for Disease Control and Prevention and Georgia Institute of Technology, Atlanta, GA, (2)Centers for Disease Control and Prevention, Atlanta, GA

Purpose: In response to the 2009 H1N1 influenza pandemic, millions in the US were vaccinated, with state-specific coverage ranging from 8.7% to 34.4% for adults and from 21.3% to 84.7% for children under 18. We study factors associated with higher vaccination coverage in a system where vaccine was in short supply.

Method: We used regression coupled with other statistical techniques to predict state-specific vaccination coverage of adults or children, using independent variables including demographics and area (US Census Bureau); past seasonal adult or childhood vaccination coverage (Behavioral Risk Factor Surveillance System, National Immunization Survey); Public Health Emergency Response Funds (CDC); physician counts (US Bureau of Labor Statistics); children’s health information (National Center for Health Statistics); H1N1-specific state and local data at the CDC (level of allocation control, type of allocation priority, participation of VFC providers, date of expansion beyond ACIP target groups, number of shipments, number of ship-to locations, lead time for allocation and ordering, peak week of influenza-like illness activity); and degree of local autonomy of the public health system.

Result: The best models, including only statistically significant variables, explained over 70% of the variation in state-specific vaccination coverage of adults or children. Higher past seasonal influenza vaccination coverage of adults was associated with higher 2009 H1N1 vaccination coverage of both adults and children, and accounted for 30% of the variation. Among supply chain factors, vaccination coverage was positively associated with the number of shipments per location and negatively associated with the lead time to order allocated doses. For children, the proportion of the state’s population under 18 years was negatively associated with vaccination coverage.

Conclusion: Strengthening routine influenza vaccination programs may help improve vaccination coverage during a pandemic or other emergency. Repeated distribution to the same locations may reflect underlying system differences in efficiency or in monitoring of usage to redistribute vaccine to providers who were vaccinating quickly; ordering lag may be a function of system structure or efficiency. This analysis suggests factors that public health agencies might monitor in an emergency vaccination program during a supply shortage, and aspects to consider when designing public health systems. In addition, accounting for the relative size of a state’s child population when allocating vaccine could improve vaccination coverage of children in scenarios where children are targeted.