I LUSTED FINALIST ABSTRACTS: QUANTITATIVE METHODS

Friday, October 19, 2012: 1:00 PM-2:30 PM
Regency Ballroom D (Hyatt Regency)

Session Chairs:
Karen M. Kuntz, ScD and Y. Claire Wang, MD, ScD
1:00 PM
I-1
(MET)
Michael A. Vedomske, M.S. and Donald E. Brown, Ph.D., University of Virginia, Charlottesville, VA

Purpose: Automated, evidence-based methods that exploit widely available electronic health records (EHR) to understand congestive heart failure (CHF) patient treatment histories must be robust to treatment variability. The goal of this paper is to test the optimal matching algorithm for finding high "quality" representatives of CHF patient groups.

Method: Optimal Matching (OM) originated in genomics, where it was used to align similar sequences, and was later generalized by social scientists to arbitrary state sequences. The algorithm runs in R [1] via the TraMineR package [2] on a fixed subset of 100 patients from UVA's Clinical Database Repository, each with a sequence of length 19 (the median for the original dataset). The input parameters to the algorithm are the substitution and insertion/deletion costs. Patient groups are formed by hierarchical clustering on the percent overlap of procedures between patients, with the Dunn index determining the number of groups. Representative sequences are the patient treatment histories that best represent the remaining cluster members in terms of "quality," as mathematically defined in the TraMineR documentation. The representatives reveal the procedural makeup of the cluster. Such insight is useful in automated evidence-based approaches to understanding CHF because it shows decision makers how the health system has responded to patients with similar treatment histories. To obtain the best input parameters, a Kriging response surface over 100 grid points (cost combinations) was created and plotted.
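
For illustration, the core of the OM computation is an edit distance between two treatment-history sequences governed by a substitution cost and an insertion/deletion cost. The minimal Python sketch below is not the authors' implementation (which used R and TraMineR); the procedure codes are placeholders, and the default costs simply reuse the optimal values reported in the Results below.

    # Optimal matching (OM) distance between two state sequences, computed by
    # dynamic programming with constant substitution and indel costs.
    def om_distance(seq_a, seq_b, sub_cost=0.722, indel_cost=0.658):
        n, m = len(seq_a), len(seq_b)
        # d[i][j] = cost of aligning the first i items of seq_a with the first j of seq_b
        d = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            d[i][0] = i * indel_cost
        for j in range(1, m + 1):
            d[0][j] = j * indel_cost
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                sub = 0.0 if seq_a[i - 1] == seq_b[j - 1] else sub_cost
                d[i][j] = min(d[i - 1][j - 1] + sub,     # match / substitute
                              d[i - 1][j] + indel_cost,  # delete
                              d[i][j - 1] + indel_cost)  # insert
        return d[n][m]

    # Toy treatment histories (procedure codes are invented for the example).
    patients = [list("AABBC"), list("ABBCC"), list("CCBAA")]
    print([[round(om_distance(p, q), 3) for q in patients] for p in patients])

The resulting pairwise distance matrix is the input on which a hierarchical clustering step would then operate.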

Result: The optimal input combination was (Sub, InDel) = (0.722, 0.658), with a corresponding quality of 0.503, as shown in the figure. The Kriging output suggests that the relationship between the costs and quality is nonlinear and non-smooth: small changes in the inputs produce abrupt changes in the output (see figure).

Conclusions: Automated methods of analysis require predictable outputs in order to be repeatable and reliable. Because the response surface showed significant non-smoothness, the "quality" measure from OM must be better explored in relation to EHR data before this algorithm's desirable properties and rich body of research in other fields can be exploited. Future research is needed to define the conditions and properties under which EHRs may be used with OM for evidence-based methods of inquiry. Research supported by an NSF Graduate Research Fellowship. [1] R Development Core Team, "R," 2011. [2] A. Gabadinho et al., "Analyzing and visualizing state sequences in R with TraMineR," 2011.

1:15 PM
I-2
(MET)
M. Reza Skandari, MS, Steven M. Shechter, PhD and Nadia Zalunardo, MD, SM, FRCP(C), University of British Columbia, Vancouver, BC, Canada

Purpose: To develop data-driven, evidence-based guidelines for deciding when to initiate arteriovenous fistula (AVF) creation in individuals with progressive chronic kidney disease (CKD). 

Method: We developed a Monte Carlo simulation model to evaluate existing and alternative guidelines for the optimal timing of referral for AVF creation with respect to quality-adjusted life expectancy, the proportion of CKD patients starting hemodialysis (HD) with an AVF or a central venous catheter (CVC), and the proportion of patients who have a functional AVF that goes unused. Based on estimated glomerular filtration rate (eGFR) measurements for a cohort of 860 CKD patients, we fit patient-specific regression models to simulate eGFR values over time. We combined primary data on AVF referral-until-surgery time with literature estimates of fistula failure rates to model whether and when an AVF can be used to support HD. We used health state utility estimates from the literature to evaluate quality-adjusted life expectancy.
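
A minimal sketch of the simulation backbone is shown below, assuming linear patient-specific eGFR trends with Gaussian noise and a fixed dialysis-initiation threshold; the threshold, slope, and noise level are illustrative placeholders rather than the study's estimates, and the sketch omits the AVF referral-to-surgery and failure components.

    import random

    HD_THRESHOLD = 10.0   # illustrative eGFR (mL/min/1.73 m^2) at which HD is assumed to start
    NOISE_SD = 2.0        # residual variation around the fitted trend (illustrative)

    def months_until_hd(intercept, slope_per_month, horizon=240):
        """First month at which the simulated eGFR falls below the HD threshold."""
        for month in range(horizon):
            egfr = intercept + slope_per_month * month + random.gauss(0, NOISE_SD)
            if egfr < HD_THRESHOLD:
                return month
        return None  # never reaches dialysis within the horizon

    random.seed(1)
    # A hypothetical patient: eGFR of 30 today, declining about 0.4 per month.
    draws = [months_until_hd(30.0, -0.4) for _ in range(10_000)]
    reached = [d for d in draws if d is not None]
    print(f"mean simulated months to HD: {sum(reached) / len(reached):.1f}")
    # A guideline with a 9-12 month window would refer for AVF creation once the
    # anticipated time to HD falls inside that window.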

Result: Guidelines that recommend AVF referral within a 9-12 month window of the anticipated HD start time appear optimal, improving upon eGFR threshold-based guidelines by 5.6 to 22.3 quality-adjusted life days, depending on which threshold is considered. A policy that waits until HD is needed before referring patients for AVF yields an average decrease of 31.9 quality-adjusted life days per patient relative to the optimal policy. A 12-month preparation window would result in 8.5% of 50- to 60-year-olds having a wasted functional AVF, with the percentage more than doubling to 18.4% for patients 80-90 years old.

Conclusion: Our results consistently demonstrate that guidelines based on initiating AVF creation within a time window of the anticipated dialysis start date outperform guidelines based on eGFR falling below some threshold. Elderly patients are more likely to end up with unused AVFs, so separate guidelines might be considered for that subpopulation.

1:30 PM
I-3
(MET)
Vishal Ahuja, B.E., M.A.Sc. and John Birge, A.B., M.S., Ph.D., University of Chicago, Chicago, IL
  

Purpose: Traditional clinical trials are randomized, i.e., the allocation of patients to treatments is purely random (e.g., a fair coin toss) and the goal is to maximize learning about treatment efficacy. Adaptive trial designs, on the other hand, allow clinicians to learn about drug effectiveness during the course of the trial. An ideal adaptive design is one where patients are treated as effectively as possible without sacrificing any learning. We propose such an adaptive design, one that uses forward-looking algorithms to fully exploit learning from multiple patients simultaneously.

Methods: The class of problems involving adaptive designs has its roots in the multi-armed bandit problem, which exemplifies the tradeoff between the cost of gathering information and the benefit of exploiting information already gathered. The setup is in the form of a Markov Decision Process (MDP) with one major difference: in our setup, the transition probabilities are unknown. Instead, we assume a parametric distribution on the transition probabilities prior to the trial, where the parameters of the assumed distribution represent our beliefs about the outcome probabilities for each treatment. As the trial progresses, we update the beliefs dynamically in a Bayesian fashion using information observed during the trial (see transition diagram below). We assume that patients are homogeneous and that patient responses are available immediately.
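
The Bayesian belief update can be illustrated with conjugate Beta priors on each arm's success probability. The sketch below shows only that update together with a myopic allocation rule; this is a deliberate simplification, not the forward-looking Jointly Optimal design proposed here, and the success probabilities are invented for the example.

    import random

    random.seed(0)
    true_p = {"A": 0.55, "B": 0.70}        # unknown to the trial (illustrative values)
    beliefs = {"A": [1, 1], "B": [1, 1]}   # Beta(alpha, beta) belief per treatment arm

    def posterior_mean(arm):
        a, b = beliefs[arm]
        return a / (a + b)

    successes = 0
    for patient in range(200):
        # Myopic stand-in for the allocation rule: pick the arm with the highest posterior mean.
        arm = max(beliefs, key=posterior_mean)
        outcome = random.random() < true_p[arm]
        successes += outcome
        # Conjugate Bayesian update of the belief for the chosen arm.
        beliefs[arm][0] += outcome
        beliefs[arm][1] += 1 - outcome

    print(successes / 200, {k: round(posterior_mean(k), 2) for k in beliefs})

A fully forward-looking design would instead choose each allocation by solving a dynamic program over future belief states rather than greedily.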

Results: The Jointly Optimal design that we propose yields better patient outcomes than existing implementable adaptive designs. This is because our design incorporates the previous responses of all patients when making decisions and naturally allows for mixtures of treatments without artificially imposing constraints. Under the scenarios we consider, our design provides an improvement, measured as an increase in the expected proportion of successes, of up to 8.64% over the best existing adaptive design. We then validate our design in a real setting by implementing it ex post on a recently conducted stent study, a two-armed randomized trial. We find that implementing our adaptive design would decrease the total number of patient failures by 15, or over 32%, in expectation, where a failure is defined as stroke or death within 30 days.

Conclusions: Adaptive designs that learn from multiple patients, such as our proposed design, result in improved patient outcomes compared to randomized designs or existing adaptive designs. We quantify this improvement under various scenarios.

1:45 PM
I-4
(MET)
Eva Enns, MS (1), Suzann Pershing, M.D. (1), Yang Wang, MS (1) and Jeremy D. Goldhaber-Fiebert, PhD (2), (1) Stanford University, Stanford, CA, (2) Centers for Health Policy & Primary Care and Outcomes Research, Stanford University, Stanford, CA

Purpose: Simulation modelers require transition probabilities between disease states that are often not directly observed. While data may be collected on timescales of years or even decades, underlying disease dynamics evolve at much shorter timescales. Accurate transition probability estimates are difficult to obtain, and may require solving complex mathematical optimization problems.

Method: We consider a cohort model over time. Disease dynamics evolve according to x_{t+1} = A x_t, where x_t describes the proportion of the population in each of a finite number of categories and A is the transition matrix. The transition probabilities must be estimated from cross-sectional samples of the state of the cohort at a subset of time points. This gives rise to equations of the form x_{t+L} = A^L x_t, where L is the interval between samples. In the general case, samples could be unevenly spaced and A could vary across different sample intervals. Our goal is to find an A that best fits the observations, given the observations’ precision and assumptions about disease progression and regression. We develop an iterative algorithm using a sequence of simple optimizations. We select arbitrary initial values for A and estimate the cohort states x_0, …, x_{t+L} (including values at unobserved time points) that minimize the sum of residuals ∑_{t=0,…,L-1} (x_{t+1} - A x_t)², subject to constraints. Then, we fix the cohort states x_0, …, x_{t+L} at our estimated values and solve for the transition matrix A that again minimizes the residuals. We repeat this procedure until the estimated probabilities in A converge.
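
A highly simplified sketch of this alternating scheme follows: the cohort is "observed" only at t = 0 and t = L, the unobserved states are filled in by propagating forward with the current estimate of A (the authors instead solve a constrained least-squares problem at this step), and A is re-fit by unconstrained least squares followed by projection onto row-stochastic matrices. The data are synthetic, and a row-vector convention x_{t+1} = x_t A is used, i.e., the transpose of the notation above.

    import numpy as np

    A_true = np.array([[0.9, 0.1, 0.0],
                       [0.0, 0.8, 0.2],
                       [0.0, 0.0, 1.0]])
    L = 5
    x0 = np.array([1.0, 0.0, 0.0])
    xL = x0 @ np.linalg.matrix_power(A_true, L)   # "observed" cross-section at t = L

    A = np.full((3, 3), 1 / 3)                    # arbitrary initial values for A
    for _ in range(200):
        # Step 1: fill in the unobserved states x_1, ..., x_{L-1} given the current A.
        xs = [x0]
        for t in range(1, L):
            xs.append(xs[-1] @ A)
        xs.append(xL)
        # Step 2: re-fit A by least squares on consecutive state pairs, then
        # clip and renormalize rows so A remains a valid transition matrix.
        X, Y = np.vstack(xs[:-1]), np.vstack(xs[1:])
        A, *_ = np.linalg.lstsq(X, Y, rcond=None)
        A = np.clip(A, 1e-12, None)
        A /= A.sum(axis=1, keepdims=True)

    # Estimated transition matrix (identifiability is limited with only two cross-sections).
    print(np.round(A, 2))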

Result: We apply our method to a previously developed model of progressive diabetic macular edema to infer monthly transition probabilities between visual acuity levels from cross-sectional data measured at 5-year intervals. We compare our iterative approach to a traditional Nelder-Mead algorithm, running both algorithms from 1,000 random starting locations. While Nelder-Mead identified a slightly better fit overall than the iterative algorithm, the iterative algorithm achieved a better mean fit with lower variability, identifying a solution within 15% of the best-fit residual for over 90% of starting points; Nelder-Mead did so for only 8% of starting locations.

Conclusion: A fundamental problem faced across a range of modeling applications is how to consistently infer transition probabilities from multiple cross-sectional prevalence estimates. We describe an iterative algorithm that produces accurate and consistent solutions.

2:00 PM
I-5
(MET)
Alireza Sabouri, Steven M. Shechter, PhD and Tim Huh, PhD, University of British Columbia, Vancouver, BC, Canada

Purpose: Patients on the kidney transplant waiting list are at significant risk of developing cardiovascular disease (CVD) during the time they wait for a kidney offer, and transplant centers want to avoid performing risky transplant operations on such patients. We develop data-driven, evidence-based CVD screening guidelines that minimize this risk. 

Methods: To develop effective screening guidelines, we use an optimization model and a discrete-event simulation program to determine the optimal times to screen a particular patient for possible development of CVD, taking into account the tradeoff between more frequent screenings (which incur high resource costs and patient inconvenience) and less frequent ones (which increase the risk that a donated kidney goes to a patient with CVD).
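
As a rough illustration of how such a simulation can score a candidate schedule, the sketch below draws a CVD onset time and a kidney-offer time for each simulated patient and records how often a transplant is performed after undetected CVD onset, together with the expected number of screenings. The exponential rates and the schedules are illustrative placeholders, not the study's inputs.

    import random

    random.seed(2)

    def evaluate(schedule, n=100_000, cvd_rate=0.05, offer_rate=0.4):
        """Return (P(transplant with undetected CVD), E[number of screenings])."""
        undetected, screens = 0, 0
        for _ in range(n):
            cvd_time = random.expovariate(cvd_rate)      # years until CVD develops
            offer_time = random.expovariate(offer_rate)  # years until kidney offer
            performed = [t for t in schedule if t < offer_time]
            screens += len(performed)
            caught = any(t >= cvd_time for t in performed)  # a screen after onset detects it
            if cvd_time < offer_time and not caught:
                undetected += 1                          # transplant goes ahead despite CVD
        return undetected / n, screens / n

    annual = list(range(1, 11))     # screen every year while waiting
    three = [1.5, 3.5, 6.0]         # a hypothetical three-screen schedule
    for name, sched in [("annual", annual), ("3 screens", three)]:
        p, k = evaluate(sched)
        print(f"{name:10s} P(transplant with CVD) = {p:.3f}   E[screenings] = {k:.2f}")

An optimization layer would then search over candidate screening times to trace out the efficient frontier described in the Results.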

Results: In comparing our analytically derived optimal policies with those currently used by the British Columbia Transplant Society, we find that by scheduling only a few screenings at the optimal times, we can not only improve transplant outcomes but also use screening resources more efficiently. In particular, the current policy suggests annual screening of high-risk patients. Under this policy, the probability of performing a transplant on a patient with CVD is 0.077 and the expected number of screenings performed is 1.56. In contrast, an optimal schedule of 3 screening times reduces the probability of this adverse event by 0.035 with a slightly smaller expected number of screenings. Furthermore, we show that fixed-interval screening policies, which are common in practice, are dominated by the efficient frontier (likelihood of successful transplant vs. average number of screenings performed) generated by our optimal screening policies. Our results also suggest that a patient's waiting time on the list is a more important factor in determining the optimal screening times than CVD risk.

Conclusions: Our results demonstrate that efficiencies can be achieved in both transplant outcomes and resource usage by adopting the variable-interval screening policies obtained from our optimization model. Furthermore, our results indicate that factors that affect patients' waiting times (e.g., rank on the waiting list, blood type) must be considered in designing the screening guidelines.

2:15 PM
I-6
(MET)
Lauren E. Cipriano, MS, Stanford University, Stanford, CA and Thomas A. Weber, PhD, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland

Purpose: Standard methods of health-policy evaluation assume that future cohorts are similar to the modeled cohort. Moreover, standard value-of-information (VOI) calculations treat per-person VOI as constant across cohorts and do not consider the option to collect information in the future. We show that when model parameters vary across cohorts, it may be optimal to delay information collection. We provide a framework for evaluating the marginal value of sample information and thus the optimal timing and scale of information acquisition.

Methods: The value of a disease-screening program is evaluated for future cohorts. Disease prevalence for future cohorts is (imperfectly) observable by collecting costly sample information and otherwise evolves randomly with drift across periods. We formulate a Markov decision problem with linear stochastic dynamics and a hidden state. The incremental net monetary benefit is assumed linear in the uncertain parameter, which is itself decreasing in expectation. Using a dynamic-programming approach, it is possible to determine decision rules for optimal continuation and information-acquisition policies that govern the dynamic implementation (and eventual discontinuation) of the health program.
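
A stylized backward-induction sketch of this structure is given below: the state is the expected prevalence, the per-period incremental net monetary benefit is linear in prevalence, prevalence drifts downward, and each period the decision maker stops, continues, or continues and pays for sample information that resolves the belief up or down. Because the value function is convex, information is worth buying only near the stopping boundary, which yields the same three-region structure described in the Results. All constants are illustrative, and the discretized belief is a simplification of the hidden-state model used here.

    N_GRID, HORIZON = 101, 20
    DRIFT, SPREAD = 0.02, 0.05        # per-period prevalence drift; belief resolution from sampling
    SLOPE, OFFSET = 100.0, 30.0       # net benefit of screening = SLOPE * prevalence - OFFSET
    INFO_COST = 0.5                   # cost of collecting sample information

    prevs = [i / (N_GRID - 1) for i in range(N_GRID)]

    def idx(q):
        """Nearest grid index for a (clipped) prevalence value."""
        q = min(1.0, max(0.0, q))
        return round(q * (N_GRID - 1))

    value = [0.0] * N_GRID            # terminal value
    for _ in range(HORIZON):
        new_value, policy = [], []
        for p in prevs:
            nxt = p - DRIFT
            benefit = SLOPE * p - OFFSET
            stop = 0.0
            cont = benefit + value[idx(nxt)]
            # Sampling resolves next period's belief to nxt +/- SPREAD with equal probability.
            info = benefit - INFO_COST + 0.5 * (value[idx(nxt + SPREAD)] + value[idx(nxt - SPREAD)])
            best = max(stop, cont, info)
            new_value.append(best)
            policy.append(("stop", "continue", "continue+info")[(stop, cont, info).index(best)])
        value = new_value

    # Report where the optimal action changes along the prevalence axis.
    print([(round(prevs[i], 2), policy[i]) for i in range(N_GRID)
           if i == 0 or policy[i] != policy[i - 1]])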

Results: The optimal policy is characterized by a map from the state space to actions, featuring three regions (Figure). In region III, the expected prevalence is above the upper threshold and the optimal policy is to continue the disease-screening intervention without information acquisition. In region I, the expected prevalence is below the lower threshold and it is optimal to terminate the disease-screening program. Between the two thresholds, it is optimal to continue the disease-screening program and collect information about the current cohort's disease prevalence. Further, for any initial belief about cohort prevalence, we can numerically calculate the expected value of sample information given the possibility of collecting information in the future. The results of this analysis are provided in a ready-to-use format for decision makers so that they can quickly determine the currently optimal policy, the length of its implementation horizon, and the subsequent action (which then leads to a state update).

Conclusions: When cohort or intervention characteristics vary over time, the recurrent intervention and information-collection decisions can be determined by solving a stochastic dynamic program.  Evaluating VOI without considering the possibility of collecting information in future periods, when the information may be more valuable, may result in sub-optimal actions.