* Finalists for the Lee B. Lusted Student Prize
Purpose: The cost-effectiveness of specific interventions to control epidemics may change dramatically over time. School closure, for example, may have an important role in reducing disease transmission during early stages of an influenza outbreak, but its benefit would diminish as the pool of susceptible individuals is depleted (either because of deployment of an immunizing vaccine or unabated progression of the epidemic). Our goal is to develop and evaluate a decision tool to adaptively guide the cost-effective use of control interventions throughout the epidemic.
Method: We develop a dynamic decision model to optimize control policies that use observed, but likely imperfect, measures of the epidemic to inform decisions over time. To illustrate the potential uses of this model, we consider a question that arises during novel influenza epidemics: when should schools be closed and re-opened? We apply our decision model to optimize alternative school-closure policies that differ in how they use epidemiological data to recommend whether schools should be open or closed each week. To evaluate the comparative performance of these policies, we built a simulation model of influenza outbreaks and calibrated it using data from the 2009 U.S. H1N1 influenza pandemic. To optimize a policy, we use net health benefit as the objective function.
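The contrast between a calendar-based closure rule and an adaptive, observation-driven rule can be sketched in a toy weekly simulation. All rates, thresholds, and function names below are illustrative assumptions, not the abstract's calibrated model:

```python
import random

def simulate(policy, weeks=40, n=10_000, beta=1.6, seed=0):
    """Toy weekly SIR-type epidemic; `policy(week, hosp_history)` returns True to close schools."""
    rng = random.Random(seed)
    s, i = n - 10, 10                 # susceptible, currently infectious
    hosp_hist = []
    total_hosp = closed_weeks = 0
    for week in range(weeks):
        closed = policy(week, hosp_hist)
        closed_weeks += closed
        eff_beta = beta * (0.6 if closed else 1.0)   # assumed 40% transmission cut while closed
        new_inf = min(s, int(eff_beta * i * s / n))
        hosp = sum(rng.random() < 0.05 for _ in range(new_inf))  # assumed ~5% hospitalized
        hosp_hist.append(hosp)
        total_hosp += hosp
        s -= new_inf
        i = new_inf
    return total_hosp, closed_weeks

# Policy A analog: close during a fixed calendar window after the first hospitalized case.
fixed = lambda wk, hist: 10 <= wk < 15
# Adaptive analog: close whenever last week's hospitalization count exceeds a threshold.
adaptive = lambda wk, hist: bool(hist) and hist[-1] >= 20

hosp_fixed, weeks_fixed = simulate(fixed)
hosp_adaptive, weeks_adaptive = simulate(adaptive)
```

The adaptive rule reacts to the observed hospitalization stream rather than the calendar, which is the distinction the abstract draws between Policy A and Policies B and C.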
Result: Although Policies A-C rely on the same single source of epidemic data (i.e. the weekly number of hospitalized cases) to guide decisions, different uses of these data result in different health and financial outcomes (see Figure). Policy A, which uses only the time elapsed since the first hospitalized case to inform decisions (e.g. close schools during weeks 10-14 after observing the first hospitalized case), is dominated by Policies B and C, each of which allows more adaptive responses to the accumulating observations. Policy C, by leveraging two measures related to hospitalizations as well as information about previous school closings, better identifies the current epidemic state, and dominates Policies A and B.
Conclusion: While simple control policies (such as Policies A and B) are easy to optimize using mathematical or simulation models, policies that utilize observations in a more intelligent way (such as Policy C) can yield better outcomes. Our modeling approach provides a flexible framework for generating adaptive policies that leverage real-time observations to inform decision making.
Purpose: To individualize the application of genetic testing by developing and validating risk prediction models that identify patients most likely to benefit from genotype-tailored therapies.
Methods: We developed a model to select patients for genotyping based on predicted use of genetic variants in future prescribing episodes and examined its performance in an independent validation set. We retrieved electronic medical records (EMR) from a retrospective cohort of patients at Vanderbilt University Medical Center who met 'medical home' criteria between 6/30/2005 and 5/31/2010. Patients were included if they were ≥ 18 years of age and had completed three clinical visits in a two-year period with a primary care provider or specialist. The primary outcome was the two-year risk of an incident prescription of clopidogrel, a statin, or warfarin, all of which can be tailored to pharmacogenomic variants.
Using demographics and clinical data, we developed a Cox regression model to estimate the risk of being prescribed a target medication within two years of the medical home date. A modified version of the model was implemented within an EMR-based clinical decision support tool, and risk was calculated prior to every planned patient encounter. A high-risk subset of patients (>28.5% risk in two years) was identified and sampled for preemptive genotyping between 6/1/2010 and 3/31/2013. We summarize the extent to which the genotyped patients in this validation set were enriched with those eventually prescribed a target medication, compared to other, simpler sampling approaches (random sampling; sampling based on high age and body mass index (BMI)).
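The enrichment comparison in the validation step can be illustrated with a synthetic cohort. The risk scores and prescription probabilities below are invented stand-ins for the Cox model's predictions, not the study's data:

```python
import random

rng = random.Random(42)

# Hypothetical cohort: each patient has a risk score in [0, 1]; the true
# prescription probability rises with the score (a stand-in for the model output).
cohort = []
for _ in range(20_000):
    score = rng.random()
    prescribed = rng.random() < 0.1 + 0.6 * score
    cohort.append((score, prescribed))

def prescription_rate(sample):
    return sum(p for _, p in sample) / len(sample)

k = 1_673  # size of the genotyped cohort reported in the abstract
by_risk = sorted(cohort, reverse=True)[:k]   # model-driven: take the top-k risk scores
at_random = rng.sample(cohort, k)            # comparator: random sampling of equal size

rate_model = prescription_rate(by_risk)
rate_random = prescription_rate(at_random)
```

Sampling the highest-risk patients concentrates genotyping on those most likely to be prescribed a target medication, which is the enrichment the abstract quantifies (48% vs 19%).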
Results: The model exhibited little to no overfitting for discriminating between patients prescribed a target medication and others in the original training dataset. Applying encounter inclusion criteria, 18,950 patients qualified for the validation set in clinics where the decision support tool was deployed. Of the 1673 patients selected for preemptive genotyping by the model, 48% (95% confidence interval: 44%-52%) were prescribed a target medication, while two identically sized cohorts defined by random sampling or age- and BMI-based sampling yielded target prescription rates of 19% (95% CI: 16%-22%) and 34% (95% CI: 30%-37%), respectively.
Conclusions: Model-driven patient selection within an EMR increases the efficiency of large-scale genotyping and use of genetic information to direct prescribing.
Method: Calibration of natural history parameters was carried out in two steps in collaboration with the authors of the Erasmus MISCAN model. First, we calibrated incidence of latent cancer to match age-specific prevalence data from autopsy studies. Second, we recalibrated all consecutive progression and detection parameters to match observed data of the Rotterdam cancer registry and the European trial (ERSPC). The calibration was performed using a deterministic state-transition model programmed in the statistical software R. As in the PCOP model, prostate cancer is separated into nine stage- and grade-specific health states. Transitions between cancer states were allowed to vary with dwelling time. Latent cancer can be detected by clinical symptoms or screening. Model parameters were fitted using the ‘nlminb’ optimization routine in R.
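The two-step fitting idea can be sketched as a minimal one-parameter calibration: a grid search stands in for R's nlminb, and the prevalence targets below are invented placeholders for the autopsy data:

```python
# Toy analog of the first calibration step: fit an annual latent-onset
# probability so model prevalence matches (hypothetical) autopsy targets.
targets = {50: 0.20, 60: 0.30, 70: 0.40}  # age: assumed latent-cancer prevalence

def model_prevalence(p, age):
    # Minimal state-transition sketch: prevalence = 1 - P(no onset by `age`)
    return 1.0 - (1.0 - p) ** age

def loss(p):
    # Sum of squared deviations between model output and targets
    return sum((model_prevalence(p, a) - t) ** 2 for a, t in targets.items())

# Grid search over candidate annual probabilities (nlminb would do this
# with a quasi-Newton optimizer instead).
best_p = min((i / 100_000 for i in range(1, 2_000)), key=loss)
fit_error = loss(best_p)
```

Even this toy version shows a feature of real calibration: a single parameter cannot hit all targets exactly, so the fit minimizes, rather than eliminates, the deviations, much as the abstract reports "modest" deviations from observed data.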
Result: In total, we calibrated 46 parameters. Deviations from observed data are modest, and results are robust against variations of the calibration approach. To compensate for the increase of the latent prevalence pool, additional parameters allowing for an interruption of disease progression in the stage- and grade-specific health states had to be introduced to reach a good fit to the observed incidence data. Fitting the age-specific detection rates observed in the European trial required implementation of an effect modifier allowing for lower screening sensitivities in older patients. Comparison of the recalibrated model with our previous model indicates that increasing the latent prevalence pool to match autopsy data results in a considerable increase of overdiagnosis and lower screening sensitivity, which ultimately decreases the benefit-harm ratio of screening.
Conclusion: Calibration can help to better understand latent disease processes and to derive new hypotheses.
The purpose of this study is to test the hypothesis that contact dynamics and network structures are important for accurately predicting the spread of sexually transmitted diseases (STDs).
The Network, Norms and HIV/STI Risk Among Youth (NNAHRAY) project included both a relationship survey and laboratory testing for STDs among 465 interviewees residing in Bushwick, New York between 2002 and 2004. Testing included human immunodeficiency virus (HIV) and herpes simplex virus-2 (HSV2). We created a dynamic network model to replicate the risky-behavior contact network structures reported in NNAHRAY. Combining stochastic disease models of HIV and HSV2 with our dynamic network model, we simulated the spread of both diseases from 1990 to 2002 in a synthesized population based on the census information collected in the NNAHRAY data. We proposed a network model fitting framework to adjust for the biased network sampling inherent in the NNAHRAY data, which included chain-referral methods. Using the same set of parameters estimated from the NNAHRAY data and the published literature, we compared our dynamic model with a range of static network models in fitting the contact network structures in NNAHRAY and the 12-year HIV prevalence in New York City (NYC) reported in the literature.
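A minimal sketch of coupling a dynamic contact network with stochastic transmission is given below; all parameters are illustrative assumptions on a toy scale, far simpler than the fitted NNAHRAY model:

```python
import random

rng = random.Random(7)
N, STEPS = 200, 120                          # people, monthly time steps (toy scale)
FORM, DISSOLVE, TRANSMIT = 0.02, 0.10, 0.05  # assumed per-step probabilities

edges = set()                                # current partnerships (a, b) with a < b
infected = set(rng.sample(range(N), 5))      # 5 seed infections

for _ in range(STEPS):
    # Dynamic network: partnerships dissolve and new ones form each step.
    edges = {e for e in edges if rng.random() > DISSOLVE}
    for _ in range(int(FORM * N)):
        a, b = rng.sample(range(N), 2)
        edges.add((min(a, b), max(a, b)))
    # Susceptible-infected transmission across current partnerships only.
    for a, b in list(edges):
        if (a in infected) != (b in infected) and rng.random() < TRANSMIT:
            infected.update((a, b))

prevalence = len(infected) / N
```

Because who can transmit to whom changes every step, long-run prevalence depends on partnership turnover, not just a fixed snapshot of the network; a static-network version of the same sketch would hold `edges` constant, which is the contrast the study evaluates.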
After fitting our dynamic network model to the measured contact network structure metrics, allowing 10% deviation for 11 metrics and 50% deviation for 2 high-dimensional metrics, we also fit our disease spread model to produce HIV (9%) and HSV2 (45%) prevalence, close to those reported in NNAHRAY (9% and 48%). Without the proposed network model fitting framework, neither the static network models nor our dynamic network model could reproduce all the proposed network structure metrics within the allowed ranges at the same time. Only our dynamic network model succeeded in doing so once the network sampling process was incorporated in the model fitting. Our combined stochastic disease model also provided the most accurate predictions of HIV prevalence relative to predictions from disease models based on static networks.
Our work supported the hypothesis that considering the underlying contact dynamics as well as network structures is important for making accurate disease prevalence predictions, demonstrating the need to model the data sampling process when validating against real-world data.
Our objective was to incorporate VRE/MRSA colonization and acuity into a discrete event simulation model of bed allocation and patient flow, which requires matching patients to beds on acuity and service. In semi-private rooms, additional matching is required on gender and patient history of colonization with VRE/MRSA. Like-gender, like-colonized patients can be cohorted.
We developed a discrete event model, using input data from a repository of 104,725 admissions between 2010 and 2011, including clinical and demographic data. Probability distributions of hourly time-varying acuity states were created over the admission duration based on patient movement time-stamps. The data were reduced to populate the model with patients arriving hourly with characteristics drawn from a joint distribution of acuity, service, gender, and VRE/MRSA colonization status. At each 1h time step, the model drew samples from the distribution; if a change in colonization or acuity resulted in a patient-bed mismatch, the patient was re-allocated when an appropriate bed became available. We examined mean length of stay (LOS, d) and occupancy to ensure accurate capture of patient flow, colonization status, and relative proportions of acuity transitions.
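The cohorting logic can be sketched in a small hourly simulation with a heap tracking discharges. Bed counts, arrival rates, colonization prevalence, and the LOS distribution below are invented placeholders, not the repository values:

```python
import heapq
import random

rng = random.Random(1)

# Toy bed map: 30 semi-private (2-bed) rooms plus 10 private (1-bed) rooms.
ROOMS = [{"cap": 2, "occ": []} for _ in range(30)] + \
        [{"cap": 1, "occ": []} for _ in range(10)]

def compatible(room, patient):
    if len(room["occ"]) >= room["cap"]:
        return False
    # Shared rooms require like-gender, like-colonized cohorting.
    return all(o["sex"] == patient["sex"] and o["col"] == patient["col"]
               for o in room["occ"])

departures = []               # min-heap of (discharge_hour, room_index, patient_id)
waiting, admitted, pid = [], 0, 0
for hour in range(24 * 60):   # 60 simulated days, hourly steps
    # Process discharges due this hour.
    while departures and departures[0][0] <= hour:
        _, ridx, leaving = heapq.heappop(departures)
        ROOMS[ridx]["occ"] = [o for o in ROOMS[ridx]["occ"] if o["id"] != leaving]
    # Assumed ~0.5 arrivals/hour; ~10% colonized on admission.
    if rng.random() < 0.5:
        waiting.append({"id": pid, "sex": rng.choice("MF"),
                        "col": rng.random() < 0.1})
        pid += 1
    # Allocate waiting patients to the first compatible bed, if any.
    still_waiting = []
    for p in waiting:
        ridx = next((i for i, r in enumerate(ROOMS) if compatible(r, p)), None)
        if ridx is None:
            still_waiting.append(p)   # retry when an appropriate bed opens
        else:
            ROOMS[ridx]["occ"].append(p)
            los = max(1, int(rng.expovariate(1 / (4.7 * 24))))  # mean LOS ~4.7 d
            heapq.heappush(departures, (hour + los, ridx, p["id"]))
            admitted += 1
    waiting = still_waiting

occupancy = sum(len(r["occ"]) for r in ROOMS) / sum(r["cap"] for r in ROOMS)
```

The key mechanism mirrors the abstract: a colonized patient can block the second bed of a semi-private room for incompatible patients, so effective capacity depends on the colonization and gender mix, not just the bed count.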
In a simulated hospital with 758 beds (67% semi-private), the model matched patients to beds based on observed characteristics: acuity (12% observation; 68% general; 12% step-down; and 8% intensive care unit), service (53% medicine; 47% surgery), gender (49% female), and colonization status (90.3% non-colonized, 4.1% VRE, 3.6% MRSA, 2.0% VRE/MRSA). Most patients remained in their current acuity state; the most common transition was to General Care Units (Figure). Mean LOS (± SD) in the repository was 4.7 ± 5.5 d; the model-estimated LOS after five years and >240,000 admissions was 4.9 ± 5.1 d. We achieved a model-estimated occupancy of 80.3 ± 4.3% compared with the hospital-reported 82.9 ± 1.7%. The proportion of occupied beds, by observed colonization status, was: 89.8% non-colonized, 4.8% VRE, 3.2% MRSA, and 2.2% VRE/MRSA.
Patient flow is influenced by arrivals and discharges of patients as well as within-hospital transfers, which are driven by observed VRE/MRSA colonization status and acuity. Incorporation of both in models of patient flow will increase their utility to clinicians and policy makers.
In cost-effectiveness analyses (CEA) alongside randomized controlled trials (RCTs) with non-compliance with the assigned treatment, studies typically report intention-to-treat (ITT) and per-protocol (PP) analyses. However, a PP analysis is likely to provide a biased estimate of the complier average causal effect (CACE). We propose an instrumental variable (IV) approach for providing unbiased CACEs in CEA.
Instrumental variable (IV) estimation can provide unbiased CACEs if the standard identification assumptions are satisfied, but cost-effectiveness data raise important additional challenges. In particular, IV methods need to recognise the correlation between cost and health outcomes. We consider IV approaches estimated by two-stage least squares (2SLS) regression: the first-stage regression estimates the effect of the treatment assigned on that received, and the second-stage regression includes the predicted treatment received as an independent variable in the outcome models. Initially we estimate incremental costs and outcomes by separate univariate 2SLS regressions, assuming the endpoints are uncorrelated. By contrast, our proposed method is a bivariate 2SLS regression, which recognises the correlation between costs and outcomes. We fit this model by maximum likelihood, with standard errors estimated by a non-parametric bootstrap procedure that resamples pairs of costs and effects to acknowledge the correlation and also recognises the uncertainty in both stages of the estimation procedure.
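The univariate 2SLS step can be sketched with a binary instrument, where both stages reduce to simple OLS slopes. The data-generating parameters below are invented for illustration and are not the REFLUX values:

```python
import random

rng = random.Random(3)

# Toy trial: Z = randomized assignment, D = treatment received (30% of the
# treated arm does not comply), Y = outcome with an assumed complier effect of 2.0.
n, true_effect = 50_000, 2.0
Z = [rng.random() < 0.5 for _ in range(n)]
complier = [rng.random() < 0.7 for _ in range(n)]   # 30% non-compliance
D = [z and c for z, c in zip(Z, complier)]
Y = [true_effect * d + rng.gauss(0, 1) for d in D]

def ols_slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Stage 1: regress treatment received on assignment.
stage1 = ols_slope([float(z) for z in Z], [float(d) for d in D])
# Fitted values (intercept is 0 here because nobody with Z=0 receives treatment).
Dhat = [stage1 * z for z in Z]
# Stage 2: regress the outcome on the predicted treatment received.
cace_2sls = ols_slope(Dhat, Y)
```

With a binary instrument this 2SLS estimate coincides with the Wald ratio (ITT effect on Y divided by ITT effect on D); the bivariate extension in the abstract additionally models the cost-outcome correlation, which this univariate sketch ignores.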
We illustrate these approaches in a reanalysis of the REFLUX study, in which about one third of patients randomized to surgery received medical management. We also assessed the relative performance of the methods in a simulation study in which costs and outcomes are assumed Normally distributed and correlated (-0.40), and there is 30% non-compliance. We report the bias and confidence interval coverage in estimating incremental net benefits (INBs).
In the REFLUX study, the INBs [SE] estimated according to ITT and PP were £4267 (4379) and £3063. The IV approaches reported estimated INBs of £5817 (univariate) and £5723 (joint estimation method). The simulation found that both IV methods provided unbiased INB estimates, with coverage levels close to the nominal level for the univariate approach (94.6%) but too conservative for the joint approach (97.8%).
This bivariate approach can provide unbiased estimates of the cost-effectiveness of the treatment received. The current bootstrap implementation overestimates the variance, and may require a shrinkage correction.