LUSTED FINALIST ABSTRACTS D: HEALTH SERVICES & POLICY RESEARCH

Friday, October 19, 2012: 4:00 PM-5:30 PM
Regency Ballroom D (Hyatt Regency)

Session Chairs:
David O. Meltzer, MD, PhD and Dominick Esposito
4:00 PM
L-1
(HSP)
Sabina S. Alistar, MS, Philip M. Grant, MD and Eran Bendavid, MD, MS, Stanford University, Stanford, CA

Purpose: Recent evidence shows both antiretroviral therapy (ART) and oral pre-exposure prophylaxis (PrEP) are effective in reducing HIV transmission in heterosexual adults in resource-limited settings. The epidemiologic impact and cost-effectiveness of combined prevention approaches remain unclear.

Method: We develop a dynamic mathematical model of the adult South African HIV epidemic. We consider 3 disease stages: early (CD4 > 350 cells/µL), late (200-350 cells/µL) and advanced (< 200 cells/µL). Infectiousness is based on disease stage, number of sexual partnerships, ART, and PrEP. We assume ART reduces HIV transmission by 95% and PrEP by 60%. We model 2 ART strategies: scaling up access for those with CD4 counts ≤ 350 cells/µL (Guidelines) and for all identified HIV-infected individuals (Universal). PrEP strategies include use in the general population (General) and in high-risk individuals (Focused). We consider strategies where ART, PrEP, or both are scaled up to recruit 25%, 50%, 75% or 100% of remaining eligible individuals yearly. We assume annual costs of $150 for ART and $80 for PrEP. We measure infections averted, quality-adjusted life-years (QALYs) gained, and incremental cost-effectiveness ratios over 20 years.
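
For intuition, a minimal compartmental sketch of the dynamics described above, in Python. All rates, starting values, and coverage mechanics are illustrative placeholders, not the authors' calibrated South African model; only the 95% (ART) and 60% (PrEP) transmission reductions are taken from the abstract.

```python
# Toy four-compartment sketch: susceptible on/off PrEP, infected on/off ART.
# Placeholder parameters -- NOT the calibrated South African model.
BETA = 0.10            # baseline transmission rate per year (assumed)
ART_REDUCTION = 0.95   # abstract: ART reduces transmission by 95%
PREP_REDUCTION = 0.60  # abstract: PrEP reduces acquisition by 60%

def infected_after(years=20, dt=0.01, art_cov=0.0, prep_cov=0.0):
    """Euler-integrate the toy model; coverage is re-imposed each step."""
    s, sp, i, ia = 0.85, 0.0, 0.15, 0.0
    for _ in range(int(years / dt)):
        sp, s = (s + sp) * prep_cov, (s + sp) * (1.0 - prep_cov)
        ia, i = (i + ia) * art_cov, (i + ia) * (1.0 - art_cov)
        foi = BETA * (i + (1.0 - ART_REDUCTION) * ia)    # force of infection
        new_s = foi * s * dt                             # infections off PrEP
        new_sp = foi * (1.0 - PREP_REDUCTION) * sp * dt  # infections on PrEP
        s, sp, i = s - new_s, sp - new_sp, i + new_s + new_sp
    return i + ia  # infected fraction after `years`

print(infected_after(),                           # status quo
      infected_after(art_cov=0.5),                # 50% ART coverage
      infected_after(art_cov=0.5, prep_cov=0.2))  # ART plus 20% PrEP
```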

Result: Scaling up ART to 50% of eligible individuals in South Africa averts 1,513,000 infections over 20 years under the Guidelines strategy and 3,591,000 infections under the Universal strategy. Universal ART is more cost-effective than Guidelines ($310-$340/QALY gained compared with the status quo). Expanding Guidelines ART to recruit 50% of those eligible yearly costs $410/QALY gained versus the status quo, and this estimate is stable at higher coverage rates. General PrEP is costly and provides relatively small benefits beyond those of ART scale-up. The cost-effectiveness of General PrEP becomes less favorable as ART is given more widely ($1,050-$2,800/QALY gained). However, Focused PrEP is cost-saving compared with the status quo and when added to all ART strategies except 75% or 100% Universal, where it is highly cost-effective.

Conclusion: Expanding ART coverage to individuals in early disease stages is more cost-effective than expanding treatment per current guidelines. PrEP can be cost-saving if it can be delivered to individuals at increased risk of infection.

4:15 PM
L-2
(HSP)
Sze-chuan Suen, BS, BA, Stanford University, Palo Alto, CA, Eran Bendavid, MD, MS, Stanford University, Stanford, CA and Jeremy D. Goldhaber-Fiebert, PhD, Stanford Health Policy, Centers for Health Policy and Primary Care and Outcomes Research, Stanford University, Stanford, CA

Purpose: Tuberculosis (TB) continues to be a major public health challenge in India, which accounts for a quarter of incident TB cases worldwide. Disease control is complicated by a growing burden of multi-drug resistant (MDR) TB. Understanding the drivers of India’s future TB and MDR-TB epidemic is crucial to disease control. We used simulation modeling to assess India’s future TB trends and the potential impacts of treatment programs.

Method: We developed a dynamic transmission microsimulation model of TB in India. Individuals were characterized by age, sex, smoking status, TB infection and disease, and whether they had drug-sensitive (DS) or MDR-TB. The model incorporated DOTS and DOTS+ treatment algorithms for DS-TB and MDR-TB, respectively, as well as empirically observed patterns of coverage and treatment uptake. Data sources included the United Nations Population Division, India’s National Family and Health Survey and Revised National Tuberculosis Control Program, and the published literature. We calibrated the model to India’s demographic patterns, age- and sex-specific smoking prevalence rates, the overall force of TB infection, and annual estimates of TB prevalence and incidence both before and during DOTS and DOTS+ ramp-up. We examined the roles played by the coverage and quality of DOTS and DOTS+ in the future prevalence and incidence of MDR-TB.
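
A schematic Python sketch of the individual-level transition logic such a microsimulation uses follows. The attributes mirror those listed above; all transition probabilities are invented placeholders, not the calibrated India estimates.

```python
import random
from dataclasses import dataclass

# Placeholder annual probabilities -- NOT calibrated values.
P_ACTIVATE = 0.001       # latent -> active TB per year
P_MDR_ON_DEFAULT = 0.05  # treatment default/incompletion selects for MDR

@dataclass
class Person:
    age: int
    male: bool
    smoker: bool
    state: str = "uninfected"  # uninfected -> latent -> active
    mdr: bool = False

def step_year(p: Person, foi: float) -> None:
    """Advance one person one year through a toy TB natural history."""
    p.age += 1
    if p.state == "uninfected" and random.random() < foi:
        p.state = "latent"
    elif p.state == "latent" and random.random() < P_ACTIVATE * (2 if p.smoker else 1):
        p.state = "active"
    elif p.state == "active":
        if random.random() < P_MDR_ON_DEFAULT:
            p.mdr = True        # incomplete treatment -> acquired resistance
        else:
            p.state = "latent"  # successful treatment (simplified)

cohort = [Person(age=random.randint(15, 60), male=random.random() < 0.5,
                 smoker=random.random() < 0.2) for _ in range(10_000)]
for year in range(25):
    active = sum(p.state == "active" for p in cohort)
    foi = 0.02 * (1 + active / len(cohort))  # crude transmission feedback
    for p in cohort:
        step_year(p, foi)
```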

Result: The model achieved good calibration for 1996-2011. Compared to a counterfactual without any DOTS, we estimated that DOTS has averted 100 million latent DS-TB infections and 3 million active TB cases in India to date. These effects differed by smoking status, age, and sex. DOTS was also associated with 7 million latent MDR-TB infections and 800,000 active MDR-TB cases arising through treatment default and incomplete treatment. We estimate that MDR-TB prevalence will increase by 150% by 2036 without any changes to DOTS or DOTS+. Improving DOTS quality now could avert >80% of incident MDR-TB cases. The timing of quality improvement matters: over time, fewer new MDR-TB cases arise from incomplete treatment and more result directly from transmission.

Conclusion: In India, DOTS has been associated with reducing overall TB incidence but increasing MDR-TB incidence. At the current quality of treatment programs, MDR-TB is expected to increase in India. Dynamic simulation models stratified by demographic and risk factors can provide timely insights to inform policymaking.

4:30 PM
L-3
(HSP)
Ankur Pandya, PhD1, Milton C. Weinstein, PhD2, Joshua A. Salomon, PhD2 and Thomas Gaziano, MD, MSc3, (1)Weill Cornell Medical College, New York, NY, (2)Harvard School of Public Health, Boston, MA, (3)Harvard Medical School, Boston, MA

Purpose:  Receiver operating characteristic (ROC) curves are commonly used to evaluate diagnostic tests, but many diseases have multiple risk factors or tests that could be used to perform these analyses.  We compared ROC curves for 15 approaches to assessing cardiovascular disease (CVD) risk, involving single or multiple risk factors or tests.

Method:  We calculated 15 rankings of risk for 3,501 men and 2,498 women in the NHANES III population (baseline values 1988-1994) to compare ROC curves using 10-year CVD death as the outcome of interest.  Five categories of approaches were evaluated:  1) Single risk factor (age, cholesterol, body-mass index [BMI], systolic blood pressure [SBP]); 2) Number (0-7) of dichotomous risk factors (age>55 years, LDL cholesterol>130 mg/dL, SBP>140 mmHg, BMI>30 kg/m², diabetes, smoking, SBP treatment) with single risk factors as tiebreakers (age, cholesterol, BMI, SBP); 3) Total CVD risk (based on Framingham or non-laboratory-based risk scores); 4) Multistage (Framingham risk only available for 75%, 50% or 25% of the population at intermediate risk, non-laboratory-based risk used for others); and 5) Combination of Framingham and non-laboratory-based risk (additive or multiplicative) for all individuals.  Categories 1 and 2 relied on dichotomous and/or single risk factors, while Categories 3, 4 and 5 involved total risk scores.  Categories 2, 4 and 5 consisted of multiple tests.
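
The core computation, ranking individuals by each candidate score and comparing areas under the ROC curve, can be sketched in a few lines of Python. The data below are synthetic stand-ins, not NHANES III, and the logistic outcome model is an invented placeholder; only the scoring approaches (single risk factor, risk-factor count with a tiebreaker) follow the categories above.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2_000
age = rng.uniform(25, 74, n)   # synthetic risk factors, not NHANES III
sbp = rng.normal(125, 18, n)
bmi = rng.normal(27, 5, n)
# Synthetic 10-year CVD death outcome (placeholder logistic model)
p_death = 1 / (1 + np.exp(-(-9 + 0.08 * age + 0.02 * sbp)))
died = rng.random(n) < p_death

# Category 1: rank by a single risk factor
for name, score in [("age", age), ("SBP", sbp), ("BMI", bmi)]:
    print(f"{name}: AUC = {roc_auc_score(died, score):.3f}")

# Category 2: count of dichotomous risk factors, age as the tiebreaker
counts = (age > 55).astype(int) + (sbp > 140).astype(int) + (bmi > 30).astype(int)
tiebreak = (age - age.min()) / (age.max() - age.min() + 1)  # stays below 1
print(f"count + age tiebreak: AUC = {roc_auc_score(died, counts + tiebreak):.3f}")
```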

Result:  In men, area under the ROC curve (AUC) results ranged from 0.474 (BMI single risk factor) to 0.782 (additive combination of Framingham and non-laboratory-based total risk scores).  In women, the range was 0.556 (BMI single risk factor) to 0.834 (Framingham total risk score).  All of the Category 1, 2, and 3 scores were statistically significantly worse (p<0.05) than the best score in each sex, except for age alone in men (AUC = 0.772), Category 2 tests with cholesterol or SBP as tiebreakers in women (AUCs of 0.807 and 0.827, respectively), and the non-laboratory-based total risk score in men (AUC = 0.782).  AUCs for multistage tests ranged from 0.774-0.780 and 0.812-0.827 in men and women, respectively.

Conclusion:  Tests involving total risk scores generally performed better than dichotomous and/or single risk factor-based tests.  In men, age as a single risk factor performed comparably to the best scores (particularly at stricter positivity thresholds).  In women, additional risk factor information beyond age significantly improved AUC results.

4:45 PM
L-4
(HSP)
Tinglong Dai, Ronghuo Zheng and Katia Sycara, PhD, Carnegie Mellon University, Pittsburgh, PA

Purpose: Deceased donors constitute the major source of transplanted organs in the U.S., but the current system for cadaveric organ donation and allocation has not effectively converted the public’s high approval of organ donation into satisfactory donation rates. One proposed policy change (hereafter the “donor priority rule”) is to give registered organ donors priority in receiving organs should they ever need a cadaveric organ. This research investigates the social welfare consequences of the donor priority rule.

Method: We build an analytic model of the current organ donation and allocation system using queueing and game-theoretic frameworks. In our model, each candidate’s utility is positively associated with quality-adjusted life expectancy (QALE), which is determined by life expectancies before and after transplantation, quality-of-life scores before and after transplantation, and the probability of receiving an organ (as opposed to dying while on the waiting list). One significant aspect of our model is the use of a rigorous heavy-traffic queueing approach to model candidates’ waiting times when the demand for organs far exceeds the sparse and random supply. This allows us to capture each individual’s decision to register as an organ donor. We characterize the equilibrium before and after adopting the policy.
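
A minimal sketch of the registration incentive at the heart of the model, with all numbers invented for illustration (the paper's QALE inputs and equilibrium computation are far richer than this):

```python
def registration_gain(p_need, q_post, le_post, p_prio=0.8, p_base=0.5):
    """Expected QALE gain from registering as a donor: with probability
    p_need the individual will someday need an organ, and the priority
    rule then raises the chance of receiving one (and hence enjoying the
    quality-weighted post-transplant life expectancy q_post * le_post)
    from p_base to p_prio. All values here are assumptions."""
    return p_need * (p_prio - p_base) * q_post * le_post

# Sicker individuals are far more likely to ever need an organ (high
# p_need), so they can gain more from registering -- the supply-pool
# distortion behind the welfare result.
print(registration_gain(p_need=0.02, q_post=0.9, le_post=20))  # healthy: 0.108
print(registration_gain(p_need=0.30, q_post=0.7, le_post=12))  # sick:    0.756
```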

Result: In contrast to popular belief and extant research findings (cf. Kessler and Roth 2012) about the role of the donor priority rule, we show that if the health status of the population is sufficiently heterogeneous, social welfare can be reduced by the donor priority rule. The main reason is that individuals in poor health have a stronger incentive to become organ donors, leading to a distorted pool of organ supply.

Conclusion: Our model is among the first to analyze individuals’ decisions to become registered organ donors. We show that although the donor priority rule invariably increases the size of the donor registry, overall social welfare can decrease after the rule is adopted if the population is sufficiently heterogeneous in health status. Conversely, social welfare increases when the variance in individual health status is low enough.

5:00 PM
L-5
(HSP)
Carrie C. Lubitz, MD, MPH1, Milton C. Weinstein, PhD2, G. Scott Gazelle, MD, MPH, PhD1, Pamela McMahon, PhD1 and Thomas Gaziano, MD, MSc3, (1)Massachusetts General Hospital, Boston, MA, (2)Harvard School of Public Health, Boston, MA, (3)Harvard Medical School, Boston, MA

Purpose: Patients with primary aldosteronism (PA) comprise 17-23% of the resistant hypertensive population. Consensus guidelines for the screening and diagnosis of unilateral PA vary. We aimed to identify cost-effective strategies, including the use of CT and adrenal venous sampling (AVS), for detecting patients with surgically correctable PA.

Method: A decision-analytic model (TreeAge Software 2009, Williamstown, MA) was used to compare the costs (testing, imaging, surgery, and discounted lifetime costs of spironolactone to treat non-surgical PA) and effectiveness (SBP reduction) of six screening and lateralization (i.e., identification of surgically correctable PA) strategies in 55-year-old resistant hypertensive patients, with and without the use of a confirmatory saline-infusion test (SIT, following a positive screening aldosterone-to-renin ratio), abdominal CT, and/or AVS. Patients diagnosed with unilateral PA underwent laparoscopic adrenalectomy; patients identified as having PA but who did not lateralize were given spironolactone. Estimates of differential changes in SBP for patients undergoing surgery or adding spironolactone, and for those with PA versus non-PA resistant hypertension, were based on prospective data from the literature. Costs were based on 2011 Medicare reimbursement schedules and the Red Book (PDR). The primary outcome was cost (2011 US$) per change in SBP (mmHg). Sensitivity analyses were performed.
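
The efficient-frontier logic used to compare such strategies can be illustrated with a short Python sketch. Strategy names, costs, and SBP reductions below are invented placeholders, not the model's outputs, and extended-dominance pruning is omitted for brevity.

```python
strategies = {  # name: (cost in US$, mean SBP reduction in mmHg) -- all assumed
    "CT only":                 (4_000,  8.0),
    "CT then AVS":             (7_500, 11.5),
    "direct AVS after screen": (9_000, 12.0),
}

# Sort by cost; keep only strategies strictly more effective than every
# cheaper one (removes strongly dominated options).
frontier = []
for name, (cost, eff) in sorted(strategies.items(), key=lambda kv: kv[1][0]):
    if not frontier or eff > frontier[-1][2]:
        frontier.append((name, cost, eff))

# Incremental cost-effectiveness ratio of each frontier strategy versus
# the next-cheapest strategy on the frontier.
prev_cost = prev_eff = 0.0
for name, cost, eff in frontier:
    icer = (cost - prev_cost) / (eff - prev_eff)
    print(f"{name}: ${icer:,.0f} per additional mmHg")
    prev_cost, prev_eff = cost, eff
```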

Result: Strategies with AVS strongly dominated strategies without AVS (Table 1). Three AVS strategies were on the efficient frontier. Although no conventional willingness-to-pay threshold for cost per change in SBP exists, proceeding to AVS following a positive screen for PA is cost-effective at a threshold of $1,661.39 per mmHg or more. The strategies on the efficient frontier were stable across ranges of effectiveness (changes in SBP) and diagnostic accuracy.

Conclusion: Of the tested surgical strategies, proceeding directly to lateralization with AVS after a positive screening test yields the greatest SBP reduction, but a strategy of using CT prior to AVS was also efficient. Given that PA patients have increased reversible cardiovascular risks and decreased quality of life in comparison with matched non-PA hypertensive patients, changes in SBP will likely have a greater impact on PA patients. Further modeling should explore the lifetime secondary differential effects of continued hypertension in PA patients, comparisons of surgical strategies to medical therapy alone, and the differential health-related quality of life of medical versus surgical strategies.

5:15 PM
L-6
(HSP)
Matthew S. Simon, MD1, Jared A. Leff, MS1, Melissa M. Cushing, MD1, Beth Shaz, MD2, David P. Calfee, MD1 and Alvin I. Mushlin, MD, ScM1, (1)Weill Cornell Medical College, New York, NY, (2)New York Blood Center, New York, NY
Cost-Effectiveness of Blood Donor Screening for Babesiosis in Endemic Regions

Purpose: Babesiosis is the most common transfusion-transmitted infection in the US and frequently results in severe or fatal illness in immunocompromised blood recipients.  Blood donor screening assays are currently investigational and not widely employed in endemic areas.  We evaluated the cost-effectiveness of 4 screening strategies for prevention of transfusion-transmitted babesiosis.

Methods: A decision-analytic model compared the cost-effectiveness of screening using (1) a questionnaire (status quo); (2) universal immunofluorescence antibody (IFA) assay; (3) universal IFA and polymerase chain reaction (PCR); and (4) recipient risk-based targeting, whereby a proportion of blood is IFA/PCR-screened and reserved for immunocompromised recipients. Data were from published sources, including the recently published 1-year experience of risk-based targeting at the Rhode Island Blood Center. A societal perspective with a time horizon of 1 year was adopted. Outcomes included screening and treatment costs, quality-adjusted life years (QALYs), and incremental cost-effectiveness (CE) ratios ($/QALY). Uncertainty was evaluated through 1-way, 2-way and probabilistic sensitivity analyses.
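
A stylized Python sketch of the expected-value comparison behind such a model follows. All inputs, including test cost, sensitivity, transmission probability, and QALY loss per case, are invented placeholders rather than the published estimates, and the questionnaire is simplified to catching no infected units.

```python
def strategy_outcomes(prev, sens, test_cost, n_units=100_000):
    """Expected cost and QALYs lost when screening n_units donated units
    at Babesia seroprevalence `prev` with a test of sensitivity `sens`."""
    missed = n_units * prev * (1 - sens)  # infected units slipping through
    cases = missed * 0.2                  # assumed transmission probability
    cost = n_units * test_cost + cases * 15_000  # assumed cost per case
    qalys_lost = cases * 0.5              # assumed QALY loss per case
    return cost, qalys_lost

for prev in (0.001, 0.0058, 0.014):       # prevalences cited in the abstract
    c0, q0 = strategy_outcomes(prev, sens=0.0, test_cost=0.0)  # questionnaire
    c1, q1 = strategy_outcomes(prev, sens=0.9, test_cost=6.0)  # IFA screening
    icer = (c1 - c0) / (q0 - q1)          # $ per QALY gained by IFA
    print(f"prevalence {prev:.2%}: IFA vs questionnaire ~${icer:,.0f}/QALY")
```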

Results: In the base case, IFA screening had a CE ratio of $12,400/QALY compared with the status quo, IFA and PCR had an incremental CE ratio of $103,700/QALY, and the targeted strategy was excluded due to extended dominance. In 1-way sensitivity analyses the optimal screening strategy was sensitive to prevalence, testing costs, and the likelihood of donor window-period infection. In probabilistic sensitivity analysis at a threshold of $100,000/QALY, IFA/PCR screening had a 55.7% probability of being the optimal strategy at the 0.58% base-case prevalence, versus 2.1% at 0.1% prevalence and 91.5% at 1.4% prevalence.

Conclusions: Where Babesia prevalence exceeds 0.1%, IFA screening provides significantly better value for money than the questionnaire alone, and at prevalence exceeding 0.6% the incremental CE ratio for IFA/PCR screening compares favorably with many currently adopted blood-safety interventions (Figure). More information on epidemiology and the accuracy of screening assays is needed to inform the optimal strategy for a national policy, but our results demonstrate a cost-effective means to improve blood safety in endemic areas.