Tuesday, October 20, 2015: 1:30 PM - 3:00 PM
Grand Ballroom B (Hyatt Regency St. Louis at the Arch)

1:30 PM

Sze-chuan Suen, MS1, Margaret L. Brandeau, PhD1 and Jeremy D. Goldhaber-Fiebert, PhD2, (1)Department of Management Science and Engineering, Stanford University, Stanford, CA, (2)Stanford Health Policy, Centers for Health Policy and Primary Care and Outcomes Research, Department of Medicine, Stanford University, Stanford, CA

Purpose: Effectively treating tuberculosis (TB) requires administering drugs to which the infection is not resistant. Though costly, drug sensitivity testing (DST) of patients receiving first-line treatment can triage those with multi-drug-resistant (MDR) TB to appropriate but expensive treatment alternatives. In India, patients receive DST if they have not responded to four months of treatment, as measured by the imperfect but inexpensive sputum smear (SS) test. We seek to determine the optimal time to administer DST and the patterns of SS results that should prompt DST. If DST is administered too soon, many patients without MDR TB will be unnecessarily tested. If administered too late, patients with MDR TB may continue to transmit disease and experience declining health.

Method: We use a partially observed Markov decision process (POMDP) to determine the optimal timing and frequency of SS test information collection and DST testing in India. We calculate parameters such as patient response to treatment, patient dynamics while on treatment (the possibility of default or death), and discounted lifetime costs and health benefits using clinical studies and our previously published TB microsimulation model. We solve the POMDP using value iteration on a constrained feasible belief set.
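The core of such a POMDP is a belief about the patient's unobserved resistance status, updated after each sputum smear result. A minimal sketch of that Bayes update, using hypothetical smear test characteristics (not the model's calibrated values) and assuming, for illustration, that MDR patients are likelier to remain smear-positive on first-line treatment:

```python
def update_belief(p_mdr, ss_positive, sens=0.6, spec=0.9):
    """Bayes update of P(patient has MDR TB) after one sputum smear (SS) result.

    sens/spec are illustrative placeholders: the probability that an MDR
    patient stays smear-positive, and that a non-MDR patient goes
    smear-negative, respectively. They are not the study's parameters.
    """
    p_pos_if_mdr = sens          # P(SS positive | MDR)
    p_pos_if_not = 1 - spec      # P(SS positive | non-MDR)
    if ss_positive:
        num = p_pos_if_mdr * p_mdr
        den = num + p_pos_if_not * (1 - p_mdr)
    else:
        num = (1 - p_pos_if_mdr) * p_mdr
        den = num + (1 - p_pos_if_not) * (1 - p_mdr)
    return num / den
```

A policy solved by value iteration would map each updated belief (and time on treatment) to an action: continue first-line treatment or administer DST.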

Result: Current policy appears suboptimal given India's relatively high national estimates of MDR TB prevalence and transmission. For these estimates, DST should be administered to all patients at the outset of treatment. We project that this could save $7,800 per TB patient in discounted net monetary benefits after accounting for averted downstream transmissions. However, in settings where the risk of transmission or MDR prevalence is much lower than the national average, a patient's sequence of SS results can change the optimal DST timing, and individually tailored testing policies would be optimal. See Figure 1: national averages for India lie in the white region, but districts with low MDR prevalence or transmission have different optimal policies that vary by patient SS outcomes.

Conclusion: India should revise the drug sensitivity testing protocol in its first-line national TB treatment program to provide DST during the first month of treatment in areas of average or high MDR TB prevalence and transmission, and may wish to consider individually tailored DST regimens in low-transmission, low-MDR-prevalence areas to reduce financial costs.

1:45 PM

William W. L. Wong, Ph.D.1, Petros Pechlivanoglou, MSc, PhD1, Aysegul Erman2, Yasmin Saeed, BScPhm2, Mina Tadrous, PhD3, Mona Younis2, Noha Zaki Rayad, PhD4, Joanna M. Bielecki, BSC, MISt1, Valeria E. Rac, MD, PhD1 and Murray D Krahn, MD, MSc, FRCPC1, (1)Toronto Health Economics and Technology Assessment (THETA) Collaborative, University of Toronto, Toronto, ON, Canada, (2)Leslie Dan Faculty of Pharmacy, University of Toronto, Toronto, ON, Canada, (3)The Ontario Drug Policy Research Network, St. Michael's Hospital, Toronto, ON, Canada, (4)Toronto, ON, Canada
Purpose: An estimated 240 million people worldwide are chronically infected with the hepatitis B virus. Of those with chronic hepatitis B (CHB), 40% will silently progress to liver cirrhosis and are at risk of dying prematurely of liver failure or liver cancer. The objective of this review is to identify and synthesize the available randomized controlled trial (RCT) evidence on the comparative effectiveness and safety of the available CHB treatments (standard interferon, pegylated interferon, adefovir, lamivudine, entecavir, telbivudine, and tenofovir) in treatment-naive individuals.

Method: Databases (PubMed, Embase, Cochrane Library and Web of Science) were searched for RCTs investigating these therapies in hepatitis B e antigen (HBeAg) positive and/or HBeAg negative patients with CHB published in English before October 29, 2014. Network meta-analyses (NMA) were conducted to estimate pooled effectiveness and safety data using the following outcomes: 1) Efficacy for HBeAg positive patients: virologic response; alanine aminotransferase (ALT) normalization; HBeAg loss; HBeAg seroconversion; and hepatitis B surface antigen (HBsAg) loss; 2) Efficacy for HBeAg negative patients: virologic response and ALT normalization; and 3) Safety: serious adverse events, any adverse events, and withdrawal due to adverse events.
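An NMA extends pairwise inverse-variance pooling across a network of treatment comparisons. As a building block only, here is a fixed-effect pairwise pooling sketch on hypothetical 2x2 tables; the full NMA requires a network model linking direct and indirect comparisons and is not reproduced here:

```python
import math

def log_odds_ratio(a, b, c, d):
    """Log odds ratio and its variance from a 2x2 table:
    a/b = events/non-events on treatment, c/d = events/non-events on control."""
    return math.log((a * d) / (b * c)), 1/a + 1/b + 1/c + 1/d

def pool_fixed_effect(estimates):
    """Inverse-variance fixed-effect pooling of (log-OR, variance) pairs
    from several trials of the same comparison."""
    weights = [1 / var for _, var in estimates]
    pooled = sum(w * lor for w, (lor, _) in zip(weights, estimates)) / sum(weights)
    return pooled, math.sqrt(1 / sum(weights))
```

Pooling two identical trials leaves the point estimate unchanged while shrinking its standard error, which is the behaviour the NMA generalizes across the whole evidence network.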

Result: A total of 62 studies were selected for inclusion. In HBeAg positive patients, tenofovir was most effective in achieving virologic response (predicted probability: 86%); tenofovir was found significantly better than adefovir, telbivudine, entecavir, pegylated interferon and interferon.  In terms of other efficacy outcomes (i.e. ALT normalization; HBeAg seroconversion; and HBsAg loss), tenofovir was not significantly better than the other treatments.  In HBeAg negative patients, tenofovir was the most effective in achieving virologic response (98%); tenofovir was significantly better than adefovir, telbivudine, pegylated interferon and interferon.  Tenofovir was not significantly better than the other treatments for ALT normalization.  There was no significant difference between tenofovir and other oral agents for safety outcomes.

Conclusion: Our study found that for HBeAg positive patients, tenofovir is the most effective treatment followed by entecavir, for the outcomes of virologic response and ALT normalization; for HBeAg loss and HBeAg seroconversion, pegylated interferon is the most effective treatment followed by tenofovir. For HBeAg negative patients, tenofovir is also the most effective treatment, followed by adefovir and entecavir, in terms of virologic response and ALT normalization. Current practice guidelines should be informed by cost-effectiveness and patient perspectives, in addition to evidence regarding effectiveness and safety.

2:00 PM

Natasha Nanwa, MSc1, Beate Sander2, Murray D Krahn, MD, MSc, FRCPC3, Nick Daneman, MD, MSc4, Hong Lu, MSc, PhD5, Peter C. Austin, PhD5, Anand Govindarajan, MD, MSc6, Laura Rosella, MPH, PhD7, Suzanne Cadarette, MSc, PhD1 and Jeffrey Kwong2, (1)Leslie Dan Faculty of Pharmacy, University of Toronto, Toronto, ON, Canada, (2)Public Health Ontario, Toronto, ON, Canada, (3)Toronto Health Economics and Technology Assessment (THETA) Collaborative, University of Toronto, Toronto, ON, Canada, (4)Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada, (5)Institute for Clinical Evaluative Sciences, Toronto, ON, Canada, (6)Mount Sinai Hospital, Toronto, ON, Canada, (7)Dalla Lana School of Public Health, Toronto, ON, Canada

Purpose: To assess attributable health and cost outcomes associated with community-acquired Clostridium difficile infection (CDI).


Method: We conducted a population-based matched cohort study. Between 01/01/2003 and 31/12/2010, we identified incident cases of community-acquired CDI (infected patients) defined as patients with the ICD-10-CA code A04.7 present during: an emergency department (ED) visit (principal/non-principal diagnosis, index date: ED registration date); a non-elective hospital admission (principal/non-principal diagnosis) with length of stay ≤2 days (index date: hospital admission date); or a non-elective hospital admission (principal diagnosis) with CDI symptoms (e.g., diarrhea) documented during a physician or ED visit within two weeks prior to the hospital admission date (index date: physician visit or ED registration date). We followed infected patients until 31/12/2011. Infected patients were matched 1:1 without replacement to uninfected subjects in the general population using propensity score and hard matching on a set of baseline characteristics. Health outcomes included colectomy within 1-year post index date and all-cause mortality. Cost outcomes (from the healthcare payer perspective in 2012 Canadian dollars) included phase-specific costs, up-to-1-year costs unadjusted for survival, and up-to-3-year costs adjusted for survival.
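A toy sketch of 1:1 nearest-neighbour propensity-score matching without replacement, the matching scheme named above; the scores and caliper here are hypothetical, and the study additionally hard-matched on baseline characteristics:

```python
def greedy_match(treated, controls, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching without replacement.

    treated and controls map subject id -> propensity score.
    Each treated subject is paired with the closest still-available
    control within the caliper; unmatched subjects are dropped.
    """
    available = dict(controls)
    pairs = []
    for tid, t_score in sorted(treated.items(), key=lambda kv: kv[1]):
        best_id, best_dist = None, caliper
        for cid, c_score in available.items():
            dist = abs(t_score - c_score)
            if dist <= best_dist:
                best_id, best_dist = cid, dist
        if best_id is not None:
            pairs.append((tid, best_id))
            del available[best_id]   # matching without replacement
    return pairs
```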


Result: We identified 7,903 patients infected with community-acquired CDI. The crude mean annual incidence was 7.8 per 100,000. The mean age was 63.5 years (standard deviation=22.0) and 63% were female. The relative risk for undergoing a colectomy within 1-year post index date was 5.53 (95% confidence interval [CI], 3.30-9.27) and the relative risk for mortality within 1-year post index date was 1.58 (95%CI, 1.44-1.75). Infected patients had 1.3- to 5.3-fold higher mean costs versus uninfected subjects. The mean attributable cost (adjusted for survival) of an incident community-acquired CDI patient was $8,881 (95%CI: $7,951-$9,904) in the first year, $2,663 in the second year, and $2,480 in the third year. Mean attributable costs were generally higher among those diagnosed in 2010 (possibly due to a virulent strain), males, those aged ≥65 years, and those who died within 1-year after the index date.
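The relative risks above are ratios of cumulative incidence between the matched groups. A generic sketch of a relative risk with a log-scale Wald confidence interval, on illustrative counts rather than the study's data:

```python
import math

def relative_risk(a, n1, c, n0, z=1.96):
    """Relative risk with a Wald 95% CI on the log scale.

    a/n1: events/total in the exposed (infected) group;
    c/n0: events/total in the unexposed (matched) group.
    """
    rr = (a / n1) / (c / n0)
    se = math.sqrt(1/a - 1/n1 + 1/c - 1/n0)   # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi
```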


Conclusion: Community-acquired CDI is associated with a substantial health and economic burden. CDI leads to a greater risk of colectomy and all-cause mortality, and to higher short- and long-term healthcare costs. This is the first study to evaluate the costs of community-acquired CDI in a large population-based sample, and the first to evaluate its long-term costs.

2:15 PM

Richard K. Zimmerman, MD, MPH1, GK Balasubramani, PhD2, Mary Patricia Nowalk, PhD1, Stephen R. Wisniewski, PhD3, Arnold Monto, MD4, Huong McLean, MPH, PhD5, Ryan E Malosh, PhD4, Michael L. Jackson, PhD, MPH6, Lisa A. Jackson, MD, MPH6, Manjusha Gaglani, MBBS7, Lydia Clipper, BSN8, Edward Belongia, MD5 and Brendan Flannery, PhD9, (1)University of Pittsburgh School of Medicine, Pittsburgh, PA, (2)University of Pittsburgh, Pittsburgh, PA, (3)University of Pittsburgh Graduate School of Public Health, Pittsburgh, PA, (4)Ann Arbor, MI, (5)Marshfield, WI, (6)Seattle, WA, (7)Baylor Scott & White Health, Texas A&M HSC COM, Temple, TX, (8)Temple, TX, (9)Atlanta, GA
Purpose: Despite the burden of influenza, use of neuraminidase-inhibiting antiviral medication is relatively infrequent. Rapid, cost-effective methods for determining the likelihood of influenza may help identify the patients for whom antiviral medications will be most beneficial (those with a high-risk condition, those ≥65 years old, and those presenting for treatment within 48 hours of symptom onset). Clinical decision algorithms are a rapid, inexpensive way to estimate the probability of influenza, but to date most algorithms are based on regression analyses that do not account for higher-order interactions. This study used classification and regression tree (CART) modeling to estimate probabilities of influenza.

Method: 4,173 individuals ≥5 years of age who presented at ambulatory centers in 2011-2012 for treatment of acute respiratory illness (≤7 days' duration) with cough or fever were included. Eligible enrollees provided nasal and pharyngeal swabs for real-time reverse transcriptase polymerase chain reaction (RT-PCR) testing for influenza, and self-reported their symptoms, personal characteristics, and influenza vaccination status. CART was used to develop a series of models, and the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the curve (AUC) of each were calculated.
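CART grows its tree by repeatedly choosing the split that most reduces node impurity. A minimal sketch of that selection step for binary symptom features on toy data, using Gini impurity as in standard CART; this is not the study's fitted tree:

```python
def gini(labels):
    """Gini impurity of a set of 0/1 outcome labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(rows, labels, features):
    """Return the (feature, impurity reduction) of the best binary split.

    rows is a list of dicts mapping feature name -> 0/1;
    labels is the matching list of 0/1 outcomes (e.g. PCR-confirmed flu).
    """
    base = gini(labels)
    best = None
    for f in features:
        left = [y for r, y in zip(rows, labels) if r[f]]
        right = [y for r, y in zip(rows, labels) if not r[f]]
        if not left or not right:
            continue   # split does not separate anyone
        weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        gain = base - weighted
        if best is None or gain > best[1]:
            best = (f, gain)
    return best
```

Recursing on each side of the chosen split, until a stopping rule fires, yields the full classification tree.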

Result: 645 enrollees <65 years and 60 enrollees ≥65 years had PCR-confirmed influenza. Antiviral medication was prescribed for 14% of those individuals. Among nine candidate clinical features, CART selected the best combination (fever, cough, fatigue, shortness of breath, and household smoke), with a sensitivity of 81%, specificity of 52%, PPV of 24%, NPV of 94%, and AUC of 0.68. Limiting the sample to the 345 patients for whom antivirals are clearly recommended (i.e., individuals <65 years with a high-risk condition or individuals ≥65 years, presenting for care ≤2 days from symptom onset), the presence of fever and cough yielded a prediction algorithm with 86% sensitivity, 47% specificity, 27% PPV, 95% NPV, and AUC of 0.67.

Conclusion: Among outpatients ≥5 years of age, the CART-based recursive partitioning algorithm estimated the probability of influenza with good sensitivity and high NPV, but low PPV, in an influenza season with low disease prevalence. After further testing in seasons with higher influenza prevalence, CART may be used to rule out antivirals for the many patients who do not need them and to indicate who should be considered for confirmatory viral testing.

2:30 PM

Mark Eckman, MD, MS1, Gregory Lip, MD2, Ruth Wise, MSN, MDes1, Anthony Leonard, PhD3, Barbara Speer, BS3, Megan Sullivan, MS4, Nita Walker, MD5, Matthew Flaherty, MD6, Brett Kissela, MD, MS7, Peter Baker, BS8, Dawn Kleindorfer, MD6, John Kues, PhD9, Robert Ireton, BS8, Dave Hoskins8, Brett Harnett, MS-IS8, Carlos Aguilar, MD, MS10, Lora Arduser, PhD11, Dylan Steen, MD12 and Alexandru Costea, MD12, (1)University of Cincinnati, Division of General Internal Medicine and Center for Clinical Effectiveness, Cincinnati, OH, (2)University of Birmingham, Birmingham, United Kingdom, (3)University of Cincinnati, Department of Family and Community Medicine, Cincinnati, OH, (4)UC Health, Cincinnati, OH, (5)University of Cincinnati, Division of General Internal Medicine, Cincinnati, OH, (6)University of Cincinnati, Department of Neurology, Cincinnati, OH, (7)University of Cincinnati, Cincinnati, OH, (8)University of Cincinnati, Department of Biomedical Informatics, Center for Health Informatics, Cincinnati, OH, (9)University of Cincinnati, Department of Community and Family Medicine, Cincinnati, OH, (10)University of Cincinnati, Division of General Internal Medicine and Center for Health Informatics, Cincinnati, OH, (11)University of Cincinnati, Department of English, Cincinnati, OH, (12)University of Cincinnati, Division of Cardiology, Cincinnati, OH
Purpose: Among patients with atrial fibrillation (AF), female gender has been associated with an increased risk of stroke and, paradoxically, a decreased likelihood of receiving anticoagulant therapy. There is also a perception that the elderly are less likely to receive anticoagulant therapy because of concerns about falls and frailty. We wished to assess the appropriateness of antithrombotic therapy among women and the elderly, looking for patterns of either under-treatment or unnecessary treatment.

Method: Retrospective cohort study of 1,586 adults with non-valvular AF or flutter seen in primary care settings of an integrated healthcare system between December 2012 and March 2014. Treatment recommendations were made by an Atrial Fibrillation Decision Support Tool (AFDST) based on projections of quality-adjusted life expectancy (QALE) calculated by a decision analytic model that integrates patient-specific risk factors for stroke and hemorrhage and examines strategies of no antithrombotic therapy, aspirin, or oral anticoagulation.
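Conceptually, the discordance analysis reduces to comparing current therapy with the QALE-maximizing strategy and ordering strategies by antithrombotic intensity. A schematic sketch with hypothetical QALE projections; the AFDST's underlying decision model is not reproduced here:

```python
# Illustrative ordering of strategies by antithrombotic intensity.
INTENSITY = {"none": 0, "aspirin": 1, "anticoagulant": 2}

def classify(current, qale):
    """Classify current therapy against the QALE-maximizing strategy.

    qale maps strategy name -> projected quality-adjusted life expectancy
    (hypothetical values supplied by the caller).
    Returns "concordant", "under-treatment", or "overtreatment".
    """
    recommended = max(qale, key=qale.get)
    if recommended == current:
        return "concordant"
    return ("under-treatment" if INTENSITY[current] < INTENSITY[recommended]
            else "overtreatment")
```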


Result: Current treatment was discordant from recommended treatment in 45% (326/725) of women and in 39% (338/860) of men (p = 0.02). Among the elderly (age ≥85), current treatment was discordant from recommended treatment in 35% (89/258), while treatment was discordant among 43% (575/1,328) of patients <85 years of age (p < 0.01). We further examined age categories in 5-year increments and found that discordant therapy was as high as 60-70% in those between the ages of 31 and 50. Among the 326 women with discordant treatment, 99% (322/326) of discordance was due to under-treatment and 1% (4/326) to overtreatment. Among the 338 men with discordant treatment, 81% (274/338) was due to under-treatment and 19% (64/338) to overtreatment. Among the 89 elderly patients with discordant treatment, 98% (87/89) of discordance was due to under-treatment and 2% (2/89) to overtreatment, whereas in those <85 years of age, 88% (509/575) was due to under-treatment and 12% (66/575) to overtreatment.


Conclusion: Women are still under-treated with antithrombotic therapy for AF. Somewhat surprisingly, compared with older patients, a larger proportion of patients <85 years of age are receiving treatment that is discordant from recommended therapy. Furthermore, in women and the elderly the major reason for discordant therapy is under-treatment, whereas in men and younger patients a larger proportion of discordance is due to overtreatment.