| BEC | Behavioral Economics | ESP | Applied Health Economics, Services, and Policy Research |
| DEC | Decision Psychology and Shared Decision Making | MET | Quantitative Methods and Theoretical Developments |
* Candidate for the Lee B. Lusted Student Prize Competition
Purpose: Bladder cancer has a heterogeneous natural history, and a substantial plurality (40%) of incident cases are low-grade non-muscle-invasive bladder cancer (NMIBC), which carries a comparatively low risk of progression to life-threatening disease. Practice guidelines for NMIBC recommend intensive surveillance cystoscopy schedules despite a limited evidence base, and the guidelines disagree on the appropriate schedule for low-risk NMIBC.
Method: We use a Partially Observable Markov Decision Process (POMDP) to identify the schedule of cystoscopies that maximizes expected quality-adjusted life years (QALYs). Our model classifies patients into three risk levels, with health-state transition probabilities taken from the EORTC risk calculator’s recurrence and progression probabilities. Mortality rates are taken from the CDC Vital Statistics Report, and parameters for the utility of health states and the disutility of cystoscopy are drawn from the medical literature. Model validation is based on comparison of outputs to published survival data for patients diagnosed with bladder cancer.
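The core of any POMDP surveillance model is the belief update: the clinician never observes the true health state, only cystoscopy results, and the model tracks a probability distribution over states. The sketch below illustrates that filtering step only; the states, transition matrix, and observation likelihoods are hypothetical placeholders, not the EORTC-derived parameters used in the abstract.

```python
# Hidden health states: 0 = disease-free, 1 = recurrence, 2 = progression.
# Hypothetical annual transition probabilities (rows sum to 1).
P = [
    [0.85, 0.12, 0.03],
    [0.00, 0.80, 0.20],
    [0.00, 0.00, 1.00],
]
# Hypothetical likelihood of each cystoscopy result given the true state
# (cystoscopy assumed imperfect here for illustration).
OBS = {"negative": [0.98, 0.10, 0.02], "positive": [0.02, 0.90, 0.98]}

def belief_update(belief, observation):
    """One POMDP filtering step: propagate the belief through a
    transition, then condition on the cystoscopy observation."""
    predicted = [sum(belief[i] * P[i][j] for i in range(3)) for j in range(3)]
    unnorm = [predicted[j] * OBS[observation][j] for j in range(3)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# A patient believed disease-free a year ago, with a negative cystoscopy today:
b = belief_update([1.0, 0.0, 0.0], "negative")
```

An optimal schedule then chooses, at each belief, whether a cystoscopy's information value outweighs its disutility; the solver itself is omitted here.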
Result: We compared the optimal schedule of cystoscopies from our model with the American Urology Association (AUA) and the European Association of Urology (EAU) guidelines for male and female patients aged 50 to 70. The optimal schedule for the base case scenario results in a 0.4 gain in expected QALYs over EAU and AUA guidelines for a 50-year-old low-risk male patient. Base case results indicate that older patients should receive less intensive surveillance than younger patients and female patients should undergo slightly more intensive surveillance than similar male patients. Optimal schedules are more intensive than EAU, and less intensive than AUA, in the first 5 years of surveillance. Sensitivity analysis indicates that the optimal schedule is highly sensitive to the disutility of cystoscopy. For example, the total number of cystoscopies in the first 10 years increases from 10 to 40 when the disutility of cystoscopy drops from 0.05 to 0.01.
Conclusion: Whereas current American guidelines recommend a one-size-fits-all regimen, current European guidelines are based on explicit risk stratification, underscoring uncertainty in this area. We find that surveillance for low-risk NMIBC patients should consider patient age, gender, co-morbidity, and, most of all, the disutility of cystoscopy. Optimal schedules can yield considerable QALY gains over current guidelines, particularly for younger patients.
Purpose: To measure the cost of no-shows and benefit of no-show interventions and overbooking for an outpatient endoscopy suite.
Method: We used a discrete event simulation model based on an outpatient endoscopy suite at UNC Hospital in Chapel Hill, NC, to measure the effect of no-shows on expected net gain. Expected net gain is defined as the difference between expected revenue, based on CMS reimbursement rates, and variable costs, based on the sum of patient waiting time and provider and staff overtime. To build the model, we used a combination of historical time stamp data and time studies to estimate probability distributions for all parts of the endoscopy process, including intake, procedure, and recovery times. No-show rates were estimated from historical attendance (18% on average). We used reported improvements in no-show rates from published intervention studies, such as phone reminders and pre-assessment clinics, with relative reductions in no-show rates ranging from 34.5% to 75.5%, to measure their associated effects on expected net gain. In addition to no-show interventions, we evaluated the effectiveness of scheduling additional patients (overbooking) on the expected net gain. We compared interventions and overbooking to a perfect attendance scenario of n=24 patients (the reference scenario) on the basis of expected net gain.
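The mechanics of the no-show loss can be sketched with a stripped-down Monte Carlo version of the simulation: each scheduled patient attends independently, revenue accrues per patient served, and overbooked patients beyond capacity incur an overtime penalty. The reimbursement, overtime cost, and capacity figures below are illustrative placeholders, not the UNC/CMS inputs behind the full discrete event model.

```python
import random

REVENUE_PER_PATIENT = 600.0   # hypothetical per-procedure reimbursement
OVERTIME_COST = 150.0         # hypothetical cost per patient over capacity
SCHEDULED = 24                # daily schedule size (reference scenario)

def expected_net_gain(no_show_rate, overbook=0, trials=20_000, seed=1):
    """Monte Carlo estimate of daily net gain under a given
    no-show rate and overbooking level."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        booked = SCHEDULED + overbook
        # Each booked patient attends with probability 1 - no_show_rate.
        attended = sum(rng.random() >= no_show_rate for _ in range(booked))
        served = min(attended, SCHEDULED)
        overtime = OVERTIME_COST * max(attended - SCHEDULED, 0)
        total += served * REVENUE_PER_PATIENT - overtime
    return total / trials

# Daily loss attributable to an 18% no-show rate, versus perfect attendance:
loss = expected_net_gain(0.0) - expected_net_gain(0.18)
```

Overbooking (`overbook > 0`) recovers revenue by filling slots left by no-shows, at the risk of overtime when attendance runs high, which is the tradeoff the full simulation quantifies.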
Result: The daily expected net gain with perfect attendance (reference scenario) is $4,433.32. The daily loss attributed to the base case no-show rate of 18% is $725.42 (16.36% of net gain). This loss is sensitive to the no-show rate, ranging from $472.14 to $1,019.29 (10.65% to 22.99% of net gain) for no-show rates of 12% and 24%, respectively. The daily loss relative to the reference scenario associated with implementing no-show interventions ranges from $166.61 to $463.09 (3.76% to 10.45% of net gain). The overbooking policy of 37.5% additional patients resulted in no loss in expected net gain when compared to the reference scenario.
Conclusion: No-shows can significantly decrease the expected net gain of outpatient procedure centers. Interventions such as phone reminders and pre-assessment clinics reduce the no-show rate but can be costly and challenging to implement, and they do not resolve the problem entirely. Overbooking can help mitigate the impact of no-shows on a suite’s expected net gain and has a lower expected cost of implementation to the provider.
Purpose: To analyze the decision for trial of labor after one prior cesarean compared with elective repeat cesarean, considering outcomes in future pregnancies in the analysis.
Method: A decision-analytic model was designed from the maternal perspective comparing elective repeat cesarean delivery (ERCD) and trial of labor after cesarean (TOLAC). Baseline assumptions included a theoretical cohort of 300,000 women who had experienced only one prior pregnancy, delivered via cesarean. Outcome probabilities were derived from the literature for major morbidities, including uterine rupture, maternal death, neonatal death, cerebral palsy, hysterectomy, and future placenta accreta. Costs and utilities taken from the literature were also applied to outcomes. Univariate and multivariate sensitivity analyses on key variables, as well as a Monte Carlo simulation, were performed for model validation.
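A decision-analytic comparison of this kind reduces, per arm, to baseline QALYs minus probability-weighted penalties for each morbidity. The sketch below uses the accreta and hysterectomy counts reported in the abstract, but the baseline QALYs, rupture probability, and per-event QALY losses are invented placeholders (ERCD's lower baseline here crudely stands in for repeat-surgery morbidity in future pregnancies), so the magnitudes should not be read as the model's results.

```python
COHORT = 300_000
# Hypothetical QALY losses per event; not the literature-derived utilities.
PENALTY = {"rupture": 1.5, "accreta": 0.8, "hysterectomy": 0.5}

def expected_qalys(base, p_rupture, n_accreta, n_hysterectomy):
    """Expected QALYs per woman: baseline minus probability-weighted
    penalties for the modeled morbidities."""
    return (base
            - p_rupture * PENALTY["rupture"]
            - (n_accreta / COHORT) * PENALTY["accreta"]
            - (n_hysterectomy / COHORT) * PENALTY["hysterectomy"])

# Event counts (accretas, hysterectomies) from the abstract; baselines
# and the rupture probability are made up for illustration.
tolac = expected_qalys(25.00, 0.009, 655, 1602)
ercd = expected_qalys(24.96, 0.000, 903, 2049)
cohort_gain = (tolac - ercd) * COHORT
```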
Result: ERCD was associated with more accretas (903 vs. 655) and more cesarean hysterectomies (2049 vs. 1602) but fewer uterine ruptures (2693 vs. 0) than TOLAC. Overall, TOLAC was the preferred strategy, resulting in 3,900 additional QALYs for the entire cohort. A one-way sensitivity analysis found the risk of uterine rupture must reach 3.6% before performing an elective repeat cesarean becomes preferred. TOLAC was also cost-saving, costing $1380 less per delivery, for a total cost savings of $414M for the cohort. Even when the model was limited to the 2nd pregnancy, a trial of labor remained the dominant strategy, requiring a threshold of 2.7% for uterine rupture before elective cesarean became the preferred option. Sensitivity analyses and a Monte Carlo simulation validated the robustness of the model over a broad range of inputs.
Conclusion: TOLAC leads to better outcomes on average than ERCD for women with one prior cesarean, even without a history of prior vaginal births. The model's preference for TOLAC is magnified if future pregnancies are anticipated, given the potential morbidity of future placental abnormalities. TOLAC is also cost-saving.
Table: Cost-effectiveness of TOLAC versus ERCD after one prior CD
| Scenario | Additional QALYs with TOLAC vs. ERCD | Cost savings with TOLAC vs. ERCD |
| All second pregnancies | +3,900 | $414,000,000 |
| No third pregnancy | +2,700 | $248,000,000 |
| Third pregnancy assumed | +5,400 | $561,000,000 |
Purpose: We investigate the use of statistical models to identify surges in emergency department (ED) volume based on the level of utilization of physician capacity. Our models may be used to guide staffing decisions during non-crisis increases in patient volume.
Method: Patient visits to a large urban teaching hospital with a Level 1 trauma center were collected from July 2009 to June 2010. A comparison of significance was used to assess the impact of multiple variables on the state of the ED. Historical physician utilization data were used to model physician capacity. Binary logistic regression was used to predict the probability that physician capacity would be sufficient to treat all patients forecast to arrive. Predictions were made over various time intervals: 15 minutes, 30 minutes, 1 hour, 2 hours, 4 hours, 8 hours, and 12 hours. The models were validated against 5 consecutive months of similar patient data from July to November 2010. Models and forecast accuracy were evaluated by positive predictive values, Type I and Type II errors, and real-time accuracy in predicting non-crisis surge events.
Result: The ratio of new patients to treat to total physician capacity, termed the “Care Utilization Ratio” (CUR), proved a robust predictor of the state of the ED (a CUR greater than 1 indicates that physician capacity is insufficient to treat all patients forecast to arrive). Among the models investigated, prediction intervals of 30 minutes, 8 hours, and 12 hours performed best, with deviances of 1.000, 0.951, and 0.864, respectively. The models were validated against the July to November 2010 data set at a significance level of 0.05. For the 30-minute prediction intervals, positive predictive values ranged from 0.738 to 0.872, true positives from 74% to 94%, and true negatives from 70% to 90%, depending on the threshold used to determine the state of the ED.
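The CUR itself is a simple ratio, and the logistic regression maps it to a surge probability. The sketch below shows that two-step structure; the logistic coefficients are hypothetical stand-ins, since the abstract's models were fit to a year of ED visit data not reproduced here.

```python
import math

def cur(forecast_arrivals, physician_capacity):
    """Care Utilization Ratio: CUR > 1 means forecast demand
    exceeds physician capacity in the interval."""
    return forecast_arrivals / physician_capacity

def surge_probability(c, b0=-4.0, b1=4.5):
    """Hypothetical logistic link from CUR to the probability that
    capacity is insufficient (coefficients are placeholders, not
    the fitted values from the study)."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * c)))

# 12 forecast arrivals against capacity for 10: demand outstrips capacity.
p = surge_probability(cur(12, 10))
```

In practice a threshold on this probability triggers a staffing response, and moving the threshold trades off true positives against true negatives, as in the validation results above.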
Conclusion: We identified a new and robust indicator of system performance: the CUR. By investigating different prediction intervals, we were able to model the tradeoff between longer lead time to respond and the greater accuracy of shorter-horizon predictions. On “non-crisis” days, our proposed models would have identified surges in patient volume earlier than current practice.
Purpose: Trauma centers (TC) reduce mortality by 25% for severely injured patients but cost significantly more than non-trauma centers (NTC). The CDC’s 2009 prehospital emergency medical services (EMS) guidelines seek to reduce undertriage of these patients to NTC to <5% and reduce overtriage of minor injury patients to TC to <25%. We assessed the cost-effectiveness of improving prehospital trauma triage in U.S. regions with <1 hour EMS access to TCs (84% of the population).
Method: We developed a decision-analytic Markov model to evaluate improvements in prehospital trauma triage given a baseline undertriage rate of major injury patients to NTC of 20% and overtriage rate of minor trauma patients to TC of 50%. The model follows patients from injury through prehospital care, hospitalization, first year post-discharge, and the remainder of life. Patients are trauma victims with a mean age of 43 (range: 18-85) with Abbreviated Injury Scale (AIS) scores from 1-6. Cost and outcomes data were derived from the National Study on the Costs and Outcomes of Trauma for patients with moderate to severe injury (AIS 3-6), National Trauma Data Bank, and published literature for patients with minor injury (AIS 1-2). Outcomes included costs (2009$), quality-adjusted life years (QALYs), and incremental cost-effectiveness ratios.
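A Markov model of this kind advances a cohort distribution through health states each cycle, accumulating utility-weighted time. The minimal trace below shows only that mechanic; the three states, transition matrix, and utilities are hypothetical placeholders, not the triage model's actual structure or inputs.

```python
# Hypothetical states: 0 = alive and well, 1 = alive with disability, 2 = dead.
# Hypothetical annual transition matrix (rows sum to 1); "dead" is absorbing.
P = [
    [0.90, 0.07, 0.03],
    [0.00, 0.92, 0.08],
    [0.00, 0.00, 1.00],
]
UTILITY = [1.0, 0.6, 0.0]  # hypothetical per-cycle QALY weights

def run_cohort(years):
    """Advance the cohort distribution one cycle at a time,
    accumulating utility-weighted person-years (QALYs)."""
    dist = [1.0, 0.0, 0.0]  # everyone starts alive and well
    qalys = 0.0
    for _ in range(years):
        qalys += sum(d * u for d, u in zip(dist, UTILITY))
        dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]
    return qalys, dist

qalys_10y, final_dist = run_cohort(10)
```

Running one such trace per triage strategy, then differencing costs and QALYs, yields the incremental cost-effectiveness ratios reported below.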
Result: Reducing undertriage rates from 20% to 5% would yield 3.9 QALYs gained per 100 patients transported by EMS. Reducing overtriage rates from 50% to 25% would save $108,000 per 100 patients transported. Reducing both undertriage to 5% and overtriage to 25% would be cost-effective at $13,300/QALY gained and yield 3.9 QALYs per 100 patients. One could spend $196,000 per 100 patients transported to reduce undertriage to 5% and overtriage to 45% and still achieve an incremental cost-effectiveness ratio below $50,000/QALY. Results were somewhat sensitive to scenarios in which severely injured patients benefited less than expected from treatment at a TC relative to at a NTC or the cost difference of treating patients with minor injuries at TCs and NTCs was smaller than expected.
Conclusion: Reducing prehospital undertriage of trauma patients is cost-effective and reducing overtriage of minor injury patients is cost-saving provided patients with minor injuries do not suffer worse outcomes from treatment at NTCs. With approximately 4.5 million annual EMS trauma transports, reducing overtriage by 25% could save up to $4.8 billion/year.
Purpose: Aortic stenosis, the most common valvular disease in the elderly, is associated with high morbidity and mortality. Surgical aortic valve replacement is the only treatment option available that prolongs life. Transcatheter aortic valve implantation (TAVI) is a new technology that appears to offer dramatic improvements in the quality and quantity of life of patients with aortic stenosis not eligible for surgical valve replacement. Using the results of the multicenter, randomized control PARTNER trial, we sought to determine if TAVI is cost effective compared with medical management.
Method: We developed a decision-analytic Markov model to follow cohorts of 83-year-old patients with severe aortic stenosis who also shared the other baseline characteristics seen in the PARTNER RCT: >92% had New York Heart Association (NYHA) class III or IV symptoms, and all had a Society of Thoracic Surgeons risk score of 10% or higher. As in the trial, TAVI reduced mortality by 23% over two years. Model costs came from Medicare and the Nationwide Inpatient Sample (2008 US$). We compared the strategies of TAVI and medical management, which included the option of balloon aortic valvuloplasty.
Result: TAVI was the most effective, but also the most expensive, treatment option, providing an expected 1.98 QALYs at an average cost of $99,700 per person. In contrast, medical management resulted in 1.25 QALYs at an average cost of $63,200. Compared to medical management, TAVI cost $49,500 per QALY gained. This result was sensitive to annual health care costs in surviving patients. With a willingness-to-pay threshold of $100,000/QALY, TAVI was the optimal policy if health care costs other than those due to aortic stenosis were <$54,000/year. Clinically appropriate variation in other parameters, such as procedural effectiveness, ongoing rates of death, and use of valvuloplasty in the medical treatment arm, had only modest effects on estimated cost-effectiveness. Furthermore, TAVI resulted in 56% of the cohort’s remaining life being spent with NYHA class I or II symptoms, instead of class III or IV symptoms. Depending on the extent of valvuloplasty use, the cohort receiving medical management was asymptomatic from 0% to only 45% of the time.
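The reported cost-effectiveness figure can be checked directly from the ICER definition, using only the rounded costs and QALYs stated in this abstract; the small gap from the reported $49,500/QALY presumably reflects rounding in those inputs.

```python
# Incremental cost-effectiveness ratio from the rounded figures above.
tavi_cost, tavi_qalys = 99_700, 1.98
med_cost, med_qalys = 63_200, 1.25

# ICER = (cost difference) / (QALY difference): extra dollars per QALY gained.
icer = (tavi_cost - med_cost) / (tavi_qalys - med_qalys)
# ~= $50,000 per QALY gained with these rounded inputs
```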
Conclusion: TAVI appears to be a cost-effective treatment for patients with symptomatic aortic stenosis who are not candidates for surgery.