BEC: Behavioral Economics
DEC: Decision Psychology and Shared Decision Making
ESP: Applied Health Economics, Services, and Policy Research
MET: Quantitative Methods and Theoretical Developments
* Candidate for the Lee B. Lusted Student Prize Competition
Purpose: Modeling adherence with colorectal cancer (CRC) screening is challenging due to limited data on longitudinal adherence patterns. We assessed whether the manner in which imperfect adherence is simulated affects model-predicted conclusions about the effectiveness and cost-effectiveness of CRC screening.
Method: Using a previously developed microsimulation model of CRC, we predicted the fraction of 50-year-olds ever screened, as well as the life-years gained (LYG), lifetime costs, and incremental cost-effectiveness ratios (ICERs), for two CRC screening strategies: five-yearly computed tomographic colonography (CTC) and ten-yearly colonoscopy (COL). We considered four approaches to simulating imperfect adherence (based on approaches used in the literature), each of which could be described as assuming 50% adherence: (1) a fraction (50%) perfectly adherent and the remainder (50%) completely nonadherent; imperfect random adherence at a constant rate (50%), (2) without and (3) with dropout; and (4) heterogeneous imperfect adherence with constant rates within population subgroups (population average 50%).
Result: The fractions ever screened were 50% for scenarios 1 and 3, and higher for at least one strategy in scenarios 2 and 4 (Table). In scenarios 1 and 3, COL was more effective than CTC, while in scenarios 2 and 4 CTC was more effective. CTC was the most costly strategy in scenarios 1 and 4 and less costly than COL in scenarios 2 and 3. CTC was dominated in scenario 1, COL was dominated in scenarios 2 and 4, and in scenario 3 the ICER of COL vs. CTC was $8,900/LYG.
Conclusion: The manner in which imperfect adherence is simulated affects the model-predicted relative effectiveness, cost, and cost-effectiveness of CTC vs. COL screening for CRC. To clarify the implications of adherence assumptions in the context of repeated screening, we recommend that modelers report the fraction of the population ever screened with each modality, as well as findings assuming perfect adherence. While unrealistic, the latter output enables direct comparison of alternative screening options among those willing to be screened and facilitates comparisons across models.
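The four ways of simulating "50% adherence" can be sketched in a toy simulation (hypothetical setup: 5 screening rounds, 100,000 simulated people, and subgroup rates of 10% and 90% for scenario 4; the real microsimulation's natural history and dropout rules are of course richer). It shows why the fraction ever screened differs across scenarios:

```python
import numpy as np

rng = np.random.default_rng(0)
N, ROUNDS = 100_000, 5  # simulated people, screening rounds (e.g., 5-yearly CTC)

# (1) 50% perfectly adherent, 50% completely nonadherent
attend1 = rng.random(N) < 0.5
ever1 = attend1.mean()                      # fraction ever screened ~ 50%

# (2) independent 50% chance at every round, no dropout
draws2 = rng.random((N, ROUNDS)) < 0.5
ever2 = draws2.any(axis=1).mean()           # ~ 1 - 0.5**5, well above 50%

# (3) 50% chance per round, with dropout: a first missed round means no
# further screening, so only round-1 attenders are ever screened
first = rng.random(N) < 0.5
ever3 = first.mean()                        # ~ 50%

# (4) heterogeneous per-person rates averaging 50% (here: subgroups at 10%/90%)
rates = np.where(rng.random(N) < 0.5, 0.1, 0.9)
draws4 = rng.random((N, ROUNDS)) < rates[:, None]
ever4 = draws4.any(axis=1).mean()           # between scenarios 1 and 2

print({k: round(v, 3) for k, v in
       {"s1": ever1, "s2": ever2, "s3": ever3, "s4": ever4}.items()})
```

Consistent with the abstract, scenarios 1 and 3 leave the fraction ever screened at 50%, while scenarios 2 and 4 push it higher.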
Purpose: Medication adherence among chronic disease patients has been shown to improve outcomes, which in turn reduces overall healthcare costs. A comprehensive understanding of the predictors of adherence is essential for formulating targeted strategies to improve adherence. Existing methods have not evaluated whether adherence predictors have heterogeneous impacts at different parts (quantiles) of the adherence distribution, as defined by the medication possession ratio. Using the novel econometric framework of unconditional quantile regression (UQR), this study assesses the heterogeneity of the impacts of adherence predictors in an Alzheimer’s disease (AD) population.
Method: This retrospective claims analysis identified AD patients in a large US health plan who initiated oral AD therapy (rivastigmine, donepezil, galantamine, or memantine) between 1/1/2006 and 12/31/2007. Baseline characteristics were assessed during the 6-month pre-index period; medication adherence was assessed during the 1-year post-index period. UQR was estimated at the 10th, 20th, …, 90th quantiles. Predictors of adherence identified from the data included age, age squared, male gender, interaction of age and gender, indicator of mental health insurance coverage, region, commercial vs. Medicare insurance, log cost, comorbidity, and formulary tier for the AD medication.
Result: Baseline medication count was positively associated with adherence (p<0.05) in the upper half of the adherence distribution. Having mental health coverage was negatively associated with adherence in all but the 10th and 20th quantiles, with a substantially larger impact in the first half of the adherence distribution. Baseline (log) cost was positively associated with adherence at the 40th and higher quantiles of the adherence distribution. For patients in the 80th and 90th quantiles, the number of baseline office visits predicted lower adherence. Compared to patients from the East, patients from the South were less likely to be adherent in the 60th and 70th quantiles.
Conclusion: The study results underscore that predictors can have heterogeneous impacts on different parts of the adherence distribution; that is, the predictors of a highly adherent subject differ from those of a medium- or low-adherent subject. A complete picture of the impacts of the predictors across the entire medication adherence distribution will help decision-makers formulate actionable policy to improve adherence.
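The UQR machinery itself can be sketched as the recentered-influence-function (RIF) regression of Firpo, Fortin, and Lemieux (2009): OLS of the RIF of the tau-th unconditional quantile on the covariates. The sketch below runs on simulated data; the data-generating process, single predictor, and kernel bandwidth are illustrative assumptions, not the study's claims data:

```python
import numpy as np

def uqr_coefs(y, X, tau):
    """UQR via RIF regression: RIF(y; q) = q + (tau - 1{y <= q}) / f(q),
    where f is a kernel density estimate of y, then OLS of RIF on X."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    q = np.quantile(y, tau)
    h = 1.06 * y.std() * n ** -0.2                    # Silverman's bandwidth
    f_q = np.exp(-0.5 * ((y - q) / h) ** 2).mean() / (h * np.sqrt(2 * np.pi))
    rif = q + (tau - (y <= q)) / f_q                  # recentered influence function
    Z = np.column_stack([np.ones(n), X])              # add intercept
    beta, *_ = np.linalg.lstsq(Z, rif, rcond=None)
    return beta

# Toy illustration: a predictor whose effect is concentrated in the upper part
# of an MPR-like outcome, so the UQR coefficient grows across quantiles.
rng = np.random.default_rng(1)
n = 5000
x = rng.random(n)
y = 0.3 + 0.4 * x * rng.random(n) + 0.2 * rng.random(n)   # outcome in [0.3, 0.9]
betas = {tau: uqr_coefs(y, x.reshape(-1, 1), tau) for tau in (0.1, 0.5, 0.9)}
for tau, b in betas.items():
    print(f"tau={tau}: effect of x = {b[1]:.3f}")
```

The heterogeneity the abstract describes shows up as the coefficient on x differing across quantiles, which an ordinary mean regression would average away.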
Purpose: The concept of a cost-effectiveness “threshold” has been adopted either explicitly or implicitly by health care decision makers in numerous jurisdictions. This paper demonstrates that, under very weak assumptions – applicable to all real-world health systems – decision makers ought to instead adopt two cost-effectiveness thresholds.
Method: A simple model of a hypothetical health care system is used to demonstrate the appropriate threshold(s) under various assumptions concerning: 1) the size of the health care budget; 2) the extent to which technology, productivity and/or input prices change over time; 3) whether the amount of information available to decision makers changes over time; and 4) the fixity of the set of adopted health care technologies in the short term.
Result: The assumptions which must hold for two thresholds to be appropriate are that: a) there is some fixity in the set of adopted health care technologies in the short term, and b) either 1) technology, productivity and/or input prices change over time, or 2) the information available to decision makers changes over time, or both. Where these assumptions hold, one threshold ought to be used when appraising technologies with positive incremental costs (investment decisions), while a different threshold should be used when appraising technologies with negative incremental costs (disinvestment decisions). This is true regardless of the marginality of the technologies under consideration.
Conclusion: This finding has profound implications for the practice of cost-effectiveness analysis, for ongoing and future empirical research into the nature of the threshold, and for health care policy making. It gives a theoretical underpinning to observations that the ICERs of technologies disinvested at the margin differ from those of technologies adopted at the margin. It also has implications for the interpretation of ICERs, for the appropriate calculation of net benefit, and for the conduct of value of information (VOI) analysis.
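A minimal sketch of the two-threshold decision rule described above, expressed as a net-monetary-benefit test. The threshold values and the incremental costs and QALYs are hypothetical, chosen only to show how a single threshold can flip a disinvestment decision:

```python
def adopt(delta_qaly, delta_cost, k_invest=30_000, k_disinvest=50_000):
    """Appraise positive-incremental-cost technologies (investment decisions)
    against k_invest, and negative-incremental-cost technologies
    (disinvestment decisions) against k_disinvest. Returns True if the
    change has positive net monetary benefit at the applicable threshold."""
    k = k_invest if delta_cost > 0 else k_disinvest
    return delta_qaly * k - delta_cost > 0

# Investment: +0.5 QALY for +$12,000 (ICER $24,000/QALY, below k_invest)
print(adopt(0.5, 12_000))                          # True: adopt
# Disinvestment candidate: removal saves $20,000 but forgoes 0.5 QALY.
# At k_disinvest = $50,000 the health forgone outweighs the saving:
print(adopt(-0.5, -20_000))                        # False: keep the technology
# Forcing a single $30,000 threshold flips the disinvestment decision:
print(adopt(-0.5, -20_000, k_disinvest=30_000))    # True: remove it
```

The last two calls illustrate the abstract's point: the same disinvestment candidate is kept or removed depending on whether a separate, higher disinvestment threshold is applied.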
Purpose: When a major study finds that a widely used medical treatment is no better than a less expensive alternative, do physicians stop using it? The COURAGE trial (NEJM 2007) found that percutaneous coronary intervention (PCI) is no better than an inexpensive regimen of medical therapy for patients with stable angina. We examine the impact of COURAGE on PCI use.
Methods: We developed a theoretical model of the impact of comparative effectiveness research on costs. The impact depends on: the difference in prices between comparison treatments, current practice patterns, and the impact of evidence on practice patterns. We hypothesize that physicians paid via fee-for-service will be less responsive to studies that recommend abandoning profitable treatments. We show that under these conditions, the expected value of a potential CER study on costs may be positive (i.e. cost-increasing) even if a finding for the less expensive treatment is more likely. The COURAGE trial affords an opportunity to examine how practice patterns change in response to “negative” results. We examine the impact of COURAGE on use of PCI from 2006 to 2009 using 100% patient discharge samples from hospitals in 5 large states (AZ, CA, FL, MA, NJ), Veterans Administration (VA) hospitals, and English hospitals. US community cardiologists are paid via fee-for-service. VA and English cardiologists are salaried.
Results: The figure shows trends in PCI volume. PCI volume in patients with stable angina declined by 19% in US community hospitals and 14% in VA hospitals from 2006 to 2007. However, many patients with stable angina continue to receive PCI as primary therapy. There was no decline in PCI volume in England, possibly reflecting lower baseline use, pent-up demand, and expansions in PCI capacity over this period.
Conclusions: Comparative effectiveness research can reduce costs, but savings will not be fully realized if physicians are reluctant to abandon profitable treatments. We do not find support for the hypothesis that fee-for-service medicine blunted the impact of COURAGE in the US. Increasing use of medical therapy may require switching from a procedural-based system to a more integrated approach (e.g., accountable care organizations).
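The expected-value claim in the Methods can be made concrete with hypothetical numbers (the probability and dollar figures below are illustrative, not from the paper): even when the cheaper treatment is the more likely "winner," sluggish uptake of negative results under fee-for-service can make the study's expected cost impact positive.

```python
# Hypothetical: the cheap treatment wins with probability 0.7, but only a
# fraction of the potential saving is realized because fee-for-service
# physicians are slow to abandon the profitable treatment.
p_cheap_wins = 0.7
savings_if_cheap_wins = -5.0        # e.g., a tenth of a $50M potential saving realized
cost_if_expensive_wins = 30.0       # a positive finding drives full expansion

expected = (p_cheap_wins * savings_if_cheap_wins
            + (1 - p_cheap_wins) * cost_if_expensive_wins)
print(f"expected cost impact: {expected:.2f}")   # positive: costs rise on average
```

With these numbers the expected impact is positive even though the cost-saving outcome is more than twice as likely, matching the theoretical result described above.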
Purpose: Significant resources are allocated to quality improvement (QI), yet little is known about the costs and benefits of QI adherence. We developed a framework for measuring the value of QI activities and provide a worked example using the 2006 Healthcare Effectiveness Data and Information Set (HEDIS) measures.
Method: Our framework identifies the QI measures and setting(s) of interest and synthesizes QI cost-effectiveness data. For each measure, we: (1) quantify current compliance rates; (2) review literature and abstract CE data (incremental cost-effectiveness ratio, ICER); (3) estimate per-person steady-state cost and quality-adjusted-life-year (QALY) impacts; (4) calculate ICERs at current and full compliance levels based on calculated total costs and total QALYs; (5) perform sensitivity analyses to evaluate the impact of model assumptions on results. We applied this framework to 18 widely used US HEDIS measures. We defined full compliance as 95% and considered two types of costs: those of providing the clinical service (e.g., giving the vaccination to a patient in the case of a vaccination-related QI measure) and those of improving QI compliance (e.g., efforts to convince patients to be vaccinated). We assumed that only QI-improvement costs varied with compliance, with these costs in the base-case increasing linearly with compliance and in sensitivity analyses increasing exponentially, decreasing exponentially, and not changing with compliance.
Result: In the worked example, the literature search for cost-effectiveness data on the 18 HEDIS measures yielded 1,901 articles; 1,629 were excluded and the remaining 272 articles were reviewed. After applying the framework, we estimated that increasing HEDIS compliance to 95% improved health but increased cost, with framework-calculated ICERs for the individual HEDIS measures ranging from $180/QALY (alcohol/drug dependence treatment) to $39,805/QALY (breast cancer screening), with a median of $9,791/QALY (glaucoma screening). Overall, optimizing compliance with all 18 HEDIS measures to 95% was estimated to cost $12.3 billion and to gain approximately 6 million QALYs, resulting in a mean ICER of $2,087/QALY.
Conclusion: We demonstrated the utility of our framework for quantifying the value of QI programs like HEDIS, showing that improving compliance with such measures can be an efficient way to improve health. This framework can be a useful tool for quantifying and comparing the value of QI activities and health care interventions, aiding decision-makers in resource allocation decisions.
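Steps (3)-(4) of the framework reduce to simple arithmetic over total costs and total QALYs at two compliance levels. The sketch below uses entirely hypothetical inputs (none of the dollar or QALY figures come from the HEDIS results) and shows how the assumed shape of QI-improvement costs changes the ICER, as in the sensitivity analyses:

```python
def totals(p, n, service_cost, qaly_gain, qi_cost):
    """Total cost and total QALYs at compliance rate p; qi_cost(p) is the
    per-person cost of achieving compliance p (the QI-improvement cost)."""
    return n * (p * service_cost + qi_cost(p)), n * p * qaly_gain

def icer(p0, p1, **kw):
    """ICER of moving from compliance p0 to p1 (step 4 of the framework)."""
    c0, q0 = totals(p0, **kw)
    c1, q1 = totals(p1, **kw)
    return (c1 - c0) / (q1 - q0)

# Hypothetical vaccination-style measure: $40 per service delivered, 0.01 QALY
# per compliant member, 1M members, current compliance 60%, target 95%.
base = dict(n=1_000_000, service_cost=40.0, qaly_gain=0.01)
linear = icer(0.60, 0.95, qi_cost=lambda p: 10 * p, **base)        # base case
convex = icer(0.60, 0.95, qi_cost=lambda p: 30 * p ** 4, **base)   # sensitivity
print(f"linear QI costs: ${linear:,.0f}/QALY; convex QI costs: ${convex:,.0f}/QALY")
```

With linearly increasing QI costs the ICER of the compliance push is constant in the target level; a convex QI-cost curve makes the final percentage points of compliance markedly more expensive per QALY.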
Purpose: Authors of observational studies frequently compare their results to previously published reports. Mortality is a commonly used hard endpoint, and observational studies frequently demonstrate improved survival relative to the selected historical control population. In this study, we sought to determine the variability in mortality rates in published clinical trials of cardiovascular interventions.
Method: After identifying large cardiovascular clinical trials that provided long-term follow-up and mortality rates, we calculated age- and gender-adjusted mortality hazard ratios in 621 clinical trial populations using a competing-risk model. We then identified the median and the 25th and 75th percentiles of the mortality hazard ratios for 9 common cardiovascular disease states.
Result: On average, patients in clinical trials evaluating stable coronary artery disease (N = 165 studies) had mortality similar to that of the population as a whole (HR = 0.95), but the interquartile range (IQR) was 0.76-1.22. More pronounced differences were found for acute myocardial infarction (N = 102) (HR = 2.98, IQR 1.78-4.08) and primary prevention studies (N = 66) (HR = 0.60, IQR 0.38-0.82). There was at least a 20% difference between the first quartile and the median hazard ratio for every category studied. In addition, between 1990 and 2010, there was a 65% reduction in mortality rates for both heart failure (N = 110) and acute myocardial infarction. If a clinically significant difference in mortality is considered to be 20% or more, the observed variation in mortality hazard ratios is so great that one can always find a control population that provides a favorable comparison. The further back in time one searches, the easier it is to find a “suitable” control population.
Conclusion: Variability in age- and gender-adjusted mortality hazard ratios, even for similar populations, is profound. Contemporaneously obtained controls are necessary for valid comparison. Ultimately, the use of historical controls should find its place in history and rest in peace.
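As a quick arithmetic check, the claim of "at least a 20% difference between the first quartile and the median" can be verified directly from the hazard ratios reported in the Result (only those three reported categories are used; the other six are not given in the abstract):

```python
# (Q1, median) age- and gender-adjusted hazard ratios reported above
reported = {
    "stable CAD": (0.76, 0.95),
    "acute MI": (1.78, 2.98),
    "primary prevention": (0.38, 0.60),
}
gaps = {name: 1 - q1 / med for name, (q1, med) in reported.items()}
for name, gap in gaps.items():
    print(f"{name}: Q1 is {gap:.0%} below the median")
```

In every reported category the first quartile sits at least 20% below the median, which is the spread that makes a "favorable" historical control so easy to find.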