Category Reference for Presentations

| Code | Category |
|---|---|
| AHE | Applied Health Economics |
| DEC | Decision Psychology and Shared Decision Making |
| HSP | Health Services and Policy Research |
| MET | Quantitative Methods and Theoretical Developments |

* Candidate for the Lee B. Lusted Student Prize Competition

**Purpose:** Calibrating disease natural
history models involves changing model inputs to match multiple targets. To
identify the best-fitting input set(s), the model's fit to each individual
target is often combined into an overall “goodness-of-fit” measure. We apply a
new approach which utilizes the principle of Pareto-optimality for selecting
best-fitting inputs and explore implications for cost-effectiveness analysis
and estimates of decision uncertainty.

**Methods:** A set of model inputs is
Pareto-optimal if no other input set simultaneously fits all calibration
targets as well or better. The Pareto frontier is the set of these undominated
input sets, none of which is clearly superior to any other. Constructing the
Pareto frontier thus identifies best-fitting inputs without collapsing multiple
fits into a single measure. We demonstrate the Pareto frontier approach in the
calibration of a simple model, developed for illustrative purposes, and a previously-published
cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For
each model, we identify input sets on the Pareto frontier and the top input
sets ranked by the weighted sum of individual calibration target fits with (i) equal
weightings and (ii) a weighting emphasizing a subset of targets. We evaluate the
incremental costs and QALYs of the intervention for best-fitting input sets and
assess its cost-effectiveness.
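The dominance criterion above can be sketched directly. Below is a minimal Python illustration (toy fit values, not the authors' models) that filters candidate input sets down to the Pareto frontier, treating lower goodness-of-fit values as better fits to each target:

```python
import numpy as np

def pareto_frontier(fits):
    """Return indices of input sets whose per-target fit vectors are not
    dominated by any other set (lower value = better fit to that target).
    fits: (n_sets, n_targets) array of goodness-of-fit values."""
    n = fits.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # j dominates i if j fits every target at least as well
        # and at least one target strictly better
        dominates = (np.all(fits <= fits[i], axis=1)
                     & np.any(fits < fits[i], axis=1))
        if dominates.any():
            keep[i] = False
    return np.flatnonzero(keep)

# Toy example: three input sets scored against two calibration targets
fits = np.array([[1.0, 3.0],   # on the frontier
                 [2.0, 2.0],   # on the frontier (trade-off with set 0)
                 [2.5, 3.5]])  # dominated by both other sets
print(pareto_frontier(fits))   # -> [0 1]
```

Note the two frontier members trade off against each other (set 0 fits target 1 better, set 1 fits target 2 better), which is exactly why no single weighted-sum ranking recovers both.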

**Results:** After calibrating the simple model,
506 of 10,000 initial input sets were Pareto-optimal. While the 506 top-ranked
input sets under the two weighting schemes yielded results localized to
different regions of the cost-effectiveness plane, the Pareto frontier set spanned
both regions (Figure 1). This resulted in different estimates of intervention
cost-effectiveness and the level of decision uncertainty. At a
willingness-to-pay of $100,000/QALY, the intervention was cost-effective when
evaluated over the Pareto frontier, and optimal for 70% of Pareto-optimal input
sets, but not cost-effective when evaluated over top-ranked input sets under
weighting (ii) and optimal for just 17% of input sets. The intervention was
optimal for 100% of top input sets under weighting (i). Calibrating the previously-published
model also yielded differences. At a $100,000/QALY threshold, TAVR was optimal
for 38% of the 260 Pareto-optimal input sets, while it was optimal for 55% and
33% of top input sets under weightings (i) and (ii).

**Conclusions:** The choice of method for identifying best-fitting input sets in model calibration can influence cost-effectiveness conclusions.

**Purpose:** Economic analyses carried out to inform policy making must consider and synthesise all relevant evidence relating to the clinical effectiveness, patient-reported outcome measures (PROMs) and costs of the health technologies under scrutiny. Evidence-based medicine holds that a quantitative synthesis of the same outcome measure from multiple IPD sources is the gold standard for deriving estimates of treatment effect, a key parameter in any evaluation model. Unfortunately, in practice the evidence base is often multifaceted and fragmented, comprising a mix of aggregate data (AD) and individual patient-level data (IPD). This paper illustrates the methodological challenges encountered (and the solutions devised) by the authors in a recent economic model which assessed the value for money of acupuncture in chronic non-cancer-related pain among primary care patients.

**Methods:** We had access to IPD (>18,000 patients) from 28 high-quality randomised controlled trials (RCTs) which evaluated acupuncture (versus sham acupuncture and/or usual care) in three conditions: headache, musculoskeletal pain and osteoarthritis of the knee. The evidence base was chaotic, with the majority of the RCTs (a) reporting different condition-specific (e.g. pain VAS, CMS, WOMAC) and generic PROMs (SF-12, SF-36; only two studies collected EQ-5D), (b) having different follow-up durations, and (c) failing to compare the relevant strategies directly. We developed a suite of Bayesian MTC models for the synthesis of continuous (heterogeneous) outcomes (i.e. change in adjusted pain score, change in EQ-5D), which embedded a series of mapping algorithms to predict individual-specific EQ-5D values and correlated these with the patient-adjusted standardised pain scores. The analysis was carried out in WinBUGS using MCMC methods, to fully characterise the relevant uncertainties while facilitating consistency checks between the direct and indirect evidence.

**Results:** Acupuncture (net of sham) is more effective at reducing pain and increasing EQ-5D than usual care in the management of non-cancer-related chronic pain in primary care.

**Conclusions:** Bayesian modelling provides a flexible framework to address the challenges posed by a messy evidence base. The approach devised by the authors proved fruitful and facilitated a more robust assessment of the benefits of acupuncture, while (a) synthesising multiple heterogeneous outcomes available at the IPD level; (b) mapping several PROMs onto the EQ-5D; (c) controlling both for ‘sham effect’ and treatment effect modifiers.

**Purpose:**

It is desirable to have a short cycle length in a discrete-time Markov model, which often requires transforming transition probabilities. Our purpose was to show that the widely used formula *p̂* = 1 − (1 − *p*)^(*s*/*t*), for converting a probability *p* over time interval *t* into a transition probability *p̂* for a Markov model with shorter cycle length *s*, is invalid for models with three or more states. We explore theoretical issues concerning the mathematically correct approach for adjusting cycle length in such models, and offer numerical approximation methods to practically solve these issues.

**Method:**

We present several examples of Markov models, including ones that involve competing risks, to highlight the inaccuracy of the traditional formula. We formulate the problem of adjusting cycle lengths in Markov models mathematically as that of proving the existence of a unique root of a transition probability matrix and ensuring that such a root is stochastic (i.e. probabilities are nonnegative and sum to 1). We use a simple Markov model for advanced liver disease to highlight the mathematically correct approach of finding the root of a matrix using eigendecomposition for determining adjusted transition probabilities. Further, using the previously published HIV Markov model of Chancellor et al., we highlight scenarios where even eigendecomposition fails. Finally, we provide a framework with numerical approximation algorithms to practically change cycle lengths.
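As a sketch of the matrix-root idea (with a hypothetical three-state matrix, not the liver disease or HIV models), the following contrasts the eigendecomposition root of an annual transition matrix with the traditional per-probability conversion:

```python
import numpy as np

# Annual transition matrix for a hypothetical 3-state progressive model
# (upper triangular with distinct diagonal entries): Well -> Sick -> Dead
P = np.array([[0.7, 0.2, 0.1],
              [0.0, 0.6, 0.4],
              [0.0, 0.0, 1.0]])

# Correct monthly matrix via eigendecomposition: P^(1/12) = V D^(1/12) V^-1
w, V = np.linalg.eig(P)
P_month = np.real(V @ np.diag(w ** (1 / 12)) @ np.linalg.inv(V))

# Naive conversion p' = 1 - (1 - p)^(1/12) applied to each off-diagonal
# probability, with the diagonal set so each row sums to 1
naive = 1 - (1 - P) ** (1 / 12)
naive[np.diag_indices_from(naive)] = 0.0
naive[np.diag_indices_from(naive)] = 1.0 - naive.sum(axis=1)

# Twelve applications of the matrix root recover the annual matrix
# exactly; twelve applications of the naive conversion do not
print(np.allclose(np.linalg.matrix_power(P_month, 12), P))  # True
print(np.allclose(np.linalg.matrix_power(naive, 12), P))    # False
```

The discrepancy arises because the per-probability formula ignores compound pathways (e.g. Well → Sick → Dead within a year), which is exactly the competing-risk effect that makes the formula invalid beyond two states.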

**Result:**

We prove that Markov models whose transition matrices are upper triangular with distinct, non-zero probabilities for the diagonal entries (which we label “progressive” Markov models) are guaranteed to have a unique matrix root. This avoids the identifiability issues that arise when a transition matrix possesses multiple roots as equal candidates for the shorter-cycle model. We provide conditions to determine in general whether a given Markov model structure suffers from identifiability issues. We use approximation methods to convert a nonstochastic matrix to a stochastic matrix. Using the HIV and advanced liver disease examples, we show that our approach leads to less bias in model outcomes when compared with the traditional (incorrect) approach.

**Conclusion:**

The traditional formula of converting transition probabilities to different cycle lengths leads to biased outcomes. We further highlight underlying challenges of finding unbiased outcomes and offer a unified framework that leads to more accurate outcomes than the traditional approach in medical decision models.

**Purpose:** To extend currently available approaches for calculating the Expected Value of Partial Perfect Information (EVPPI) without requiring nested Monte Carlo simulation.

**Method:**

Expected Value of Partial Perfect Information (EVPPI) provides an upper limit for the expected gains from carrying out further research to provide information on a focal subset of parameters in a cost-effectiveness model. Calculating EVPPI requires the estimation of an inner expected net benefit over the remaining (non-focal) parameters conditional on the focal parameters. This expectation is nested within an outer expectation over the focal parameters. Since the inner expectation can only be replaced by the unconditional means of the non-focal parameters in special cases, a common general approach is to use nested Monte Carlo simulation to obtain an estimate of EVPPI. This approach is computationally intensive, can lead to significant sampling bias if an inadequate number of inner samples is obtained, and can give incorrect results if correlations between parameters are not dealt with appropriately. We set out a range of methods for estimating EVPPI that avoid the need for nested simulation: reparameterisation of the net benefit function, Taylor series approximations, and restricted cubic spline estimation of conditional expectations.
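A minimal illustration of the special case mentioned above: when net benefit is linear in the non-focal parameter, the inner expectation collapses to the unconditional mean, so EVPPI needs only a single Monte Carlo loop. All numbers below are illustrative (a hypothetical two-strategy model, not the malaria example):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy two-strategy model: incremental net benefit of strategy 1 vs 0 is
# INB = lam * theta - c, with focal parameter theta (incremental QALYs)
# and non-focal parameter c (incremental cost). Because INB is LINEAR
# in c, the inner expectation over c given theta is just E[c].
lam = 100_000                          # willingness-to-pay threshold
theta = rng.normal(0.02, 0.01, n)      # focal: incremental QALYs
c = rng.normal(1_500, 500, n)          # non-focal: incremental cost

inb = lam * theta - c                  # incremental net benefit draws
ev_current = max(0.0, inb.mean())      # value of the best strategy now

# Single-step EVPPI: for each theta draw, the decision under perfect
# knowledge of theta plugs in the mean of the linear non-focal term
inb_given_theta = lam * theta - c.mean()
ev_perfect = np.maximum(0.0, inb_given_theta).mean()

evppi = ev_perfect - ev_current
print(round(evppi, 2))  # positive: knowing theta changes some decisions
```

The nested approach would redraw thousands of `c` values inside every `theta` iteration to estimate the same conditional expectation; here linearity makes that inner loop unnecessary.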

**Result:** For each method, we set out the generalised functional form which net benefit must take for the method to be valid. By specifying this functional form, our methods are able to focus on components of the model where approximation is required, avoiding the complexities involved in developing statistical approximations for the model as a whole. Our methods also allow for any correlations that might exist between model parameters. We illustrate the methods using an example of fluid resuscitation in African children with severe malaria.

**Conclusion:** Careful consideration of the functional form of the net benefit function can allow EVPPI to be calculated in a single step in a wide range of situations. Where EVPPI can be calculated in a single step, avoiding nested Monte Carlo simulation, this leads to marked improvements in speed and accuracy.

**Purpose:** We develop a method for constructing dynamic treatment regimes that accommodates competing outcomes by recommending sets of feasible treatments rather than a unique treatment at each decision point.

**Method:** Dynamic treatment regimes model sequential clinical decision-making using a sequence of decision rules, one for each clinical decision. Each rule takes as input up-to-date patient information and produces as output a single recommended treatment. Existing methods for estimating optimal dynamic treatment regimes, for example Q-learning, require the specification of a single outcome (e.g. symptom relief) by which the quality of a dynamic treatment regime is measured. However, this is an over-simplification of clinical decision making, which is informed by several potentially competing outcomes (e.g. symptom relief and side-effect burden). Our method is motivated by the CATIE clinical trial of schizophrenic patients: it is aimed at patient populations that have high outcome preference heterogeneity, evolving outcome preferences, and/or impediments to preference elicitation. To accommodate varying preferences, we construct a sequence of decision rules that output a tailored set of treatments rather than a unique treatment. The set contains all treatments that are not dominated according to the competing outcomes. To construct these sets, we solve a non-trivial enumeration problem by reducing it to a linear mixed integer program.
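The abstract reduces the enumeration to a mixed integer program; for a single decision point with a handful of treatments, the non-dominated set can be sketched by brute-force dominance filtering. Treatment names and outcome estimates below are purely illustrative:

```python
# Hypothetical estimated outcomes per treatment at one decision point:
# (predicted symptom improvement, predicted weight gain in kg).
# Higher improvement is better; higher weight gain is worse.
est = {"A": (8.0, 1.0), "B": (10.0, 4.0), "C": (7.0, 3.5), "D": (9.5, 0.8)}

def feasible_set(est):
    """Return treatments not dominated on both competing outcomes:
    t is dominated if some other treatment is at least as effective
    AND causes no more weight gain, with at least one strict gap."""
    keep = []
    for t, (eff, wt) in est.items():
        dominated = any(
            eff2 >= eff and wt2 <= wt and (eff2, wt2) != (eff, wt)
            for t2, (eff2, wt2) in est.items() if t2 != t
        )
        if not dominated:
            keep.append(t)
    return sorted(keep)

print(feasible_set(est))  # -> ['B', 'D']
```

Here A is excluded (D is both more effective and causes less weight gain) and C is excluded (A beats it on both axes), but B and D survive because they represent a genuine trade-off left to patient preference.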

**Result:** We illustrate the method using data from the CATIE schizophrenia study by constructing a set-valued dynamic treatment regime using measures of symptoms and weight gain as competing outcomes. The sets we produce offer more choice than a standard dynamic treatment regime while eliminating poor treatment choices.

**Conclusion:** Set-valued dynamic treatment regimes represent a new paradigm for data-driven clinical decision support. They respect both within- and between-patient preference heterogeneity, and provide more information to decision makers. Set-valued decision rules may be used when patients are unwilling or unable to communicate outcome preferences. The mathematical formalization of set-valued dynamic treatment regimes offers a new class of decision processes which generalize Markov Decision Processes in that the process involves two actors: a screener, which maps states to a subset of available treatments, and a decision maker, which chooses treatments from this set. We believe this work will stimulate further investigation and application of these processes.

**Purpose:** Current best practice guidelines for decision analytic modeling suggest the use of life tables for the derivation of all-cause mortality probabilities. These probabilities are typically assumed to be fixed, with no uncertainty, over the model time horizon. The aim of this study is to investigate the impact of mortality projections using historical life tables on the health outcomes and the costs estimated through a decision analytic model.

**Method:** A previously published model on the cost-effectiveness of screening a cohort of male immigrants (average 35 years of age) for Chronic Hepatitis B in Canada was updated using projected mortality probabilities. The Lee-Carter principal component and the random walk with drift methods were applied to historical life tables (1977-2002, Statistics Canada) to derive cohort-specific and time-specific projections of the mortality probabilities. Prediction uncertainty around the mortality probabilities was captured and incorporated in the probabilistic sensitivity analysis of the decision analytic model. The impact of mortality projections on the model outcomes at different discount rates was assessed.
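A minimal sketch of the random-walk-with-drift projection (using simulated rates as a stand-in for the Statistics Canada life tables): the drift and volatility of the log death rate are estimated from a historical series, then sample paths are projected forward so that prediction uncertainty can be carried into a probabilistic sensitivity analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated historical log death rates for one age group, 1977-2002
# (illustrative stand-in for the historical life tables)
years = np.arange(1977, 2003)
log_mx = -4.0 - 0.012 * (years - 1977) + rng.normal(0, 0.01, years.size)

# Random walk with drift on the log scale: ln m(t+1) = ln m(t) + d + eps
steps = np.diff(log_mx)
drift, sigma = steps.mean(), steps.std(ddof=1)

# Project 30 years ahead as 1,000 sample paths; each path is one draw
# of future mortality for the probabilistic sensitivity analysis
horizon, n_paths = 30, 1000
shocks = rng.normal(drift, sigma, (n_paths, horizon))
proj_mx = np.exp(log_mx[-1] + shocks.cumsum(axis=1))  # projected rates
```

With a negative estimated drift, the projected rates decline on average, which is what produces the future gains in life expectancy that fixed life-table probabilities ignore.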

**Result:** When fixed mortality probabilities and a discount rate of 5% were assumed, screening of male immigrants was associated with an improvement in quality-adjusted life expectancy (0.024 QALYs), additional costs ($1,665) and an ICER of $69,209/QALY gained. When projected mortality probabilities, using the Lee-Carter method, and a discount rate of 5% were assumed, screening was associated with larger gains in QALYs (0.027 per person) while costs remained approximately the same ($1,666 per person), resulting in an ICER of $59,967/QALY. For a discount rate of 3.5% the impact of the assumption on the mortality probabilities was more profound on the ICER of screening vs no screening (fixed mortality: $50,507/QALY, projected mortality: $42,364/QALY). The results were similar for the random walk with drift method.

**Conclusion:** This study illustrates the importance of accounting for future gains in life expectancy within decision analytic models, especially when the costs of an intervention are incurred earlier in life and benefits are accrued across the lifetime. In cases when the ICER estimates are compared to a threshold representing either an opportunity cost or a willingness-to-pay value, accounting for future gains in life expectancy could alter the interpretation of the cost-effectiveness results.