PREFERENCE ASSESSMENT AND OTHER EVALUATION METHODS
* Finalists for the Lee B. Lusted Student Prize
Method: A stated-preference survey instrument elicited contingent-behavior responses to constructed scenarios involving tradeoffs among doping-detection risks, performance levels, and financial and nonfinancial rewards. To evaluate potential stigma bias, we asked respondents both what they personally would do and, as a “Bayesian truth-serum” assessment, what they thought other athletes in their sport would do under the indicated circumstances. We used interval regression to estimate the relative importance of changes in performance and risks. These parameters were then used to derive money equivalents of nonmonetary benefits, costs, and risks. We also elicited a utility-theoretic variant of the well-known Goldman Dilemma question on what risk of death athletes were willing to accept to win an Olympic gold medal with certainty.
Result: The maximum acceptable level of health risk varied by sport and level of competition. We found that athletes in all surveyed sports considered nonmonetary benefits and risks to be more important than the financial rewards of competition. The money-equivalent values of nonfinancial benefits and costs varied by sport and level of competition. For example, the net nonfinancial benefits of doping were about $1 million for cyclists, about $0.5 million for swimmers, and about -$25,000 for ice-hockey players.
Conclusion: This study indicates that the perceived value of the benefits, health risks, and detection costs of doping in sport varies strongly by sport and competition level. We also found that athletes in some sports value the nonmonetary benefits of successful use of PEDs much more highly than the perceived nonmonetary costs, and much more highly than the financial benefits of winning. The results may suggest strategies for improving deterrence in anti-doping programs.
Method: An on-line hybrid conjoint analysis (CA)-discrete choice experiment (DCE) was designed to elicit the preferences of a purposive sample of CGS users and members of the public, in order to (i) identify service attributes (n=13) perceived to facilitate informed decision-making and (ii) determine relative preferences for six attributes (five process; one outcome: ability to make an informed decision). A systematic review of outcome measures (n=67), semi-structured interviews/focus groups (n=52 patients/healthcare professionals), and a Delphi survey (n=72 patients and 115 service providers) informed the attributes and levels. Respondents also rated their preferred level of involvement in decision-making based on a clinical scenario and their prior experience of CGSs. A three-step approach was taken to analyse the CA and DCE data, which involved linking the data from the two stated-preference studies using hierarchical information integration and fitting ordered logit and random-effects probit models. Marginal willingness-to-pay (WTP) values (and 95% CIs) were calculated.
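In choice models of this kind, a marginal WTP estimate is conventionally the ratio of an attribute coefficient to the marginal utility of money (the negative of the cost coefficient). The sketch below illustrates that calculation with purely hypothetical coefficients; it is not the study's estimation code, and the variable names and values are assumptions for illustration.

```python
def marginal_wtp(beta_attr, beta_cost):
    """Marginal willingness to pay for a one-unit attribute change:
    the attribute coefficient divided by the negative of the cost
    coefficient (the marginal utility of money)."""
    return -beta_attr / beta_cost

# Hypothetical coefficients for illustration (not the study's estimates):
beta_wait = -0.12    # disutility per extra month of waiting
beta_cost = -0.0004  # disutility per £1 of out-of-pocket cost

# Negative WTP: respondents would require roughly £300 in
# compensation per extra month of waiting.
wtp_wait = marginal_wtp(beta_wait, beta_cost)
```

Confidence intervals for such ratios are usually obtained by simulation (e.g., Krinsky-Robb) or the delta method rather than read directly off the coefficients.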
Result: The data analysis used the pooled CGS user and public samples (n=37; 76% female; mean age: 44 years; 95% in paid employment). The sample comprised respondents with (51%) and without (49%) experience of CGS. The majority (89%) of respondents indicated they wanted some input into decision-making. The CA showed that 11 of the 13 attributes were positively associated with the ability to make an informed decision. The DCE indicated respondents favoured a CGS with some pre-consultation contact, short waiting times, and one which would improve their ability to make an informed decision. Estimated WTP values (95% CI) were: service location, £2,953 (-£779 to £15,110); degree of follow-up contact, -£1,620 (-£7,285 to £851); reduction (by one month) in waiting time, -£1,014 (-£3,656 to -£567); having some pre-consultation contact, £8,664 (£2,674 to £33,029); improved ability to make an informed decision, £1,698 per unit increase on a 9-point scale (£609 to £7,429).
Conclusion: This study suggests that a hybrid stated-preference experiment offers a practical approach to understanding preferences in complex decision-making contexts.
Method: We modeled an individual who considered colorectal cancer screening as a means to increase his/her expected utility. Every year, the individual could choose between screening and not screening for disease. An individual who screened realized a large probability of no change in life-years (associated with a negative screening test, or a positive screening test with late diagnosis), a small probability of additional life-years (associated with early diagnosis), and a small probability of screening-test complications. An individual who did not screen realized no change in survival probabilities. We estimated expected utility by summing the probability distributions for life-years survived in each scenario, each multiplied by individual utility as a function of remaining life expectancy. We included a standard gamble to personalize utility for an individual’s degree of risk aversion. Using data from the Surveillance, Epidemiology, and End Results (SEER) registry for 2000-2011, we applied our model to screening with colonoscopy, flexible sigmoidoscopy, and fecal occult blood testing (FOBT). We categorized individuals by age, race, and gender. We estimated expected utility for every possible combination of screening (method [colonoscopy, every 1-20 years; flexible sigmoidoscopy, every 1-10 years; FOBT, every 1-3 years], start age [20-85 years], stop age [20-85 years]) as compared with no screening, and rank-ordered the results, to help understand how individual preferences impact screening decisions.
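The core expected-utility computation described above can be sketched as a probability-weighted sum over outcomes, with a concave utility function over remaining life-years standing in for the standard-gamble-personalized utility. This is a toy illustration, not the authors' calibrated model: the exponential utility form, the risk-aversion parameter, and all outcome probabilities and life-year values below are assumptions.

```python
import math

def utility(life_years, risk_aversion=0.1):
    # Concave (risk-averse) utility of remaining life expectancy.
    # The exponential form and parameter value are illustrative only;
    # the study personalized utility via a standard gamble.
    return 1 - math.exp(-risk_aversion * life_years)

def expected_utility(outcomes, risk_aversion=0.1):
    """outcomes: list of (probability, remaining life-years) pairs."""
    return sum(p * utility(ly, risk_aversion) for p, ly in outcomes)

# Hypothetical numbers for illustration only:
no_screen = [(1.0, 30.0)]
screen = [
    (0.96, 30.0),   # negative test, or positive test with late diagnosis
    (0.03, 33.0),   # early diagnosis: gain in life-years
    (0.01, 29.5),   # screening-test complication: small loss
]
```

Comparing `expected_utility(screen)` with `expected_utility(no_screen)` across hypothetical screening schedules mirrors the rank-ordering step described in the abstract.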
Result: For a 50-year-old average-risk white female who required a 2-month increase in life expectancy to be willing to accept the potential risks of colonoscopy, the model predicted that she would choose to undergo colonoscopy every 16 years, beginning at age 53. At higher risk aversion (a 4-month required increase in life expectancy), the model predicted that she would prefer flexible sigmoidoscopy every 10 years, beginning at age 55. Therefore, although informed individuals may screen less often than guidelines recommend, they should still choose regular screening. FOBT was predicted only when survival probabilities were poorly understood, such as when mean population benefits were assumed to occur with certainty.
Conclusion: Quantitative models may help individualize colorectal cancer screening preferences. Future research should consider the potential to improve patient adherence, and how to reduce the gap between individual-preferred and guideline-recommended screening frequency.
A wide range of studies in behavioural economics has established that the subjective magnitude of losses is typically greater than that of equal-sized gains. Applied to health, if health state Y is better than state X, the movement from X to Y should be associated with a smaller gain than the corresponding loss from Y to X. Furthermore, if X is better than Y on some health dimensions and worse on others, moving between X and Y would involve both losses and gains. Contrary to this well-established preference difference between losses and gains, QALY calculation from preference-based measures such as the EQ-5D is currently blind to the direction of change. We present a simple framework for implementing differentiated weighting of losses and gains in EQ-5D-based QALY calculation, and explore the practical and ethical consequences of implementation.
For the EQ-5D, interaction terms in the value algorithm complicate differentiated weighting of losses and gains. We propose conceptualizing movements between any two health states as going through either the worst or the best combination of the two. Going from state X (53111) to state Y (34211) can be seen as going through either state W (54211, the worst combination) or state B (33111, the best combination). For the movement through W, we have the QALY loss from X to W multiplied by a loss-aversion factor, followed by the gain from W to Y. We propose taking the average of the movements through the worst and the best state. Given loss-aversion factor Z, the weighted change from X to Y would then be ((W - X)Z + (Y - W) + (B - X) + (Y - B)Z)/2.
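The weighted change formula is straightforward to implement once the utility values of the four states are known. The following sketch takes hypothetical EQ-5D utility values (the numbers in the usage comment are assumptions, not tariff values from any published value set):

```python
def weighted_qaly_delta(u_x, u_y, u_w, u_b, z):
    """Loss-aversion-weighted utility change from state X to state Y.

    The movement is decomposed through the worst combination state (W)
    and the best combination state (B); dimension-wise losses are scaled
    by the loss-aversion factor z, and the two paths are averaged.
    """
    via_worst = (u_w - u_x) * z + (u_y - u_w)  # loss X->W, then gain W->Y
    via_best = (u_b - u_x) + (u_y - u_b) * z   # gain X->B, then loss B->Y
    return (via_worst + via_best) / 2

# Hypothetical utilities: X=0.6, Y=0.7, W=0.5, B=0.8.
# With z = 1 the formula reduces to the ordinary difference Y - X = 0.1;
# with z > 1 the dimension-wise losses shrink the net weighted gain.
```

Note that at z = 1 the two path terms telescope to (Y - X) each, so the function reproduces standard direction-blind QALY calculation, which is a useful consistency check.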
Implementing differential calculation of losses and gains in QALY calculation corresponds to imposing a premium on interventions that produce dimension-wise health decrements, on changing patients’ health without net improvement, and on variance in patient response.
The loss-aversion factor would have to be determined empirically or normatively, but even small factors would likely influence cost-utility estimates, to the benefit of interventions with limited side-effects, limited variation in response, and limited patient risk. An informative sensitivity analysis would vary the loss-aversion factor from 1 towards infinity to determine the point at which a proposed intervention becomes equivalent to its comparator. Applying differential weighting of losses and gains could improve decisions based on QALY analyses.
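The suggested sensitivity analysis amounts to finding the root of the difference between the two interventions' weighted QALY changes as a function of Z. A minimal self-contained sketch, assuming hypothetical state utilities and a monotone crossing, might use simple bisection:

```python
def weighted_delta(u_x, u_y, u_w, u_b, z):
    # Average of the paths through the worst (W) and best (B) combination
    # states, with dimension-wise losses scaled by loss-aversion factor z.
    return ((u_w - u_x) * z + (u_y - u_w) + (u_b - u_x) + (u_y - u_b) * z) / 2

def indifference_z(f, lo=1.0, hi=100.0, tol=1e-9):
    """Bisection for the loss-aversion factor at which f(z), the
    difference between the intervention's and the comparator's
    weighted QALY change, crosses zero; assumes f(lo) > 0 > f(hi)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical example: an intervention with mixed gains and losses
# (X=0.6 -> Y=0.75 via W=0.5, B=0.85) versus a comparator whose pure
# gain is worth a constant 0.10 regardless of z.
f = lambda z: weighted_delta(0.6, 0.75, 0.5, 0.85, z) - 0.10
```

In this hypothetical case the intervention's weighted gain is 0.25 - 0.1z, so the indifference point falls at z = 1.5; a decision-maker would then judge whether a loss-aversion factor that small is plausible.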
Method: We conducted interviews with 24 patients enrolled in a study assessing their preferences about end-of-life treatment choices. After providing information about their life expectancy and assessing the overall regret of potentially wrong choices (BMC Med Inform Decis Mak, 10, 51), we elicited the patients’ level of acceptable regret. We first assessed the patients’ tolerance for wrongly accepting hospice care and then measured their tolerance for continuing unnecessary treatment. For the purposes of our study, a treatment was considered unnecessary if the patient died within 6 months of receiving it; accepting hospice care was considered a wrong decision if the patient survived longer than 6 months after referral to hospice. We elicited acceptable regret levels to compute: 1) the probability of death above which a patient would tolerate wrongly accepting hospice care, and 2) the probability of death below which the patient would tolerate unnecessary treatment (BMC Med Inform Decis Mak, 10, 51; Med Dec Making, 28, 540-553).
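One simple way to see how such probability cutoffs can follow from an acceptable-regret level is an expected-regret model: a wrong decision is tolerable while its expected regret stays below the elicited acceptable level. The functional form below is an illustrative assumption, not the formulas of the cited papers; all parameter values are hypothetical.

```python
def hospice_threshold(acceptable_regret, regret_wrong_hospice):
    """Probability of death ABOVE which wrongly accepting hospice is
    tolerated: wrong referral occurs with probability (1 - p), so the
    expected regret (1 - p) * Rg must not exceed the acceptable level."""
    return max(0.0, 1.0 - acceptable_regret / regret_wrong_hospice)

def treatment_threshold(acceptable_regret, regret_unnecessary_tx):
    """Probability of death BELOW which unnecessary treatment is
    tolerated: unnecessary treatment occurs with probability p, so the
    expected regret p * Rt must not exceed the acceptable level."""
    return min(1.0, acceptable_regret / regret_unnecessary_tx)

# Hypothetical values on a 0-1 regret scale:
p_hospice = hospice_threshold(0.1, 1.0)    # tolerate wrong hospice if p > 0.9
p_treat = treatment_threshold(0.1, 0.5)    # tolerate unnecessary tx if p < 0.2
```

Under this toy model, a small acceptable-regret level relative to the regret of a wrong hospice referral pushes the hospice cutoff towards 1, qualitatively consistent with a high median cutoff.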
Result: We found that the median probability of death above which a decision-maker would tolerate wrongly accepting hospice care was 98%, while the median probability of death below which a decision-maker would tolerate unnecessary treatment was 4%. We also found that the levels of acceptable regret measured for wrong hospice referral (mean=1.68; SD=2.3; min=0; max=7.28) were similar to those measured for unnecessary treatment (mean=1.27; SD=1.97; min=0; max=6.58) (KW test; p=0.73), indicating that the two types of wrong decision evoke similar levels of acceptable regret. Our results were independent of the estimated probability of death communicated to patients before the acceptable-regret interview.
Conclusion: We have elicited preliminary empirical data that corroborate acceptable regret theory. Our results may explain why it has been so difficult to provide palliative care in the end-of-life setting.
To understand whether supplementing Case 1 (Object Case) best-worst scaling (BWS) data quantifying attitudes towards end-of-life care with response-time data produced similar results, or whether attitudes like “all life is sacred” merely evoke “fast, gut” responses of the kind described by Kahneman.
1186 respondents aged 55+ in Australia answered two online discrete choice experiments, which logged response times. One DCE was a simple “accept/reject treatment” response to each end-of-life clinical scenario from a full factorial of 16 scenarios (4x2x2). The other was a Case 1 BWS study in 13 choice sets to quantify degree of agreement with 13 attitudes towards end-of-life care spanning concepts including “pro-life”, “pro-quality of life”, and “control over decision-making”. Traditional logit-based BWS models of the choice data were compared with a hierarchical Bayesian implementation of the best-worst Linear Ballistic Accumulator (LBA) models (2013 and 2014), which conceptualise the random utility model as a “horse race”-type psychological process.
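In the traditional logit-based (max-diff) BWS model, the probability of a respondent picking item i as best and item j as worst from a set is proportional to exp(beta_i - beta_j). A minimal sketch of those choice probabilities, with hypothetical attitude utilities, is below; the LBA alternative would instead model the joint distribution of choices and response times.

```python
import math
from itertools import permutations

def best_worst_probs(betas):
    """Max-diff logit: probability of choosing item i as best and item j
    as worst from a set, proportional to exp(beta_i - beta_j) over all
    ordered (best, worst) pairs."""
    pairs = list(permutations(range(len(betas)), 2))
    weights = {(i, j): math.exp(betas[i] - betas[j]) for i, j in pairs}
    total = sum(weights.values())
    return {pair: w / total for pair, w in weights.items()}

# Hypothetical utilities for three attitude statements: the pair
# (most-agreed, least-agreed) = (0, 2) gets the highest probability.
probs = best_worst_probs([1.0, 0.0, -1.0])
```

Comparing parameters from this purely choice-based model with those from a response-time-aware LBA model is what reveals whether an attitude's apparent (dis)utility is inflated by fast, “gut” responding.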
Certain divergences arose between the two models. (1) In the choice data, the “considered response” (I would prefer a course of treatment that focused on extending life as much as possible, even if that meant more pain and discomfort) and the “gut response” (all human life is sacred) are approximately equally disliked. (2) However, when response times are added, the “gut response” is disliked far more. Three DCE segments were found: (1) the largest, close to two thirds, virtually always rejected treatment; (2) the second, close to one third, switched answers depending on the attribute levels on offer; (3) the smallest (7-9%) virtually always wanted treatment.
DCEs to elicit advance care plans involving complex clinical scenarios are difficult. Case 1 BWS studies that successfully predict preferences from more general attitudes would help uptake of advance care planning. Since the DCE showed that the vast majority of Australians do not want life extension, attitudes that help distinguish the roughly one third of Australians with “it depends” preferences are more useful in advance care planning than attitudes that simply induce strong disagreement with little to no predictive ability. This study provides strong quantitative evidence supporting the authors’ a priori hypotheses about which attitudes are likely to be helpful in predicting preferences.