LEARNING EFFECTS IN EQ-5D TIME TRADE-OFF VALUATIONS

Sunday, October 24, 2010
Sheraton Hall E/F (Sheraton Centre Toronto Hotel)
Liv Ariane Augestad, MD1, Kim Rand-Hendriksen, Cand.Psychol1, Ivar Sønbø Kristiansen, MD, PhD2 and Knut Stavem1, (1)Akershus University Hospital, Lørenskog, Norway, (2)University of Oslo, Oslo, Norway

Purpose: We wanted to assess whether there were learning effects in valuations of hypothetical EQ-5D health states elicited with the time trade-off (TTO). We use the term learning effects in a broad sense, covering any systematic effect of accumulating experience with the valuation task, i.e., whether valuations were affected by the number of previously rated health states.

Method: We analyzed data from the US EQ-5D valuation study, applying the same exclusion criteria as the original study, which left 3,773 respondents in the analysis. Each respondent valued 13 health states in random order. First, using OLS regression, we estimated the effect of the number of previously rated states on the mean of valuations across all health states. Second, we tested for differences in learning effects between valuations better and worse than death. Third, we measured the proportion of responses in the vicinity of death at several different cut-off values, to detect potential learning effects on the discontinuity of preferences around death.
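
For illustration, the first analysis step could be sketched as an OLS regression of each TTO valuation on the number of previously rated states. The sketch below is not the authors' code; it assumes a long-format dataset with one row per valuation and hypothetical column names (tto_value for the elicited utility, order for the count of previously rated states).

# Minimal sketch, assuming a long-format file with one row per valuation
# and hypothetical columns 'tto_value' and 'order' (0-12).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tto_valuations.csv")  # hypothetical file name

# Step 1: effect of presentation order on valuations across all health states
overall = smf.ols("tto_value ~ order", data=df).fit()
print(overall.params["order"])  # per-valuation change in utility

# Step 2: separate regressions for valuations better and worse than death
better = smf.ols("tto_value ~ order", data=df[df["tto_value"] >= 0]).fit()
worse = smf.ols("tto_value ~ order", data=df[df["tto_value"] < 0]).fit()
print(better.params["order"], worse.params["order"])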

Result: (All statistics presented were significant at the p < .001 level.) The number of previously presented states had an effect on mean values across all health states, with a decline of 0.113 utilities over the 13 valuations. Decomposing valuations into better and worse than death, we found that this decline was driven by a decrease in the worse-than-death values of 0.21 in total, an average drop of 0.016 per valuation. In contrast, the better-than-death values displayed a slight increase of 0.026 utilities over the 13 states. The proportion of respondents valuing health states in the vicinity of death decreased significantly at all chosen cut-offs, widening the previously documented gap effect.
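
As a consistency check (our reading, not stated explicitly above), the reported total and per-valuation figures for the worse-than-death values agree: 0.21 / 13 ≈ 0.016 utilities per valuation.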

Conclusion: The analyses suggest stable learning effects across the valuation of the 13 states. Because the valuation order was randomized, these effects do not bias the values of specific health states differentially. However, they reduce the overall validity and reliability of the measures, as the estimated health state values would differ if respondents valued a different number of states. Qualitative research is needed to assess why these effects occur, what their consequences are for the validity of health state valuations, and what the appropriate number of states per respondent should be in future studies.