G-6 ATTRIBUTE PROCESSING IN CHOICE EXPERIMENTS: A METHODOLOGICAL EXPLORATION

Tuesday, October 25, 2011: 11:15 AM
Grand Ballroom EF (Hyatt Regency Chicago)
(DEC) Decision Psychology and Shared Decision Making

Kirsten Howard, PhD and Glenn P. Salkeld, PhD, The University of Sydney, Sydney, Australia

Purpose: In analysing discrete choice experiments (DCEs), we typically assume that individual respondents evaluate every attribute offered in each alternative and choose their most preferred option. This study explores the effect of respondent attribute processing, using self-reported 'attribute importance', on parameter estimation, model fit and marginal rates of substitution (MRS) in a DCE on colorectal cancer screening.

Methods: The survey, a fractional factorial design for a two-alternative, unlabelled experiment, was mailed to a sample of 1920 subjects in NSW, Australia. Attributes included: test accuracy for cancer and for large polyps, false positive rate, cost, dietary and medication restrictions, and sample collection. The importance of each attribute was assessed using a Likert scale (1 = very important, 5 = not important at all), dichotomised for analysis (1-2 = important; 3-5 = not important/neutral). Two analyses were conducted: 1) assuming all attributes are attended to and influence choices (usual analysis practice); and 2) stratifying attributes by their reported importance, using interaction terms to indicate whether each attribute was important or not. Mixed logit models were used to estimate preferences.
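As a minimal sketch of how the importance-stratified specification (Model 2) might be coded before estimation (the variable names, data layout and single illustrative attribute are assumptions, not the authors' analysis code), each attribute level can be split into an 'important' and a 'not important/neutral' component via interaction terms:

```python
import pandas as pd

# Hypothetical long-format choice data: one row per alternative per choice task.
# 'cost' is an example attribute level; 'cost_likert' is the respondent's
# 1-5 importance rating for that attribute (1 = very important).
df = pd.DataFrame({
    "respondent":  [1, 1, 2, 2],
    "cost":        [20, 50, 20, 50],
    "cost_likert": [1, 1, 4, 4],
})

# Dichotomise the Likert rating: 1-2 = important, 3-5 = not important/neutral.
df["cost_important"] = (df["cost_likert"] <= 2).astype(int)

# Interaction terms: the attribute enters utility through one of two columns,
# depending on reported importance.  Model 2 stratifies every attribute this
# way; Model 1 would simply use the 'cost' column on its own.
df["cost_x_important"]    = df["cost"] * df["cost_important"]
df["cost_x_notimportant"] = df["cost"] * (1 - df["cost_important"])

print(df[["respondent", "cost_x_important", "cost_x_notimportant"]])
```

The resulting columns would then be passed to a mixed logit estimator in place of the raw attribute columns, yielding separate parameter estimates for respondents who rated the attribute important versus not important/neutral.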

Results: 1152 of 1920 surveys (60%) were returned. Both choice models significantly predicted respondents' test preferences. In comparing models, Model 2 was significantly better than Model 1 (chi-square = 485.4 with 6 degrees of freedom, p < 0.00001). There was also an improvement in McFadden's pseudo R2 with Model 2; the reduction in AIC moving from Model 1 to Model 2 indicated that this improvement remained even after penalising the less parsimonious specification. Respondents who reported that an attribute was important to them had significantly larger parameter estimates for that attribute than those who considered it not important or neutral. This was consistent across all attributes, and also resulted in significant differences in MRS and willingness to pay (WTP).
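For reference, and assuming the reported chi-square is a likelihood ratio test between the two nested models (the abstract gives only the statistic and degrees of freedom; the AIC helper below is a standard formula, not the authors' reported values), the model comparison can be checked as follows:

```python
from scipy.stats import chi2

# Likelihood ratio test between nested models:
#   LR = 2 * (LL_model2 - LL_model1), compared against a chi-square
#   distribution with df = difference in number of estimated parameters.
lr_stat, df_diff = 485.4, 6          # values reported in the abstract
p_value = chi2.sf(lr_stat, df_diff)  # far below 0.00001

# AIC penalises extra parameters: AIC = 2k - 2*LL.  A lower AIC for the
# importance-stratified model means the improvement in fit outweighs the
# cost of the additional interaction parameters.
def aic(log_likelihood, n_params):
    return 2 * n_params - 2 * log_likelihood

print(f"LR test p-value: {p_value:.2e}")
```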

Conclusions: Rather than assuming all attributes are equally attended to by respondents, our analysis suggests that taking account of respondent-reported attribute importance (as a proxy for attribute processing) may result in models that better explain respondents' choice behaviour and preferences. This issue and other attribute processing strategies should be further explored in different settings and data sets.