Monday, October 24, 2016: 10:00 AM - 11:00 AM
Bayshore Ballroom Salon D, Lobby Level (Westin Bayshore Vancouver)

Mark Helfand, MD, MPH
Portland VA Medical Center and Oregon Health & Science University
SMDM President, Staff Physician and Professor of Medicine

10:00 AM

Christopher R. Wolfe, PhD1, Richard Smith, MA1 and Valerie Reyna, PhD2, (1)Miami University, Oxford, OH, (2)Cornell University, Ithaca, NY
Purpose: To understand the conditions under which laypeople ignore irrelevant differences and appropriately judge two breast-cancer statistics as approximately equal (e.g., nominally different rates of breast cancer), an essential judgment in medical decision-making.

Method: Three studies investigated how people make “roughly the same” or “approximately equal” judgments. Participants judged the smaller of two numbers as either less than or approximately equal to the larger number. Number pairs were presented in sentences regarding breast-cancer statistics and as numbers only, displayed either as percentages (to reduce denominator neglect) or as frequencies (X in 100,000). We used reputable health sources to designate half of the breast-cancer items as justifiably approximately equal and half as unambiguously different. This allowed us to conduct Signal Detection Theory (SDT) analyses of the factors affecting judgment sensitivity and bias. We investigated numerical properties and a gist-evoking manipulation based on Fuzzy-Trace Theory (FTT). In Study 1, number pairs, devoid of medical context, were presented to 355 participants. In Study 2, 150 participants made approximately equal judgments for numbers out-of-context or the same numbers as breast-cancer statistics, and completed individual-difference measures. In Study 3, following FTT, half of 229 participants received a brief gist-evoking text (“Thinking about the gist of what you know, is this a substantial difference?”), judged numbers out-of-context or breast-cancer statistics, and completed individual-difference measures.

Result: In Study 1, the greater the ratio of the smaller to the larger number, the greater the proportion judged approximately equal. In Study 2, as predicted by FTT, participants were more likely to appropriately judge quantities as approximately equal in the context of breast cancer than as numbers alone, and when data were presented as percentages compared to frequencies. Mean d’ (sensitivity) was significantly larger for breast-cancer items presented as percentages than for those presented as frequencies. Knowledge of breast cancer correlated with d’. In Study 3, the Study 2 results were replicated, and receiving the gist-evoking text yielded significantly more appropriate judgments. SDT analyses indicated greater sensitivity (d’) in the gist text condition and, again, d’ and knowledge were correlated.
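As a rough illustration of the sensitivity measure used in such SDT analyses (an illustrative sketch, not the authors' code; the cell counts below are hypothetical), d' is the difference between the z-transformed hit rate and false-alarm rate:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """SDT sensitivity: d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell) guards against
    infinite z-scores when an observed rate is exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical participant: correctly judges 40 of 50 "approximately equal"
# pairs and false-alarms on 10 of 50 "unambiguously different" pairs.
print(round(d_prime(40, 10, 10, 40), 2))
```

A higher d' means better discrimination between justifiably-equal and unambiguously-different item pairs, independent of response bias.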

Conclusion: Laypeople’s ability to ignore irrelevant differences is important for medical decision-making. Instructions to think about gist, presenting numbers as percentages rather than frequencies, and breast-cancer knowledge improved judgments of approximate (fuzzy) equality. FTT explains our results, with implications for shared decision-making.

10:15 AM

Ilya Ivlev, M.D., Ph.D.1, Erin N. Hickman, M.D.1, Marian S. McDonagh, Pharm.D.2 and Karen B. Eden, Ph.D.2, (1)Department of Medical Informatics & Clinical Epidemiology, Oregon Health & Science University, Portland, OR, (2)Pacific Northwest Evidence-Based Practice Center, Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, OR

Purpose:  The World Health Organization, U.S. Preventive Services Task Force (USPSTF), and American Cancer Society (ACS) state that decisions regarding breast cancer screening should be tailored to the values of a patient. The purpose of this systematic review was to answer the following questions: (1) how do patient decision aids (PtDAs) change women's intention to undergo screening mammography; and (2) do women from different age groups (38-49 and older than 69) have a similar screening intention after use of a breast cancer screening PtDA?

Method:  A search for evidence was performed using the following databases: MEDLINE (Ovid), PsycINFO, Health and Psychosocial Instruments, CENTRAL, Cochrane Methodology Register, Database of Abstracts of Reviews of Effects, Health Technology Assessment, and PsycARTICLES. All potentially eligible articles were reviewed individually by two researchers for inclusion/exclusion, extraction of data, and risk of bias assessment. When disagreement occurred, a third author was consulted, and the topic was discussed until consensus was reached. The proportions of women who intended to be screened, did not intend to be screened, or were unsure about undergoing breast cancer screening were pooled using random-effects meta-analysis techniques according to published standards.
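Random-effects pooling of relative risks of the kind described above is commonly done with the DerSimonian-Laird estimator; a minimal sketch follows (an illustrative implementation with hypothetical per-study inputs, not the review's analysis code or data):

```python
import math

def pooled_rr_dl(log_rrs, variances):
    """DerSimonian-Laird random-effects pooling of log relative risks.

    Takes per-study log(RR) estimates and their variances; returns the
    pooled RR with a 95% confidence interval.
    """
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rrs))  # Cochran's Q
    df = len(log_rrs) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]   # random-effects weights
    mu = sum(wi * y for wi, y in zip(w_star, log_rrs)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return math.exp(mu), math.exp(mu - 1.96 * se), math.exp(mu + 1.96 * se)

# Hypothetical log(RR) estimates and variances from three trials
rr, lo, hi = pooled_rr_dl([math.log(1.3), math.log(1.6), math.log(1.5)],
                          [0.04, 0.05, 0.03])
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```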

Result:  We found 58 potentially eligible papers; three RCTs and three before-after studies met inclusion criteria and provided screening intention data for 2,040 women. In trials, compared to usual care, PtDAs resulted in significantly more women not intending to undergo screening mammography, relative risk (RR) 1.48 (95% CI 1.04 to 2.13, n=3). Also, more women aged 38-49 years with average breast cancer risk did not intend to be screened after using a PtDA (RR 1.48; 95% CI 1.02 to 2.14, n=4). In the two studies of women aged 69-89 years, PtDAs did not affect intention to continue breast cancer screening.

Conclusion:  Use of a patient decision aid reduced younger women's intention to be screened for breast cancer before age 50. The topic of using PtDAs to help women make informed decisions about mammography screening is relatively new (included studies were published between 2007 and 2015), and more work is needed to fully understand the behavior change in screening after using an evidence-based decision aid.


10:30 AM

Brian J. Zikmund-Fisher, PhD1, Aaron Scherer, PhD2, Holly O. Witteman, PhD3, Jacob Solomon, PhD1, Nicole L. Exe, MPH1 and Angela Fagerlin, PhD4, (1)University of Michigan, Ann Arbor, MI, (2)University of Iowa, Iowa City, IA, (3)Laval University, Quebec City, QC, Canada, (4)University of Utah, Department of Population Health Sciences, Salt Lake City, UT

Purpose: Patients can increasingly view laboratory test results directly via patient portals of electronic health record systems. Such systems typically provide patients with only one reference point (the "standard range") for interpreting such hard-to-evaluate data. We tested novel visual displays designed to help patients who receive out-of-range test values to know how alarmed they should feel.

Methods: We conducted an online survey experiment using a demographically diverse panel in which participants viewed multiple hypothetical laboratory test results via an online portal. One group viewed these results on number line displays that were grey except for a green range labeled "standard range." The second group saw the same display with an additional harm anchor point labeled "many doctors are not concerned until here" (see Figure). Three results (platelet count, ALT, and serum creatinine) were initially reported as slightly outside of the standard range, and then participants viewed a second (similar) display showing a more extreme test result (i.e., one further from the standard range). For each test result, we measured participants' alarm about their results, their perceptions of urgency, and specific behavioral intentions. Participants also provided preference ratings of the display format.

Results: Providing the harm anchor reference point on the visual display significantly reduced both perceived alarm and perceived urgency of the close-to-normal ALT and creatinine results (all p's < 0.001). Use of the harm anchor labels also reduced the number of participants who wanted to contact their doctor urgently or go to the hospital (ALT: 35% vs. 56%, p < 0.001; creatinine: 35% vs. 57%, p < 0.001). The differences in perceived alarm and urgency disappeared when the test results were more extreme, indicating significantly greater sensitivity to variations in test results with the harm anchor labels. No significant differences were observed regarding the platelet count results. Participants had a small but significant preference for the harm anchor displays (p = 0.02).

Conclusions: Presenting patients with cues regarding when test results become clinically concerning can sometimes reduce alarm about out-of-range results that do not require immediate clinical attention. Providing harm anchor labels may be useful when designing patient-facing displays of health data in order to increase the signal specificity of such communications and to minimize patient requests for urgent appointments when routine follow-up would be sufficient.

10:45 AM

Azza Shoaibi1, Brian Neelon2 and Leslie Lenert, MD2, (1)Medical University of South Carolina, Charleston, SC, (2)Charleston, SC
Purpose: Conjoint analysis (CA) is a preference elicitation method used to characterize the utility that individuals ascribe to dimensions of care. An aspect of CA of increasing importance is market segmentation—that is, finding different subgroups in the population in order to tailor options to group-level preferences. Segmentation analyses typically adopt a two-step process that may lead to biased inferences. We present a novel approach optimized specifically to accurately identify relevant population segments for shared decision making.

Method: We extended the classic CA model by embedding the model within a Bayesian nonparametric mixture framework that can accurately predict individual-level utilities and optimally cluster patients in a single step. A stick-breaking Dirichlet process (DP) was used to define an infinite CA mixture model. We extended the DP model to incorporate covariate information not only in the cluster means but also in the cluster probabilities by adopting a probit stick-breaking process (PSBP).
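The stick-breaking construction underlying such a DP mixture can be sketched as follows (a generic truncated sketch for illustration; the authors' PSBP variant models the breaking proportions with probit-transformed covariate effects rather than the plain Beta draws shown here):

```python
import random

def stick_breaking_weights(alpha, k, seed=0):
    """Truncated stick-breaking construction of DP mixture weights.

    Break off a Beta(1, alpha) fraction v_j of the remaining stick at
    each step; the j-th mixture weight is
        pi_j = v_j * prod_{l<j} (1 - v_l),
    truncated at k components.
    """
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(k):
        v = rng.betavariate(1.0, alpha)  # v_j ~ Beta(1, alpha)
        weights.append(v * remaining)    # pi_j = v_j * prod_{l<j}(1 - v_l)
        remaining *= 1.0 - v
    return weights

weights = stick_breaking_weights(alpha=2.0, k=25)
print(sum(weights))  # total mass approaches 1 as the truncation level k grows
```

Smaller concentration parameters `alpha` put most of the mass on the first few sticks, which is what lets the mixture concentrate on a small number of population segments.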

We conducted extensive simulation studies to evaluate model performance, assess Markov chain Monte Carlo (MCMC) convergence, and compare our approach to existing methods for CA. We implemented the simulations by generating multiple datasets under the assumed model (e.g., a CA mixture model with a fixed number of clusters), and on each dataset, we computed estimates of parameters and other quantities of interest. We quantified the model's performance in terms of parameter bias, mean square error (MSE), and coverage probabilities, all summarized as Monte Carlo averages over the simulated datasets. We used the Rand and Jaccard similarity indices to compare the true and model-predicted cluster allocations. We also compared our method to existing two-stage approaches (a hierarchical Bayes model for generating individual partworth predictions followed by a latent class analysis).
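The Rand and Jaccard indices compare two cluster allocations by counting pairs of items grouped together or apart in each; a minimal pair-counting sketch (illustrative, with toy labels, not the authors' code):

```python
from itertools import combinations

def pair_counts(labels_a, labels_b):
    """Count item pairs grouped together/apart under two clusterings."""
    both = a_only = b_only = neither = 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        if same_a and same_b:
            both += 1          # together in both clusterings
        elif same_a:
            a_only += 1        # together only in clustering A
        elif same_b:
            b_only += 1        # together only in clustering B
        else:
            neither += 1       # apart in both clusterings
    return both, a_only, b_only, neither

def rand_index(a, b):
    """Fraction of pairs on which the two clusterings agree."""
    both, a_only, b_only, neither = pair_counts(a, b)
    return (both + neither) / (both + a_only + b_only + neither)

def jaccard_index(a, b):
    """Agreement on co-clustered pairs, ignoring pairs apart in both."""
    both, a_only, b_only, _ = pair_counts(a, b)
    return both / (both + a_only + b_only)

true_labels = [0, 0, 0, 1, 1, 1]   # toy "true" cluster allocation
pred_labels = [0, 0, 1, 1, 1, 1]   # toy model-predicted allocation
print(rand_index(true_labels, pred_labels),
      jaccard_index(true_labels, pred_labels))
```

Both indices equal 1 when the two allocations are identical (up to relabeling); the Jaccard index is stricter because it discounts pairs that are simply apart in both clusterings.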

Results: To date, we have derived mathematical results that provide the theoretical foundation for our approach. Simulation studies are currently underway to test and validate the method. These simulations indicate successful and efficient convergence of the model, with similarity indices indicating accurate clustering. In addition, initial results show that the developed model improved classification accuracy by approximately 20% compared to conventional two-stage methods.

Conclusions: Integrating CA and segmentation is a promising approach for measuring patient preferences and tailoring treatment recommendations to patient subpopulations.