Category Reference for Presentations

| Code | Category | Code | Category |
|---|---|---|---|
| AHE | Applied Health Economics | DEC | Decision Psychology and Shared Decision Making |
| HSP | Health Services and Policy Research | MET | Quantitative Methods and Theoretical Developments |
* Candidate for the Lee B. Lusted Student Prize Competition
Method: Primary care physicians at Massachusetts General Hospital (MGH) have access to 39 PtDAs developed by the Informed Medical Decisions Foundation. Physicians can order the decision aids through the electronic medical record (EMR), and the PtDAs are mailed to patients at home. We developed a 45-minute training session that addressed the integration of shared decision making into routine care, models for implementation of PtDAs, and practice- and provider-level data on PtDA usage. Each session also left time for discussion of best practices and implementation challenges specific to the practice. We scheduled training sessions with each primary care practice during regularly scheduled team meetings.
We examined the impact of training in a before-after study. We tracked PtDA orders through our EMR at each practice for the 8 weeks prior to each session and compared them to the 8 weeks post-session. We examined changes in overall prescriptions and unique prescribers between the pre- and post-intervention periods using the Wilcoxon signed-rank test.
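As an illustrative sketch of this paired pre/post comparison, the snippet below runs a Wilcoxon signed-rank test on per-practice order counts; the counts are hypothetical placeholders, not the study data.

```python
# Hypothetical per-practice PtDA order totals for the 8 weeks before
# and after each training session (placeholders, not the study data).
from scipy.stats import wilcoxon

orders_pre = [12, 30, 25, 41, 18, 22, 35, 28, 19, 33, 26]
orders_post = [55, 80, 60, 95, 40, 70, 85, 66, 50, 77, 72]

# Wilcoxon signed-rank test on the paired differences (post - pre).
stat, p_value = wilcoxon(orders_post, orders_pre)
print(f"W = {stat:.1f}, p = {p_value:.3f}")
```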
Result: We conducted sessions with 14/15 primary care practices and have complete follow-up data on 11 practices. Almost 200 clinicians attended the sessions. The training was associated with a more than doubling of PtDA orders across the practices (mean orders: 29 in the 8 weeks pre-training versus 73 in the 8 weeks post-training, p=0.01). The training also led to an increase in the number of unique prescribers (mean number per practice: 6 pre-training versus 10 post-training, p=0.02). Data from practices that had early sessions suggest that the increases may be sustained over several months.
Conclusion: Clinician training resulted in a significant increase in the use of decision aids and in the number of prescribers. Getting clinicians to use PtDAs regularly is an important step toward improving the quality of decisions at our center.
Method: Longitudinal, multi-site survey of breast cancer (BC) patients, with measurements at 1 month and 1 year after surgery. Patients completed the BC Surgery Decision Quality Instrument to assess knowledge, goals, and involvement in decision making. Total knowledge and involvement scores were scaled from 0 to 100%, with higher scores indicating greater knowledge and involvement. We tested several hypotheses: (1) knowledge scores would decline, (2) the decline would be greater for quantitative items than for qualitative items, (3) involvement scores would decline, and (4) goals (e.g., desire to keep one's breast, remove the breast for peace of mind, and avoid radiation) would become more aligned with choices over time. Changes in scores were examined using paired t-tests.
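A minimal sketch of such a paired comparison, assuming scipy is available; the scores below are hypothetical stand-ins for the survey data.

```python
# Paired t-test comparing 1-month and 1-year knowledge scores
# (hypothetical values on the 0-100% scale, not the study data).
from scipy.stats import ttest_rel

knowledge_1mo = [70.0, 65.5, 72.3, 68.1, 74.2, 61.0]
knowledge_1yr = [69.0, 66.0, 71.8, 69.5, 73.0, 60.2]

t_stat, p_value = ttest_rel(knowledge_1mo, knowledge_1yr)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```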
Result: 267/444 (60%) patients completed the 1-month assessment and 229/267 (86%) completed the 1-year assessment. The mean total knowledge score did not change (69.2% (SD 16.6%) at 1 month versus 69.4% (SD 17.7%) at 1 year, p=0.86). Scores on quantitative items also did not change (61.3% at 1 month versus 59.8% at 1 year, p=0.65). Reports of involvement did not change (66.8% (SD 24.7%) versus 65.2% (SD 26.1%), p=0.31). Only one of the goals, avoid radiation, changed significantly, becoming less important for all patients over time (mean difference=-0.44, p=0.03 for lumpectomy patients and mean difference=-0.93, p=0.05 for mastectomy patients). Despite the minimal change in mean scores, many respondents had knowledge scores (116/229, 50%) or involvement scores (163/229, 70%) that changed by >10%, with increases balancing out decreases.
Conclusion: Contrary to our hypotheses, we did not find differences in the mean scores for knowledge, involvement, or goals. For population-level assessments, it may be reasonable to survey BC patients up to a year after the decision, greatly increasing the feasibility of measurement. However, scores changed markedly for many patients at the individual level, and additional studies are needed to examine factors associated with these changes and to confirm these results for other decisions.
Visual displays can help patients understand medical information, but they may be less useful to patients with low graph literacy. These patients may not have enough knowledge of the properties of different kinds of displays and the procedures for interpreting them. However, existing measures of graph literacy are either too difficult or too long for everyday medical research and practice. Here we describe a short test of graph literacy that fills this gap.
Method:
The study consisted of two phases. In the first phase, 51 participants in a laboratory setting completed 13 items included in a longer graph literacy scale developed by Galesic & Garcia-Retamero (2011), as well as 16 items involving complex visual displays. All items had health-related content. The complex displays included spatial features, such as the height of bars, that were incongruent with the information conveyed by conventional features, such as titles, labels, and scales. Based on the results of the first phase, we selected 4 of the initial 13 items for the short graph literacy scale. In the second phase, we used these 4 items to predict accuracy of understanding of different types of graphs in a study conducted on nationally representative samples of people 25 to 69 years of age in Germany (n=495) and the United States (n=492).
Result:
In the first phase in the laboratory, we analyzed the correlation of each of the 13 original graph literacy scale items with the total score on the 16 complex displays, and selected the 4 items that correlated most highly with the total score and predicted it independently of numeracy skills. Each of these items involved a different type of display: bar chart, line chart, pie chart, and icon array. In the second phase, on the nationally representative samples, we found that these 4 items were as successful as the longer 13-item scale in predicting accuracy of understanding of different types of graphs displaying medical information.
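The item-selection logic can be sketched as follows, assuming simulated binary responses in place of the laboratory data; the partial-correlation helper used to adjust for numeracy is our own illustration, not the authors' code.

```python
# Sketch of the item-selection step (simulated data): correlate each
# candidate item with the total complex-display score, then check that
# the item still tracks the total after adjusting for numeracy.
import numpy as np

rng = np.random.default_rng(0)
n = 51
numeracy = rng.normal(size=n)
items = rng.integers(0, 2, size=(n, 13)).astype(float)  # 13 candidate items
total = items.sum(axis=1) + rng.normal(size=n)          # stand-in total score

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out z from both."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

for j in range(items.shape[1]):
    r = np.corrcoef(items[:, j], total)[0, 1]
    pr = partial_corr(items[:, j], total, numeracy)
    print(f"item {j + 1}: r = {r:.2f}, partial r (numeracy) = {pr:.2f}")
```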
Conclusion:
The new 4-item scale is a fast and psychometrically valid method for measuring graph literacy. The scale can be used in both medical research and practice to test whether different visual displays can be understood by patients with low graph literacy skills.
Method: The MEDLINE, CINAHL, and PsycINFO databases were searched up to April 2013. Articles were included if they were randomized studies that compared formats of health risk statistics and measured knowledge and interpretation of the data as outcomes.
Result: Of 1682 publications, 16 were eligible and 7 were included in the final analysis. Two types of knowledge outcomes were identified: (1) verbatim (the ability to interpret numerical values) and (2) gist (the ability to choose the lowest-risk treatment option). These papers tested 5 formats: simple numbers, pie graphs, horizontal bar graphs, vertical bar graphs, and systematic pictographs (displaying risk statistics in groups of shaded icons). Systematic pictographs resulted in better comprehension than simple numbers as judged by either verbatim (OR 0.52; 95% CI 0.46 to 0.58, p<0.01) or gist knowledge (OR 1.38; 95% CI 1.25 to 1.53, p<0.01). On the other hand, pie graphs resulted in lower verbatim and gist knowledge compared to simple numbers. Pictographs resulted in better verbatim comprehension than pie graphs (OR 0.24; 95% CI 0.13 to 0.44, p<0.01).
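For readers unfamiliar with how such effect estimates are formed, the sketch below computes an odds ratio and its Wald 95% CI from a 2x2 comprehension table; the counts are hypothetical, not the pooled review data.

```python
# Odds ratio and Wald 95% CI from a 2x2 table of comprehension outcomes
# (hypothetical counts, not the review's pooled data).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b = correct/incorrect with format 1; c,d = correct/incorrect with format 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

print(odds_ratio_ci(80, 20, 60, 40))  # hypothetical counts
```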
Conclusion: Systematic pictographs are a more effective format for presenting health risk information to adults. Pie graphs appear to be ineffective in communicating risk information.
Methods: We recruited 1819 adults from a demographically diverse Internet panel and asked them to imagine they had been diagnosed with Type 2 diabetes, had been maintaining good blood glucose control, and were now viewing the results of a set of blood tests (CBC, hemoglobin A1c, and renal panel) in an online portal between doctor's visits. Following the format currently implemented in the patient portal of a major academic medical center, all tables showed test values, standard ranges, and units but did not show indicators for high or low values. We randomly varied whether the patient's A1c value was 7.1% or 8.4%. We then assessed whether participants would recognize that the A1c value was out of range and what they would do about it. We also measured their numeracy and health literacy.
Results: Compared to more numerate participants, less numerate participants were significantly less able to identify that the A1c value reported in the standard test result tables was above the standard range. Among those scoring in the lowest tertile of subjective numeracy, 39% correctly identified the out-of-range result, versus 62% in the highest tertile (p<0.001). This effect persisted even after controlling for health literacy. Furthermore, less numerate participants’ intentions to call their doctor were not significantly influenced by whether the A1c value was 8.4% vs. 7.1%, while highly numerate participants adjusted their likelihood of calling their doctor’s office based on the test result.
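One way to frame the "persisted even after controlling for health literacy" claim is a logistic regression; the sketch below uses simulated data and an assumed statsmodels setup, not the study dataset or the authors' analysis code.

```python
# Logistic regression: does numeracy predict correctly flagging the
# out-of-range A1c after adjusting for health literacy? (Simulated data.)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1819
df = pd.DataFrame({
    "numeracy": rng.normal(size=n),
    "health_literacy": rng.normal(size=n),
})
# Simulate whether each participant identifies the out-of-range value.
logit_p = -0.5 + 0.9 * df["numeracy"] + 0.3 * df["health_literacy"]
df["identified"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("identified ~ numeracy + health_literacy", data=df).fit(disp=0)
print(model.summary())
```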
Conclusions: Current tabular formats for providing test results to patients (e.g., through patient portals to electronic record systems) appear unable to meet less numerate patients' informational needs even to a minimal standard. Such failures undermine the value of providing test data. Our results document the need to develop test result displays that are intuitively meaningful, even to less numerate patients.