4L COMPARATIVE EFFECTIVENESS RESEARCH

Tuesday, October 21, 2014: 3:30 PM - 5:00 PM

* Finalists for the Lee B. Lusted Student Prize

3:30 PM
4L-1

Peter H. Schwartz, MD, PhD1, Susan M. Perkins, PhD1, Karen K. Schmidt, RN1, Paul F. Muriello, BA1, Sandra Althouse, MS1 and Susan M. Rawl, RN, PhD2, (1)Indiana University School of Medicine, Indianapolis, IN, (2)Indiana University School of Nursing, Indianapolis, IN

Purpose: Individuals eligible for colorectal cancer (CRC) screening can choose from multiple approved tests, including colonoscopy and stool testing. The existence of multiple options allows patients to choose a preferred strategy but may also lead to indecision and delay. Behavioral economics suggests presenting one option as a default choice, i.e., the one patients receive if they do not wish to decide. We conducted a randomized trial to measure the impact of describing stool testing as the default option for CRC screening in a decision aid (DA).

 

Methods: A total of 105 patients aged 50-75 years who were at average risk for CRC and due for screening were recruited from primary care clinics. All subjects viewed a CRC screening DA, and half (n=53) were randomized to view, in addition, a description of stool testing with the fecal immunochemical test (FIT) as the default choice. Participants completed questionnaires before (T0) and after (T1) viewing the DA that assessed perceived CRC risk, intended screening behavior, intent to be screened (overall, with FIT, or with colonoscopy), and decisional conflict. At six months (T2), screening behavior was assessed.

Results: Members of both groups showed significant increases in perceived CRC risk, intent to be screened, and intent to undergo FIT, and a reduction in overall decisional conflict scores (all p < 0.0001). No significant between-group differences in these change scores were observed. Intent to undergo colonoscopy increased in the Control group and decreased in the Default group, with a significant between-group difference in change scores (0.21 vs. -0.09, p=0.03).

Intended screening behavior at T0 and T1 is shown in the Table. The percentage of patients who did not intend to be screened declined in both groups taken together (p=0.0002), though reductions did not differ significantly between the groups (Control 17.3% vs. Default 20.8%). Among patients who did not choose FIT at T0, a significantly higher percentage switched to FIT at T1 in the Default group compared with Controls (43% vs. 21%, p=0.02).

At six months, the percentage of patients who underwent screening with FIT or colonoscopy did not significantly differ by group.

Conclusion:   Presenting stool testing as the default choice for CRC screening significantly reduced patients' intent to undergo colonoscopy and significantly increased the proportion intending to be screened with FIT.

3:45 PM
4L-2

Inge M.C.M. de Kok, PhD1, Femme Harinck, MD2, Ingrid C.A.W. Konings, MD2, Iris Lansdorp-Vogelaar, PhD3, Paul Fockens, MD, PhD4, Marco J. Bruno, MD, PhD2, Marjolein van Ballegooijen, MD, PhD1 and Sonja Kroep, MSc5, (1)Department of Public Health, Erasmus MC, University Medical Center Rotterdam, Rotterdam, Netherlands, (2)Department of Gastroenterology & Hepatology, Erasmus MC, University Medical Center Rotterdam, Rotterdam, Netherlands, (3)Erasmus MC, University Medical Center, Rotterdam, Netherlands, (4)Department of Gastroenterology and Hepatology, Academic Medical Center, University of Amsterdam, Amsterdam, Netherlands, (5)Erasmus MC, University Medical Center, Department of Public Health, Rotterdam, Netherlands

Purpose: To explore the uncertainties of early detection of pancreatic cancer in high-risk individuals and consequently highlight the areas to which further research should be directed.

Method: Effects of pancreatic cancer screening were estimated using the Microsimulation Screening Analysis (MISCAN) model. The majority of the model assumptions were based on preliminary data from the screening trials, the recommendations stated in the consensus paper of the international Cancer of the Pancreas Screening (CAPS) consortium, and expert opinion. We varied dwell times of the different health states and test characteristics of the screening test. We considered different screening ages and intervals and varied follow-up strategies after a positive screen.

Result: Mortality reduction (MR) was 35% and 58% (436 and 722 cases per 10,000 persons) for 5-yearly and annual screening at ages 50 to 75, respectively. For 5-yearly screening, the number needed to screen (NNS) was 117.1 and the number needed to treat (NNT) was 2.5 to prevent one cancer death (see Figure; the order of the factor variations, shown in brackets, corresponds to their left-to-right order in the figure, and the dashed lines represent the results of the base-case analyses). The NNT was lowest when all screen-positives with preinvasive stage 3 disease or cancer were treated (2.4, MR 32%). If only persons already in an invasive stage of disease were treated, the NNT was 5.3 (MR 10%). Results were sensitive to pancreatic cancer risk (risk doubled: NNS 64.3, NNT 2.7, MR 38%) and to the duration of the preclinical stage of the disease (increased to 30 years: NNS 92.6, NNT 3.2, MR 46%). Results were less sensitive to test characteristics.
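The NNS and NNT outcomes above follow from standard definitions: persons screened, and screen-positives treated, per cancer death prevented. A minimal sketch of that arithmetic, using invented inputs rather than the MISCAN model outputs:

```python
def number_needed_to_screen(cohort_size, deaths_prevented):
    """Persons who must be screened to prevent one cancer death."""
    return cohort_size / deaths_prevented

def number_needed_to_treat(persons_treated, deaths_prevented):
    """Screen-positives treated per cancer death prevented."""
    return persons_treated / deaths_prevented

# Hypothetical values for illustration only (not MISCAN results):
cohort = 10_000          # simulated high-risk individuals
deaths_prevented = 400   # cancer deaths averted by screening
treated = 1_000          # screen-positives who undergo treatment

print(number_needed_to_screen(cohort, deaths_prevented))  # 25.0
print(number_needed_to_treat(treated, deaths_prevented))  # 2.5
```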

Conclusion: Modeling shows that there is potential for pancreatic cancer screening to be cost-effective in high-risk individuals. The follow-up strategy for screen-positives and the duration of the preclinical stage have the highest impact on the outcomes of pancreatic cancer screening, as does the level of risk of developing pancreatic cancer in the population screened.

4:00 PM
4L-3

Olivia Phung, PharmD1, Raynold Yin, PharmD1 and Swapnil Rajpathak, MD2, (1)Western University of Health Sciences, Pomona, CA, (2)Merck, Whitehouse, NJ

Purpose: In patients with type 2 diabetes mellitus (T2DM), long-term maintenance of glycemic control requires appropriate treatment intensification, according to clinical guidelines. However, clinicians often hesitate to intensify treatment, a phenomenon called clinical inertia, for a variety of reasons. This may contribute to poor health outcomes, especially when combined with other factors such as patient non-adherence. We sought to evaluate the prevalence of clinical inertia, as measured by the rate of treatment intensification.

Methods: A systematic literature search was conducted in PubMed, Embase, and Cochrane Central through 02/2014 using terms for type 2 diabetes, treatment intensification, and clinical inertia. Studies were included if they were observational studies (cohort or cross-sectional) in patients with type 2 diabetes and reported rates of treatment intensification.

Results: A total of 18 studies (n=43,305) representing both prospective and retrospective cohort designs, following patients for a median of 12 months (range 6 to 24 months), were identified. In these studies, the indication for treatment intensification was defined by each study and varied from A1c>7% to A1c>8%. Treatment intensification varied from initiation of pharmacotherapy in patients on diet and exercise alone to dose increases of existing medications or the addition of another medication. Two studies evaluating patient encounters with primary care providers found that 26.7% to 36% of visits resulted in treatment intensification when it was indicated. In the remaining 16 studies, which examined the number of patients who required additional therapy, a median of 49.3% (range 7% to 74.6%) of patients received intensified therapy. A few studies further evaluated potential predictors of treatment intensification and found that intensified therapy was more likely when patients had higher baseline A1c (7 studies), were younger (2 studies), had other medications for co-morbid conditions (2 studies), or had higher adherence rates to their current regimens (2 studies).

Conclusions: In real-world settings, a significant proportion of eligible patients fail to receive the treatment intensification recommended by clinical guidelines. Further research is needed to determine provider beliefs and attitudes regarding treatment intensification.

4:15 PM
4L-4

Karen M. Kuntz, ScD, University of Minnesota, Minneapolis, MN, Carolyn Rutter, PhD, Group Health Research Institute, Seattle, WA, Amy Knudsen, PhD, MGH Institute for Technology Assessment, Boston, MA, Chester Pabiniak, MS, Group Health Research Organization, Seattle, WA, Samuel Lesko, MD, MPH, Northeast Regional Cancer Institute, Scranton, PA and Ann G. Zauber, PhD, Memorial Sloan-Kettering Cancer Center, New York, NY

Purpose: In northeast Pennsylvania (a six-county area surrounding Scranton and Wilkes-Barre), both colorectal cancer (CRC) incidence and mortality are approximately 25% higher than the corresponding US rates. The high incidence of CRC in the community has led local clinicians to recommend repeat colonoscopy (COL) at intervals more frequent than the recommended 10 years. Working with a key stakeholder in the region, we sought to determine whether more frequent colonoscopy screening is cost-effective for a higher-risk population.

Method: We used two microsimulation models developed as part of the Cancer Intervention and Surveillance Modeling Network (CISNET), a consortium of cancer modelers funded by the National Cancer Institute. We simulated the outcomes of a hypothetical cohort of 50-year-old individuals undergoing COL under different risk assumptions (average risk, higher risk) and screening scenarios (15-yearly, 10-yearly, or 7-yearly COL, all starting at age 50). We calculated life years saved (LYS) and the additional cost of increasing the number of lifetime screening colonoscopies by one (the recommended schedule is three screens per lifetime, at ages 50, 60, and 70). If adenomas are found and removed at screening, the individual is then followed with COL surveillance per guidelines until age 85.

Result: For an average-risk cohort, a 10-yearly COL strategy (i.e., COLs at ages 50, 60, and 70) resulted in 4-6 additional life years (the range reflects the two models) and an added $524,000-$671,000 per 1000 screened compared with a 15-yearly COL strategy (i.e., COLs at ages 50 and 65), yielding an incremental cost-effectiveness ratio (ICER) of $98,000-$153,000 per LYS. Adding one more lifetime screen (i.e., screening every seven years at ages 50, 57, 64, and 71) resulted in an additional 2 life years and $412,000-$1,399,000 per 1000 screened compared with the 10-yearly strategy, yielding an ICER of $348,000-$579,000 per LYS. Repeating this analysis in a population at a 25% increased risk of CRC reduced the ICERs to $73,000-$123,000 per LYS for 10-yearly vs. 15-yearly COL and to $295,000-$468,000 per LYS for 7-yearly vs. 10-yearly COL.
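An ICER is simply the added cost of the more intensive strategy divided by the added health benefit. A minimal sketch of that calculation, with illustrative numbers chosen for the example rather than the per-model results reported above:

```python
def icer(delta_cost, delta_life_years):
    """Incremental cost-effectiveness ratio: added cost per life year saved."""
    return delta_cost / delta_life_years

# Illustrative inputs per 1000 persons screened (hypothetical, not the
# actual model-paired values from the abstract):
added_cost = 600_000      # dollars, 10-yearly vs. 15-yearly COL
added_life_years = 5      # life years saved

print(icer(added_cost, added_life_years))  # 120000.0 dollars per LYS
```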

Conclusion: The ICER for 7-yearly COL in the higher-risk population is much greater than the ICER for 10-yearly COL in the average-risk population. For a population at 25% increased risk of CRC, screening with COL more often than every 10 years is not a good use of resources.

4:30 PM
4L-5

Alice Mitchell, MD, MS1, Andreia Alexander, MD1, Chris Moore, MD2 and Jeffrey Kline, MD1, (1)Indiana University School of Medicine, Indianapolis, IN, (2)Yale University School of Medicine, New Haven, CT

Purpose: We compared gender differences in the diagnostic yield of CT of the pulmonary arteries (CTPA) and in the diagnostic accuracy and effect of two validated clinical decision rules (CDRs) for pulmonary embolism (PE) that are widely used in the emergency care setting.

Method: We compared the Pulmonary Embolism Rule-Out Criteria (PERC) and the Modified Well's Rule (Well's) in women and men in a prospective, multi-center cohort of patients with symptoms suggestive of PE. Clinical evaluation and diagnostic testing were at the treating physician's discretion. Notably, patient characteristics, including age and comorbidities, were similar in both genders and reflected the heterogeneous population characteristic of the emergency care setting. The presence or absence of CDR criteria was identified by the treating physician at the time of evaluation, prior to PE testing. Diagnostic yield was defined as the proportion of CTPA studies demonstrating a PE. Diagnostic accuracy was defined by the 45-day incidence of venous thromboembolism (VTE: PE and/or deep venous thrombosis [DVT]; a blinded, adjudicated outcome using explicit criteria) and expressed as likelihood ratios (LR+ and LR-). The effect of the CDRs was defined as the rate of CTPA in CDR-negative (PERC-negative, Well's score <2) patients.
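The likelihood ratios used here follow the standard definitions LR+ = sensitivity / (1 - specificity) and LR- = (1 - sensitivity) / specificity. A brief sketch with invented sensitivity and specificity values (the abstract reports only the resulting LRs, not the underlying 2x2 counts):

```python
def likelihood_ratios(sensitivity, specificity):
    """Return (LR+, LR-) for a binary clinical decision rule."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# Invented example values, not the study's data:
lr_pos, lr_neg = likelihood_ratios(sensitivity=0.95, specificity=0.30)
print(round(lr_pos, 2), round(lr_neg, 2))  # 1.36 0.17
```

An LR- well below 1, as for PERC here, is what makes a negative rule result useful for ruling out PE without imaging.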

Result: We followed a total of 8774 patients (66% women) evaluated for PE. The overall rate of CTPA imaging was higher in women, resulting in a lower diagnostic yield: 21% in men compared to 10% in women (p<0.01). The 90-day prevalence of VTE was higher in men (9% men vs. 5% women, p<0.01). The diagnostic accuracies of PERC and Well’s did not differ (see table). There was no gender difference in the rate of CTPA imaging in either the PERC-negative (3% male and 6% female, p=0.29) or Well’s-negative (9% male and 9% female, p=0.28) groups.

 

                          Men      Women    p-value
PERC
  LR+                     1.3      1.3      0.96
  LR-                     0.17     0.25     0.92
Modified Well's Rule
  LR+                     2.4      2.3      0.99
  LR-                     0.41     0.44     0.97

Conclusion: Despite the successful integration of CDRs in the emergency care setting, the diagnostic yield of CTPA in women remains about half that of men, even though the diagnostic accuracy and effect of the CDRs do not differ by gender. Given that the risk of cancer following radiation exposure from CTPA in women is approximately double that of men, improved methods to specifically risk-stratify women evaluated for PE are needed.

4:45 PM

Josephine F Wong, MD1, Valeria E. Rac, MD PhD1, Nicholas Mitsakakis, MSc PhD1, Eva Haratsidis, BSc, RN2 and Murray D. Krahn, MD, MSc1, (1)Toronto Health Economics and Technology Assessment (THETA) Collaborative, University of Toronto, Toronto, ON, Canada, (2)Ontario Association of Community Care Access Centres, Toronto, ON, Canada

Purpose: A retrospective critical reflection on the challenges encountered during the implementation of the community-based Wound Interdisciplinary Team (WIT) study.

Method: The Wound Interdisciplinary Team (WIT) study is a two-arm pragmatic randomized controlled trial evaluating the effectiveness and cost-effectiveness of a systematic referral process to improve primary care access to multidisciplinary wound care teams (MDWCTs) in Toronto from May 2011 to May 2013.

We developed a critical reflection framework to identify and understand the implementation challenges and their impact on the trial's outcomes, using instruments including field notes from direct observations, informal discussions, semi-structured interviews, and a client satisfaction questionnaire (CSQ-8 Scale®).

We evaluated implementation processes such as recruitment, informed consent, target population sampling, systematic referral to MDWCTs, and appointment scheduling and attendance with the MDWCTs. The identified challenges were further verified against the study protocol and community practice and correlated with the quantitative results.

Result: Four hundred and fifty-one patients referred for community-based wound care were enrolled; 225 were allocated to the control arm and 226 to the intervention arm.

Multiple challenges might have affected the study outcomes. While some were anticipated and addressed, others became apparent only during the course of the study. Anticipated challenges included integrating the study into the community practice setting and training community members unfamiliar with research ethics and procedures. Unanticipated challenges were mainly context- or study-design-driven. Complex logistics for subject enrollment and data collection constituted major barriers. Issues related to the pragmatic trial design reduced the number of subjects eligible for the intervention and potentially reduced its impact.

For future community-based healthcare research, we strongly recommend early involvement of and collaboration with frontline healthcare professionals, simplification of logistics, piloting the study, pre-trial modeling, and a mixed-methods design for process evaluation.

Conclusion: The implementation of community-based pragmatic trials in wound care is complex. Early community engagement and a thorough understanding of community capacity are crucial. The lessons learned will contribute to the field of community-based health care research.