* Candidate for the Lee B. Lusted Student Prize Competition
Purpose: To compare the predictive validity of national guidelines for the management of unstable angina, and to determine whether physician judgment adds predictive validity beyond the practice guidelines.
Method: Emergency-room patients presenting with non-traumatic chest pain/pressure (N=423) were classified as “low,” “intermediate,” or “high” cardiac risk for myocardial infarction (MI) or coronary artery disease (CAD) according to the American Heart Association/American College of Cardiology guidelines (1994, 2000, 2007). Physicians’ triage decisions (discharge/low risk, ward or monitored bed/medium risk, or cardiac intensive care/high risk) were assessed for guideline adherence and for predictive validity with respect to cardiac outcomes within 1 year. The sample was 51% female, mean age 52 (SD = 19); 73% White/Non-Hispanic, 18% Hispanic, and 9% other; 42% returned to the hospital for a cardiac procedure (e.g., percutaneous transluminal coronary angioplasty or coronary artery bypass graft) and/or received a cardiac diagnosis (e.g., acute MI, unstable angina, stable angina).
Result: Hierarchical logistic regression analyses were conducted entering demographics in block 1, each guideline seriatim in blocks 2-4, and physician assessment in block 5 as predictors; aggregate cardiac outcome (any procedure or cardiac diagnosis within one year) was the dependent measure. In the full model, age was a significant predictor (OR = 1.03, SE = .01, p < .01); 1994 guideline risk assessment was a significant predictor (OR = 2.12, SE = .18, p < .001); physician assessment added unique variance for cardiac events (OR = 3.42, SE = .21, p < .001). Results were similar for judgments of MI risk and CAD probability, and for procedures and diagnoses analyzed separately. Newer guidelines did not improve on older guidelines; indeed, they predicted outcomes less well.
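The blockwise (hierarchical) entry strategy described above can be sketched in a minimal, self-contained way. The following is an illustration on synthetic data, not the study's analysis: the cohort, coefficients, and predictor coding (standardized age, a single guideline risk class, a physician triage level) are all assumed for the example. Fitting nested models block by block and comparing log-likelihoods shows how one tests whether physician assessment adds unique predictive value.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=1.0, iters=1000):
    """Logistic regression via gradient ascent on the mean log-likelihood."""
    d = len(X[0])
    n = len(X)
    w = [0.0] * (d + 1)                      # w[0] is the intercept
    for _ in range(iters):
        grad = [0.0] * (d + 1)
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = yi - p
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj + lr * g / n for wj, g in zip(w, grad)]
    return w

def log_likelihood(w, X, y):
    ll = 0.0
    for xi, yi in zip(X, y):
        p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
        ll += yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return ll

# Synthetic cohort (illustrative only): standardized age, guideline risk
# class (0=low, 1=intermediate, 2=high), physician triage level (0/1/2).
n = 400
age = [random.gauss(0, 1) for _ in range(n)]
guide = [random.randint(0, 2) for _ in range(n)]
doc = [random.randint(0, 2) for _ in range(n)]
y = [1 if random.random() < sigmoid(-1.0 + 0.6 * a + 0.5 * g + 1.0 * d) else 0
     for a, g, d in zip(age, guide, doc)]

# Blockwise entry: demographics, then guideline risk, then physician judgment.
X1 = [[a] for a in age]
X2 = [[a, g] for a, g in zip(age, guide)]
X3 = [[a, g, d] for a, g, d in zip(age, guide, doc)]
ll1 = log_likelihood(fit_logistic(X1, y), X1, y)
ll2 = log_likelihood(fit_logistic(X2, y), X2, y)
ll3 = log_likelihood(fit_logistic(X3, y), X3, y)
```

Because the models are nested, the likelihood improvement from each block (here ll1 < ll2 < ll3) quantifies the incremental predictive value of guideline risk and physician assessment, respectively.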
Conclusion: Older guidelines demonstrated greater predictive validity than newer guidelines for cardiac outcomes, but physician assessment was the strongest predictor overall, controlling for all other factors. Results indicate that changes in guidelines should be empirically evaluated before widespread implementation. Further, implications for health care reform include a necessary role for physician assessment, alongside the application of practice guidelines, in order to achieve quality of care.
Purpose: Pulmonary embolism is a frequently missed diagnosis. The goal of the present study is to examine whether the evidence-based decision rules for diagnosing pulmonary embolism are correctly applied in clinical practice.
Method: Physicians included 247 dyspnea patients in the study. Immediately after the first examination, the physicians recorded their differential diagnoses and the likelihood of each. After the patients were discharged from the hospital, their records were reviewed by experts. Cases in which the physicians suspected pulmonary embolism were selected for further analysis. The diagnostic process in those cases was compared with the evidence-based decision rules for diagnosing pulmonary embolism, which are based on the Wells score. In addition, 16 interviews were conducted with physicians who did not follow the evidence-based decision rules, to learn why the rules were not applied.
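A decision rule of the kind referenced above can be sketched as follows. The point values are the commonly published Wells criteria for pulmonary embolism; the two-tier cutoff (score > 4 suggesting "PE likely") and the downstream test choices are an assumption for illustration, since the abstract does not specify the exact algorithm used in the study.

```python
def wells_score(signs_of_dvt, pe_most_likely, hr_over_100,
                immobilization_or_surgery, previous_pe_or_dvt,
                hemoptysis, malignancy):
    """Wells score for pulmonary embolism (commonly published point values).
    Each argument is 1 if the criterion is present, 0 otherwise."""
    return (3.0 * signs_of_dvt + 3.0 * pe_most_likely
            + 1.5 * hr_over_100 + 1.5 * immobilization_or_surgery
            + 1.5 * previous_pe_or_dvt + 1.0 * hemoptysis + 1.0 * malignancy)

def next_step(score, d_dimer_negative=None):
    """Assumed two-tier rule: score <= 4 ('PE unlikely') -> D-dimer first;
    score > 4 ('PE likely') -> CT angiography. A negative D-dimer in the
    'unlikely' group rules PE out without imaging."""
    if score > 4:
        return "CT angiography"
    if d_dimer_negative is None:
        return "D-dimer test"
    return "PE ruled out" if d_dimer_negative else "CT angiography"
```

Under this rule, ordering a CT angiogram for a low-score patient with a negative D-dimer would count as an unnecessary test, while skipping the D-dimer in a suspected case would count as insufficient workup.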
Result: The results showed that in 80 of the 247 cases the physician suspected pulmonary embolism. The evidence-based decision criteria were correctly applied in 17 of these 80 cases. In 36 cases unnecessary tests (i.e., CT angiography or D-dimer) were performed to diagnose pulmonary embolism, while in 39 cases pulmonary embolism was insufficiently examined, meaning that it could have been missed. When asked about their decisions, the physicians indicated that they did not want to expose the patient to the radiation of a CT angiogram, or that they considered another diagnosis more likely and assumed the patient did not also have pulmonary embolism; they therefore decided not to examine the patient more extensively.
Conclusion: The evidence-based decision rules are not always correctly applied in clinical practice. In a substantial number of cases in which pulmonary embolism was suspected, either no diagnostic tests were performed or unnecessary diagnostic tests were ordered. The physicians tended to overrule the criteria when examining a patient. Physicians should be better trained and motivated to apply the evidence-based decision rules correctly in order to improve the diagnostic process.
Purpose: The Ontario Ministry of Health and Long Term Care has established wait time targets for cancer patients who are in need of radiation therapy. We investigated methods to reduce wait times and improve efficiency in a radiation oncology clinic.
Method: We developed a discrete event simulation model using Simul8. The model includes a process flow chart that is representative of a typical patient encounter at a tertiary cancer centre, from referral to radiation oncology to the point of treatment. The model tracks patients' routing through all important process steps, including consultation, dosimetry, CT simulation, and treatment. The distribution of referrals and patient demographic factors are based on data from 2009 patient tracking records. Event durations were obtained from scheduling data and interviews with radiation therapists. Schedules for treatment and the time required by dosimetry and physics were obtained from an in-house tracking system. We used the model to investigate a number of process flow improvements, including changes to physician and technician schedules and increases in the number of staff hours available. Wait times for each process configuration were estimated using the simulation model.
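The core mechanic of such a model can be illustrated with a minimal standard-library sketch (not the authors' Simul8 model): a tandem queue in which each patient passes through consultation, CT simulation, dosimetry, and treatment, waiting whenever all servers at a stage are busy. Arrival rates, service-time means, and the exponential distributions are all assumed for illustration.

```python
import random

random.seed(1)

STAGES = ["consultation", "CT simulation", "dosimetry", "treatment"]
MEAN_DAYS = {"consultation": 0.5, "CT simulation": 0.3,
             "dosimetry": 1.0, "treatment": 0.5}   # assumed mean service times

def simulate(n_patients=2000, arrivals_per_day=0.8, servers=None):
    """Tandem queue served FIFO in arrival order; returns the mean
    referral-to-treatment time in days."""
    servers = servers or {s: 1 for s in STAGES}
    free = {s: [0.0] * servers[s] for s in STAGES}  # next-free time per server
    t, waits = 0.0, []
    for _ in range(n_patients):
        t += random.expovariate(arrivals_per_day)   # referral arrives
        clock = t
        for s in STAGES:
            k = min(range(len(free[s])), key=lambda i: free[s][i])
            start = max(clock, free[s][k])          # wait for earliest-free server
            clock = start + random.expovariate(1.0 / MEAN_DAYS[s])
            free[s][k] = clock
        waits.append(clock - t)
    return sum(waits) / len(waits)

base = simulate()
extra_dosimetrist = simulate(servers={"consultation": 1, "CT simulation": 1,
                                      "dosimetry": 2, "treatment": 1})
```

Under these made-up rates dosimetry is the bottleneck (utilization 0.8), so a second dosimetrist shortens the mean wait markedly; that the real clinic saw no such benefit simply reflects different actual workloads, which is exactly the kind of question the simulation is built to settle.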
Result: The historical data for patients treated in 2009 show an average wait time of 19.545 days from referral to first treatment. The model was simulated for 365 days, accounting for weekends and holidays, and was warmed up for a period of 5 years. Adding an extra dosimetrist does not make a significant difference in a patient’s wait time. Adding an extra day of new patient consults reduces a patient’s wait time by 3%. Increasing the amount of time devoted by radiation oncologists to planning treatments by 0.5 days decreases a patient’s wait time by an average of 9%. Changing the treatment planning and contouring done by radiation oncologists, from designated planning slots to planning as needed anytime during the work week, does not significantly affect a patient’s wait time for treatment.
Conclusion: This model is helping to identify areas of improvement in the current system, to reduce patients’ wait times for radiotherapy and to make more efficient use of resources. In an environment of soaring health care expenditure and rising incidence of cancer, this sort of planning process will be valuable in obtaining optimal value for money.
Purpose: It is believed that many living-donor liver transplants, particularly those involving healthier recipients, occur earlier than the time that would maximize life expectancy. We consider whether patient risk preferences can explain this discrepancy.
Method: We formulate a finite-horizon risk-sensitive Markov decision process (MDP) for the problem of determining the optimal liver acceptance policy for a risk-sensitive patient with any von Neumann-Morgenstern utility function. Our model maximizes the patient's total expected utility, which is composed of a pre-transplant expected utility and a post-transplant expected utility. The model provides the optimal action (either transplant or wait) for each combination of patient health state and donor liver quality at each stage. We extend our model to the infinite-horizon case for patients who exhibit an exponential utility function, which provides a good approximation for many other types of utility functions.
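The backward-induction logic of such a risk-sensitive MDP can be sketched on a toy instance. Everything numerical here is assumed for illustration (three health states, deterministic post-transplant life expectancy, made-up waiting-list transitions with an implicit death probability), and the model omits donor liver quality; it is not the authors' calibrated model. The key device is that for exponential utility, maximizing E[(1 - e^(-gamma*L))/gamma] over total lifetime L is equivalent to minimizing W = E[e^(-gamma*L)], which factors multiplicatively over successive years of life.

```python
import math

HEALTH = [0, 1, 2]                        # 0 = healthiest, 2 = sickest
POST_LIFE = {0: 10.0, 1: 7.0, 2: 4.0}     # assumed post-transplant life (years),
                                          # treated as deterministic for simplicity
P = {0: {0: 0.85, 1: 0.15, 2: 0.0},       # assumed waiting-list transitions; the
     1: {0: 0.05, 1: 0.65, 2: 0.25},      # missing mass in each row is death
     2: {0: 0.0, 1: 0.10, 2: 0.50}}       # before the next epoch
T = 20                                    # annual decision epochs

def optimal_policy(gamma):
    """Backward induction on W(t,h) = E[exp(-gamma * remaining lifetime)].
    Maximizing exponential utility (1 - W)/gamma <=> minimizing W."""
    W = {h: 1.0 for h in HEALTH}          # terminal: no remaining lifetime
    act = {}
    for t in range(T - 1, -1, -1):
        Wt = {}
        for h in HEALTH:
            w_tx = math.exp(-gamma * POST_LIFE[h])        # transplant now
            p_die = 1.0 - sum(P[h].values())
            w_wait = math.exp(-gamma) * (                 # live one year, then...
                sum(P[h][h2] * W[h2] for h2 in HEALTH) + p_die)
            act[(t, h)] = "transplant" if w_tx <= w_wait else "wait"
            Wt[h] = min(w_tx, w_wait)
        W = Wt
    return act

risk_neutralish = optimal_policy(0.01)    # nearly maximizes life expectancy
risk_averse = optimal_policy(1.0)
```

On this toy instance the near-risk-neutral patient waits in the intermediate health state while the strongly risk-averse patient accepts immediately, mirroring the paper's point that risk aversion pushes transplantation earlier in sickness (i.e., at higher severity scores) than a pure life-expectancy criterion would.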
Result: We calibrated our model using data from national sources as well as a major liver transplant center. We considered 468 patients who received a living-donor liver transplant between 2001 and 2008. For each patient, we found the risk preferences under which her observed transplant decision is optimal. Only 61 patients exhibited risk-neutral behavior (indicating the transplant decision maximized life expectancy), and we were able to explain the observed decisions of 157 of the remaining 407 patients using a risk-averse exponential utility function. However, risk aversion fails to explain the transplant timing decisions in the remaining 250 cases; in other words, more than half of our cohort requires risk-seeking behavior to render the observed decision optimal. Furthermore, we conclude from our results that sicker patients tend to be more risk-averse, which leads them to transplant at a higher MELD score than a risk-neutral patient would under the same conditions.
Conclusion: Even though risk sensitivity may have a notable impact on liver acceptance decisions, it cannot fully explain patient behavior in the timing of living-donor liver transplants.
Purpose: To evaluate the impact of mass changes in social mixing patterns such as holiday traveling, public gatherings, or visiting medical providers on the course of an influenza pandemic in order to aid decision making on (i) whether to postpone/cancel mass gatherings or (ii) whether to open physically separate clinics for influenza treatment.
Method: We develop an agent-based model of influenza transmissions with daily social mixing through households, work or school groups, community locations, and temporary gathering sites that represent settings including mass gatherings, holiday travelling, and patients visiting hospitals/clinics with influenza-like-illness (ILI) symptoms. Using demographic data from the state of Georgia, we run experiments with different combinations of the length of the mass gathering or traveling period, the proportion of the population with changes in social mixing, the timing of providing clinics for persons with ILI and the likelihood of visiting hospitals/influenza clinics, and the reproductive number R0.
Result: Holiday traveling can lead to a second epidemic peak under certain scenarios. Mass gatherings that occur within 10 days before the epidemic peak can result in increases of up to 10% in the state-wide peak prevalence level and total attack rate, and both measures are 3-5% higher among the attendees and their family members than among others. Persons with ILI visiting hospitals may contribute to a 5-10% higher peak prevalence level by exposing uninfected persons concomitantly seeking care in hospitals. Opening physically separate influenza clinics to manage persons with ILI can reduce the peak prevalence level by 15% (when open throughout the epidemic) or 12% (when open 40 days before and after the epidemic peak), even when patients are more likely to visit clinics than hospitals; in both cases hospital-acquired infections were reduced by 45%.
Conclusion: Our results show that pandemic response should not necessarily be slowed even after a large decline in influenza activity if social mixing patterns have been changing. The temporal patterns suggest that postponing or cancelling large public gatherings may significantly reduce transmission of pandemic infections when they occur close to the epidemic peak, but much less when they occur more than 1 month from the peak. Creating physically separate ILI management clinics effectively reduces influenza transmission in the hospital setting and the prevalence level, even if the clinics are not open throughout the epidemic.
Purpose: To evaluate the impact of optimal treatment guidelines for the combined management of hyperlipidemia and hypertension for patients with type 2 diabetes.
Method: We developed a Markov decision process (MDP) model to determine the optimal start times for combined cholesterol and blood pressure medications for patients with type 2 diabetes. Health states were defined by cholesterol, blood pressure, hemoglobin A1c, and the static risk factors race, smoking status, and sex used by the United Kingdom Prospective Diabetes Study risk model. Transition probabilities and treatment effects were estimated from a longitudinal clinical dataset, and cost parameters were taken from secondary sources. The objective of the optimal guideline is to maximize expected benefit over the course of the patient’s lifetime. Benefit is defined as the gain in quality-adjusted life years (QALYs), valued at a societal willingness-to-pay factor, minus the costs of medication, treatment, and long-term care for CHD and stroke.
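The structure of such a treatment-timing MDP can be illustrated with a deliberately small toy: one medication, a single risk factor that grows with age, and a net-monetary-benefit objective (willingness-to-pay times QALYs minus costs). All numbers and the risk function are assumed for the example; this is not the authors' calibrated model. Backward induction over ages yields the optimal start age.

```python
WTP = 100_000          # $ per QALY (assumed societal willingness-to-pay)
DRUG_COST = 300        # annual medication cost (assumed)
EVENT_COST = 50_000    # cost of a CHD event (assumed)
EVENT_QALY_LOSS = 2.0  # QALYs lost per event (assumed)
RR = 0.7               # relative risk of an event on treatment (assumed)
HORIZON = 81           # plan annually from age 40 through 80

def risk(age):
    """Assumed annual CHD event probability: 0.2% at 40, growing 7%/year."""
    return 0.002 * 1.07 ** (age - 40)

def annual_nmb(age, treated):
    """Net monetary benefit of one year of life in the given treatment state."""
    r = risk(age) * (RR if treated else 1.0)
    cost = DRUG_COST if treated else 0.0
    return WTP * 1.0 - cost - r * (EVENT_COST + WTP * EVENT_QALY_LOSS)

# Backward induction; once medication starts it continues for life.
v_treated = 0.0        # value-to-go if already on medication
v_untreated = 0.0      # value-to-go with the start decision still open
start_age = None
for age in range(HORIZON - 1, 39, -1):
    v_treated = annual_nmb(age, True) + v_treated
    start = v_treated                              # start medication now
    defer = annual_nmb(age, False) + v_untreated   # stay off one more year
    if start >= defer:
        start_age = age                            # smallest age where starting wins
    v_untreated = max(start, defer)
```

Because the assumed risk grows monotonically with age, the optimal policy is a threshold: defer while the yearly expected harm averted by treatment is below the drug cost, and start once it exceeds it (here at age 51). The full model plays out this same tradeoff over a multidimensional health state and two medication classes.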
Result: We compute the optimal combined hypertension and hyperlipidemia treatment guidelines based on our MDP model. We compare the optimal guidelines to a simulation of a combination of the U.S. ATPIII and JNC7 guidelines to estimate the difference in expected QALYs and total discounted lifetime healthcare costs starting at age 40. For a willingness-to-pay of U.S. $100,000/QALY, we find that the optimal guidelines outperform the ATPIII and JNC7 guidelines by adding an average of 0.049 QALYs for male patients and 0.166 QALYs for female patients, and by lowering lifetime costs by an average of $8,844 for male patients and $7,223 for female patients. The optimal guidelines suggested that patients in all health states should initiate statins. However, initiating and intensifying hypertension treatment varied based on health state and sex. The U.S. guidelines called for more intensive treatment of high blood pressure, with a greater number of blood pressure medications recommended; in addition, initiation of medications occurred earlier in life than with the optimal guidelines.
Conclusion: Using the ATPIII and JNC7 guidelines results in fewer QALYs at higher costs for both males and females, compared to our optimal guidelines. The optimal guidelines would improve the efficiency of the U.S. guidelines, providing substantial gains in QALYs and savings in costs at the population level.