THE VALUE OF EVIDENCE: A RE-ANALYSIS OF THE USE OF STEROIDS IN HEAD INJURY

Monday, October 25, 2010
Sheraton Hall E/F (Sheraton Centre Toronto Hotel)
Susan Griffin, MSc, BSc, Karl Claxton, PhD, MSc, BA and Claire McKenna, PhD, University of York, York, United Kingdom

Purpose: We aimed to show how formal, quantitative assessments of cost-effectiveness and the value of further evidence can improve the process of research prioritisation by identifying research designs that offer the greatest improvement in health. 

Methods: Current methods for allocating funds between competing research proposals may be inefficient as they rely on informal, opaque assessments of value.  The original funding bid for the CRASH trial focussed on the disease burden of head injury, the uncertain benefit of steroids in preventing death and their highly variable use in practice.  We retrospectively designed a cost-effectiveness analysis that made explicit how the available evidence was used to estimate an impact on overall health, including the opportunity costs of devoting resources to providing steroids.  We then conducted a series of assessments of the value of further research and of the most valuable research design.
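
The kind of calculation this implies can be sketched briefly.  Net health benefit converts the incremental cost of a treatment into health forgone elsewhere via the cost-effectiveness threshold, so that the expected impact of providing steroids, opportunity costs included, is summarised in a single quantity.  The following is a minimal Python sketch; every parameter value in it is invented for illustration and none comes from the CRASH analysis.

    import numpy as np

    # Minimal sketch of a probabilistic net health benefit (NHB) calculation.
    # All parameter values are hypothetical, chosen only for illustration.
    rng = np.random.default_rng(1)
    n = 10_000                            # Monte Carlo samples
    k = 20_000                            # cost-effectiveness threshold, GBP per QALY

    d_qaly = rng.normal(-0.02, 0.05, n)   # uncertain QALY effect of steroids
    d_cost = rng.normal(150.0, 30.0, n)   # uncertain incremental cost, GBP

    # NHB = health gained minus health displaced elsewhere by the extra cost
    nhb = d_qaly - d_cost / k
    print(f"expected NHB per patient: {nhb.mean():.4f} QALYs")
    print(f"probability steroids are net beneficial: {(nhb > 0).mean():.2f}")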

Results: A meta-analysis in 1997 suggested that, on average, providing steroids would be life-saving, but uncertainty was high (the 95% confidence interval around the odds ratio for death spanned a 6% reduction to a 2% increase).  After incorporating additional evidence on length and quality of life, steroids were expected to reduce the number of healthy years lived, as more survivors were left severely disabled or in a vegetative state.  Incorporating opportunity costs indicated that steroids were expected to reduce net health benefits.  The maximum value of further evidence was high (£67m), as was the value of reducing practice variation (£125m).  To provide the greatest value, a research design should focus on the efficacy of steroids with endpoints of the number of patients left dead, severely disabled or in a vegetative state; an endpoint of death alone would not be sufficient.  The value of the CRASH design exceeded its costs (research funding and opportunity costs to recruited patients), assuming that the research would have an impact on practice.
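
The figures for the maximum value of further evidence and for reducing practice variation correspond to a population expected value of perfect information (EVPI) and a value of implementation, respectively.  A self-contained sketch of both calculations follows; the distributions, usage rate and population size are assumptions made for illustration, not figures from the analysis.

    import numpy as np

    # Hypothetical sketch of EVPI and value-of-implementation calculations.
    # All numbers are invented; none are the figures reported above.
    rng = np.random.default_rng(1)
    n, k = 10_000, 20_000                      # samples; GBP-per-QALY threshold
    d_qaly = rng.normal(-0.02, 0.05, n)        # uncertain QALY effect of steroids
    d_cost = rng.normal(150.0, 30.0, n)        # uncertain incremental cost, GBP
    nhb = np.column_stack([np.zeros(n), d_qaly - d_cost / k])  # [no steroids, steroids]

    # Per-patient EVPI: expected NHB if uncertainty were resolved before choosing,
    # minus the expected NHB of the best choice under current information.
    evpi_pp = nhb.max(axis=1).mean() - nhb.mean(axis=0).max()

    # Value of implementation: health gained if variable practice (assume 30%
    # steroid use) switched entirely to the option with highest expected NHB.
    p_use = 0.30
    current = p_use * nhb[:, 1].mean() + (1 - p_use) * nhb[:, 0].mean()
    impl_pp = nhb.mean(axis=0).max() - current

    pop = 500_000                              # assumed affected population
    print(f"population EVPI: GBP {evpi_pp * k * pop:,.0f}")
    print(f"population value of implementation: GBP {impl_pp * k * pop:,.0f}")

Comparing such population values against the cost of a proposed design (research funding plus opportunity costs to recruited patients), as was done above for CRASH, indicates whether the research is worthwhile.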

Conclusions: Using cost-effectiveness analysis to estimate the impact of research designs on overall health adds value to the research prioritisation process by enabling: (i) comparison of all competing proposals on the same basis; (ii) efficient, transparent and accountable allocation of funds; and (iii) optimisation of research design to ensure that further evidence directly addresses decision uncertainty.  However, the value of evidence depends on its translation into clinical practice; there may be methods of changing practice that are more cost-effective than large clinical trials.