
Monday, 16 October 2006


Kristel J.M. Janssen, MSc, K.G.M. Moons, PhD, D.E. Grobbee, PhD, MD, and Y. Vergouwe, PhD. University Medical Center Utrecht, Utrecht, Netherlands

Purpose: The performance of clinical prediction models is often poor in new patients. Many researchers are then tempted to develop a new model by re-estimating the regression coefficients in the new data, or by applying a new selection strategy, ignoring the information captured in the original model. We compared five updating methods that use information from both the original model and the new patients to update a previously developed prediction model that preoperatively predicts the risk of severe postoperative pain.

Methods: The prediction model was developed in a derivation set of 1944 surgical patients, selected in 1997 and 1998 in the Amsterdam Medical Center, the Netherlands. The model was validated and updated (n=752), and subsequently tested (n=283), in patients selected from another hospital (University Medical Center Utrecht) 6 years later (2004). Performance was assessed in terms of calibration and discrimination: calibration was assessed graphically with a calibration plot, and discrimination was quantified with the concordance (c-) statistic. Five updating methods were compared. The first two were simple recalibration methods that left the relative effects of the regression coefficients of the original model unchanged. In method 1, only the model intercept was adjusted. In method 2, the model intercept was adjusted and the regression coefficients were multiplied by the calibration slope. Method 3 tested whether the effects of the predictors differed in the updating set. In methods 4 and 5, the intercept and the regression coefficients of all predictors were re-estimated, in the updating set alone (method 4) or in the combined derivation and updating set (method 5).

Results: The original model had a c-statistic of 0.65 and showed good calibration in the derivation set. Calibration of this model in the validation set was poor but improved substantially with all updating methods, in both the updating and test sets. The c-statistic did not improve with recalibration methods 1 and 2, since the ranking of the predictions was unchanged. Updating methods 3, 4 and 5 did increase the c-statistic (to 0.70, 0.71 and 0.72, respectively), but this increase did not hold in the test set, where the c-statistic decreased to 0.66.

Conclusion: When the performance of a prediction model is poor in new patients, a simple updating method may be sufficient to improve the calibration.
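The two simple recalibration methods can be illustrated with a minimal sketch. It assumes the original model supplies a linear predictor (lp) for each new patient; method 1 then refits only the intercept, while method 2 refits the intercept together with the calibration slope, each by maximizing the logistic log-likelihood. The simulated data, function names, and plain gradient-descent fit below are hypothetical illustrations, not the authors' implementation.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def recalibrate(lp, y, fit_slope=False, iters=1500, lr=1.0):
    """Update a prediction model in new patients.

    Fits logit(p) = a + b * lp by gradient descent on the logistic
    log-likelihood, where lp is the linear predictor of the original model.
    Method 1: b is fixed at 1, only the intercept a is adjusted.
    Method 2: both the intercept a and the calibration slope b are fitted.
    """
    a, b = 0.0, 1.0
    n = len(lp)
    for _ in range(iters):
        grad_a = grad_b = 0.0
        for x, t in zip(lp, y):
            p = sigmoid(a + b * x)
            grad_a += p - t
            grad_b += (p - t) * x
        a -= lr * grad_a / n
        if fit_slope:
            b -= lr * grad_b / n
    return a, b

# Hypothetical updating set: the original linear predictor lp is
# miscalibrated in the new population (true logit = -1.0 + 0.8 * lp).
random.seed(0)
lp = [random.uniform(-2, 2) for _ in range(2000)]
y = [1 if random.random() < sigmoid(-1.0 + 0.8 * x) else 0 for x in lp]

a1, _ = recalibrate(lp, y)                   # method 1: intercept only
a2, b2 = recalibrate(lp, y, fit_slope=True)  # method 2: intercept + slope

# After an intercept-only update, the mean predicted risk matches the
# observed event rate (calibration-in-the-large is restored).
mean_pred = sum(sigmoid(a1 + x) for x in lp) / len(lp)
event_rate = sum(y) / len(y)
```

Because both methods only shift and rescale the linear predictor, the ranking of the predictions is preserved, which is why the c-statistic cannot improve under methods 1 and 2.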

Poster Session II, The 28th Annual Meeting of the Society for Medical Decision Making (October 15-18, 2006)