

Poster

Online Calibrated and Conformal Prediction Improves Bayesian Optimization

Shachi Shailesh Deshpande · Charles Marx · Volodymyr Kuleshov

MR1 & MR2 - Number 41
Thu 2 May 8 a.m. PDT — 8:30 a.m. PDT

Abstract:

Accurate uncertainty estimates are important in sequential model-based decision-making tasks such as Bayesian optimization. However, these estimates can be imperfect if the data violates assumptions made by the model (e.g., Gaussianity). This paper studies which uncertainties are needed in model-based decision-making and in Bayesian optimization, and argues that uncertainties can benefit from calibration—i.e., an 80% predictive interval should contain the true outcome 80% of the time. Maintaining calibration, however, can be challenging when the data is non-stationary and depends on our actions. We propose using simple algorithms based on online learning to provably maintain calibration on non-i.i.d. data, and we show how to integrate these algorithms in Bayesian optimization with minimal overhead. Empirically, we find that calibrated Bayesian optimization converges to better optima in fewer steps, and we demonstrate improved performance on standard benchmark functions and hyperparameter optimization tasks.
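The sketch below illustrates the general idea of quantile recalibration inside a Bayesian optimization loop, not the authors' algorithm: a surrogate's nominal quantiles are remapped using the empirical distribution of probability integral transform (PIT) values on past observations, so that an 80% interval empirically covers about 80% of outcomes. The scikit-learn surrogate, the batch (rather than online) recalibrator, and helper names like `recalibrated_interval` are all assumptions made for illustration.

```python
# A minimal sketch (not the paper's implementation) of recalibrating a GP
# surrogate's predictive intervals and using them in a UCB-style BO step.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def pit_values(model, X, y):
    """PIT values F(y_i) under the model's Gaussian posterior at each past point."""
    mu, sigma = model.predict(X, return_std=True)
    return norm.cdf(y, loc=mu, scale=np.maximum(sigma, 1e-9))

def recalibrated_interval(model, x, X_past, y_past, coverage=0.8):
    """Replace nominal quantile levels with empirical quantiles of the PIT values,
    so the returned interval has roughly `coverage` empirical coverage."""
    pits = pit_values(model, X_past, y_past)
    lo_q = np.quantile(pits, (1 - coverage) / 2)  # calibrated level for nominal 0.1
    hi_q = np.quantile(pits, (1 + coverage) / 2)  # calibrated level for nominal 0.9
    mu, sigma = model.predict(x.reshape(1, -1), return_std=True)
    return (norm.ppf(lo_q, loc=mu, scale=sigma)[0],
            norm.ppf(hi_q, loc=mu, scale=sigma)[0])

# Toy BO step: query where the calibrated upper confidence bound is highest.
rng = np.random.default_rng(0)
f = lambda x: -np.sin(3 * x) - x**2 + 0.7 * x          # unknown objective
X = rng.uniform(-1.0, 2.0, size=(8, 1)); y = f(X).ravel()
gp = GaussianProcessRegressor().fit(X, y)
candidates = np.linspace(-1.0, 2.0, 200)
ucb = [recalibrated_interval(gp, np.array([c]), X, y)[1] for c in candidates]
x_next = candidates[int(np.argmax(ucb))]
print(f"next query point: {x_next:.3f}")
```

The paper's contribution is to maintain this kind of calibration *online*, with guarantees under non-i.i.d. data gathered by the optimizer itself; the batch empirical-quantile map above only conveys the mechanics of recalibrated intervals.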
