

Poster

Bayesian Active Learning by Soft Mean Objective Cost of Uncertainty

Guang Zhao · Edward Dougherty · Byung-Jun Yoon · Francis J. Alexander · Xiaoning Qian

Virtual

Keywords: [ Active Learning ] [ Learning Theory and Statistics ]


Abstract:

To achieve label efficiency when training supervised learning models, pool-based active learning sequentially selects samples from a set of candidates as queries to label by optimizing an acquisition function. One category of existing methods adopts one-step-look-ahead strategies with acquisition functions tailored to the learning objective, for example the expected loss reduction (ELR) or the recently proposed mean objective cost of uncertainty (MOCU). These active learning methods are optimal in the sense of maximum classification error reduction when a single query is considered. However, it is well known that such myopic methods carry no long-run performance guarantee. In this paper, we show that these methods are not guaranteed to converge to the optimal classifier of the true model because MOCU is not strictly concave. Moreover, we propose a strictly concave approximation of MOCU---Soft MOCU---that can be used to define an acquisition function guiding Bayesian active learning with a theoretical convergence guarantee. Our experiments on training Bayesian classifiers with both synthetic and real-world data demonstrate the superior performance of active learning by Soft MOCU compared to other existing methods.
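For intuition, here is a rough sketch of the quantities involved; the notation ($\pi$ for the posterior over model parameters $\theta$, $p_\theta(y\mid x)$ for class probabilities, and a softening parameter $k$) is ours and not taken verbatim from the paper. For classification, MOCU measures the expected accuracy gap between the best classifier if $\theta$ were known and the optimal Bayesian classifier:

$$\mathrm{MOCU}(\pi) \;=\; \mathbb{E}_{x}\Big[\, \mathbb{E}_{\theta\sim\pi}\big[\max_{y} p_\theta(y\mid x)\big] \;-\; \max_{y}\, \mathbb{E}_{\theta\sim\pi}\big[p_\theta(y\mid x)\big] \Big].$$

The hard $\max_y$ is piecewise linear in $\pi$, so MOCU is concave but not strictly concave. One natural softening, shown purely as an illustration of the idea (the paper's exact Soft MOCU construction may differ in detail), replaces the hard maximum with a Boltzmann-weighted soft maximum

$$\mathrm{softmax}_k(a) \;=\; \frac{\sum_y a_y\, e^{k a_y}}{\sum_y e^{k a_y}}, \qquad \mathrm{softmax}_k(a) \to \max_y a_y \ \text{as } k \to \infty,$$

which is smooth, never exceeds the hard maximum, and recovers MOCU in the limit $k \to \infty$.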
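Below is a minimal runnable sketch of the resulting one-step-look-ahead acquisition, under strong simplifying assumptions: a finite set of candidate models with a discrete posterior, a fixed candidate pool, and the illustrative Boltzmann softening above. Names such as soft_mocu and acquisition are ours, not from the paper's code.

```python
import numpy as np

def soft_max(a, k=10.0):
    # Boltzmann-weighted soft maximum: never exceeds max(a), -> max(a) as k -> inf.
    # (Illustrative softening; the paper's exact definition may differ.)
    w = np.exp(k * (a - a.max()))                      # shift for numerical stability
    return float((a * w).sum() / w.sum())

def soft_mocu(post, pred, k=10.0):
    # post: (m,) posterior weights over candidate models theta_1..theta_m
    # pred: (m, n, c) class probabilities p_theta(y | x) for n pool points, c classes
    # Pool average of: E_theta[max_y p_theta(y|x)] - soft_max_y E_theta[p_theta(y|x)].
    best_per_model = pred.max(axis=2)                  # (m, n)
    term1 = (post[:, None] * best_per_model).sum(0)    # (n,) accuracy if theta known
    mixture = np.einsum('m,mnc->nc', post, pred)       # (n, c) posterior predictive
    term2 = np.array([soft_max(mixture[i], k) for i in range(mixture.shape[0])])
    return float((term1 - term2).mean())

def acquisition(post, pred, x_idx, k=10.0):
    # One-step look-ahead: expected Soft-MOCU reduction from labeling pool point x_idx.
    mixture_x = np.einsum('m,mc->c', post, pred[:, x_idx, :])  # predictive label dist.
    current = soft_mocu(post, pred, k)
    expected_after = 0.0
    for y, py in enumerate(mixture_x):
        new_post = post * pred[:, x_idx, y]            # Bayes update for outcome y
        new_post /= new_post.sum()
        expected_after += py * soft_mocu(new_post, pred, k)
    return current - expected_after

# Toy usage: query the pool point with the largest expected Soft-MOCU reduction.
rng = np.random.default_rng(0)
m, n, c = 5, 20, 3
pred = rng.dirichlet(np.ones(c), size=(m, n))          # (m, n, c) random toy models
post = np.full(m, 1.0 / m)                             # uniform prior over models
best = max(range(n), key=lambda i: acquisition(post, pred, i))
print("query pool index:", best)
```

The acquisition scores each pool point by the expected drop in Soft MOCU after a Bayes update on its (still unknown) label, and the next query is the point with the largest expected drop.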
