

Poster

Feedback Coding for Active Learning

Gregory Canal · Matthieu Bloch · Christopher Rozell

Virtual

Keywords: [ Applications ] [ Computer Vision ] [ Algorithms -> Semi-Supervised Learning; Deep Learning -> Optimization for Deep Networks; Optimization ] [ Combinatorial Optimization ] [ Active Learning ] [ Learning Theory and Statistics ]


Abstract:

The iterative selection of examples for labeling in active machine learning is conceptually similar to feedback channel coding in information theory: in both tasks, the objective is to seek a minimal sequence of actions to encode information in the presence of noise. While this high-level overlap has been previously noted, there remain open questions on how to best formulate active learning as a communications system to leverage existing analysis and algorithms in feedback coding. In this work, we formally identify and leverage the structural commonalities between the two problems, including the characterization of encoder and noisy channel components, to design a new algorithm. Specifically, we develop an optimal transport-based feedback coding scheme called Approximate Posterior Matching (APM) for the task of active example selection and explore its application to Bayesian logistic regression, a popular model in active learning. We evaluate APM on a variety of datasets and demonstrate learning performance comparable to existing active learning methods, at a reduced computational cost. These results demonstrate the potential of directly deploying concepts from feedback channel coding to design efficient active learning strategies.
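As a rough illustration of the selection principle described above (not the paper's APM scheme), the sketch below pairs a MAP-estimated Bayesian logistic regression model with a simple query rule that picks the unlabeled example whose predicted label probability is closest to 0.5, i.e., where the labeler's response is least predictable. The function names, the gradient-descent fit, and the 0.5-threshold rule are illustrative assumptions standing in for the optimal transport-based encoder developed in the paper.

```python
# Hypothetical sketch (not the paper's APM algorithm): active example selection
# for Bayesian logistic regression, where each query is chosen so that the
# label response is maximally uncertain under the current model -- a loose
# analogue of "transmitting" where the noisy channel (the labeler) is least
# predictable.
import numpy as np

def fit_logistic_map(X, y, n_iters=200, lr=0.1, reg=1.0):
    """MAP estimate of logistic regression weights via gradient descent
    with a Gaussian (L2) prior; stands in for a full Bayesian posterior."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted label probabilities
        grad = X.T @ (p - y) + reg * w            # negative log-posterior gradient
        w -= lr * grad / len(y)
    return w

def select_query(w, X_pool):
    """Return the index of the pool example whose predicted label
    probability is closest to 0.5 (maximal response uncertainty)."""
    p = 1.0 / (1.0 + np.exp(-X_pool @ w))
    return int(np.argmin(np.abs(p - 0.5)))

# Toy usage: a small labeled seed set plus an unlabeled pool.
rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(10, 2))
y_labeled = (X_labeled[:, 0] > 0).astype(float)
X_pool = rng.normal(size=(100, 2))

w = fit_logistic_map(X_labeled, y_labeled)
idx = select_query(w, X_pool)
print("next example to label:", idx, X_pool[idx])
```

In the paper's framing, this selection step plays the role of the encoder in a feedback communication system; APM replaces the simple 0.5-proximity rule with an optimal transport-based posterior matching criterion.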
