Poster
Mediated Uncoupled Learning and Validation with Bregman Divergences: Loss Family with Maximal Generality
Ikko Yamane · Yann Chevaleyre · Takashi Ishida · Florian Yger
Auditorium 1 Foyer 67
Abstract:
In \emph{mediated uncoupled learning} (MU-learning), the goal is to predict an output variable $Y$ given an input variable $X$ as in ordinary supervised learning, while the training dataset has no joint samples of $(X, Y)$ but only independent samples of $X$ and $Y$, each observed with a \emph{mediating} variable $U$. The existing MU-learning methods can only handle the squared loss, which prohibits the use of other popular loss functions such as the cross-entropy loss. We propose a general MU-learning framework that handles, in a unified manner, losses given by Bregman divergences, which cover a wide range of loss functions useful for various types of tasks. This loss family has \emph{maximal generality} among those whose minimizers characterize the conditional expectation. We prove that the proposed objective function is a tighter approximation to the oracle loss that one would minimize if ordinary supervised samples of $(X, Y)$ were available. We also propose an estimator of an interval containing the expected test loss of a trained model's predictions, using only $(X, U)$- and $(U, Y)$-data. We provide a theoretical analysis of the excess risk for the proposed method and confirm its practical usefulness with regression experiments on synthetic data and low-quality image classification experiments on benchmark datasets.
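As a quick reference for the loss family in question (standard definitions, not quoted from the paper): a differentiable, strictly convex function $f$ generates the Bregman divergence
\[
  D_f(a, b) = f(a) - f(b) - \langle \nabla f(b),\, a - b \rangle.
\]
Taking $f(z) = \tfrac{1}{2}\|z\|_2^2$ recovers the squared loss $\tfrac{1}{2}\|a - b\|_2^2$ handled by earlier MU-learning methods, while the negative entropy $f(z) = \sum_i z_i \log z_i$ yields the generalized Kullback-Leibler divergence, which underlies the cross-entropy loss.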