

Poster

Mediated Uncoupled Learning and Validation with Bregman Divergences: Loss Family with Maximal Generality

Ikko Yamane · Yann Chevaleyre · Takashi Ishida · Florian Yger

Auditorium 1 Foyer 67

Abstract: In \emph{mediated uncoupled learning} (MU-learning), the goal is to predict an output variable $Y$ given an input variable $X$ as in ordinary supervised learning, while the training data contain no joint samples of $(X, Y)$ but only independent samples of $(X, U)$ and $(U, Y)$, each observed together with a \emph{mediating} variable $U$. Existing MU-learning methods can only handle the squared loss, which prohibits the use of other popular loss functions such as the cross-entropy loss. We propose a general MU-learning framework that handles problems with Bregman divergences, which cover a wide range of loss functions useful for various types of tasks, in a unified manner. This loss family has \emph{maximal generality} among those whose minimizers characterize the conditional expectation. We prove that the proposed objective function is a tighter approximation to the oracle loss that one would minimize if ordinary supervised samples of $(X, Y)$ were available. We also propose an estimator of an interval containing the expected test loss of a trained model's predictions, using only $(X, U)$- and $(U, Y)$-data. We provide a theoretical analysis of the excess risk for the proposed method and confirm its practical usefulness with regression experiments on synthetic data and low-quality image classification experiments on benchmark datasets.
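
For reference, the following is standard background on Bregman divergences (not taken from the poster itself): the Bregman divergence generated by a differentiable, strictly convex function $\phi$ is
\[
D_\phi(a, b) = \phi(a) - \phi(b) - \langle \nabla\phi(b),\, a - b \rangle,
\]
and choosing $\phi(z) = \|z\|_2^2$ recovers the squared loss $D_\phi(a, b) = \|a - b\|_2^2$, while the negative entropy $\phi(p) = \sum_i p_i \log p_i$ yields the KL divergence, whose minimization over predicted probabilities coincides with minimizing the cross-entropy loss. The connection to the conditional expectation mentioned in the abstract is the classical fact that $\arg\min_{c} \mathbb{E}[D_\phi(Y, c)] = \mathbb{E}[Y]$ for any such $\phi$.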
