
Poster

Warping Layer: Representation Learning for Label Structures in Weakly Supervised Learning

Yingyi Ma · Xinhua Zhang


Abstract:

Many learning tasks, such as semi-supervised learning and few-shot learning, receive only weak supervision. With limited labeled data, prior structures become especially important; prominent examples include hierarchies and mutual exclusions in the class space. However, most existing approaches learn the representations \emph{separately} in the feature space and the label space, and do not explicitly enforce the logical relationships between classes. In this paper, we propose a novel warping layer that \emph{jointly} learns representations in \emph{both} spaces, and thanks to its modularity and differentiability, it can be directly embedded into generative models to leverage the prior hierarchical structure and unlabeled data. The effectiveness of the warping layer is demonstrated on both few-shot and semi-supervised learning, outperforming the state of the art in practice.
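To make the idea of a differentiable layer that enforces hierarchical label structure concrete, here is a minimal illustrative sketch (not the authors' warping layer; the function name, parent-array encoding, and softmax parameterization are assumptions for illustration). Each class probability is its parent's probability multiplied by a conditional softmax over siblings, so the hierarchy constraint P(child) ≤ P(parent) and mutual exclusion among siblings hold by construction, and the whole mapping stays differentiable in the raw scores.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of scores.
    e = np.exp(x - x.max())
    return e / e.sum()

def hierarchical_probs(scores, parent):
    """Map raw class scores to probabilities respecting a label hierarchy.

    parent[i] is the parent index of class i (-1 for root classes);
    parents are assumed to precede their children in index order.
    Each node gets P(parent) * conditional-softmax over its siblings,
    so P(child) <= P(parent) and siblings are mutually exclusive.
    Illustrative sketch only -- not the paper's warping layer.
    """
    n = len(scores)
    probs = np.zeros(n)
    # Group sibling classes by their shared parent.
    children = {}
    for i, p in enumerate(parent):
        children.setdefault(p, []).append(i)
    # Process groups top-down (parent index < child index by assumption).
    for p in sorted(children):
        kids = children[p]
        cond = softmax(scores[kids])
        mass = 1.0 if p == -1 else probs[p]
        for k, c in zip(kids, cond):
            probs[k] = mass * c
    return probs
```

With two root classes each having two children (e.g. `parent = [-1, -1, 0, 0, 1, 1]`) and uniform scores, the roots each receive probability 0.5 and each leaf 0.25, so leaf probabilities never exceed their parent's.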
