
Poster

Domain Adaptation and Entanglement: an Optimal Transport Perspective

Danqi Liao · Alexander Soen · Chao-Kai Chiang · Masashi Sugiyama


Abstract:

Current machine learning systems are brittle in the face of distribution shifts (DS), where the target distribution on which the system is tested differs from the source distribution used to train it. Robustness to DS has been studied extensively in the field of domain adaptation. For deep neural networks, popular methods for unsupervised domain adaptation (UDA) are domain matching methods, which try to align the marginal distributions in the feature or output space. The current theoretical understanding of these methods, however, is limited, and existing theoretical frameworks are not precise enough to characterize their performance in practice. To this end, we derive new bounds based on optimal transport that analyze the UDA problem. Our new bound includes a term which we dub entanglement, consisting of an expectation of Wasserstein distances between conditional distributions with respect to changing data distributions. Analysis of the entanglement term provides a novel perspective on the unoptimizable aspects of UDA. In various experiments with multiple models across several DS scenarios, we show that this term can be used to explain the varying performance of UDA algorithms.
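For concreteness, one plausible form of such an entanglement term, sketched only from the abstract's description and not necessarily the paper's exact definition (the symbols \gamma^{\star}, W_1, and the conditional distributions below are assumptions), is

\[
  \mathrm{Ent}(p_S, p_T) \;=\; \mathbb{E}_{(z, z') \sim \gamma^{\star}}\!\left[ W_1\big( p_S(y \mid z),\; p_T(y \mid z') \big) \right],
\]

where \gamma^{\star} is an optimal transport plan coupling the source and target feature marginals, and p_S(y \mid z), p_T(y \mid z') are the corresponding conditional label distributions. Read this way, even a perfect match of the feature marginals can pair points whose conditional label distributions disagree; that residual mismatch is not reduced by marginal alignment alone, which is consistent with the abstract's framing of entanglement as an unoptimizable aspect of UDA.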