

Poster

Statistical Guarantees for Unpaired Image-to-Image Cross-Domain Analysis using GANs

Raaz Dwivedi · Michael Jordan


Abstract: The field of unpaired image-to-image translation has undergone a significant transformation with the introduction of Generative Adversarial Networks (GANs), with CycleGAN and DiscoGAN as prominent variants. While these models show impressive empirical performance, their statistical properties are under-studied. In this paper, we propose a framework for analyzing the generalization error in cross-domain deep generative models. Our findings reveal that, given independent and identically distributed (i.i.d.) samples from two domains, the translation error, measured under the Wasserstein-1 loss, scales as Õ(min(n, m)^(-1/max(d, d̃))), provided that the true model is sufficiently smooth and the network sizes are chosen appropriately. Here, n and m denote the sizes of the two sample sets, while d and d̃ denote the dimensions of the respective data domains. Furthermore, we highlight the importance of a cycle loss term for ensuring distributional cycle consistency. Additionally, we provide insights into the relationship between network size and the number of data points: notably, the smoother the true model, the smaller the networks that suffice.
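To make the stated rate concrete, the following sketch evaluates the dominant term of the bound, Õ(min(n, m)^(-1/max(d, d̃))), for illustrative sample sizes. The function name and the specific numbers are hypothetical; logarithmic factors and constants hidden by the Õ notation are omitted.

```python
# Hypothetical illustration of the abstract's rate: the Wasserstein-1
# translation error scales as O~(min(n, m)^(-1/max(d, d_tilde))),
# up to logarithmic factors and constants (both omitted here).
def rate_bound(n: int, m: int, d: int, d_tilde: int) -> float:
    """Dominant term of the generalization-error rate (constants dropped)."""
    return min(n, m) ** (-1.0 / max(d, d_tilde))

# The bound is governed by the smaller sample set and the larger
# domain dimension: enlarging only the bigger sample does not help.
r_base = rate_bound(1_000, 10_000, d=3, d_tilde=2)      # limited by n = 1000
r_more = rate_bound(100_000, 10_000, d=3, d_tilde=2)    # now limited by m = 10000
```

As the sketch shows, increasing the smaller sample shrinks the bound, while the exponent 1/max(d, d̃) captures the curse of dimensionality in the higher-dimensional domain.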
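The cycle loss term mentioned above can be sketched as follows; this is a minimal illustration in the style of CycleGAN's cycle-consistency penalty, not the paper's exact objective, and the function names and toy maps are assumptions.

```python
import numpy as np

# Hypothetical sketch of a cycle-consistency term: for translators
# g: X -> Y and f: Y -> X, penalize f(g(x)) deviating from x and,
# symmetrically, g(f(y)) deviating from y (L1 penalty, as in CycleGAN).
def cycle_loss(x, y, g, f):
    return (np.mean(np.abs(f(g(x)) - x)) +
            np.mean(np.abs(g(f(y)) - y)))

# With translators that are exact inverses, the cycle loss vanishes:
g = lambda x: 2.0 * x
f = lambda y: 0.5 * y
x = np.random.randn(8, 3)
y = np.random.randn(8, 3)
loss = cycle_loss(x, y, g, f)  # 0.0 for this inverse pair
```

Driving this term to zero forces the two translators toward mutual inverses, which is the mechanism behind the distributional cycle consistency discussed in the abstract.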
