Adversarial robustness of VAEs through the lens of local geometry
Asif Khan · Amos Storkey
2023 Poster
Abstract:
In an unsupervised attack on variational autoencoders (VAEs), an adversary finds a small perturbation of an input sample that significantly changes its latent encoding, thereby compromising the reconstruction for a fixed decoder. A known cause of this vulnerability is distortion in the latent space resulting from a mismatch between the approximate latent posterior and the prior distribution. Consequently, a slight change in an input sample can move its encoding to a low- or zero-density region of the latent space, resulting in unconstrained generation. This paper demonstrates that an optimal way for an adversary to attack VAEs is to exploit the directional bias of the stochastic pullback metric tensor induced by the encoder and decoder networks. The pullback metric tensor of an encoder measures the change in infinitesimal volume from the input space to the latent space; it can therefore be viewed as a lens for analysing how input perturbations lead to latent space distortions. We propose robustness evaluation scores based on the eigenspectrum of the pullback metric tensor, and we empirically show that these scores correlate with the robustness parameter $\beta$ of the $\beta$-VAE. Since increasing $\beta$ also compromises reconstruction quality, we demonstrate a simple strategy using \textit{mixup} training to fill the empty regions in the latent space, improving robustness while also improving reconstruction.
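The directional bias described above can be illustrated with a minimal sketch. Assuming a toy linear stand-in for the encoder mean network (a hypothetical map $f(x) = Wx$, so its Jacobian is simply $W$), the pullback metric $G = J^\top J$ tells us how a unit input perturbation moves the latent code, and its top eigenvector is the most damaging attack direction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a VAE encoder mean network: a fixed linear
# map f(x) = W x from a 4-D input to a 2-D latent code. For this toy
# choice the Jacobian J of f is W everywhere; for a real network one
# would compute J(x) by automatic differentiation at each input x.
W = rng.normal(size=(2, 4))

def pullback_metric(J):
    # Pullback metric G = J^T J: for an infinitesimal input
    # perturbation dx, the latent displacement satisfies
    # |dz|^2 = dx^T G dx, so G measures latent-space stretching.
    return J.T @ J

G = pullback_metric(W)
eigvals, eigvecs = np.linalg.eigh(G)  # eigenvalues in ascending order

# The eigenvector of the largest eigenvalue is the input direction an
# adversary would exploit: a unit-norm perturbation along it yields the
# largest latent displacement, sqrt(max eigenvalue).
worst_dir = eigvecs[:, -1]
print("eigenspectrum of G:", eigvals)
print("max latent displacement per unit input norm:", np.sqrt(eigvals[-1]))
```

A score such as the largest eigenvalue (or the spread of the eigenspectrum) then quantifies how anisotropically the encoder stretches input perturbations, in the spirit of the robustness scores proposed in the abstract.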