Noise Regularizes Over-parameterized Rank One Matrix Recovery, Provably

Tianyi Liu · Yan Li · Enlu Zhou · Tuo Zhao

Mon 28 Mar 4:30 a.m. PDT — 6 a.m. PDT
Oral presentation: Oral 1: Learning theory / General ML
Mon 28 Mar 1:30 a.m. PDT — 2:30 a.m. PDT

Abstract: We investigate the role of noise in optimization algorithms for learning over-parameterized models. Specifically, we consider the recovery of a rank one matrix $Y^*\in \mathbb{R}^{d\times d}$ from a noisy observation $Y$ using an over-parameterized model: we parameterize the rank one matrix $Y^*$ by $XX^\top$, where $X\in \mathbb{R}^{d\times d}$. We then show that, under mild conditions, the estimator obtained by randomly perturbed gradient descent on the square loss attains a mean square error of $O(\sigma^2/d)$, where $\sigma^2$ is the variance of the observational noise. In contrast, the estimator obtained by gradient descent without random perturbation only attains a mean square error of $O(\sigma^2)$. Our result partially justifies the implicit regularization effect of noise when learning over-parameterized models, and provides new understanding of training over-parameterized neural networks.
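The setting in the abstract can be sketched numerically: form a noisy observation $Y = Y^* + \text{noise}$, parameterize the estimate as $XX^\top$ with a full $d\times d$ factor $X$, and run gradient descent on the square loss with an added random perturbation at each step. The sketch below is a minimal illustration only; the dimension, noise level, step size, initialization scale, and perturbation scale are all hypothetical choices, not the paper's, and the symmetrization of $Y$ is an assumption made to simplify the gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50        # problem dimension (illustrative choice)
sigma = 0.1   # observational noise level (illustrative choice)

# Ground-truth rank one matrix Y* = u u^T with unit-norm u (hypothetical setup).
u = rng.standard_normal(d)
u /= np.linalg.norm(u)
Y_star = np.outer(u, u)

# Noisy observation; symmetrizing is an assumption that simplifies the gradient.
Y = Y_star + sigma * rng.standard_normal((d, d))
Y = (Y + Y.T) / 2

def loss_and_grad(X, Y):
    # Square loss f(X) = ||X X^T - Y||_F^2; for symmetric Y its gradient
    # is 4 (X X^T - Y) X.
    R = X @ X.T - Y
    return np.linalg.norm(R, "fro") ** 2, 4 * R @ X

def perturbed_gd(Y, steps=2000, lr=0.01, perturb=1e-3):
    # Gradient descent with an isotropic Gaussian perturbation injected at
    # every iterate (scale `perturb` is an illustrative assumption).
    X = 1e-2 * rng.standard_normal((d, d))  # small random initialization
    for _ in range(steps):
        _, g = loss_and_grad(X, Y)
        X = X - lr * g + perturb * rng.standard_normal((d, d))
    return X

X_hat = perturbed_gd(Y)
# Per-entry recovery error against the noiseless target Y*.
mse = np.linalg.norm(X_hat @ X_hat.T - Y_star, "fro") ** 2 / d**2
```

Comparing `perturbed_gd` against the same loop with `perturb=0` is the experiment the abstract's theory speaks to: the perturbation is claimed to bias the over-parameterized iterates away from fitting the noise, improving the error from $O(\sigma^2)$ to $O(\sigma^2/d)$ (in the paper's asymptotic sense, not necessarily visible at these toy settings).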
