

The effect of Leaky ReLUs on the training and generalization of overparameterized networks

Yinglong Guo · Shaohan Li · Gilad Lerman

MR1 & MR2 - Number 111
Sat 4 May 6 a.m. PDT — 8:30 a.m. PDT

Abstract: We investigate the training and generalization errors of overparameterized neural networks (NNs) with a wide class of leaky rectified linear unit (leaky ReLU) activation functions. More specifically, we carefully upper bound both the convergence rate of the training error and the generalization error of such NNs, and we investigate how these bounds depend on the leaky ReLU parameter $\alpha$. We show that $\alpha = -1$, which corresponds to the absolute value activation function, is optimal for the training error bound. Furthermore, in special settings, it is also optimal for the generalization error bound. Numerical experiments support the practical choices guided by the theory.
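For reference, a common parameterization of this activation family (our notation; the page itself does not define it) is $$\sigma_\alpha(x) = \begin{cases} x, & x \ge 0, \\ \alpha x, & x < 0, \end{cases}$$ so that $\alpha = 1$ recovers the identity, $\alpha = 0$ the standard ReLU, and $\alpha = -1$ the absolute value function $\sigma_{-1}(x) = |x|$ highlighted in the abstract.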
