

Poster

Understanding the Generalization Benefits of Late Learning Rate Decay

Yinuo Ren · Chao Ma · Lexing Ying

Multipurpose Room 2 - Number 113

Abstract:

Why does training neural networks with large learning rates for longer often lead to better generalization? In this paper, we delve into this question by examining the relation between training and testing loss in neural networks. Through visualization of these losses, we note that the training trajectory with a large learning rate navigates through the minima manifold of the training loss before finally approaching the neighborhood of the testing loss minimum. Motivated by these findings, we introduce a nonlinear model whose loss landscape mirrors those observed for real neural networks. By investigating SGD training on our model, we demonstrate that an extended phase with a large learning rate steers the model towards the minimum-norm solution of the training loss, which may achieve near-optimal generalization, thereby affirming the empirically observed benefits of late learning rate decay.
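The schedule the abstract refers to, keeping the learning rate large for most of training and decaying it only near the end, can be sketched in a few lines. The snippet below is a minimal illustration (not the authors' code): it runs SGD on a toy over-parameterized least-squares problem, which stands in for the paper's nonlinear model, and compares a late-decay schedule against an early-decay baseline while tracking the distance to the minimum-norm interpolating solution. All step sizes, decay points, and problem dimensions are illustrative assumptions.

```python
# Minimal sketch of "late learning-rate decay" with SGD (illustrative only).
# The over-parameterized least-squares problem is a stand-in for the paper's
# nonlinear model; schedules and hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Over-parameterized regression: more parameters (d) than samples (n),
# so the training loss has a whole manifold of zero-loss minima.
n, d = 20, 100
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d)

def sgd(lr_schedule, steps=5000, batch=5):
    """Run mini-batch SGD on 0.5 * ||Xw - y||^2 / batch with a given LR schedule."""
    w = np.zeros(d)
    for t in range(steps):
        idx = rng.choice(n, size=batch, replace=False)
        grad = X[idx].T @ (X[idx] @ w - y) / batch  # mini-batch gradient
        w -= lr_schedule(t, steps) * grad
    return w

# Late decay: keep the large step size for 90% of training, then shrink it.
late_decay = lambda t, T: 0.02 if t < 0.9 * T else 0.002
# Early-decay baseline: shrink the step size after only 10% of training.
early_decay = lambda t, T: 0.02 if t < 0.1 * T else 0.002

w_min_norm = np.linalg.pinv(X) @ y  # minimum-norm interpolating solution

for name, schedule in [("late decay", late_decay), ("early decay", early_decay)]:
    w = sgd(schedule)
    print(f"{name:12s} train loss {np.mean((X @ w - y) ** 2):.2e} "
          f"||w|| {np.linalg.norm(w):.3f} "
          f"||w - w_min_norm|| {np.linalg.norm(w - w_min_norm):.3f}")
```

In the paper's setting, the prolonged large-learning-rate phase is what drives the iterates along the minima manifold towards the minimum-norm solution; the toy quadratic above only illustrates the schedule and the quantities one would monitor, not the full mechanism.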
