

Poster

LOFT: Finding Lottery Tickets through Filter-wise Training

Qihan Wang · Chen Dun · Fangshuo Liao · Chris Jermaine · Anastasios Kyrillidis

Auditorium 1 Foyer 20

Abstract:

Recent work on the Lottery Ticket Hypothesis (LTH) shows that there exist "winning tickets" in large neural networks. These tickets represent "sparse" versions of the full model that can be trained independently to achieve accuracy comparable to the full model. However, finding the winning tickets requires pretraining the large model for at least some number of epochs, which can be burdensome, especially as the original neural network grows larger. In this paper, we explore how one can efficiently identify the emergence of such winning tickets, and we use this observation to design efficient pretraining algorithms. For clarity of exposition, our focus is on convolutional neural networks (CNNs). To identify good filters, we propose a novel filter distance metric that well-represents model convergence. As our theory dictates, our filter analysis behaves consistently with recent findings on neural network learning dynamics. Motivated by these observations, we present the LOttery ticket through Filter-wise Training algorithm, dubbed LoFT. LoFT is a model-parallel pretraining algorithm that partitions convolutional layers by filters and trains them independently in a distributed setting, reducing memory and communication costs during pretraining. Experiments show that LoFT i) preserves and finds good lottery tickets, and ii) achieves non-trivial computation and communication savings while maintaining comparable or even better accuracy than other pretraining methods.
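The core mechanism described above, partitioning a convolutional layer by its filters so that each worker trains a disjoint subset, can be sketched as follows. This is a minimal illustration of filter-wise splitting along the output-channel axis, not the authors' implementation; the function names and the NumPy representation are assumptions.

```python
import numpy as np

def partition_filters(conv_weight, num_workers):
    """Split a conv weight tensor of shape (out_ch, in_ch, k, k)
    into per-worker filter groups along the output-channel axis.
    Each worker would then train its group independently."""
    return np.array_split(conv_weight, num_workers, axis=0)

def merge_filters(shards):
    """Reassemble the full filter bank from the worker shards
    after a round of independent local training."""
    return np.concatenate(shards, axis=0)

# Example: a layer with 8 filters (3 input channels, 3x3 kernels)
# split across 4 workers, 2 filters each.
W = np.random.randn(8, 3, 3, 3)
shards = partition_filters(W, num_workers=4)
W_merged = merge_filters(shards)
```

Because each worker holds only its own filter shard, per-worker memory and the communication needed to synchronize scale with the shard size rather than the full layer, which is the source of the savings the abstract refers to.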
