

Poster

On the Neural Tangent Kernel Analysis of Randomly Pruned Neural Networks

Hongru Yang · Zhangyang Wang

Auditorium 1 Foyer 25

Abstract:

Motivated by both theory and practice, this work studies how randomly pruning the weights affects a neural network's neural tangent kernel (NTK). In particular, this work establishes an equivalence between the NTK of a fully-connected neural network and that of its randomly pruned version. The equivalence is established under two cases. The first main result studies the infinite-width asymptotic regime. It is shown that, for a given pruning probability, if the weights of a fully-connected neural network are randomly pruned at initialization, then as the width of each layer grows to infinity sequentially, the NTK of the pruned network converges to the limiting NTK of the original network up to an extra scaling factor. If the network weights are rescaled appropriately after pruning, this extra scaling can be removed. The second main result considers the finite-width case. It is shown that, to keep the NTK close to its limit, the required width depends asymptotically linearly on the sparsity parameter as the NTK's gap to its limit goes to zero. Moreover, if the pruning probability is set to zero (i.e., no pruning), the bound on the required width matches the bound for fully-connected neural networks in previous works up to logarithmic factors. The proof of this result requires developing a novel analysis of a network structure which we call mask-induced pseudo-networks. Experiments are provided to evaluate our results.
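To make the pruning scheme concrete, below is a minimal sketch of random pruning at initialization with the rescaling the abstract describes: each weight is kept independently with a given probability, and surviving weights are scaled by one over the square root of the keep probability so that the limiting NTK matches that of the unpruned network without the extra scaling factor. The function and parameter names (prune_at_init, keep_prob) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def prune_at_init(weights, keep_prob, rescale=True, seed=0):
    """Randomly prune a weight matrix at initialization.

    Each entry is kept independently with probability `keep_prob`
    (an i.i.d. Bernoulli mask). If `rescale` is True, surviving
    weights are divided by sqrt(keep_prob), which compensates for
    the variance lost to pruning.
    """
    rng = np.random.default_rng(seed)
    mask = rng.binomial(1, keep_prob, size=weights.shape)
    pruned = weights * mask
    if rescale:
        pruned = pruned / np.sqrt(keep_prob)
    return pruned, mask

# Example: prune one layer of a width-1024 fully-connected network,
# keeping each weight with probability 0.5.
W = np.random.default_rng(1).normal(size=(1024, 1024))
W_pruned, mask = prune_at_init(W, keep_prob=0.5)
```

Without the rescaling step, the pruned network's NTK converges to the original limiting NTK only up to the extra scaling factor mentioned in the abstract.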
