

Poster

Local SGD Optimizes Overparameterized Neural Networks in Polynomial Time

Yuyang Deng · Mohammad Mahdi Kamani · Mehrdad Mahdavi


Abstract:

In this paper we prove that Local (S)GD (or FedAvg) can optimize deep neural networks with the Rectified Linear Unit (ReLU) activation function in polynomial time. Despite the established convergence theory of Local SGD for optimizing general smooth functions in communication-efficient distributed optimization, its convergence on non-smooth ReLU networks still eludes full theoretical understanding. The key property used in many analyses of Local SGD on smooth functions is gradient Lipschitzness, which ensures that the gradients at the local models do not drift far from the gradient at the averaged model. However, this desirable property does not hold in networks with the non-smooth ReLU activation function. We show that, even though a ReLU network does not admit gradient Lipschitzness, the difference between the gradients at the local models and at the averaged model remains well controlled under the dynamics of Local SGD. We validate our theoretical results via extensive experiments. This work is the first to show the convergence of Local SGD on non-smooth functions, and sheds light on the optimization theory of federated training of deep neural networks.
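Since the analysis concerns the dynamics of Local SGD (FedAvg), the following is a minimal sketch of the procedure the abstract refers to: each worker runs several SGD steps on its own data, and the worker models are periodically averaged. The setup is an illustrative assumption, not the paper's exact construction; the function name, the MSE loss, and the hyperparameters (`rounds`, `local_steps`, `lr`) are hypothetical.

```python
# Minimal sketch of Local SGD / FedAvg (illustrative, not the paper's code).
import copy
import itertools
import torch
import torch.nn as nn

def local_sgd(global_model, local_loaders, rounds=10, local_steps=5, lr=0.01):
    """Each worker takes `local_steps` SGD steps on its own data loader,
    then the worker models are averaged back into the global model."""
    for _ in range(rounds):
        # Broadcast: every worker starts the round from the averaged model.
        local_models = [copy.deepcopy(global_model) for _ in local_loaders]
        for model, loader in zip(local_models, local_loaders):
            opt = torch.optim.SGD(model.parameters(), lr=lr)
            for x, y in itertools.islice(itertools.cycle(loader), local_steps):
                opt.zero_grad()
                loss = nn.functional.mse_loss(model(x), y)
                loss.backward()
                opt.step()
        # Communication round: replace the global parameters with the
        # average of the local workers' parameters.
        with torch.no_grad():
            for params in zip(global_model.parameters(),
                              *(m.parameters() for m in local_models)):
                params[0].copy_(torch.stack(params[1:]).mean(dim=0))
    return global_model
```

For a setting in the spirit of the paper, `global_model` could be a two-layer ReLU network such as `nn.Sequential(nn.Linear(d, m), nn.ReLU(), nn.Linear(m, 1))`, with `local_loaders` holding each worker's local data.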
