

Poster

Differential Privacy in Distributed Learning: Beyond Uniformly Bounded Stochastic Gradients

Yue Huang · Marcus Häggbom · Qing Ling


Abstract: This paper explores locally differentially private distributed algorithms that solve non-convex empirical risk minimization problems. Traditional approaches often assume uniformly bounded stochastic gradients, which may not hold in practice. To address this issue, we propose differentially **Pri**vate **S**tochastic recursive **M**omentum with gr**A**dient clipping (PriSMA), which judiciously integrates clipping and momentum to enhance utility while guaranteeing privacy. Without assuming uniformly bounded stochastic gradients, given a privacy requirement (ϵ, δ), PriSMA achieves a learning error of Õ((√d/(MNϵ))^(2/5)), where M is the number of clients, N is the number of data samples on each client, and d is the model dimension. This learning error bound improves on the state-of-the-art Õ((√d/(MNϵ))^(1/3)) in terms of the dependence on M and N.
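To make the combination of clipping, recursive momentum, and privacy noise concrete, here is a minimal single-client sketch of a PriSMA-style loop. It is not the authors' algorithm: the toy quadratic objective, all hyperparameter names and values, and the exact noise calibration are illustrative assumptions; the sketch only shows the general recipe of STORM-style recursive momentum built from clipped stochastic gradients, with Gaussian noise scaled by the clipping threshold.

```python
import numpy as np

def clip(v, c):
    """Scale v down so its l2 norm is at most c."""
    n = np.linalg.norm(v)
    return v if n <= c else v * (c / n)

def prisma_sketch(x0, target, steps=200, lr=0.1, a=0.2, c=1.0, sigma=0.05, seed=0):
    """Hypothetical single-client PriSMA-style loop (illustrative only):
    clipped stochastic gradients + recursive momentum + Gaussian noise
    whose scale is tied to the clipping threshold c."""
    rng = np.random.default_rng(seed)
    d = x0.size

    def stoch_grad(x, z):
        # Stochastic gradient of the toy objective 0.5 * ||x - target||^2;
        # z plays the role of the sampled data point's noise.
        return (x - target) + z

    x = x0.copy()
    z = 0.02 * rng.standard_normal(d)
    v = clip(stoch_grad(x, z), c)                  # initial momentum estimate
    for _ in range(steps):
        x_new = x - lr * v
        z = 0.02 * rng.standard_normal(d)          # fresh sample, evaluated at both iterates
        g_new = clip(stoch_grad(x_new, z), c)
        g_old = clip(stoch_grad(x, z), c)
        # Recursive momentum: variance-reduced correction of the old direction.
        v = g_new + (1.0 - a) * (v - g_old)
        # Gaussian mechanism: noise proportional to the clipping threshold.
        v = v + sigma * c * rng.standard_normal(d)
        x = x_new
    return x

x = prisma_sketch(np.zeros(5), np.ones(5))
```

Because the gradients entering the momentum recursion are clipped, their sensitivity is bounded without assuming uniformly bounded stochastic gradients, which is what lets the Gaussian noise be calibrated to c rather than to a global gradient bound.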
