Poster
Differential Privacy in Distributed Learning: Beyond Uniformly Bounded Stochastic Gradients
Yue Huang · Marcus Häggbom · Qing Ling
Abstract:
This paper explores locally differentially private distributed algorithms for solving non-convex empirical risk minimization problems. Traditional approaches often assume uniformly bounded stochastic gradients, which may not hold in practice. To address this issue, we propose differentially **Pri**vate **S**tochastic recursive **M**omentum with gr**A**dient clipping (PriSMA), which judiciously integrates clipping and momentum to enhance utility while guaranteeing privacy. Without assuming uniformly bounded stochastic gradients, given a privacy requirement $(\epsilon, \delta)$, PriSMA achieves a learning error bound expressed in terms of the number of clients $M$, the number of data samples $N$ on each client, and the model dimension $d$. This learning error bound improves on the state of the art in terms of its dependence on these problem parameters.
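To make the high-level description concrete, below is a minimal sketch of one client-side update that combines gradient clipping, stochastic recursive (STORM-style) momentum, and Gaussian noise for local differential privacy. All function names, parameters, and the exact placement of clipping and noise are assumptions for illustration only; they are not the paper's precise PriSMA update or privacy calibration.

```python
import numpy as np

def clip(v, threshold):
    """Rescale v so that its l2 norm is at most `threshold`."""
    norm = np.linalg.norm(v)
    return v if norm <= threshold else v * (threshold / norm)

def private_recursive_momentum_step(x, x_prev, m_prev, grad_fn, batch,
                                    a, threshold, sigma, rng):
    """Illustrative client update: clipped recursive momentum plus Gaussian noise.

    `grad_fn(model, batch)` returns a stochastic gradient; this is a hypothetical
    signature chosen for the sketch, not the authors' interface.
    """
    g_cur = grad_fn(x, batch)        # stochastic gradient at the current iterate
    g_prev = grad_fn(x_prev, batch)  # gradient at the previous iterate, same batch
    # Recursive momentum: correct the previous direction with the gradient difference,
    # clipping each gradient term to bound its contribution.
    m = clip(g_cur, threshold) + (1.0 - a) * (m_prev - clip(g_prev, threshold))
    # Gaussian mechanism: noise scale proportional to the clipping threshold.
    return m + rng.normal(0.0, sigma * threshold, size=m.shape)
```

In a distributed setting, each client would send its noisy momentum to the server, which averages the contributions across clients and takes a model step; the averaging, step sizes, and privacy accounting in the actual algorithm may differ from this sketch.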