

Poster

Shuffled Model of Differential Privacy in Federated Learning

Antonious Girgis · Deepesh Data · Suhas Diggavi · Peter Kairouz · Ananda Theertha Suresh

Virtual

Keywords: [ Ethics and Safety ] [ Privacy-preserving Statistics and Machine Learning ]


Abstract: We consider a distributed empirical risk minimization (ERM) optimization problem with communication efficiency and privacy requirements, motivated by the federated learning (FL) framework. We propose a distributed communication-efficient and local differentially private stochastic gradient descent (CLDP-SGD) algorithm and analyze its communication, privacy, and convergence trade-offs. Since each iteration of CLDP-SGD aggregates the client-side local gradients, we develop (optimal) communication-efficient schemes for mean estimation for several $\ell_p$ spaces under local differential privacy (LDP). To overcome the performance limitations of LDP, CLDP-SGD takes advantage of the inherent privacy amplification provided by client subsampling and data subsampling at each selected client (through SGD), as well as the recently developed shuffled model of privacy. For convex loss functions, we prove that the proposed CLDP-SGD algorithm matches the known lower bounds for \textit{centralized} private ERM while using a finite number of bits per iteration for each client, \emph{i.e.,} effectively getting communication efficiency for ``free''. We also provide preliminary experimental results supporting the theory.
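To make the structure of such a training loop concrete, below is a minimal, illustrative Python sketch of how client subsampling, data subsampling (SGD), a local randomizer, and a shuffler fit together. It is not the paper's algorithm: the clipped-gradient-plus-Laplace-noise randomizer, the synthetic regression task, and all hyperparameter values are placeholder assumptions standing in for the paper's communication-efficient LDP mean-estimation schemes and its privacy accounting.

```python
# Illustrative CLDP-SGD-style loop (simplified sketch, not the paper's algorithm).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic convex ERM problem: linear regression data split across clients.
num_clients, samples_per_client, dim = 100, 20, 10
true_w = rng.normal(size=dim)
client_X = [rng.normal(size=(samples_per_client, dim)) for _ in range(num_clients)]
client_y = [X @ true_w + 0.1 * rng.normal(size=samples_per_client) for X in client_X]


def local_gradient(w, X, y, batch_size):
    """Data-subsampled (SGD) gradient of the squared loss on one client."""
    idx = rng.choice(len(y), size=batch_size, replace=False)
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / batch_size


def local_randomizer(g, clip_norm, epsilon):
    """Placeholder local randomizer: clip to an l2 ball, add Laplace noise.

    The paper instead uses communication-efficient (quantized) LDP
    mean-estimation schemes; this function only plays the same role.
    """
    g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
    noise = rng.laplace(scale=2.0 * clip_norm / epsilon, size=g.shape)
    return g + noise


w = np.zeros(dim)
lr, clip_norm, epsilon = 0.05, 1.0, 4.0
clients_per_round, batch_size = 10, 5  # client and data subsampling (amplification)

for t in range(200):
    # Client subsampling.
    selected = rng.choice(num_clients, size=clients_per_round, replace=False)
    reports = np.stack([
        local_randomizer(local_gradient(w, client_X[i], client_y[i], batch_size),
                         clip_norm, epsilon)
        for i in selected
    ])
    # Shuffled model: a trusted shuffler permutes reports so the server cannot
    # link a report to its sender (the permutation does not change the average).
    reports = rng.permutation(reports)
    w -= lr * reports.mean(axis=0)

print("distance to true model:", np.linalg.norm(w - true_w))
```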
