Poster
Private Non-Convex Federated Learning Without a Trusted Server
Andrew Lowy · Ali Ghafelebashi · Meisam Razaviyayn
Auditorium 1 Foyer 38
Abstract:
We study federated learning (FL), particularly cross-silo FL, with non-convex loss functions and data from people who do not trust the server or other silos. In this setting, each silo (e.g. a hospital) must protect the privacy of each person's data (e.g. each patient's medical record), even if the server or other silos act as adversarial eavesdroppers. To that end, we consider inter-silo record-level differential privacy (ISRL-DP), which requires silo $i$'s communications to satisfy record/item-level DP. We give novel ISRL-DP algorithms for FL with heterogeneous (non-i.i.d.) silo data and two classes of Lipschitz continuous loss functions. First, we consider losses satisfying the proximal Polyak-Łojasiewicz (PL) inequality, an extension of the classical PL condition to the constrained setting (see the formulation below). Prior works considered only unconstrained private optimization with Lipschitz PL losses, which rules out most interesting PL losses, such as strongly convex problems and linear/logistic regression. However, by analyzing the proximal PL setting, we permit these losses and others (e.g. LASSO, some neural nets) that are Lipschitz on a restricted parameter domain. Our algorithms nearly attain the optimal strongly convex, homogeneous (i.i.d.) rate for ISRL-DP FL without assuming convexity or i.i.d. data. Second, we give the first private algorithms for non-convex, non-smooth loss functions; our utility bounds even improve on the state-of-the-art bounds for smooth losses. We complement our upper bounds with lower bounds. Additionally, we provide shuffle DP (SDP) algorithms that improve over the state-of-the-art central DP algorithms under more practical trust assumptions. Numerical experiments show that our algorithms achieve better accuracy than baselines for most privacy levels.
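The proximal PL inequality referenced above is not spelled out in the abstract; a standard formulation, following Karimi, Nutini, and Schmidt (2016), is as follows (the paper may use an equivalent variant). For a composite loss $F(w) = f(w) + g(w)$, where $\nabla f$ is $\beta$-Lipschitz and $g$ is proper, closed, and convex (e.g. the indicator function of a constraint set), $F$ satisfies the proximal PL inequality with parameter $\mu > 0$ if
$$\frac{1}{2}\,\mathcal{D}_g(w,\beta) \ge \mu\big(F(w) - F^*\big), \qquad \mathcal{D}_g(w,\beta) := -2\beta \min_{y}\Big[\langle \nabla f(w),\, y - w\rangle + \frac{\beta}{2}\|y - w\|^2 + g(y) - g(w)\Big],$$
where $F^* = \min_w F(w)$. Taking $g \equiv 0$ gives $\mathcal{D}_g(w,\beta) = \|\nabla f(w)\|^2$ and recovers the classical PL condition $\frac{1}{2}\|\nabla f(w)\|^2 \ge \mu\,(F(w) - F^*)$, which is why the proximal version strictly extends PL to constrained and composite problems such as LASSO.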
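The abstract does not describe the mechanism itself, but ISRL-DP algorithms of this kind are typically built from locally clipped, locally noised stochastic gradients, so that each silo's messages are already private before they reach the untrusted server. Below is a minimal Python sketch of one such round, not the authors' algorithm: the least-squares per_record_grad, the hyperparameters, and the single-round Gaussian-mechanism accounting (ignoring composition across rounds) are simplifying assumptions made for illustration.

    import numpy as np

    def per_record_grad(w, x, y):
        # Hypothetical per-record least-squares gradient; stands in for
        # any Lipschitz loss, since the approach is loss-agnostic.
        return (np.dot(w, x) - y) * x

    def isrl_dp_round(w, silos, lr=0.1, clip=1.0, eps=1.0, delta=1e-5):
        # One round of a simplified ISRL-DP federated SGD step. Each silo
        # clips per-record gradients and adds Gaussian noise BEFORE
        # communicating, so its message is record-level DP even if the
        # server and other silos are adversarial eavesdroppers.
        # Single-round Gaussian-mechanism noise scale; a real
        # implementation must account for composition across rounds.
        sigma = clip * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
        noisy_means = []
        for records in silos:  # records: list of (x, y) pairs in one silo
            grads = []
            for x, y in records:
                g = per_record_grad(w, x, y)
                g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # clip norm
                grads.append(g)
            n = len(grads)
            g_bar = np.mean(grads, axis=0)  # sensitivity ~ clip / n per record
            g_bar = g_bar + np.random.normal(0.0, sigma / n, size=g_bar.shape)
            noisy_means.append(g_bar)  # the only message the silo ever sends
        return w - lr * np.mean(noisy_means, axis=0)  # untrusted server averages

Calling w = isrl_dp_round(w, silos) repeatedly yields a noisy distributed SGD in this trust model; the key design point is that noise is added inside each silo, matching a setting where the server and other silos may eavesdrop.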