Poster

Sample-Efficient Personalization: Modeling User Parameters as Low Rank Plus Sparse Components

Soumyabrata Pal · Prateek Varshney · Gagan Madan · Prateek Jain · Abhradeep Thakurta · Gaurav Aggarwal · Pradeep Shenoy · Gaurav Srivastava

MR1 & MR2 - Number 115

Abstract: Personalization of machine learning (ML) predictions for individual users/domains/enterprises is critical for practical recommendation systems. Standard personalization approaches involve learning a user/domain-specific \emph{embedding} that is fed into a fixed global model, which can be limiting. On the other hand, personalizing/fine-tuning the model itself for each user/domain --- a.k.a. meta-learning --- has high storage/infrastructure cost. Moreover, rigorous theoretical studies of scalable personalization approaches have been very limited. To address these issues, we propose a novel meta-learning style approach that models network weights as a sum of low-rank and sparse components. The low-rank part captures information common across multiple individuals/users, while the sparse part captures user-specific idiosyncrasies. We then study the framework in the linear setting, where the problem reduces to estimating the sum of a rank-$r$ matrix and a $k$-column-sparse matrix from a small number of linear measurements. We propose a computationally efficient alternating minimization method with iterative hard thresholding --- AMHT-LRS --- to learn the low-rank and sparse parts. Theoretically, for the realizable Gaussian data setting, we show that AMHT-LRS solves the problem efficiently with nearly optimal sample complexity. Finally, a significant challenge in personalization is ensuring the privacy of each user's sensitive data. We address this by proposing a differentially private variant of our method that is also equipped with strong generalization guarantees.
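To make the linear setting concrete, below is a minimal NumPy sketch of alternating minimization with iterative hard thresholding in the style the abstract describes: each user $t$ has linear-regression weights $w_t = U q_t + s_t$, where $U$ is a shared $d \times r$ low-rank factor, $q_t$ is a user embedding, and $s_t$ is $k$-sparse. This is one plausible reading of the setup, not the paper's reference AMHT-LRS implementation; the function names, step size, update order, and initialization are all illustrative assumptions.

```python
import numpy as np

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v; zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]
    out[keep] = v[keep]
    return out

def amht_lrs_sketch(Xs, ys, r, k, n_iters=50, step=1.0):
    """Illustrative alternating-minimization sketch (not the paper's code).

    Model: user t's weights are w_t = U @ q_t + s_t, with U (d, r) shared,
    q_t a per-user embedding, and s_t a k-sparse per-user correction.
    Xs[t]: (m_t, d) design matrix for user t; ys[t]: (m_t,) responses.
    """
    T, d = len(Xs), Xs[0].shape[1]
    rng = np.random.default_rng(0)
    U = np.linalg.qr(rng.standard_normal((d, r)))[0]  # random orthonormal init
    Q = np.zeros((r, T))  # per-user embeddings
    S = np.zeros((d, T))  # per-user sparse parts
    for _ in range(n_iters):
        # (1) Fix U, s_t: each user's embedding q_t is a small least-squares fit.
        for t in range(T):
            Q[:, t] = np.linalg.lstsq(
                Xs[t] @ U, ys[t] - Xs[t] @ S[:, t], rcond=None)[0]
        # (2) Fix U, q_t: one projected-gradient (hard-thresholding) step per
        # user on the sparse part. step=1.0 is a placeholder; the right scale
        # depends on the data distribution.
        for t in range(T):
            resid = Xs[t] @ (U @ Q[:, t] + S[:, t]) - ys[t]
            grad = Xs[t].T @ resid / len(ys[t])
            S[:, t] = hard_threshold(S[:, t] - step * grad, k)
        # (3) Fix q_t, s_t: refit the shared factor U jointly over all users.
        # Row-major vec(U) satisfies x^T U q = kron(x^T, q^T) @ vec(U).
        A = np.vstack([np.kron(Xs[t], Q[:, t][None, :]) for t in range(T)])
        b = np.concatenate([ys[t] - Xs[t] @ S[:, t] for t in range(T)])
        U = np.linalg.lstsq(A, b, rcond=None)[0].reshape(d, r)
        U = np.linalg.qr(U)[0]  # re-orthonormalize, as is common in alt-min
    # Refit embeddings once more so Q is consistent with the final U.
    for t in range(T):
        Q[:, t] = np.linalg.lstsq(
            Xs[t] @ U, ys[t] - Xs[t] @ S[:, t], rcond=None)[0]
    return U, Q, S
```

Note the division of labor in the sketch: the per-user updates (steps 1 and 2) each touch only one user's data and one column of $Q$ or $S$, which keeps per-user cost low, while the shared factor $U$ is the only component fit jointly across users.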
