

On the Privacy Risks of Algorithmic Recourse

Martin Pawelczyk · Himabindu Lakkaraju · Seth Neel

Auditorium 1 Foyer 145


As predictive models are increasingly employed to make consequential decisions, there is a growing emphasis on developing techniques that can provide recourse to affected individuals. While such recourses can be immensely beneficial to affected individuals, potential adversaries could also exploit them to compromise privacy. In this work, we initiate the first study investigating if and how an adversary can leverage recourses to infer private information about the underlying model's training data. To this end, we propose a series of novel membership inference attacks which leverage algorithmic recourse. More specifically, we generalize the loss-based attacks proposed in the privacy literature to account for the information captured by algorithmic recourse, and introduce a general class of membership inference attacks against recourses called distance-based attacks. Lastly, we conduct experiments with multiple real-world datasets and recourse methods that demonstrate the effectiveness of our attacks, firmly establishing the privacy risks inherent in providing algorithmic recourse. Our results indicate that our distance-based attack not only outperforms random guessing, but also outperforms the loss-based membership inference attack at low false positive rates. This finding demonstrates that our distance-based attacks have applications beyond the recourse setting, to the general problem of membership inference against machine learning models.
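The two attack families mentioned above can each be viewed as a threshold test on a per-example score. The sketch below is a hypothetical illustration only, not the paper's exact attack definitions: the threshold `tau`, the choice of Euclidean distance, and the direction of each comparison (low loss suggests membership; a larger input-to-recourse distance suggests the point sits deeper inside a decision region, as a training point might) are all assumptions made for illustration.

```python
import math

def loss_based_attack(model_prob, true_label, tau):
    """Classic loss-based membership inference (sketch).

    model_prob: model's predicted probability of class 1 for the example.
    Predict "member" if the cross-entropy loss falls below threshold tau,
    on the intuition that models fit their training points more closely.
    """
    p = model_prob if true_label == 1 else 1.0 - model_prob
    loss = -math.log(max(p, 1e-12))  # clamp to avoid log(0)
    return loss < tau

def distance_based_attack(x, recourse, tau):
    """Distance-based membership inference against recourse (sketch).

    x: the original input; recourse: the counterfactual returned to it.
    Score the example by the Euclidean distance between input and
    recourse, and predict "member" if the distance exceeds tau
    (assumed direction: training points may require costlier recourse).
    """
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, recourse)))
    return d > tau
```

In both cases the adversary calibrates `tau` on points with known membership status (e.g., shadow-model data) and then applies the same test to target examples.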
