

Poster

Policy Teaching via Data Poisoning in Learning from Human Preferences

Debmalya Mandal · Danqi Liao · Goran Radanovic


Abstract: We study data poisoning attacks in learning from human preferences. More specifically, we consider the problem of teaching/enforcing a target policy π by synthesizing preference data. We seek to understand the susceptibility of different preference-based learning paradigms to poisoned preference data by analyzing the number of samples the attacker requires to enforce π. We first propose a general data poisoning formulation for learning from human preferences and then study it for two popular paradigms, namely: (a) reinforcement learning from human feedback (RLHF), which operates by learning a reward model from preferences; (b) direct preference optimization (DPO), which optimizes the policy directly from preferences. We conduct a theoretical analysis of the effectiveness of data poisoning in a setting where the attacker is allowed to augment a pre-existing dataset, and we also study the special case where the attacker can synthesize the entire preference dataset from scratch. As our main results, we provide lower/upper bounds on the number of samples required to enforce π. Finally, we discuss the implications of our results for the susceptibility of these learning paradigms to such data poisoning attacks.
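To make the contrast between the two paradigms concrete, the sketch below shows standard RLHF-style (Bradley-Terry) reward-model and DPO losses, together with a toy relabeling step in which an attacker marks as "chosen" whichever response its target policy prefers. This is an illustrative sketch only, not the paper's attack construction or sample-complexity analysis; the function `target_policy_score` and all variable names are hypothetical.

```python
# Illustrative sketch (assumptions noted above), not the paper's method.
import torch
import torch.nn.functional as F

def bradley_terry_nll(r_chosen, r_rejected):
    """RLHF-style reward-model loss: negative log-likelihood that the
    'chosen' response beats the 'rejected' one under Bradley-Terry."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss: optimizes the policy directly on preference pairs via
    log-probability ratios against a frozen reference policy."""
    chosen_ratio = logp_chosen - ref_logp_chosen
    rejected_ratio = logp_rejected - ref_logp_rejected
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

def poison_labels(pairs, target_policy_score):
    """Toy poisoning step: relabel each (a, b) pair so that the response
    scored higher by the attacker's target policy is marked 'chosen'.
    `target_policy_score` is a hypothetical scoring function."""
    poisoned = []
    for a, b in pairs:
        if target_policy_score(a) >= target_policy_score(b):
            poisoned.append((a, b))   # a becomes 'chosen'
        else:
            poisoned.append((b, a))   # b becomes 'chosen'
    return poisoned

# Example usage with random scores, for illustration only.
r_c, r_r = torch.randn(8), torch.randn(8)
print(bradley_terry_nll(r_c, r_r).item())
print(dpo_loss(torch.randn(8), torch.randn(8),
               torch.randn(8), torch.randn(8)).item())
```

Under this framing, the paper's question is how many such (possibly synthesized) preference pairs an attacker needs before the reward-model route (RLHF) or the direct route (DPO) yields the target policy π.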
