

Poster

A General Theoretical Paradigm to Understand Learning from Human Preferences

Mohammad Gheshlaghi Azar · Zhaohan Daniel Guo · Bilal Piot · Remi Munos · Mark Rowland · Michal Valko · Daniele Calandriello

MR1 & MR2 - Number 56
Fri 3 May 8 a.m. PDT — 8:30 a.m. PDT

Abstract: The prevalent deployment of learning from human preferences through reinforcement learning from human feedback (RLHF) relies on two important approximations: the first assumes that pairwise preferences can be substituted with pointwise rewards; the second assumes that a reward model trained on these pointwise rewards can generalize from collected data to out-of-distribution data sampled by the policy. Recently, Direct Preference Optimisation (DPO) has been proposed as an approach that bypasses the second approximation and learns a policy directly from collected data without the reward modelling stage. However, this method still heavily relies on the first approximation. In this paper we try to gain a deeper theoretical understanding of these practical algorithms. In particular, we derive a new general objective called $\Psi$PO for learning from human preferences that is expressed in terms of pairwise preferences and therefore bypasses both approximations. This new general objective allows us to perform an in-depth analysis of the behavior of RLHF and DPO (as special cases of $\Psi$PO) and to identify their potential pitfalls. We then consider another special case of $\Psi$PO, obtained by setting $\Psi$ simply to the identity, for which we can derive an efficient optimisation procedure, prove performance guarantees, and demonstrate its empirical superiority to DPO on some illustrative examples.
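For context, the identity case of $\Psi$PO (IPO) described in the abstract reduces to a squared regression of a log-likelihood-ratio margin onto the target $1/(2\tau)$, rather than pushing that margin to infinity as a Bradley-Terry reward model would. The sketch below illustrates that loss under stated assumptions: it presumes per-example log-probabilities of the preferred and dispreferred completions have already been computed under both the trained policy and a fixed reference policy, and the function name, tensor layout, and the value of `tau` are illustrative rather than taken from the authors' code.

```python
import torch


def ipo_loss(logp_w: torch.Tensor,
             logp_l: torch.Tensor,
             ref_logp_w: torch.Tensor,
             ref_logp_l: torch.Tensor,
             tau: float = 0.1) -> torch.Tensor:
    """Illustrative IPO loss: regress the log-ratio margin onto 1 / (2 * tau).

    logp_*     : log pi(y | x) under the policy being trained
    ref_logp_* : log pi_ref(y | x) under the fixed reference policy
    _w / _l    : preferred ("winner") and dispreferred ("loser") completions
    """
    # h_pi(y_w, y_l, x) = log[ pi(y_w|x) pi_ref(y_l|x) / (pi(y_l|x) pi_ref(y_w|x)) ]
    margin = (logp_w - logp_l) - (ref_logp_w - ref_logp_l)
    # Squared regression onto the finite target 1/(2*tau) keeps the policy
    # regularised towards pi_ref even when preferences are deterministic.
    return ((margin - 1.0 / (2.0 * tau)) ** 2).mean()
```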
