Poster
Last iterate convergence in no-regret learning: constrained min-max optimization for convex-concave landscapes
Qi Lei · Sai Ganesh Nagarajan · Ioannis Panageas · Xiao Wang
Keywords: [ Online Learning ] [ Learning Theory and Statistics ]
In a recent series of papers it has been established that variants of Gradient Descent/Ascent and Mirror Descent exhibit last-iterate convergence in convex-concave zero-sum games. Specifically, Daskalakis et al. 2018 and Liang-Stokes 2019 show last-iterate convergence of the so-called "Optimistic Gradient Descent/Ascent" for the case of unconstrained min-max optimization. Moreover, Mertikopoulos et al. 2019 show that Mirror Descent with an extra gradient step displays last-iterate convergence for convex-concave problems (both constrained and unconstrained), though their algorithm uses vanishing stepsizes. In this work, we show that "Optimistic Multiplicative-Weights Update (OMWU)" with constant stepsize exhibits local last-iterate convergence for convex-concave games, generalizing the results of Daskalakis and Panageas 2019, where last-iterate convergence of OMWU was shown only for the bilinear case. To the best of our knowledge, this is the first result on last-iterate convergence for constrained zero-sum games (beyond the bilinear case) in which the dynamics use constant stepsizes.
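To make the dynamics concrete, below is a minimal sketch of the OMWU update for the bilinear case min_x max_y x^T A y over the probability simplices, with a constant stepsize eta. It is an illustration only, not the paper's exact algorithm or notation; the payoff matrix, stepsize, and iteration count are placeholders.

```python
import numpy as np

def omwu(A, eta=0.05, iters=5000):
    """Optimistic Multiplicative-Weights Update for min_x max_y x^T A y
    over the simplices (illustrative sketch of the bilinear case)."""
    n, m = A.shape
    x = np.ones(n) / n          # min player's mixed strategy
    y = np.ones(m) / m          # max player's mixed strategy
    gx_prev = A @ y             # gradients at the previous iterate (optimism term)
    gy_prev = A.T @ x
    for _ in range(iters):
        gx, gy = A @ y, A.T @ x
        # Optimistic step: use 2 * (current gradient) - (previous gradient)
        x = x * np.exp(-eta * (2 * gx - gx_prev))
        y = y * np.exp( eta * (2 * gy - gy_prev))
        x, y = x / x.sum(), y / y.sum()   # renormalize onto the simplex
        gx_prev, gy_prev = gx, gy
    return x, y

# Example (hypothetical): Matching Pennies; the last iterate should approach
# the equilibrium (0.5, 0.5) for both players.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, y = omwu(A)
print(x, y)
```

The extra "2 * current - previous" gradient term is what distinguishes the optimistic update from plain Multiplicative Weights, which cycles around the equilibrium in such games rather than converging in the last iterate.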