
Randomized Primal-Dual Methods with Adaptive Step Sizes

Erfan Yazdandoost Hamedani · Afrooz Jalilzadeh · Necdet Serhat Aybat

Auditorium 1 Foyer 110

Abstract: In this paper we propose a class of randomized primal-dual methods incorporating line search to contend with large-scale saddle point (SP) problems defined by a convex-concave function $\mathcal L(\mathbf{x},y)\triangleq \sum_{i=1}^M f_i(x_i)+\Phi(\mathbf{x},y)-h(y)$. We analyze the convergence rate of the proposed method under mere convexity and strong convexity assumptions on $\mathcal L$ in the $\mathbf{x}$-variable. In particular, assuming $\nabla_y\Phi(\cdot,\cdot)$ is Lipschitz and $\nabla_{\mathbf{x}}\Phi(\cdot,y)$ is coordinate-wise Lipschitz for any fixed $y$, the ergodic sequence generated by the algorithm achieves an $\mathcal O(M/k)$ convergence rate in the expected primal-dual gap. Furthermore, assuming that $\mathcal L(\cdot,y)$ is strongly convex for any $y$ and that $\Phi(\mathbf{x},\cdot)$ is affine for any $\mathbf{x}$, the scheme enjoys a faster rate of $\mathcal O(M/k^2)$ in terms of primal solution suboptimality. We implemented the proposed algorithmic framework to solve a kernel matrix learning problem and tested it against other state-of-the-art first-order methods.
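To give a concrete sense of the problem class, the sketch below runs a plain randomized block primal-dual iteration on a toy instance of $\mathcal L(\mathbf{x},y)=\sum_i f_i(x_i)+\Phi(\mathbf{x},y)-h(y)$ with $f_i(x_i)=\tfrac12(x_i-b_i)^2$, $\Phi(\mathbf{x},y)=\mathbf{x}^\top A y$, and $h(y)=\tfrac12\|y\|^2$. This is only an illustrative sketch under those assumed choices, using fixed step sizes `tau`/`sigma`; it is not the paper's method, which employs adaptive step sizes chosen by line search.

```python
import numpy as np

rng = np.random.default_rng(0)
M, n = 5, 5  # M scalar primal blocks, dual dimension n
A = rng.standard_normal((M, n)) / np.sqrt(M)
b = rng.standard_normal(M)

# For this quadratic toy instance the saddle point is available in closed
# form: x* solves (I + A A^T) x = b, and y* = A^T x*.
x_star = np.linalg.solve(np.eye(M) + A @ A.T, b)
y_star = A.T @ x_star

x, y = np.zeros(M), np.zeros(n)
tau, sigma = 0.1, 0.1  # fixed step sizes (hypothetical; the paper adapts them)
for k in range(20000):
    i = rng.integers(M)  # sample one primal block uniformly at random
    # Gradient step on the sampled block only:
    # d/dx_i L = (x_i - b_i) + (A y)_i
    x_new = x.copy()
    x_new[i] = x[i] - tau * ((x[i] - b[i]) + A[i] @ y)
    # Full dual ascent step: d/dy L = A^T x - y
    y = y + sigma * (A.T @ x_new - y)
    x = x_new

print(np.linalg.norm(x - x_star), np.linalg.norm(y - y_star))
```

Because the only randomness is the block selection, and the saddle point is a fixed point of every per-block update, the iterates contract toward $(\mathbf{x}^*, y^*)$ on this strongly-convex-strongly-concave instance; the $\mathcal O(M/k)$ and $\mathcal O(M/k^2)$ rates quoted in the abstract concern the general convex and strongly convex regimes, respectively.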
