Stochastic Extragradient with Random Reshuffling: Improved Convergence for Variational Inequalities

Konstantinos Emmanouilidis · Rene Vidal · Nicolas Loizou

MR1 & MR2 - Number 127
Sat 4 May 6 a.m. PDT — 8:30 a.m. PDT


The Stochastic Extragradient (SEG) method is one of the most popular algorithms for solving finite-sum min-max optimization and variational inequality problems (VIPs) arising in various machine learning tasks. However, existing convergence analyses of SEG focus on its with-replacement variants, while practical implementations of the method randomly reshuffle the components and use them sequentially. Unlike the well-studied with-replacement variants, SEG with Random Reshuffling (SEG-RR) lacks established theoretical guarantees. In this work, we provide a convergence analysis of SEG-RR for three classes of VIPs: (i) strongly monotone, (ii) affine, and (iii) monotone. We derive conditions under which SEG-RR achieves a faster convergence rate than SEG with uniform with-replacement sampling. In the monotone setting, our analysis of SEG-RR guarantees convergence to arbitrary accuracy without large batch sizes, a strong requirement imposed by the classical with-replacement SEG. As a byproduct of our results, we provide convergence guarantees for Shuffle Once SEG (which shuffles the data only once, at the start of the algorithm) and the Incremental Extragradient (which does not shuffle the data). We supplement our analysis with experiments that empirically validate the superior performance of SEG-RR over SEG with classical with-replacement sampling.
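To make the scheme concrete: each epoch of SEG-RR draws a fresh permutation of the component indices and then, for each component in that order, takes an extrapolation step followed by an update step using the same component operator. The sketch below applies this to a toy one-dimensional finite-sum VIP with strongly monotone affine components F_i(x) = a_i x - b_i; the problem data, step sizes, and function names are illustrative assumptions, not taken from the paper.

```python
import random

def seg_rr(a, b, x0, gamma1, gamma2, epochs, seed=0):
    """SEG with Random Reshuffling on the finite-sum operator
    F(x) = (1/n) * sum_i (a[i] * x - b[i]), with a[i] > 0 so that
    each component (and the average) is strongly monotone."""
    rng = random.Random(seed)
    n = len(a)
    x = x0
    for _ in range(epochs):
        order = list(range(n))
        rng.shuffle(order)  # one fresh permutation per epoch (the "RR" part)
        for i in order:
            # extrapolation step with component operator F_i
            x_half = x - gamma1 * (a[i] * x - b[i])
            # update step, evaluating F_i at the extrapolated point
            x = x - gamma2 * (a[i] * x_half - b[i])
    return x

# Toy problem: 5 strongly monotone components. The averaged operator
# vanishes at x* = sum(b) / sum(a).
a = [1.0, 2.0, 3.0, 4.0, 5.0]
b = [2.0, 2.0, 6.0, 4.0, 10.0]
x_star = sum(b) / sum(a)  # = 1.6
x = seg_rr(a, b, x0=0.0, gamma1=0.02, gamma2=0.02, epochs=500)
```

With constant step sizes the iterates settle in a small neighborhood of x*, whose radius shrinks with the step size; Shuffle Once SEG would reuse a single permutation across all epochs, and the Incremental Extragradient would keep the natural order 0, 1, ..., n-1.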
