
Poster

Generalization Bounds for Stochastic Saddle Point Problems

Junyu Zhang · Mingyi Hong · Mengdi Wang · Shuzhong Zhang

Keywords: [ Algorithms, Optimization and Computation Methods ] [ Convex optimization ]


Abstract: This paper studies generalization bounds for the empirical saddle point (ESP) solution to stochastic saddle point (SSP) problems. For SSP with Lipschitz continuous and strongly convex-strongly concave objective functions, we establish an $\mathcal{O}(1/n)$ generalization bound using a probabilistic stability argument. We also provide generalization bounds under a variety of assumptions, including the cases without strong convexity and without bounded domains. We illustrate our results with three examples: batch policy learning in Markov decision processes, stochastic composite optimization, and mixed-strategy Nash equilibrium estimation for stochastic games. In each of these examples, we show that a regularized ESP solution enjoys a near-optimal sample complexity. To the best of our knowledge, this is the first set of results on the generalization theory of ESP.
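To make the ESP idea concrete, here is a minimal sketch (not the paper's construction): a toy 1-D strongly convex-strongly concave objective $F(x, y; \xi) = \tfrac{\lambda}{2}(x - \xi)^2 + xy - \tfrac{\lambda}{2}y^2$, whose empirical saddle point $\min_x \max_y \tfrac{1}{n}\sum_i F(x, y; \xi_i)$ is approximated by simultaneous gradient descent-ascent. The objective, step size, and sample distribution are all illustrative assumptions.

```python
import numpy as np

def empirical_saddle_point(samples, lam=1.0, lr=0.1, steps=2000):
    """Approximate the empirical saddle point (ESP) of
    (1/n) sum_i [ (lam/2)(x - xi_i)^2 + x*y - (lam/2)*y^2 ]
    by simultaneous gradient descent-ascent (an illustrative solver,
    not the paper's method)."""
    xi_bar = samples.mean()
    x, y = 0.0, 0.0
    for _ in range(steps):
        gx = lam * (x - xi_bar) + y      # d/dx of the empirical objective
        gy = x - lam * y                 # d/dy of the empirical objective
        x, y = x - lr * gx, y + lr * gy  # descend in x, ascend in y
    return x, y

rng = np.random.default_rng(0)
mu, lam = 2.0, 1.0
# Population saddle point in x has the closed form lam^2 * mu / (lam^2 + 1).
pop_x = lam**2 * mu / (lam**2 + 1)

for n in (10, 1000):
    samples = rng.normal(mu, 1.0, size=n)
    x_hat, _ = empirical_saddle_point(samples, lam)
    print(f"n = {n:5d}  |x_hat - x*| = {abs(x_hat - pop_x):.4f}")
```

As $n$ grows, the ESP concentrates around the population saddle point, the phenomenon the paper's stability-based bounds quantify.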
