Direct-Search for a Class of Stochastic Min-Max Problems

Sotiris Anagnostidis · Aurélien Lucchi · Youssef Diouane

Keywords: [ Algorithms, Optimization and Computation Methods ] [ Nonconvex Optimization ]

[ Abstract ]
Wed 14 Apr 6 a.m. PDT — 8 a.m. PDT


Recent applications in machine learning have renewed the community's interest in min-max optimization problems. While gradient-based methods are widely used to solve such problems, there are many scenarios in which they are not well suited, or simply not applicable because the gradient is inaccessible. We investigate direct-search methods, a class of derivative-free techniques that access the objective function only through an oracle. In this work, we design a novel algorithm for min-max saddle-point games in which the min and the max player are updated sequentially. We prove convergence of this algorithm under mild assumptions: the objective of the max player satisfies the Polyak-Łojasiewicz (PL) condition, while the min player is characterized by a nonconvex objective. Our method only requires oracle estimates that are sufficiently accurate with a fixed probability, with the required accuracy adjusted dynamically. To the best of our knowledge, our analysis is the first to address the convergence of a direct-search method for min-max objectives in a stochastic setting.
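To illustrate the kind of scheme the abstract describes, the sketch below implements a generic sequential direct-search loop for min_x max_y f(x, y): each player polls a set of trial directions, accepts a trial point only under a sufficient-decrease condition with forcing function rho(a) = c a^2, and expands or shrinks its own step size accordingly. This is a minimal illustration of the general technique, not the paper's algorithm; the coordinate poll set, the parameter values, and the function names are all assumptions chosen for the example.

```python
import numpy as np

def poll_step(f, z, alpha, rho, rng):
    """One direct-search poll: try the +/- coordinate directions scaled by alpha.

    Accepts the first trial point with sufficient decrease,
    f(z + alpha d) < f(z) - rho(alpha); returns (new point, success flag).
    """
    n = z.size
    directions = np.vstack([np.eye(n), -np.eye(n)])
    rng.shuffle(directions)  # poll order randomized (illustrative choice)
    fz = f(z)
    for d in directions:
        trial = z + alpha * d
        if f(trial) < fz - rho(alpha):
            return trial, True
    return z, False

def direct_search_minmax(f, x0, y0, alpha0=1.0, gamma=2.0, theta=0.5,
                         c=1e-3, iters=300, seed=0):
    """Sketch of sequential direct search for min_x max_y f(x, y).

    Each iteration, the max player polls on y (maximizing f, i.e. minimizing
    -f), then the min player polls on x. Each player's step size expands by
    gamma on success and shrinks by theta on failure, with rho(a) = c a^2.
    """
    rng = np.random.default_rng(seed)
    rho = lambda a: c * a * a
    x, y = np.asarray(x0, float), np.asarray(y0, float)
    ax, ay = alpha0, alpha0
    for _ in range(iters):
        # Max player: derivative-free ascent step in y.
        y, ok = poll_step(lambda yy: -f(x, yy), y, ay, rho, rng)
        ay = gamma * ay if ok else theta * ay
        # Min player: derivative-free descent step in x.
        x, ok = poll_step(lambda xx: f(xx, y), x, ax, rho, rng)
        ax = gamma * ax if ok else theta * ax
    return x, y
```

On a toy saddle such as f(x, y) = x^2 - y^2, the loop drives both players toward the saddle point at the origin; in the stochastic setting of the paper, the exact evaluations above would be replaced by oracle estimates that are sufficiently accurate with a fixed probability.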
