

Poster

Learning to Optimize for Stochastic Dominance Constraints

Hanjun Dai · Yuan Xue · Niao He · Yixin Wang · Na Li · Dale Schuurmans · Bo Dai

Auditorium 1 Foyer 92

Abstract:

In real-world decision-making, accounting for uncertainty is important yet difficult. Stochastic dominance provides a theoretically sound approach to comparing uncertain quantities, but optimization with stochastic dominance constraints is often computationally expensive, which limits practical applicability. In this paper, we develop a simple yet efficient approach for the problem, Light Stochastic Dominance Solver (light-SD), by leveraging properties of the Lagrangian. We recast the inner optimization in the Lagrangian as a learning problem for surrogate approximation, which bypasses the intractability and leads to tractable updates or even closed-form solutions for gradient calculations. We prove convergence of the algorithm and test it empirically. The proposed light-SD demonstrates superior performance on several representative problems ranging from finance to supply chain management.
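To make the constraint concrete: a standard formulation (not specific to this paper's algorithm) says that random return $X$ second-order stochastically dominates $Y$ when $\mathbb{E}[(\eta - X)_+] \le \mathbb{E}[(\eta - Y)_+]$ for every threshold $\eta$. The sketch below checks this condition empirically from samples; the function name `ssd_dominates` and the tolerance are illustrative assumptions, not part of light-SD.

```python
import numpy as np

def ssd_dominates(x, y, tol=1e-12):
    """Empirically test whether samples x second-order stochastically
    dominate samples y, i.e. E[(eta - x)_+] <= E[(eta - y)_+] for all eta.

    It suffices to check eta at the pooled sample points, since the
    empirical shortfall functions are piecewise linear with knots there.
    """
    grid = np.union1d(x, y)
    # Expected shortfall E[(eta - samples)_+] at each grid point.
    short_x = np.maximum(grid[:, None] - x[None, :], 0.0).mean(axis=1)
    short_y = np.maximum(grid[:, None] - y[None, :], 0.0).mean(axis=1)
    return bool(np.all(short_x <= short_y + tol))
```

For example, shifting a return distribution upward by a constant yields dominance: if `x = y + 1.0`, then `ssd_dominates(x, y)` holds while `ssd_dominates(y, x)` does not. Enforcing this family of inequalities as a constraint over all thresholds is exactly what makes the optimization expensive, which is the difficulty the paper's Lagrangian surrogate approach targets.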
