Poster

Deep Value Function Networks for Large-Scale Multistage Stochastic Programming Problems

Hyunglip Bae · Jinkyu Lee · Woo Chang Kim · Yongjae Lee

Auditorium 1 Foyer 111

Abstract:

A neural network-based stagewise decomposition algorithm called Deep Value Function Networks (DVFN) is proposed for large-scale multistage stochastic programming (MSP) problems. Traditional approaches such as nested Benders decomposition and its stochastic variant, stochastic dual dynamic programming (SDDP), approximate value functions as piecewise linear convex functions by gradually accumulating subgradient cuts from the dual solutions of stagewise subproblems. Although these methods have proven effective for linear problems, they may suffer from a growing number of subgradient cuts on nonlinear problems as the iterations proceed. A recently developed algorithm, Value Function Gradient Learning (VFGL), replaced the piecewise linear approximation with a parametric function approximation, but its performance depends heavily on the choice of parametric form, as is the case for most traditional parametric machine learning algorithms. In contrast, DVFN approximates value functions using neural networks, which are known for their large representational capacity; the art of choosing an appropriate parametric form thus reduces to a hyperparameter search over network architectures. However, neural networks are non-convex in general, which can make the learning process unstable. We resolve this issue by using input convex neural networks, which guarantee convexity with respect to their inputs. We compare DVFN with SDDP and VFGL on large-scale linear and nonlinear MSP problems: production optimization and energy planning. Numerical examples clearly indicate that DVFN provides accurate and computationally efficient solutions.
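For context: SDDP approximates the stage-t value function by the pointwise maximum of accumulated cuts, V_t(x) ≈ max_k { a_k + b_k' x }, which is convex by construction, whereas DVFN replaces this with a network that is convex in its input. The sketch below illustrates the input convex neural network idea the abstract refers to (in the spirit of Amos et al.'s ICNN construction); it is a minimal PyTorch sketch under my own assumptions about architecture and naming, not the authors' DVFN implementation. Convexity in x follows from keeping the hidden-to-hidden weights nonnegative and using a convex, non-decreasing activation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Illustrative input convex neural network (not the authors' code).

    The scalar output is convex in x because the hidden-to-hidden ("z-path")
    weights are constrained to be nonnegative and softplus is convex and
    non-decreasing; affine passthrough connections from x preserve convexity.
    """

    def __init__(self, dim_in, dim_hidden, n_layers=2):
        super().__init__()
        # Passthrough connections from the input x (unconstrained weights).
        self.Wx = nn.ModuleList(
            [nn.Linear(dim_in, dim_hidden) for _ in range(n_layers)]
            + [nn.Linear(dim_in, 1)]
        )
        # Hidden-to-hidden connections; their weights are clamped nonnegative
        # in forward() to preserve convexity in x.
        self.Wz = nn.ModuleList(
            [nn.Linear(dim_hidden, dim_hidden, bias=False) for _ in range(n_layers - 1)]
            + [nn.Linear(dim_hidden, 1, bias=False)]
        )

    def forward(self, x):
        z = F.softplus(self.Wx[0](x))
        for Wx, Wz in zip(self.Wx[1:-1], self.Wz[:-1]):
            # Clamping enforces nonnegative z-path weights at evaluation time
            # (one simple way to impose the ICNN constraint).
            z = F.softplus(Wx(x) + F.linear(z, Wz.weight.clamp(min=0)))
        return self.Wx[-1](x) + F.linear(z, self.Wz[-1].weight.clamp(min=0))

# Quick check: map a batch of 8 three-dimensional states to scalar values.
net = ICNN(dim_in=3, dim_hidden=32)
values = net(torch.randn(8, 3))  # shape (8, 1)

Such a network can be fit to sampled stagewise value-function data while remaining convex in the state, which is what keeps the stagewise subproblems tractable; the specific training loop and loss used by DVFN are described in the paper, not here.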
