

Poster

Reinforcement Learning for Constrained Markov Decision Processes

Ather Gattami · Qinbo Bai · Vaneet Aggarwal

Keywords: Algorithms; Reinforcement Learning; Semi-Supervised Learning; Classification; Meta-Learning; Applications; Object Recognition


Abstract: In this paper, we consider optimization and learning for constrained and multi-objective Markov decision processes, under both discounted-reward and expected-average-reward criteria. We formulate these problems as zero-sum games in which one player (the agent) solves a Markov decision problem while its opponent solves a bandit optimization problem; we call these Markov-Bandit games. We extend $Q$-learning to solve Markov-Bandit games and show that the resulting $Q$-learning algorithms converge to the optimal solutions of the zero-sum Markov-Bandit games, and hence to the optimal solutions of the constrained and multi-objective Markov decision problems. We provide numerical examples in which we compute the optimal policies and show by simulation that the algorithm converges to them. To the best of our knowledge, this is the first time $Q$-learning algorithms have been shown to converge to optimal stationary policies for the multi-objective reinforcement learning problem, with discounted and with expected average rewards.
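
The Markov-Bandit formulation described in the abstract can be read, in standard constrained-MDP terms, as a Lagrangian zero-sum game; the display below is an illustrative rendering in generic notation (the symbols $J_i$, $r_i$, $c_i$, and $\lambda$ are ours, not taken from the paper). For a discounted problem with objective reward $r_0$ and constraints $J_i(\pi) \ge c_i$, $i = 1, \dots, K$:

$$
\max_{\pi} \; \min_{\lambda \ge 0} \; L(\pi, \lambda) \;=\; J_0(\pi) \;+\; \sum_{i=1}^{K} \lambda_i \big( J_i(\pi) - c_i \big),
\qquad
J_i(\pi) \;=\; \mathbb{E}^{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^t \, r_i(s_t, a_t) \right].
$$

For a fixed multiplier vector $\lambda$, the maximizing player faces an ordinary Markov decision problem with scalarized reward $r_0 + \sum_i \lambda_i r_i$; the minimizing player's choice of $\lambda$ is the bandit optimization problem that the abstract assigns to the opponent.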
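As a rough illustration of how such a game might be simulated, here is a minimal tabular sketch in Python: a standard $Q$-learning agent on the Lagrangian-scalarized reward, with the opponent's bandit play approximated by projected subgradient steps on a single multiplier. The toy MDP, variable names, constraint scaling, and both update rules are assumptions made for illustration; they are not the paper's algorithm.

```python
import numpy as np

# Sketch only: tabular Q-learning on a randomly generated constrained MDP.
# The agent maximizes a Lagrangian reward; a dual "opponent" adjusts the
# multiplier lam so the constrained signal r1 meets its target c1.

rng = np.random.default_rng(0)

n_states, n_actions = 4, 2
gamma = 0.9

# Toy MDP: transitions P[s, a] and two reward signals, r0 (objective)
# and r1 (constrained signal whose discounted return must reach c1).
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
r0 = rng.uniform(0, 1, size=(n_states, n_actions))
r1 = rng.uniform(0, 1, size=(n_states, n_actions))
c1 = 2.0  # assumed constraint level on the discounted r1-return

Q = np.zeros((n_states, n_actions))  # Q-values for the scalarized reward
lam = 0.0                            # Lagrange multiplier (opponent's play)
alpha, eta, eps = 0.1, 0.01, 0.1     # learning rates and exploration

s = 0
for t in range(200_000):
    # Epsilon-greedy action selection for the agent.
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
    s_next = rng.choice(n_states, p=P[s, a])

    # Lagrangian reward: objective plus lam-weighted constraint slack,
    # with c1 rescaled to a per-step target via (1 - gamma).
    slack = r1[s, a] - (1 - gamma) * c1
    r = r0[s, a] + lam * slack

    # Standard Q-learning update on the scalarized reward.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

    # Dual update (projected to lam >= 0): lower lam when the constraint
    # signal exceeds its target, raise it otherwise.
    lam = max(0.0, lam - eta * slack)

    s = s_next

print("greedy policy:", Q.argmax(axis=1), "final lambda:", round(lam, 3))
```

In this reading, the inner $Q$-learning step and the outer multiplier step run on a single trajectory; the paper's contribution, per the abstract, is proving that its coupled updates converge to the optimal stationary policy, which a naive scheme like the one above is not guaranteed to do.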
