Poster

Triple-Q: A Model-Free Algorithm for Constrained Reinforcement Learning with Sublinear Regret and Zero Constraint Violation

Honghao Wei · Xin Liu · Lei Ying


Abstract: This paper presents the first {\em model-free}, {\em simulator-free} reinforcement learning algorithm for Constrained Markov Decision Processes (CMDPs) with sublinear regret and zero constraint violation. The algorithm is named {\em Triple-Q} because it includes three key components: a Q-function (also called action-value function) for the cumulative reward, a Q-function for the cumulative utility for the constraint, and a virtual-Queue that (over)-estimates the cumulative constraint violation. Under Triple-Q, at each step, an action is chosen based on the pseudo-Q-value, which is a combination of the three ``Q'' values. The algorithm updates the reward and utility Q-values with learning rates that depend on the visit counts to the corresponding (state, action) pairs and are periodically reset. In the episodic CMDP setting, Triple-Q achieves $\tilde{\mathcal{O}}\left(\frac{1}{\delta}H^4 S^{\frac{1}{2}}A^{\frac{1}{2}}K^{\frac{4}{5}}\right)$ regret\footnote{{\bf Notation:} $f(n)=\tilde{\mathcal{O}}(g(n))$ denotes $f(n)=\mathcal{O}(g(n)\log^k n)$ with $k>0$. The same applies to $\tilde{\Omega}$. $\mathbb{R}^+$ denotes the non-negative real numbers. $[H]$ denotes the set $\{1,2,\cdots,H\}$.}, where $K$ is the total number of episodes, $H$ is the number of steps in each episode, $S$ is the number of states, $A$ is the number of actions, and $\delta$ is Slater's constant. Furthermore, {\em Triple-Q} guarantees zero constraint violation, both in expectation and with high probability, when $K$ is sufficiently large. Finally, the computational complexity of {\em Triple-Q} is similar to that of SARSA for unconstrained MDPs, making it computationally efficient.
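The abstract describes the algorithm's mechanics at a high level: two optimistic Q-tables, a virtual queue, action selection via a queue-weighted pseudo-Q-value, visit-count-dependent learning rates, and periodic resets. The sketch below illustrates these mechanics on a toy random tabular CMDP. It is a minimal illustration only: the weight eta, slack eps, frame length, bonus scaling, and reset rule are illustrative choices for the example, not the constants or exact update rules from the paper.

```python
import numpy as np

# Toy random episodic CMDP and a Triple-Q-style loop (illustrative sketch).
rng = np.random.default_rng(0)
S, A, H, K = 5, 3, 4, 2000                       # states, actions, horizon, episodes
P = rng.dirichlet(np.ones(S), size=(H, S, A))    # transition kernel P[h, s, a, s']
R = rng.random((H, S, A))                        # reward r_h(s, a) in [0, 1]
U = rng.random((H, S, A))                        # utility u_h(s, a) in [0, 1] (constraint signal)
rho = 0.5 * H                                    # required expected cumulative utility per episode

Q_r = np.full((H, S, A), float(H))               # optimistic reward Q-function
Q_u = np.full((H, S, A), float(H))               # optimistic utility Q-function
N = np.zeros((H, S, A), dtype=int)               # visit counts per (step, state, action)
Z = 0.0                                          # virtual queue (over)estimating constraint violation
eta, eps, frame = 10.0, 0.1, 200                 # illustrative: pseudo-Q weight, slack, reset period

for k in range(K):
    if k % frame == 0:                           # periodic reset at the start of each frame
        ep_utils = []                            # (what exactly is reset is simplified here)
        N[:] = 0
    s = rng.integers(S)
    total_u = 0.0
    for h in range(H):
        # Action maximizes the pseudo-Q-value: reward Q plus queue-weighted utility Q.
        a = int(np.argmax(Q_r[h, s] + (Z / eta) * Q_u[h, s]))
        s_next = rng.choice(S, p=P[h, s, a])
        N[h, s, a] += 1
        t = N[h, s, a]
        alpha = (H + 1) / (H + t)                # learning rate depends on the visit count
        v_r = Q_r[h + 1, s_next].max() if h + 1 < H else 0.0
        v_u = Q_u[h + 1, s_next].max() if h + 1 < H else 0.0
        bonus = np.sqrt(H ** 3 / t)              # optimism bonus (illustrative scaling)
        Q_r[h, s, a] = (1 - alpha) * Q_r[h, s, a] + alpha * (R[h, s, a] + v_r + bonus)
        Q_u[h, s, a] = (1 - alpha) * Q_u[h, s, a] + alpha * (U[h, s, a] + v_u + bonus)
        total_u += U[h, s, a]
        s = s_next
    ep_utils.append(total_u)
    if (k + 1) % frame == 0:                     # queue update once per frame, using the frame's average utility
        Z = max(Z + rho + eps - float(np.mean(ep_utils)), 0.0)

print(f"final virtual queue Z = {Z:.2f}")
```

The key design choice visible above is that the constraint never enters as a hard projection: the virtual queue Z grows when the recent utility falls short of the requirement, which increases the weight on the utility Q-function in the pseudo-Q-value and steers future actions back toward feasibility.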
