

Poster

A Safe Exploration Approach to Constrained Markov Decision Processes

Markus Frey


Abstract: We consider discounted infinite-horizon constrained Markov decision processes (CMDPs), where the goal is to find an optimal policy that maximizes the expected cumulative reward while satisfying expected cumulative constraints. Motivated by the application of CMDPs in online learning for safety-critical systems, we focus on developing a model-free and simulator-free algorithm that ensures constraint satisfaction during learning. To this end, we employ the LB-SGD algorithm proposed by Usmanova et al. (2022), which uses an interior-point approach based on the log-barrier function of the CMDP. Under the commonly assumed conditions of relaxed Fisher non-degeneracy and bounded transfer error in the policy parameterization, we establish the theoretical properties of the LB-SGD algorithm. In particular, unlike existing CMDP approaches that ensure policy feasibility only upon convergence, LB-SGD guarantees feasibility throughout the learning process and converges to an ε-optimal policy with a sample complexity of Õ(ε⁻⁶). Compared to the state-of-the-art policy-gradient algorithm C-NPG-PDA, LB-SGD requires an additional factor of O(ε⁻²) samples to ensure policy feasibility during learning under the same Fisher non-degenerate parameterization.
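The core log-barrier mechanism is easiest to see in a small sketch. Below is a minimal, illustrative log-barrier SGD loop in Python; it is not the paper's LB-SGD algorithm. The quadratic stand-ins V_r and V_c, the barrier weight eta, and the slack-proportional step-size rule are all assumptions made for illustration; in the CMDP setting these would be replaced by Monte Carlo policy-gradient estimates of the reward and constraint value functions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the CMDP value functions (illustrative assumptions,
# not the paper's setting): V_r(theta) is the objective to maximize,
# and feasibility means V_c(theta) <= b.
def V_r(theta):
    return -np.sum((theta - 2.0) ** 2)   # maximized at theta = (2, 2)

def V_c(theta):
    return np.sum(theta ** 2)            # constraint: ||theta||^2 <= b

b = 1.0              # constraint budget
eta = 0.1            # log-barrier weight; LB-SGD anneals this toward 0
theta = np.zeros(2)  # strictly feasible start: V_c(theta) = 0 < b

def barrier_grad(theta, noise=0.05):
    """Noisy gradient of the log-barrier objective
       B_eta(theta) = -V_r(theta) - eta * log(b - V_c(theta)).
    The Gaussian noise stands in for Monte Carlo gradient estimates."""
    g_r = -2.0 * (theta - 2.0) + noise * rng.standard_normal(theta.shape)
    g_c = 2.0 * theta + noise * rng.standard_normal(theta.shape)
    slack = b - V_c(theta)               # strictly positive while feasible
    return -g_r + eta * g_c / slack

for t in range(2000):
    g = barrier_grad(theta)
    slack = b - V_c(theta)
    # Heuristic slack-proportional step size: shrink steps near the
    # constraint boundary so iterates stay strictly feasible, mimicking
    # (not reproducing) LB-SGD's adaptive step-size rule.
    step = min(1e-2, 0.1 * slack / (np.linalg.norm(g) + 1e-8))
    theta = theta - step * g

print("theta:", theta)
print("V_c(theta) =", V_c(theta), "budget b =", b)
```

The two safety ingredients are visible here: the barrier term blows up at the constraint boundary, and the step size shrinks with the slack, so every iterate remains strictly inside the feasible set throughout learning.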
