Poster

Sampling-based Safe Reinforcement Learning for Nonlinear Dynamical Systems

Wesley A Suttle · Vipul Sharma · Krishna Chaitanya Kosaraju · Sivaranjani Seetharaman · Ji Liu · Vijay Gupta · Brian Sadler

MR1 & MR2 - Number 55

Abstract:

We develop provably safe and convergent reinforcement learning (RL) algorithms for control of nonlinear dynamical systems, bridging the gap between the hard safety guarantees of control theory and the convergence guarantees of RL theory. Recent advances at the intersection of control and RL follow a two-stage, safety filter approach to enforcing hard safety constraints: model-free RL is used to learn a potentially unsafe controller, whose actions are projected onto safe sets prescribed, for example, by a control barrier function. Though safe, such approaches lose any convergence guarantees enjoyed by the underlying RL methods. In this paper, we develop a single-stage, sampling-based approach to hard constraint satisfaction that learns RL controllers enjoying classical convergence guarantees while satisfying hard safety constraints throughout training and deployment. We validate the efficacy of our approach in simulation, including safe control of a quadcopter in a challenging obstacle avoidance problem, and demonstrate that it outperforms existing benchmarks.
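To make the contrasted two-stage baseline concrete: in a safety-filter pipeline, the RL policy's action is minimally corrected so it satisfies a control barrier function (CBF) condition. A minimal sketch of such a projection, under the simplifying (and hypothetical) assumption that the CBF condition reduces to a single linear half-space constraint on the action, so the projection has a closed form:

```python
import numpy as np

def safety_filter(u_rl, grad_h, h_x, alpha=1.0):
    """Project a possibly unsafe RL action onto the safe half-space
    {u : grad_h @ u + alpha * h_x >= 0}.

    This is a hypothetical, simplified CBF-style filter: `grad_h` stands in
    for the barrier gradient composed with the (assumed identity) actuation
    map, and `h_x` is the barrier value at the current state. Real filters
    typically solve a quadratic program over the full CBF condition.
    """
    slack = grad_h @ u_rl + alpha * h_x
    if slack >= 0:
        return u_rl  # action already satisfies the safety condition
    # Minimal correction: shift along grad_h just enough to reach the boundary
    return u_rl - slack * grad_h / (grad_h @ grad_h)
```

For example, with `grad_h = [1, 0]` and `h_x = 0.5`, the unsafe action `[-2, 1]` is moved to `[-0.5, 1]`, which sits exactly on the constraint boundary. The paper's point is that this post-hoc projection step is what breaks the convergence guarantees of the underlying RL method, motivating their single-stage alternative.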