Model-based Policy Optimization under Approximate Bayesian Inference

Chaoqi Wang · Yuxin Chen · Kevin Murphy

MR1 & MR2 - Number 27
Fri 3 May 8 a.m. PDT — 8:30 a.m. PDT
Oral presentation: RL & Optimization
Thu 2 May 1:30 a.m. PDT — 2:30 a.m. PDT


Model-based reinforcement learning (MBRL) algorithms offer exceptional potential to improve sample efficiency in online reinforcement learning (RL). Nevertheless, many prevalent MBRL algorithms do not adequately address the trade-off between exploration and exploitation. Posterior sampling reinforcement learning (PSRL) is a strategy that balances exploration and exploitation, although its theoretical guarantees are contingent on exact inference. In this paper, we show that adopting the same methodology as in exact PSRL can be suboptimal under approximate inference. Motivated by this analysis, we propose an improved factorization of the posterior distribution over policies that removes the conditional independence between the policy and the data given the model. Building on this factorization, we propose a general algorithmic framework for PSRL under approximate inference, together with a practical instantiation. Empirically, our algorithm surpasses baseline methods by a significant margin on both dense-reward and sparse-reward tasks from the DeepMind Control Suite, OpenAI Gym, and Metaworld benchmarks.
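To make the PSRL idea referenced above concrete, here is a minimal tabular sketch of the classic sample-then-plan loop: maintain a posterior over the transition model, draw one model per episode, and act greedily with respect to that sample. This is a toy illustration of standard PSRL with exact conjugate (Dirichlet) inference, not the paper's proposed algorithm; the MDP sizes, known rewards, and variable names are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite MDP (hypothetical sizes; rewards assumed known for simplicity).
n_states, n_actions, horizon = 3, 2, 10
rewards = rng.uniform(size=(n_states, n_actions))
counts = np.ones((n_states, n_actions, n_states))  # Dirichlet(1,...,1) prior

def sample_model(counts):
    """Draw one transition model P(s'|s,a) from the Dirichlet posterior."""
    return np.apply_along_axis(rng.dirichlet, -1, counts)

def plan(P, rewards, horizon):
    """Finite-horizon value iteration on the sampled model; greedy policy."""
    V = np.zeros(n_states)
    for _ in range(horizon):
        Q = rewards + P @ V          # Q[s,a] = r(s,a) + E_{s'~P}[V(s')]
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

# One PSRL episode: sample a model, plan on it, act, update the posterior.
P_true = np.apply_along_axis(rng.dirichlet, -1,
                             np.ones((n_states, n_actions, n_states)))
policy = plan(sample_model(counts), rewards, horizon)
s = 0
for _ in range(horizon):
    a = policy[s]
    s_next = rng.choice(n_states, p=P_true[s, a])
    counts[s, a, s_next] += 1        # conjugate posterior update
    s = s_next
```

The paper's point of departure is the inference step: when the posterior over models is only approximate, sampling a model and then planning as if it were exact (as this loop does) can be suboptimal, which motivates the proposed alternative posterior factorization over policies.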
