

Poster

Improved Exploration in Factored Average-Reward MDPs

Mohammad Sadegh Talebi · Anders Jonsson · Odalric Maillard

Keywords: [ Reinforcement Learning ]


Abstract: We consider a regret minimization task under the average-reward criterion in an unknown Factored Markov Decision Process (FMDP). More specifically, we consider an FMDP where the state-action space $\mathcal{X}$ and the state space $\mathcal{S}$ admit the respective factored forms $\mathcal{X}=\prod_{i=1}^{n}\mathcal{X}_i$ and $\mathcal{S}=\prod_{i=1}^{m}\mathcal{S}_i$, and where the transition and reward functions are factored over $\mathcal{X}$ and $\mathcal{S}$. Assuming a known factorization structure, we introduce a novel regret minimization strategy inspired by the popular UCRL strategy, called DBN-UCRL, which relies on Bernstein-type confidence sets defined for individual elements of the transition function. We show that for a generic factorization structure, DBN-UCRL achieves a regret bound whose leading term strictly improves over existing regret bounds in its dependence on the sizes of the $\mathcal{S}_i$'s and on the diameter. We further show that when the factorization structure corresponds to the Cartesian product of some base MDPs, the regret of DBN-UCRL is upper bounded by the sum of the regrets of the base MDPs. We demonstrate, through numerical experiments on standard environments, that DBN-UCRL empirically enjoys substantially lower regret than existing algorithms with frequentist regret guarantees.
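As a rough illustration (not the paper's implementation), the sketch below shows how Bernstein-type confidence intervals could be maintained for individual elements of a factored transition function, one table of counts per factor. The `counts` layout, the helper names, and the constants in the bound are assumptions made for the example; the exact confidence sets and constants in DBN-UCRL differ.

```python
import numpy as np

def bernstein_widths(p_hat, n, delta):
    """Empirical-Bernstein-style confidence widths, elementwise.

    p_hat: empirical transition probabilities (any shape)
    n:     sample counts, broadcastable against p_hat
    delta: confidence parameter

    Illustrative constants only; the exact bound in the paper differs.
    """
    n = np.maximum(n, 1)
    log_term = np.log(1.0 / delta)
    # variance-dependent main term plus a lower-order 1/n term,
    # the characteristic shape of Bernstein-type bounds
    return np.sqrt(2.0 * p_hat * (1.0 - p_hat) * log_term / n) + 3.0 * log_term / n

def factored_confidence_sets(counts, delta):
    """Per-factor confidence sets on individual transition probabilities.

    counts[i] is an array of shape (num_scope_values_i, |S_i|), where
    counts[i][z, s] is the number of observed transitions of factor i
    to value s when its parent scope took value z.  Returns, for each
    factor, the empirical probabilities and their confidence widths.
    """
    sets = []
    for N_i in counts:
        n_z = N_i.sum(axis=1, keepdims=True)   # visits per scope value z
        p_hat = N_i / np.maximum(n_z, 1)       # empirical element-wise probs
        sets.append((p_hat, bernstein_widths(p_hat, n_z, delta)))
    return sets

# Toy usage: two factors with |S_1| = 2, |S_2| = 3 and made-up counts.
counts = [np.array([[4, 6], [1, 0]]), np.array([[2, 3, 5]])]
for i, (p_hat, w) in enumerate(factored_confidence_sets(counts, delta=0.05)):
    print(f"factor {i}: p_hat =\n{p_hat}\nwidth =\n{w}")
```

Because each factor is estimated from its own scope counts, the widths shrink at the rate of the per-factor sample sizes rather than the size of the joint state-action space, which is the intuition behind the improved dependence on the $\mathcal{S}_i$'s claimed in the abstract.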
