

Poster

Feasible Q-Learning for Average Reward Reinforcement Learning

Ying Jin · Ramki Gummadi · Zhengyuan Zhou · Jose Blanchet

MR1 & MR2 - Number 45

Abstract: Average reward reinforcement learning (RL) provides a suitable framework for capturing the objective (i.e., long-run average reward) of continuing tasks, where there is often no natural way to identify a discount factor. However, existing average-reward RL algorithms with sample complexity guarantees are not feasible, as they take as input the (unknown) mixing time of the Markov decision process (MDP). In this paper, we make initial progress toward addressing this open problem. We design a feasible average-reward Q-learning framework that requires no knowledge of any problem parameter as input. Our framework is based on discounted Q-learning, where we dynamically adapt the discount factor (and hence the effective horizon) to progressively approximate the average reward. In the synchronous setting, we solve three tasks: (i) learn a policy that is ε-close to optimal, (ii) estimate the optimal average reward to ε-accuracy, and (iii) estimate the bias function (the analogue of the Q-function in the discounted case) to ε-accuracy. We show that with carefully designed adaptation schemes, (i) can be achieved with Õ(SA t_mix^8 / ε^8) samples, (ii) with Õ(SA t_mix^5 / ε^5) samples, and (iii) with Õ(SAB / ε^9) samples, where t_mix is the mixing time and B > 0 is an MDP-dependent constant. To our knowledge, these are the first finite-sample guarantees for a feasible variant of Q-learning that are polynomial in S, A, t_mix, and 1/ε. That said, the sample complexity bounds have tremendous room for improvement, which we leave for the community's best minds. Preliminary simulations verify that our framework is effective without prior knowledge of parameters as input.
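The key idea behind the framework — approximating the average reward through discounted problems with a discount factor driven toward 1 — can be illustrated with a minimal sketch. The code below is not the authors' algorithm: it uses exact value iteration on a small hypothetical MDP (known transition and reward matrices) rather than sample-based Q-learning, and simply shows the classical relation that (1 − γ)·V*_γ(s) converges to the optimal average reward ρ* as γ → 1, which motivates adapting γ.

```python
import numpy as np

# Toy 2-state, 2-action MDP (hypothetical example, not from the paper).
# P[s, a, s'] = transition probability, R[s, a] = expected reward.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # transitions from state 0
    [[0.5, 0.5], [0.1, 0.9]],   # transitions from state 1
])
R = np.array([
    [1.0, 0.0],
    [0.0, 2.0],
])

def discounted_value_iteration(P, R, gamma, tol=1e-10):
    """Standard value iteration for the gamma-discounted problem."""
    S = P.shape[0]
    V = np.zeros(S)
    while True:
        Q = R + gamma * (P @ V)         # Q[s, a]; P @ V sums over s'
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

# Push gamma toward 1: (1 - gamma) * V*_gamma(s) approaches the
# optimal average reward rho* (here 16/9, attained by always taking
# action 1) in every state.
for gamma in [0.9, 0.99, 0.999]:
    V = discounted_value_iteration(P, R, gamma)
    print(f"gamma = {gamma}: (1 - gamma) * V = {(1 - gamma) * V}")
```

The paper's contribution is doing this adaptation of γ feasibly and model-free, with samples rather than a known model, which is where the stated complexities in t_mix, ε, and B arise.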
