

Poster

Near-optimal Policy Optimization Algorithms for Learning Adversarial Linear Mixture MDPs

Jiafan He · Dongruo Zhou · Quanquan Gu


Abstract: Learning Markov decision processes (MDPs) in the presence of an adversary is a challenging problem in reinforcement learning (RL). In this paper, we study RL in episodic MDPs with adversarial rewards and full-information feedback, where the unknown transition probability function is a linear function of a given feature mapping, and the reward function can change arbitrarily from episode to episode. We propose an optimistic policy optimization algorithm, POWERS, and show that it achieves $\tilde{O}(dH\sqrt{T})$ regret, where $H$ is the length of each episode, $T$ is the number of interactions with the MDP, and $d$ is the dimension of the feature mapping. Furthermore, we prove a matching lower bound of $\tilde{\Omega}(dH\sqrt{T})$ up to logarithmic factors. Our key technical contributions are two-fold: (1) a new value function estimator based on importance weighting; and (2) a tighter confidence set for the transition kernel. Together, they lead to the nearly minimax optimal regret.
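The abstract assumes the linear mixture MDP model, in which the transition probability is a linear function of a known feature mapping. The sketch below is a minimal, hypothetical illustration of that model class only; it is not the paper's POWERS algorithm, and the feature tensor, dimensions, and normalization step are assumptions made for the toy example.

```python
# Minimal sketch of a linear mixture MDP transition model (assumed toy example):
# P(s' | s, a) = <phi(s' | s, a), theta*>, where phi is a known d-dimensional
# feature mapping and theta* is the unknown parameter the learner must estimate.
# This is NOT the paper's POWERS algorithm; it only illustrates the model class.
import numpy as np

rng = np.random.default_rng(0)

num_states, num_actions, d = 4, 2, 3           # small tabular example (assumed sizes)
# Known feature tensor phi[s, a, s'] in R^d (hypothetical random features).
phi = rng.random((num_states, num_actions, num_states, d))
# Unknown ground-truth parameter theta* (in the learning problem, unobserved).
theta_star = rng.random(d)

def transition_probs(s, a):
    """Return P(. | s, a) = <phi(. | s, a), theta*>, normalized into a distribution.

    In a true linear mixture MDP the inner products already form a valid
    distribution; the explicit normalization here is only to keep this
    randomly generated toy example well defined.
    """
    weights = phi[s, a] @ theta_star           # shape (num_states,)
    return weights / weights.sum()

# Usage: sample a next state under an arbitrary state-action pair.
p = transition_probs(s=0, a=1)
next_state = rng.choice(num_states, p=p)
print("P(.|s=0,a=1) =", np.round(p, 3), "sampled next state:", next_state)
```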
