A Statistical Analysis of Polyak-Ruppert-Averaged Q-Learning

Xiang Li · Wenhao Yang · Jiadong Liang · Zhihua Zhang · Michael Jordan

Auditorium 1 Foyer 61

Abstract: We study Q-learning with Polyak-Ruppert averaging (a.k.a., averaged Q-learning) in a $\gamma$-discounted MDP under synchronous and tabular settings. Under a Lipschitz condition, we establish a functional central limit theorem (FCLT) for the averaged iterate $\bar{\boldsymbol{Q}}_T$ and show that its standardized partial-sum process converges weakly to a rescaled Brownian motion. The FCLT implies a fully online inference method for RL. Furthermore, we show that $\bar{\boldsymbol{Q}}_T$ is actually a regular asymptotically linear (RAL) estimator for the optimal Q-value function $\boldsymbol{Q}^*$ whose influence function is the most efficient one. We present a nonasymptotic analysis of the $\ell_{\infty}$ error, $\mathbb{E}\|\bar{\boldsymbol{Q}}_T-\boldsymbol{Q}^*\|_{\infty}$, showing that it matches the instance-dependent lower bound for polynomial step sizes. Similar results are provided for entropy-regularized Q-learning without the Lipschitz condition.
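The procedure studied in the abstract can be sketched as follows: run synchronous tabular Q-learning with a polynomial step size and maintain a running (Polyak-Ruppert) average of the iterates. The toy MDP (2 states, 2 actions), the specific step-size exponent, and the sampling routine below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Sketch of synchronous tabular Q-learning with Polyak-Ruppert averaging.
# The random MDP and step-size schedule here are assumptions for illustration.
rng = np.random.default_rng(0)
n_states, n_actions, gamma = 2, 2, 0.9

# Random transition kernel P[s, a, s'] and reward table R[s, a] (assumed).
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(size=(n_states, n_actions))

Q = np.zeros((n_states, n_actions))
Q_bar = np.zeros_like(Q)  # the Polyak-Ruppert average \bar{Q}_T

T = 5000
for t in range(1, T + 1):
    eta = t ** -0.75  # polynomial step size (exponent chosen for illustration)
    # Synchronous setting: draw one next state for every (s, a) pair.
    next_s = np.array([[rng.choice(n_states, p=P[s, a])
                        for a in range(n_actions)] for s in range(n_states)])
    target = R + gamma * Q[next_s].max(axis=-1)  # empirical Bellman target
    Q += eta * (target - Q)                      # standard Q-learning update
    Q_bar += (Q - Q_bar) / t                     # running average of iterates

# Q_bar approximates the optimal Q-value function Q*.
print(np.round(Q_bar, 3))
```

With rewards in $[0, 1]$, every entry of `Q_bar` stays within $[0, 1/(1-\gamma)]$, the standard bound on the optimal Q-values.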