

Poster

Finite time analysis of temporal difference learning with linear function approximation: Tail averaging and regularisation

Gandharv Patil · Prashanth L.A. · Dheeraj Nagaraj · Doina Precup

Auditorium 1 Foyer 52

Abstract:

We study the finite-time behaviour of the popular temporal difference (TD) learning algorithm when combined with tail-averaging. We derive finite-time bounds on the parameter error of the tail-averaged TD iterate under a step-size choice that does not require information about the eigenvalues of the matrix underlying the projected TD fixed point. Our analysis shows that tail-averaged TD converges at the optimal O(1/t) rate, both in expectation and with high probability. In addition, our bounds exhibit a sharper rate of decay for the initial error (bias), which is an improvement over averaging all iterates. We also propose and analyse a variant of TD that incorporates regularisation, and show that this variant fares favourably in problems with ill-conditioned features.
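For context, tail averaging means returning the average of only the last iterates, e.g. theta_bar = (1/(t-k)) * sum_{i=k+1}^{t} theta_i for some cutoff k, rather than averaging from the first iterate. The paper's precise algorithm, step sizes, and constants are in the full text; the sketch below is only a minimal illustration of tail-averaged TD(0) with linear function approximation and an optional ridge-style regularisation term. All names here (env_samples, phi, reg_lambda, tail_start_frac) are illustrative assumptions, not the authors' notation.

```python
import numpy as np

def tail_averaged_td(env_samples, phi, d, alpha=0.1, gamma=0.99,
                     reg_lambda=0.0, tail_start_frac=0.5):
    """Sketch of tail-averaged TD(0) with linear function approximation.

    env_samples: iterable of (s, r, s_next) transitions
    phi: feature map, state -> np.ndarray of shape (d,)
    reg_lambda: regularisation strength (0 recovers plain TD; hypothetical
                stand-in for the paper's regularised variant)
    tail_start_frac: fraction of early iterates discarded before averaging
    """
    theta = np.zeros(d)
    iterates = []
    for (s, r, s_next) in env_samples:
        # TD(0) error for the observed transition
        delta = r + gamma * phi(s_next) @ theta - phi(s) @ theta
        # Semi-gradient update; the -reg_lambda * theta term is the
        # ridge-style regularisation (assumed form, not the paper's exact one)
        theta = theta + alpha * (delta * phi(s) - reg_lambda * theta)
        iterates.append(theta.copy())
    # Tail averaging: average only the later iterates, which damps the
    # initial error (bias) faster than averaging from the start
    k = int(len(iterates) * tail_start_frac)
    return np.mean(iterates[k:], axis=0)
```

Averaging only the tail is what yields the sharper decay of the initial error that the abstract highlights: early iterates, which are still dominated by the starting point, are excluded from the final estimate.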
