

Poster

Sample Complexity of Kernel-Based Q-Learning

Sing-Yuan Yeh · Fu-Chieh Chang · Chang-Wei Yueh · Pei-Yuan Wu · Alberto Bernacchia · Sattar Vakili

Auditorium 1 Foyer 51

Abstract:

Modern reinforcement learning (RL) often faces an enormous state-action space. Existing analytical results are typically for settings with a small number of state-actions, or for simple models such as linearly modeled Q functions. To derive statistically efficient RL policies that handle large state-action spaces with more general Q functions, some recent works have considered nonlinear function approximation using kernel ridge regression. In this work, we derive sample complexities for kernel-based Q-learning when a generative model exists. We propose a non-parametric Q-learning algorithm that finds an ε-optimal policy in an arbitrarily large-scale discounted MDP. The sample complexity of the proposed algorithm is order optimal with respect to ε and the complexity of the kernel (in terms of its information gain or effective dimension). To the best of our knowledge, this is the first result showing a finite sample complexity under such a general model.
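The abstract does not spell out the algorithm, but the ingredients it names (a generative model, Q-learning, and kernel ridge regression) can be illustrated with a generic fitted Q-iteration sketch. The code below is a minimal toy illustration, not the authors' method: the MDP, the functions gen_model and rbf_kernel, and all parameter values are hypothetical choices made for the example. It fits a Q estimate by repeatedly regressing Bellman targets with kernel ridge regression on samples drawn from a simulator.

```python
import numpy as np

# --- Hypothetical toy MDP with a generative model (simulator) ---
# States and actions live in [0, 1]; gen_model(s, a) returns (next_state, reward).
rng = np.random.default_rng(0)
gamma = 0.9  # discount factor

def gen_model(s, a):
    """Generative model: sample a next state and a reward for (s, a)."""
    next_s = np.clip(s + 0.1 * (a - 0.5) + 0.05 * rng.normal(), 0.0, 1.0)
    reward = 1.0 - abs(next_s - 0.8)  # reward peaks near state 0.8
    return next_s, reward

def rbf_kernel(X, Y, lengthscale=0.2):
    """Squared-exponential kernel on stacked (state, action) pairs."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

# --- Kernel ridge regression fitted Q-iteration sketch ---
n, lam, n_iters = 200, 1e-2, 50
actions = np.linspace(0, 1, 11)

# Draw (state, action) design points and query the generative model once each.
S = rng.uniform(0, 1, n)
A = rng.choice(actions, n)
SA = np.column_stack([S, A])
next_S, R = zip(*(gen_model(s, a) for s, a in zip(S, A)))
next_S, R = np.array(next_S), np.array(R)

K = rbf_kernel(SA, SA)
alpha = np.zeros(n)  # dual weights representing the current Q estimate

def q_values(alpha, states, acts):
    """Evaluate the kernel-ridge Q estimate at given (state, action) pairs."""
    Z = np.column_stack([states, acts])
    return rbf_kernel(Z, SA) @ alpha

for _ in range(n_iters):
    # Bellman targets: r + gamma * max_a' Q(s', a')
    q_next = np.stack([q_values(alpha, next_S, np.full(n, a)) for a in actions])
    targets = R + gamma * q_next.max(axis=0)
    # Kernel ridge regression toward the targets
    alpha = np.linalg.solve(K + lam * np.eye(n), targets)

# Greedy policy with respect to the learned Q at a test state
s_test = 0.5
best_a = actions[np.argmax([q_values(alpha, np.array([s_test]), np.array([a]))[0]
                            for a in actions])]
print(f"Greedy action at state {s_test}: {best_a:.2f}")
```

In this sketch the regularization parameter lam and the kernel lengthscale play the role of the kernel complexity the abstract refers to; the paper's sample-complexity bounds are stated in terms of the kernel's information gain or effective dimension rather than these toy quantities.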
