Gradient Descent in RKHS with Importance Labeling

Tomoya Murata · Taiji Suzuki

Keywords: [ Models and Methods ] [ Kernel Methods ]

[ Abstract ]
Tue 13 Apr 6:30 p.m. PDT — 8:30 p.m. PDT


Labeling cost is often expensive and is a fundamental limitation of supervised learning. In this paper, we study the importance labeling problem: given many unlabeled data points, we select a limited number of them to be labeled, and a learning algorithm is then run on the selected subset. We propose a new importance labeling scheme that effectively selects an informative subset of unlabeled data for least squares regression in Reproducing Kernel Hilbert Spaces (RKHS). We analyze the generalization error of gradient descent combined with our labeling scheme and show that the proposed algorithm achieves the optimal rate of convergence in much wider settings than the usual uniform sampling scheme, and in particular attains much better generalization ability in small-noise settings. Numerical experiments verify our theoretical findings.
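To make the setting concrete, the following is a minimal sketch of an importance labeling pipeline for kernel least squares, not the paper's exact scheme: importance weights are taken to be ridge leverage scores (a standard choice for non-uniform sampling in kernel regression), a label budget is spent on points sampled proportionally to those weights, and gradient descent is run on the resulting kernel least-squares objective. The RBF kernel, bandwidth, regularization level, and noise level are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF (Gaussian) kernel matrix; gamma is an assumed bandwidth.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)

# Unlabeled pool; labels exist but are only revealed for queried points.
n, budget, lam = 200, 40, 1e-2
X = rng.uniform(-3, 3, size=(n, 1))
f_true = np.sin(X).ravel()  # hypothetical noiseless target

K = rbf_kernel(X, X)

# Ridge leverage scores as importance weights:
#   tau_i = [K (K + n*lam*I)^{-1}]_{ii}
tau = np.diag(K @ np.linalg.solve(K + n * lam * np.eye(n), np.eye(n)))
p = tau / tau.sum()

# Spend the label budget on points sampled proportionally to the scores.
idx = rng.choice(n, size=budget, replace=False, p=p)
y = f_true[idx] + 0.05 * rng.standard_normal(budget)  # small-noise labels

# Gradient descent on the kernel least-squares objective over the
# selected points: min_a (1/(2m)) * ||K_s a - y||^2, with m = budget.
Ks = K[np.ix_(idx, idx)]
a = np.zeros(budget)
eta = budget / np.linalg.eigvalsh(Ks).max() ** 2  # step size 1/L
for _ in range(2000):
    grad = Ks @ (Ks @ a - y) / budget
    a -= eta * grad

# Predict on the full pool; error is measured against the noiseless target.
pred = K[:, idx] @ a
mse = np.mean((pred - f_true) ** 2)
print(mse)
```

Early-stopped gradient descent itself acts as the regularizer here, which mirrors the analyzed setting of gradient descent in RKHS; uniform sampling would correspond to replacing `p` with the uniform distribution.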
