

Error bounds for any regression model using Gaussian processes with gradient information

Rafael Savvides · Hoang Phuc Hau Luu · Kai Puolamäki

MR1 & MR2 - Number 147
Thu 2 May 8 a.m. PDT — 8:30 a.m. PDT


We provide an upper bound for the expected quadratic loss on new data for any regression model. We derive the bound by modelling the underlying function with a Gaussian process (GP). Instead of a single kernel or a family of kernels of the same form, we consider all GPs with translation-invariant, continuously twice differentiable kernels that have bounded signal variance and bounded prior covariance of the gradient. To obtain a bound for the expected posterior loss, we present bounds for the posterior variance and the squared bias. The squared bias bound depends on the regression model used, which can be arbitrary and need not be based on GPs. The bounds scale well with data size, in contrast to computing the GP posterior by a Cholesky factorisation of a large matrix. More importantly, our bounds do not require strong prior knowledge, as we do not specify the exact kernel form. We validate our theoretical findings by numerical experiments and show that the bounds have applications in uncertainty estimation and concept drift detection.
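To make the variance-plus-squared-bias structure of the bound concrete, here is a minimal numerical sketch. It is not the paper's kernel-free bound: it fixes a single RBF kernel (an assumption), computes the exact GP posterior at one test point, and checks by Monte Carlo that, under that posterior, the expected quadratic loss of an arbitrary (non-GP) regression model equals the posterior variance plus the squared bias. All data, the kernel, and the model `f_hat` are hypothetical choices for the demo.

```python
import numpy as np

def rbf(a, b, ell=0.5, sigma_f=1.0):
    """Translation-invariant RBF kernel (an assumed, fixed choice)."""
    d = a[:, None] - b[None, :]
    return sigma_f**2 * np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(0)
x_train = np.linspace(-1.0, 1.0, 8)
y_train = np.sin(3.0 * x_train) + 0.05 * rng.standard_normal(8)
noise_var = 0.05**2

# GP posterior at a single test point x* (standard Cholesky-based solve).
K = rbf(x_train, x_train) + noise_var * np.eye(8)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))

x_star = np.array([0.3])
k_star = rbf(x_train, x_star)
mean = (k_star.T @ alpha).item()                 # posterior mean at x*
v = np.linalg.solve(L, k_star)
var = (rbf(x_star, x_star) - v.T @ v).item()     # posterior variance at x*

# An arbitrary regression model, NOT based on GPs (here: a linear predictor).
f_hat = (0.7 * x_star).item()

# Expected quadratic loss under the posterior = variance + squared bias.
bound = var + (mean - f_hat) ** 2

# Monte Carlo check: sample f(x*) from the posterior, average the loss.
samples = mean + np.sqrt(var) * rng.standard_normal(200_000)
mc_loss = np.mean((samples - f_hat) ** 2)
print(f"variance + bias^2 = {bound:.5f}, Monte Carlo loss = {mc_loss:.5f}")
```

Note that the posterior-variance term is model-independent, while the squared-bias term is where the (arbitrary) regression model enters; the paper replaces the exact kernel-specific quantities above with bounds valid over the whole kernel class.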
