

Poster

Scalable Meta-Learning with Gaussian Processes

Petru Tighineanu · Lukas Grossberger · Paul Baireuther · Kathrin Skubch · Stefan Falkner · Julia Vinogradska · Felix Berkenkamp

Multipurpose Room 2 - Number 169

Abstract:

Meta-learning is a powerful approach that exploits historical data to quickly solve new tasks from the same distribution. In the low-data regime, methods based on the closed-form posterior of Gaussian processes (GPs) together with Bayesian optimization have achieved high performance. However, these methods are either computationally expensive or introduce assumptions that hinder a principled propagation of uncertainty between task models. This may disrupt the balance between exploration and exploitation during optimization. In this paper, we develop ScaML-GP, a modular GP model for meta-learning that is scalable in the number of tasks. Our core contribution is a carefully designed multi-task kernel that enables hierarchical training and task scalability. Conditioning ScaML-GP on the meta-data exposes its modular nature, yielding a test-task prior that combines the posteriors of meta-task GPs. In synthetic and real-world meta-learning experiments, we demonstrate that ScaML-GP can learn efficiently with both few and many meta-tasks.
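The abstract's central idea, a test-task prior assembled from the posteriors of independently fitted meta-task GPs, can be illustrated with a minimal sketch. This is not the paper's implementation: the weighted-sum combination, the fixed weights `w`, and helpers such as `gp_posterior` are assumptions made for illustration, with `w` standing in for hyperparameters that the actual model would learn during hierarchical training.

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X_train, y_train, X_test, noise=1e-2):
    """Exact GP posterior mean and covariance on X_test (zero prior mean)."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf(X_train, X_test)
    Kss = rbf(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    cov = Kss - v.T @ v
    return mean, cov

# Hypothetical meta-data: a few small related 1-D tasks.
rng = np.random.default_rng(0)
meta_tasks = []
for shift in [0.0, 0.3, -0.2]:
    X = rng.uniform(-3, 3, size=8)
    y = np.sin(X + shift) + 0.05 * rng.standard_normal(8)
    meta_tasks.append((X, y))

X_test = np.linspace(-3, 3, 50)

# Each meta-task contributes an independent GP posterior, so the cost
# scales linearly in the number of tasks rather than jointly over all data.
posteriors = [gp_posterior(X, y, X_test) for X, y in meta_tasks]

# Test-task prior: weighted combination of meta-task posteriors plus a
# residual kernel that lets the test task deviate from the meta-tasks.
# The weights are fixed here; in the model they would be learned.
w = np.array([0.5, 0.3, 0.2])
prior_mean = sum(wi * mu for wi, (mu, _) in zip(w, posteriors))
prior_cov = sum(wi**2 * cov for wi, (_, cov) in zip(w, posteriors))
prior_cov += rbf(X_test, X_test)  # residual GP for test-task novelty

print("test-task prior mean:", prior_mean[:3])
print("test-task prior var :", np.diag(prior_cov)[:3])
```

Because each meta-task GP is conditioned on its own data only, adding a meta-task adds one independent posterior computation; the resulting prior still propagates each task's uncertainty into the test task, which is the property the abstract emphasizes for balancing exploration and exploitation.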
