

Poster

Meta-learning Task-specific Regularization Weights for Few-shot Linear Regression

Tomoharu Iwata · Danqi Liao · Yasutoshi Ida


Abstract:

We propose a few-shot learning method for linear regression, which learns how to choose regularization weights from multiple tasks with different feature spaces, and uses this knowledge for unseen tasks. Linear regression is ubiquitous in a wide variety of fields. Although regularization weight tuning is crucial to performance, it is difficult when only a small amount of training data is available. In the proposed method, task-specific regularization weights are generated by a neural network-based model that takes a task-specific training dataset as input, where the model is shared across all tasks. For each task, linear coefficients are optimized by minimizing the squared loss with an L2 regularizer, using the generated regularization weights and the training dataset. Our model is meta-learned by minimizing the expected test error of linear regression with the task-specific coefficients over various training datasets. In experiments on synthetic and real-world datasets, we demonstrate the effectiveness of the proposed method on few-shot regression tasks compared with existing methods.
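The per-task inner step described above (L2-regularized least squares with generated weights) has a closed form. The following is a minimal NumPy sketch, not the authors' implementation: `weight_model` is a hypothetical stand-in for the paper's meta-learned neural network, mapping permutation-invariant statistics of the task's training set to positive per-feature regularization weights, after which the task-specific coefficients are obtained in closed form.

```python
import numpy as np

def ridge_fit(X, y, lam):
    # Closed-form L2-regularized least squares with per-feature weights:
    # w = (X^T X + diag(lam))^{-1} X^T y
    return np.linalg.solve(X.T @ X + np.diag(lam), X.T @ y)

def weight_model(X, y, theta):
    # Hypothetical stand-in for the neural network-based model:
    # maps simple permutation-invariant statistics of the training
    # set to per-feature regularization weights. Softplus keeps the
    # weights positive. theta has assumed shape (2d + 2, d).
    stats = np.concatenate([X.mean(axis=0), X.std(axis=0),
                            [y.mean(), y.std()]])
    return np.log1p(np.exp(stats @ theta))

rng = np.random.default_rng(0)
d, n = 5, 8                              # few-shot: n is small
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

theta = 0.01 * rng.normal(size=(2 * d + 2, d))
lam = weight_model(X, y, theta)          # task-specific regularization weights
w_hat = ridge_fit(X, y, lam)             # task-specific linear coefficients
```

In the paper, `theta` would be meta-learned across tasks by differentiating the test error of `w_hat` with respect to the model's parameters; here it is simply a random initialization to keep the sketch self-contained.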
