Poster

Fast Adaptation with Linearized Neural Networks

Wesley Maddox · Shuai Tang · Pablo Moreno · Andrew Gordon Wilson · Andreas Damianou

Keywords: [ Models and Methods ] [ Gaussian Processes ]

Tue 13 Apr 2 p.m. PDT — 4 p.m. PDT

Abstract:

The inductive biases of trained neural networks are difficult to understand and, consequently, to adapt to new settings. We study the inductive biases of linearizations of neural networks, which we show to be surprisingly good summaries of the full network functions. Inspired by this finding, we propose a technique for embedding these inductive biases into Gaussian processes through a kernel designed from the Jacobian of the network. In this setting, domain adaptation takes the form of interpretable posterior inference, with accompanying uncertainty estimation. This inference is analytic and free of local optima issues found in standard techniques such as fine-tuning neural network weights to a new task. We develop significant computational speed-ups based on matrix multiplies, including a novel implementation for scalable Fisher vector products. Our experiments on both image classification and regression demonstrate the promise and convenience of this framework for transfer learning, compared to neural network fine-tuning.
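The kernel described in the abstract is built from the network's Jacobian with respect to its parameters, so that GP regression with this kernel performs Bayesian inference in the linearized model. Below is a minimal numpy sketch of that idea for a toy one-hidden-layer network; the network, data, and all names here are illustrative stand-ins, not the paper's implementation (which uses matrix-multiply-based speed-ups and Fisher vector products for scale).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pretrained" scalar-output MLP. Random weights stand in for a
# trained network; the method assumes training has already happened.
d, h = 1, 16
W1 = rng.normal(size=(h, d))
b1 = rng.normal(size=h)
w2 = rng.normal(size=h) / np.sqrt(h)
b2 = 0.0

def forward(x):
    return w2 @ np.tanh(W1 @ x + b1) + b2

def jacobian(x):
    """Gradient of the scalar output w.r.t. all parameters, flattened."""
    a = np.tanh(W1 @ x + b1)
    da = (1 - a**2) * w2              # backprop through tanh
    return np.concatenate([np.outer(da, x).ravel(),  # dW1
                           da,                       # db1
                           a,                        # dw2
                           np.array([1.0])])         # db2

def jacobian_kernel(X1, X2):
    """k(x, x') = J(x) J(x')^T, the finite-network Jacobian kernel."""
    J1 = np.stack([jacobian(x) for x in X1])
    J2 = np.stack([jacobian(x) for x in X2])
    return J1 @ J2.T

# New-task data: adaptation is exact GP posterior inference, no fine-tuning.
Xtr = rng.uniform(-2, 2, size=(20, d))
ytr = np.sin(2 * Xtr[:, 0]) + 0.05 * rng.normal(size=20)
Xte = np.linspace(-2, 2, 50).reshape(-1, d)
noise = 1e-2

Ktt = jacobian_kernel(Xtr, Xtr) + noise * np.eye(len(Xtr))
Kst = jacobian_kernel(Xte, Xtr)
Kss = jacobian_kernel(Xte, Xte)

# Regress on residuals around the network's own predictions, so the
# linearization is centred at the trained weights.
ftr = np.array([forward(x) for x in Xtr])
fte = np.array([forward(x) for x in Xte])
mean = fte + Kst @ np.linalg.solve(Ktt, ytr - ftr)
var = np.diag(Kss - Kst @ np.linalg.solve(Ktt, Kst.T))
```

Note the closed-form posterior `mean` and `var`: adaptation here is a linear solve rather than gradient-based fine-tuning, which is what gives the analytic, local-optima-free inference the abstract refers to.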
