Poster

Adapting to Online Distribution Shifts in Deep Learning: A Black-Box Approach

Xinshuai Dong · Pedro Mercado · Danqi Liao · Tianyi Zhou


Abstract: We study the well-motivated problem of online distribution shift, in which data arrive in batches and the distribution of each batch can change arbitrarily over time. Since shifts can be large or small, abrupt or gradual, the length of the relevant historical data to learn from may vary over time, which poses a major challenge in designing algorithms that can automatically adapt to the best "attention span" while remaining computationally efficient. We propose a meta-algorithm that takes any network architecture and any Online Learner (OL) algorithm as input and produces a new algorithm that provably enhances the performance of the given OL under non-stationarity. Our algorithm is efficient (it maintains only O(log T) OL instances) and adaptive (it automatically chooses OL instances with the ideal "attention" length at every timestamp). Experiments on various real-world datasets across text and image modalities show that our method consistently improves the accuracy of user-specified OL algorithms on classification tasks. Key novel algorithmic ingredients include a multi-resolution instance design inspired by wavelet theory and a cross-validation-through-time technique. Both could be of independent interest.
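To make the efficiency and adaptivity claims concrete, here is a minimal illustrative sketch (not the paper's actual algorithm) of the general idea: maintain O(log T) online-learner instances with geometrically spaced "attention spans" and, at each step, defer to the instance with the lowest recent loss, loosely mimicking cross-validation through time. The `MultiResolutionOL` class, the `make_learner` factory, and the EMA-based selection rule are all assumptions introduced for illustration.

```python
import math

class MultiResolutionOL:
    """Illustrative sketch: online learners at spans 2^0, 2^1, ..., so only
    O(log T) instances are kept; the best-performing span is used to predict."""

    def __init__(self, make_learner, horizon):
        # One instance per attention span 2^k, for 2^k <= horizon.
        self.spans = [2 ** k for k in range(int(math.log2(horizon)) + 1)]
        self.make_learner = make_learner
        self.learners = [make_learner() for _ in self.spans]
        self.losses = [0.0] * len(self.spans)  # running loss per instance
        self.t = 0

    def predict(self, x):
        # Defer to the span whose instance has the smallest recent loss.
        best = min(range(len(self.spans)), key=lambda i: self.losses[i])
        return self.learners[best].predict(x)

    def update(self, x, y, loss_fn):
        self.t += 1
        for i, span in enumerate(self.spans):
            pred = self.learners[i].predict(x)
            # Exponential moving average of loss, smoothed over ~one span.
            alpha = 1.0 / span
            self.losses[i] = (1 - alpha) * self.losses[i] + alpha * loss_fn(pred, y)
            self.learners[i].update(x, y)
            if self.t % span == 0:
                # Restart so this instance remembers only the last `span` batches.
                self.learners[i] = self.make_learner()
```

With horizon T = 16 this keeps five instances (spans 1, 2, 4, 8, 16): short spans recover quickly from abrupt shifts, long spans average out noise under gradual drift, and the loss-based selection chooses between them automatically.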
