Poster
Theory of Agreement-on-the-Line in Linear Models and Gaussian Data
Asfandyar Azhar · Kun Zhang · Alexander Hauptmann
Under distribution shifts, deep networks exhibit a surprising phenomenon: in-distribution (ID) and out-of-distribution (OOD) accuracy are often strongly linearly correlated across architectures and hyperparameters, and the same linear trend holds for ID versus OOD agreement between the predictions of any pair of such independently trained networks. The latter phenomenon, called "agreement-on-the-line", enables precise estimation of OOD performance without labeled OOD data. In this work, we discover that agreement-on-the-line emerges even in linear classifiers over Gaussian class-conditional distributions. We provide theoretical guarantees for this phenomenon in classifiers optimized via gradient descent from random initialization, which we approximate by linear interpolations between random vectors and the Bayes-optimal classifier. We further prove a lower bound on the residual of the ID versus OOD agreement correlation that grows proportionally with the residual of the accuracy correlation. Real-world experiments on CIFAR10C shifts validate our findings and the broader relevance of our theoretical framework.
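The setting described above can be illustrated with a minimal simulation (a sketch under stated assumptions, not the authors' construction): two Gaussian classes with means ±μ and isotropic noise, an OOD shift modeled as an increase in the noise scale, and classifiers formed as interpolations between random directions and the Bayes-optimal weight vector μ. The specific dimensions, noise levels, and interpolation range below are illustrative choices.

```python
import numpy as np

# Illustrative simulation of agreement-on-the-line in a Gaussian setting.
# Assumed setup: symmetric two-class Gaussian mixture with means +/- mu,
# isotropic noise; the OOD shift scales the noise level.
rng = np.random.default_rng(0)
d, n = 50, 20000
mu = np.ones(d) / np.sqrt(d)  # unit class-mean direction; Bayes-optimal weights

def sample(n_pts, noise):
    """Draw labeled points from the symmetric Gaussian mixture."""
    y = rng.integers(0, 2, n_pts) * 2 - 1
    X = y[:, None] * mu + noise * rng.standard_normal((n_pts, d))
    return X, y

X_id, y_id = sample(n, 1.0)    # in-distribution test set
X_ood, y_ood = sample(n, 2.0)  # OOD: larger noise scale (a covariance shift)

# Family of classifiers: (1 - t) * random direction + t * Bayes-optimal mu,
# mimicking linear interpolations between random vectors and the optimum
ws = []
for _ in range(30):
    r = rng.standard_normal(d)
    r /= np.linalg.norm(r)
    t = rng.uniform(0.2, 0.9)
    ws.append((1 - t) * r + t * mu)

preds_id = [np.sign(X_id @ w) for w in ws]
preds_ood = [np.sign(X_ood @ w) for w in ws]
acc_id = [np.mean(p == y_id) for p in preds_id]
acc_ood = [np.mean(p == y_ood) for p in preds_ood]

# Pairwise agreement rates between distinct classifiers
agr_id, agr_ood = [], []
for i in range(len(ws)):
    for j in range(i + 1, len(ws)):
        agr_id.append(np.mean(preds_id[i] == preds_id[j]))
        agr_ood.append(np.mean(preds_ood[i] == preds_ood[j]))

r_acc = np.corrcoef(acc_id, acc_ood)[0, 1]  # accuracy-on-the-line
r_agr = np.corrcoef(agr_id, agr_ood)[0, 1]  # agreement-on-the-line
print(f"ID-vs-OOD accuracy correlation:  {r_acc:.3f}")
print(f"ID-vs-OOD agreement correlation: {r_agr:.3f}")
```

With this kind of shift, both correlations come out strongly positive: ID and OOD accuracy lie close to a line, and so do the pairwise agreement rates, which is what makes unlabeled OOD performance estimation possible.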