Poster

Learn to Expect the Unexpected: Probably Approximately Correct Domain Generalization

Vikas Garg · Adam Tauman Kalai · Katrina Ligett · Steven Wu

Keywords: [ Learning Theory and Statistics ] [ Computational Learning Theory ]


Abstract:

Domain generalization is the problem of machine learning when the training data and the test data come from different "domains" (data distributions). We propose an elementary theoretical model of the domain generalization problem, introducing the concept of a meta-distribution over domains. In our model, the training data available to a learning algorithm consist of multiple datasets, each from a single domain, drawn in turn from the meta-distribution. We show that our model can capture a rich range of learning phenomena specific to domain generalization for three different settings: learning with Massart noise, learning decision trees, and feature selection. We demonstrate approaches that leverage domain generalization to reduce computational or data requirements in each of these settings. Experiments demonstrate that our feature selection algorithm indeed ignores spurious correlations and improves generalization.
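The two-level sampling model the abstract describes is compact enough to sketch in code. The snippet below is a minimal, hypothetical illustration (in NumPy, with invented distributions, names, and a made-up spurious coordinate, none of which come from the paper): a meta-distribution over domains, and an i.i.d. dataset drawn from each sampled domain. The coordinate whose sign flips across domains hints at why ignoring spurious correlations, as the paper's feature selection algorithm does, can help at test time.

```python
# Minimal sketch of the two-level data model (assumptions, not the
# authors' code): a meta-distribution over domains, where each domain
# is itself a data distribution, and training data consist of several
# datasets, each drawn from a single sampled domain.
import numpy as np

rng = np.random.default_rng(0)

def sample_domain():
    """Draw one domain from a toy meta-distribution.

    A domain here is a pair (mean, spurious_sign): inputs are Gaussian
    around `mean`, and labels follow a rule that is stable across
    domains plus a domain-specific spurious coordinate.
    """
    mean = rng.normal(size=5)
    spurious_sign = rng.choice([-1.0, 1.0])  # flips across domains
    return mean, spurious_sign

def sample_dataset(domain, n=100):
    """Draw an i.i.d. dataset of size n from a single domain."""
    mean, spurious_sign = domain
    X = rng.normal(loc=mean, size=(n, 5))
    # Stable signal on coordinates 0-2; spurious signal on coordinate 4.
    logits = X[:, :3].sum(axis=1) + spurious_sign * X[:, 4]
    y = (logits > 0).astype(int)
    return X, y

# Training data: k datasets, each from its own domain drawn from the
# meta-distribution.
k = 10
train = [sample_dataset(sample_domain()) for _ in range(k)]

# At test time, a fresh domain is drawn from the same meta-distribution.
test_X, test_y = sample_dataset(sample_domain(), n=1000)
```

In this toy setup, a learner that leans on coordinate 4 can fit any single training domain but fails on a fresh test domain, while one restricted to the stable coordinates generalizes; this mirrors, in miniature, the spurious-correlation phenomenon the experiments target.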