

To Pool or Not To Pool: Analyzing the Regularizing Effects of Group-Fair Training on Shared Models

Cyrus Cousins · I. Elizabeth Kumar · Suresh Venkatasubramanian

MR1 & MR2 - Number 51
Sat 4 May 6 a.m. PDT — 8:30 a.m. PDT


In fair machine learning, one source of performance disparities between groups is overfitting to groups with relatively few training samples. We derive group-specific bounds on the generalization error of welfare-centric fair machine learning that benefit from the larger sample size of the majority group. We do this by considering group-specific Rademacher averages over a restricted hypothesis class, which contains the family of models likely to perform well with respect to a fair learning objective (e.g., a power-mean). Our simulations demonstrate these bounds improve over a naïve method, as expected by theory, with particularly significant improvement for smaller group sizes.
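The abstract's power-mean objective aggregates per-group risks into a single scalar. As an illustrative sketch (not the authors' code), the helper below computes a weighted p-power-mean of nonnegative per-group risks; the function name, weights, and example values are assumptions for illustration only.

```python
import numpy as np

def power_mean(values, p, weights=None):
    """Weighted p-power-mean of nonnegative values.

    p = 1 gives the arithmetic mean; larger p weights larger
    values (e.g., the risks of worse-off groups) more heavily.
    """
    values = np.asarray(values, dtype=float)
    if weights is None:
        weights = np.full(len(values), 1.0 / len(values))
    weights = np.asarray(weights, dtype=float)
    if p == 0:  # limiting case: the weighted geometric mean
        return float(np.exp(np.sum(weights * np.log(values))))
    return float(np.sum(weights * values ** p) ** (1.0 / p))

# Hypothetical per-group empirical risks for two groups:
group_risks = [0.10, 0.30]
print(power_mean(group_risks, p=1))  # arithmetic mean: 0.2
print(power_mean(group_risks, p=2))  # larger, emphasizing the worse-off group
```

Increasing p interpolates from the plain average toward the maximum group risk, which is one way a single aggregate objective can encode concern for the worst-off group.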
