Poster

Revisiting Fair-PAC Learning and the Axioms of Cardinal Welfare

Cyrus Cousins

Auditorium 1 Foyer 152

Abstract:

Cardinal objectives serve as intuitive targets in fair machine learning by summarizing utility (welfare) or disutility (malfare) u over g groups. Under standard axioms, all welfare and malfare functions are w-weighted p-power-means, with p ≤ 1 for welfare, or p ≥ 1 for malfare. We show the same under weaker axioms, and also identify stronger axioms that naturally restrict p. It is known that power-mean malfare functions are Lipschitz continuous, and thus statistically easy to estimate or learn. We show that all power means are locally Hölder continuous, i.e., |M(u; w) − M(u′; w)| ≤ λ∥u − u′∥^α for some λ, α, and norm ∥·∥. In particular, λ and 1/α are bounded except as p → 0 or min_i w_i → 0, and via this analysis we bound the sample complexity of optimizing welfare. This yields a novel concept of fair-PAC learning, wherein welfare functions are only polynomially harder to optimize than malfare functions, except when p ≈ 0 or min_i w_i ≈ 0, in which case optimization becomes exponentially harder.
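For readers unfamiliar with the objective family, the following is a minimal sketch of the w-weighted p-power-mean described above, including its standard limiting cases; the function name and NumPy-based implementation are illustrative assumptions, not code from the paper:

    import numpy as np

    def power_mean(u, w, p):
        """Weighted p-power-mean M_p(u; w) = (sum_i w_i * u_i**p) ** (1/p).

        Illustrative sketch: assumes utilities u > 0 and weights w summing to 1.
        Handles the limiting cases p -> 0 (weighted geometric mean) and
        p -> +/- infinity (max / min of u).
        """
        u = np.asarray(u, dtype=float)
        w = np.asarray(w, dtype=float)
        if p == 0:                      # limit p -> 0: weighted geometric mean
            return float(np.exp(np.sum(w * np.log(u))))
        if np.isinf(p):                 # limits p -> +infinity / -infinity
            return float(np.max(u) if p > 0 else np.min(u))
        return float(np.sum(w * u ** p) ** (1.0 / p))

Here p = 1 recovers the utilitarian (arithmetic) mean, p = −1 the weighted harmonic mean, and p → −∞ the egalitarian minimum over groups; for malfare, taking p ≥ 1 increasingly penalizes groups with high disutility.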
