

Poster

Uncertainty Matters: Stable Conclusions under Unstable Assessment of Fairness Results

Ainhize Barrainkua · Paula Gordaliza · Jose A Lozano · Novi Quadrianto

MR1 & MR2 - Number 82
Thu 2 May 8 a.m. PDT — 8:30 a.m. PDT

Abstract:

Recent studies highlight the effectiveness of Bayesian methods in assessing algorithm performance, particularly in fairness and bias evaluation. We present Uncertainty Matters, a multi-objective, uncertainty-aware framework for algorithmic comparison. In fairness-focused scenarios, it models the confusion matrices of sensitive groups using Bayesian updates and facilitates joint comparison of performance metrics (e.g., accuracy) and fairness metrics (e.g., true positive rate parity). Our approach works seamlessly with common evaluation methods such as K-fold cross-validation, effectively addressing dependencies among the K posterior metric distributions. The integration of correlated information is carried out through a procedure tailored to the classifier's complexity. Experiments demonstrate that the insights derived from algorithmic comparisons employing the Uncertainty Matters approach are more informative, reliable, and less influenced by particular data partitions. Code for the paper is publicly available at https://github.com/abarrainkua/UncertaintyMatters.
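To make the idea concrete, here is a minimal sketch (not the authors' implementation; see the linked repository for the actual code) of a Dirichlet-multinomial update over each sensitive group's confusion matrix, from which aligned posterior draws of accuracy and true positive rate parity can be compared jointly. The counts, prior, group names, and equal-weight accuracy average below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed confusion-matrix counts per sensitive group: [TP, FP, FN, TN] (hypothetical data)
counts = {"group_a": np.array([80, 10, 20, 90]),
          "group_b": np.array([60, 15, 35, 90])}
prior = np.ones(4)   # uniform Dirichlet prior over the four confusion-matrix cells
n_samples = 10_000

# Posterior over cell probabilities for each group: Dirichlet(prior + counts)
post = {g: rng.dirichlet(prior + c, size=n_samples) for g, c in counts.items()}

# Per-group metrics from each posterior draw (columns: TP, FP, FN, TN)
tpr = {g: p[:, 0] / (p[:, 0] + p[:, 2]) for g, p in post.items()}   # true positive rate
acc = {g: p[:, 0] + p[:, 3] for g, p in post.items()}               # accuracy

# Draws are aligned, so joint statements about performance and fairness
# can be estimated directly from the same posterior samples.
tpr_gap = tpr["group_a"] - tpr["group_b"]                 # TPR parity gap
mean_acc = 0.5 * (acc["group_a"] + acc["group_b"])        # equal-weight average (assumption)

print(f"P(|TPR gap| < 0.05)     = {np.mean(np.abs(tpr_gap) < 0.05):.3f}")
print(f"P(mean accuracy > 0.80) = {np.mean(mean_acc > 0.80):.3f}")
```

This sketch treats a single evaluation split; the framework described in the abstract additionally accounts for the dependencies among the K posterior distributions that arise under K-fold cross-validation.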
