

Poster

Copula Based Trainable Calibration Error Estimator of Multi-Label Classification with Label Interdependencies

Arkapal Panda


Abstract: A key challenge in calibrating Multi-Label Classification (MLC) models is accounting for the interdependencies among labels. To address this, we propose an unbiased, differentiable, trainable calibration error estimator for MLC problems based on Copulas. Unlike other methods for calibrating MLC tasks, which focus on marginal calibration, this estimator takes label interdependencies into account and lets us target the strictest notion of calibration, namely canonical calibration. To design the estimator, we first leverage the kernel trick to construct a continuous distribution from the discrete label space. We then take a semiparametric approach in which the marginals are modeled non-parametrically and the Copula is modeled parametrically. Theoretically, we show that our estimator is unbiased and converges to the true $L_p$ calibration error. We also use our estimator as a regularizer during training and observe that it significantly reduces calibration error on test datasets. Experiments on a well-established dataset support our claims.
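The abstract does not spell out the estimator itself, so the following is only a minimal illustrative sketch of the semiparametric ingredients it names: nonparametric (empirical CDF) marginals combined with a parametric copula, here assumed Gaussian for concreteness, evaluated on a toy continuous surrogate for smoothed labels. All function names, the Gaussian copula choice, and the toy data are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only: semiparametric copula modeling with
# nonparametric marginals (empirical CDFs) and a parametric (Gaussian) copula.
import numpy as np
from scipy.stats import norm

def empirical_cdf(sample, x):
    """Nonparametric marginal: rank-based empirical CDF evaluated at x."""
    sample = np.sort(sample)
    ranks = np.searchsorted(sample, x, side="right")
    # Rescale into the open interval (0, 1) so the Gaussian quantile stays finite.
    return (ranks + 0.5) / (len(sample) + 1.0)

def gaussian_copula_logdensity(u, corr):
    """Log-density of a Gaussian copula with correlation matrix `corr`,
    evaluated at one pseudo-observation u in (0, 1)^d."""
    z = norm.ppf(u)                         # map uniforms to standard-normal scores
    corr_inv = np.linalg.inv(corr)
    _, logdet = np.linalg.slogdet(corr)
    quad = z @ (corr_inv - np.eye(len(u))) @ z
    return -0.5 * (logdet + quad)

# Toy data: two dependent "label scores" standing in for kernel-smoothed labels.
rng = np.random.default_rng(0)
latent = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.7], [0.7, 1.0]], size=500)
scores = 1.0 / (1.0 + np.exp(-latent))      # continuous surrogate for smoothed labels

# Semiparametric fit: empirical marginals, then correlation estimated on normal scores.
u = np.column_stack([empirical_cdf(scores[:, j], scores[:, j]) for j in range(2)])
z = norm.ppf(u)
corr_hat = np.corrcoef(z, rowvar=False)

print("estimated copula correlation:\n", corr_hat)
print("copula log-density at first point:",
      gaussian_copula_logdensity(u[0], corr_hat))
```

In the paper's setting such a copula-based density would feed a differentiable calibration error estimate that can be added to the training loss as a regularizer; the sketch above only shows how dependence across labels can be modeled separately from the marginals.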
