

Poster

Mitigating Underfitting in Learning to Defer with Consistent Losses

Shuqi Liu · Yuzhou Cao · Qiaozhen Zhang · Lei Feng · Bo An

Multipurpose Room 1 - Number 88

Abstract:

Learning to defer (L2D) allows a classifier to defer to an expert for safer predictions, balancing the system's accuracy against the extra cost incurred by consulting the expert. Various loss functions have been proposed for L2D, but they have been shown to cause underfitting of the trained classifier when extra consulting costs exist, resulting in degraded performance. In this paper, we propose a novel loss formulation that mitigates the underfitting issue while retaining statistical consistency. We first show that our formulation avoids a common characteristic shared by most existing losses that has been identified as a cause of underfitting, and then show that it can be combined with representative L2D losses to enhance their performance while still yielding consistent losses. We further study the regret transfer bounds of the proposed losses and experimentally validate their improvements over existing methods.
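For context, a minimal sketch of the standard L2D objective from the prior literature (e.g., Mozannar & Sontag, 2020); the notation here is our own framing of the setup, not taken from this paper. Given an input x with label y, a classifier h, a rejector r (where r(x) = 1 means defer), an expert prediction m, and a consulting cost c >= 0, the deferral loss is

\ell(h, r; x, y, m) = \mathbb{1}[r(x) = 0]\,\mathbb{1}[h(x) \neq y] + \mathbb{1}[r(x) = 1]\left(\mathbb{1}[m \neq y] + c\right).

A surrogate loss is statistically consistent when minimizing it also minimizes the risk induced by this deferral loss; per the abstract, it is the presence of the cost term c that drives the classifier underfitting the proposed formulation aims to mitigate.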
