

Poster

Precision/Recall on Imbalanced Test Data

Hongwei Shang · Jean-Marc Langlois · Kostas Tsioutsiouliklis · Changsung Kang

Auditorium 1 Foyer 143

Abstract:

In this paper we study the problem of accurately estimating precision and recall for binary classification when the classes are imbalanced and only a limited number of human labels are available. In this setting, one common strategy is to over-sample the small positive class predicted by the classifier. Under random sampling, the cells of the confusion matrix are observations from a multinomial distribution; by instead over-sampling the minority positive class predicted by the classifier, we obtain two independent binomial distributions. But how much should we over-sample? And what confidence/credible intervals can we deduce from our over-sampling? We provide formulas for (1) the confidence intervals of the adjusted precision/recall after over-sampling, and (2) Bayesian credible intervals of the adjusted precision/recall, obtained from their posterior predictive distributions. For precision, the higher the over-sampling rate, the narrower the confidence/credible interval. For recall, there exists an optimal over-sampling ratio that minimizes the width of the confidence/credible interval. We also present experiments on synthetic and real data to demonstrate that our method constructs accurate intervals. Finally, we demonstrate how our techniques can be applied to a quality monitoring system: we find the size of the smallest editorial test set for a set of classifiers such that precision and recall are each estimated within a 5% error rate.
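The paper's exact formulas are not reproduced on this page, but the construction the abstract describes can be sketched. The sketch below assumes a labeling design in which n1 predicted-positive and n0 predicted-negative items are sent to human annotators, with rho the fraction of the full pool the classifier predicts positive; it uses a standard normal-approximation (delta-method) confidence interval and a Beta-posterior credible interval as stand-ins for the authors' derivations. The function name and numbers in the example are illustrative.

```python
import numpy as np
from scipy import stats

def precision_recall_intervals(k1, n1, k0, n0, rho, alpha=0.05, draws=100_000, seed=0):
    """Interval estimates for precision/recall from an over-sampled test set.

    k1/n1: true positives among n1 labeled *predicted-positive* items.
    k0/n0: missed positives among n0 labeled *predicted-negative* items.
    rho:   fraction of the full pool the classifier predicts positive.
    Returns point estimates, normal-approximation confidence intervals,
    and Beta-posterior credible intervals (uniform priors); these are
    standard constructions, not necessarily the paper's exact formulas.
    """
    z = stats.norm.ppf(1 - alpha / 2)

    # Precision: labels on predicted positives form a single binomial sample.
    p_hat = k1 / n1
    p_se = np.sqrt(p_hat * (1 - p_hat) / n1)
    p_ci = (max(0.0, p_hat - z * p_se), min(1.0, p_hat + z * p_se))

    # Recall combines the two independent binomials:
    # recall = rho*p / (rho*p + (1-rho)*q), where q is the
    # positive rate among predicted negatives.
    q_hat = k0 / n0
    tp, fn = rho * p_hat, (1 - rho) * q_hat
    r_hat = tp / (tp + fn)
    # Delta-method standard error for the ratio.
    dr_dp = rho * fn / (tp + fn) ** 2
    dr_dq = -(1 - rho) * tp / (tp + fn) ** 2
    r_se = np.sqrt(dr_dp**2 * p_hat * (1 - p_hat) / n1
                   + dr_dq**2 * q_hat * (1 - q_hat) / n0)
    r_ci = (max(0.0, r_hat - z * r_se), min(1.0, r_hat + z * r_se))

    # Bayesian version: Beta(1,1) priors give Beta posteriors;
    # propagate to recall by Monte Carlo sampling.
    rng = np.random.default_rng(seed)
    p_post = rng.beta(k1 + 1, n1 - k1 + 1, draws)
    q_post = rng.beta(k0 + 1, n0 - k0 + 1, draws)
    r_post = rho * p_post / (rho * p_post + (1 - rho) * q_post)
    p_cred = tuple(np.quantile(p_post, [alpha / 2, 1 - alpha / 2]))
    r_cred = tuple(np.quantile(r_post, [alpha / 2, 1 - alpha / 2]))

    return {"precision": (p_hat, p_ci, p_cred), "recall": (r_hat, r_ci, r_cred)}

# Example: a classifier with a 1% predicted-positive rate,
# 500 human labels on each side of the prediction split.
print(precision_recall_intervals(k1=430, n1=500, k0=3, n0=500, rho=0.01))
```

Note how the trade-off the abstract describes shows up here: raising n1 (labeling more predicted positives) always tightens the precision interval, but the recall interval depends on both n1 and n0, so under a fixed labeling budget there is an interior split that minimizes its width.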
