

Poster

Sparse and Faithful Explanations Without Sparse Models

Yiyang Sun · Zhi Chen · Vittorio Orlandi · Tong Wang · Cynthia Rudin

MR1 & MR2 - Number 95
Thu 2 May 8 a.m. PDT — 8:30 a.m. PDT

Abstract:

Even if a model is not globally sparse, it is possible for decisions made from that model to be accurately and faithfully described by a small number of features. For instance, an application for a large loan might be denied because the applicant has no credit history, which overwhelms any other evidence of their creditworthiness. In this work, we introduce the Sparse Explanation Value (SEV), a new way of measuring sparsity in machine learning models. In the loan denial example above, the SEV is 1 because only one factor is needed to explain why the loan was denied. SEV is a measure of decision sparsity rather than overall model sparsity, and we show that many machine learning models -- even if they are not sparse -- actually have low decision sparsity, as measured by SEV. SEV is defined using movements over a hypercube, allowing it to be defined consistently across model classes, with movement restrictions reflecting real-world constraints. Our algorithms reduce SEV without sacrificing accuracy, providing sparse and completely faithful explanations, even without globally sparse models.
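The hypercube construction in the abstract can be made concrete with a short brute-force sketch. Under that setup, the hypercube's vertices mix coordinates of the query instance with coordinates of a reference point (e.g. feature-wise population means or modes), and the SEV is the length of the shortest movement from the reference vertex that already reproduces the model's decision. The names below (sev, predict, x_ref, max_k) are illustrative, not the paper's API, and this exhaustive search is only a didactic stand-in for the authors' optimization algorithms:

    import numpy as np
    from itertools import combinations

    def sev(predict, x, x_ref, max_k=None):
        """Brute-force Sparse Explanation Value (illustrative sketch).

        predict : callable mapping a feature vector to a class label
                  (1 = the decision being explained, e.g. loan denial).
        x       : query instance; assumed predict(x) == 1.
        x_ref   : reference point (e.g. feature-wise means/modes);
                  assumed predict(x_ref) == 0.

        Returns the smallest number k of the query's feature values that,
        placed onto the reference point, already reproduce the decision,
        i.e. the shortest movement over the hypercube whose vertices mix
        query and reference coordinates.
        """
        d = len(x)
        if max_k is None:
            max_k = d
        for k in range(1, max_k + 1):
            for subset in combinations(range(d), k):
                v = x_ref.copy()
                # Move these k coordinates from the reference to the
                # query's values; all others stay at the reference.
                v[list(subset)] = x[list(subset)]
                if predict(v) == 1:  # k features suffice to explain it
                    return k, subset
        return None  # no sparse explanation within max_k features

    # Toy usage with a hypothetical linear model: denial iff w.x + b > 0.
    w = np.array([3.0, 0.2, 0.1])
    b = -1.0
    predict = lambda v: int(v @ w + b > 0)
    x = np.array([1.0, 0.5, 0.5])   # query instance: denied
    x_ref = np.zeros(3)             # reference point: approved
    print(sev(predict, x, x_ref))   # -> (1, (0,)): SEV is 1 here

In this toy example the first feature alone flips the reference point's prediction, mirroring the abstract's loan example where a single factor (no credit history) faithfully explains the denial even though the model itself uses all features.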
