

Poster

Reliable and Scalable Variable Importance Estimation via Warm-start and Early Stopping

Liu Yang · Garvesh Raskutti


Abstract: As opaque black-box predictive models such as neural networks become more prevalent, the need to develop interpretations for these models is of great interest. The concept of variable importance is an interpretability measure that applies to any predictive model and assesses how much a variable or set of variables improves prediction performance. When the number of variables is large, estimating variable importance presents a significant challenge because re-training neural networks or other black-box algorithms requires significant additional computation. In this paper, we address this challenge for algorithms trained by gradient descent or gradient boosting (e.g., neural networks, gradient-boosted decision trees). By combining early stopping of gradient-based methods with a warm start obtained via the dropout method, we develop a scalable method to estimate variable importance for any algorithm that can be expressed as an iterative kernel update equation. Importantly, we provide theoretical guarantees by using the theory of early stopping for kernel-based methods, covering neural networks with sufficiently large width and gradient-boosted decision trees that use symmetric trees as weak learners. We also demonstrate the efficacy of our method through simulations and a real-data example, which illustrate the computational benefit of early stopping over fully re-training the model, as well as the increased accuracy gained by taking the initial steps from the dropout solution.
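A minimal sketch of the idea described above, under assumptions not taken from the paper: a PyTorch MLP is trained on all features, and the importance of feature j is estimated by zeroing that feature (a dropout-style reduced problem), warm-starting gradient descent from the fully trained model, and stopping after a small, fixed number of steps instead of re-training from scratch. The helper names (`fit`, `variable_importance`), the squared-error loss, and the step budget `t_early` are illustrative choices, not the authors' exact recipe.

```python
# Sketch: variable importance via warm-start + early stopping (illustrative only).
import copy
import torch
import torch.nn as nn

def fit(model, X, y, steps, lr=1e-2):
    """Run plain gradient descent for a fixed number of steps."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return model

def variable_importance(model, X_tr, y_tr, X_val, y_val, j, t_early=50):
    """Importance of feature j: extra validation loss after dropping j,
    warm-starting from the full model and taking only t_early steps."""
    loss_fn = nn.MSELoss()
    with torch.no_grad():
        full_loss = loss_fn(model(X_val), y_val).item()

    # Dropout-style reduced problem: zero out column j rather than refit from scratch.
    X_tr_drop, X_val_drop = X_tr.clone(), X_val.clone()
    X_tr_drop[:, j] = 0.0
    X_val_drop[:, j] = 0.0

    # Warm start from the trained model, then early-stop after t_early steps.
    reduced = fit(copy.deepcopy(model), X_tr_drop, y_tr, steps=t_early)
    with torch.no_grad():
        reduced_loss = loss_fn(reduced(X_val_drop), y_val).item()
    return reduced_loss - full_loss

if __name__ == "__main__":
    torch.manual_seed(0)
    n, d = 500, 10
    X = torch.randn(n, d)
    y = (2 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * torch.randn(n)).unsqueeze(1)
    X_tr, y_tr, X_val, y_val = X[:400], y[:400], X[400:], y[400:]

    model = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 1))
    fit(model, X_tr, y_tr, steps=500)

    scores = [variable_importance(model, X_tr, y_tr, X_val, y_val, j) for j in range(d)]
    print([round(s, 3) for s in scores])  # features 0 and 1 should score highest
```

The key computational saving in this sketch is that each of the d reduced models takes only `t_early` gradient steps from the warm start, rather than the full training budget required to re-train from a random initialization.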
