

Poster

Approximate Leave-one-out Cross Validation for Regression with $\ell_1$ Regularizers

Arnab Auddy · Haolin Zou · Kamiar Rahnama Rad · Arian Maleki

Multipurpose Room 1 - Number 7
Oral presentation: Statistics
Sat 4 May, 2:30–3:30 a.m. PDT

Abstract: The out-of-sample error (OO) is the main quantity of interest in risk estimation and model selection. Leave-one-out cross validation (LO) offers a (nearly) distribution-free yet computationally demanding approach to estimating OO. Recent theoretical work showed that approximate leave-one-out cross validation (ALO) is an efficient estimate of LO (and OO) for generalized linear models with differentiable regularizers. For problems involving non-differentiable regularizers, despite significant empirical evidence, a theoretical understanding of ALO's error has been lacking. In this paper, we present a novel theory for a wide class of problems in the generalized linear model family with non-differentiable regularizers. We bound the error $|\mathrm{ALO} - \mathrm{LO}|$ in terms of intuitive metrics such as the size of leave-$i$-out perturbations of the active set, the sample size $n$, the number of features $p$, and the signal-to-noise ratio (SNR). As a consequence, for the elastic-net problem, we show that $|\mathrm{ALO} - \mathrm{LO}| \xrightarrow{p \rightarrow \infty} 0$ while $n/p$ and the SNR remain bounded.
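To make the LO/ALO contrast concrete, below is a minimal numerical sketch (not the authors' code) for the elastic net with squared loss: exact LO refits the model $n$ times, while ALO uses a single fit plus a closed-form leverage correction on the active set, following the ALO recipe from the literature this abstract builds on. The data dimensions, tuning values `alpha` and `l1_ratio`, and scikit-learn's `ElasticNet` parameterization are assumptions made for illustration.

```python
# Sketch: exact leave-one-out (LO) vs. approximate leave-one-out (ALO)
# for the elastic net with squared loss. Illustrative only.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
n, p = 150, 300                       # hypothetical sample size and feature count
beta = np.zeros(p)
beta[:20] = rng.standard_normal(20)   # sparse ground truth
X = rng.standard_normal((n, p)) / np.sqrt(n)
y = X @ beta + rng.standard_normal(n)

alpha, l1_ratio = 0.05, 0.8           # hypothetical tuning values
model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, fit_intercept=False,
                   tol=1e-10, max_iter=100_000)
model.fit(X, y)
resid = y - model.predict(X)

# ALO: one fit, then a closed-form leave-i-out correction on the active set S.
# With sklearn's objective ||y - Xw||^2/(2n) + alpha*l1_ratio*||w||_1
# + 0.5*alpha*(1-l1_ratio)*||w||^2, the ridge part rescales to lam2 below.
S = np.flatnonzero(model.coef_)                    # active set of the full fit
XS = X[:, S]
lam2 = n * alpha * (1.0 - l1_ratio)
H = XS @ np.linalg.solve(XS.T @ XS + lam2 * np.eye(len(S)), XS.T)
alo_resid = resid / (1.0 - np.diag(H))             # leverage-corrected residuals
alo = np.mean(alo_resid ** 2)

# Exact LO: n refits -- the computationally demanding path that ALO avoids.
lo_resid = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    m_i = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, fit_intercept=False,
                     tol=1e-10, max_iter=100_000)
    m_i.fit(X[mask], y[mask])
    lo_resid[i] = y[i] - m_i.predict(X[i:i + 1])[0]
lo = np.mean(lo_resid ** 2)

print(f"LO  = {lo:.4f}")
print(f"ALO = {alo:.4f}")
print(f"|ALO - LO| = {abs(alo - lo):.4e}")
```

In this regime ($n/p$ bounded, moderate SNR), the gap $|\mathrm{ALO} - \mathrm{LO}|$ printed at the end is the quantity the paper bounds, and the cost difference (one fit versus $n$ fits) is ALO's motivation.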
