

Poster

Online non-parametric likelihood-ratio estimation by Pearson-divergence functional minimization

Alejandro de la Concha Duarte · Nicolas Vayatis · Argyris Kalogeratos

MR1 & MR2 - Number 33

Abstract: Quantifying the difference between two probability density functions, $p$ and $q$, using available data is a fundamental problem in Statistics and Machine Learning. A common approach to this problem is likelihood-ratio estimation (LRE) between $p$ and $q$, which, to the best of our knowledge, has been investigated mainly in the offline setting. This paper contributes by introducing a new framework for online non-parametric LRE (OLRE) for the setting where pairs of i.i.d. observations $(x_t \sim p, x'_t \sim q)$ arrive over time. The non-parametric nature of our approach has the advantage of being agnostic to the forms of $p$ and $q$. Moreover, we capitalize on recent advances in kernel methods and functional minimization to develop an estimator that can be efficiently updated at every iteration. We provide theoretical guarantees for the performance of the OLRE method, along with empirical validation in synthetic experiments.
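To make the setting concrete, below is a minimal sketch of the general idea of online kernel-based likelihood-ratio estimation via Pearson-divergence minimization: the ratio $r = p/q$ is modeled as a function $f$ in an RKHS, and $f$ is updated with a stochastic functional gradient step on the regularized objective $\tfrac{1}{2}\mathbb{E}_q[f^2] - \mathbb{E}_p[f] + \tfrac{\lambda}{2}\|f\|_{\mathcal{H}}^2$ each time a new pair $(x_t \sim p, x'_t \sim q)$ arrives. This is not the paper's exact OLRE estimator; the class name, Gaussian kernel, step-size schedule, and regularization choices are illustrative assumptions, and practical concerns such as bounding the number of kernel centers are omitted.

```python
# Sketch (assumptions noted above): online Pearson-divergence LRE in an RKHS.
# The estimate f_t(.) = sum_i alpha_i k(c_i, .) approximates the ratio p/q.
import numpy as np

class OnlinePearsonLRE:
    def __init__(self, bandwidth=1.0, lam=1e-2, step=0.5):
        self.bandwidth = bandwidth   # Gaussian kernel bandwidth (assumed)
        self.lam = lam               # Tikhonov regularization strength (assumed)
        self.step = step             # base learning rate (assumed)
        self.centers = []            # kernel expansion centers
        self.alphas = []             # expansion coefficients
        self.t = 0

    def _k(self, a, b):
        # Gaussian (RBF) kernel
        d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
        return np.exp(-np.dot(d, d) / (2.0 * self.bandwidth ** 2))

    def predict(self, x):
        # Current ratio estimate f_t(x)
        return sum(a * self._k(c, x) for a, c in zip(self.alphas, self.centers))

    def update(self, x_p, x_q):
        # One online step from a fresh pair (x_p ~ p, x_q ~ q).
        # Unbiased functional gradient of the regularized Pearson objective at f_t:
        #   g_t = f_t(x_q) k(x_q, .) - k(x_p, .) + lam * f_t
        self.t += 1
        eta = self.step / np.sqrt(self.t)   # decaying step size (assumption)
        f_xq = self.predict(x_q)            # evaluate f_t before modifying it
        # f_{t+1} = (1 - eta*lam) f_t - eta f_t(x_q) k(x_q, .) + eta k(x_p, .)
        self.alphas = [(1.0 - eta * self.lam) * a for a in self.alphas]
        self.centers.append(np.asarray(x_q, dtype=float))
        self.alphas.append(-eta * f_xq)
        self.centers.append(np.asarray(x_p, dtype=float))
        self.alphas.append(eta)

# Toy usage: p = N(0.5, 1), q = N(0, 1); the true ratio at x = 0.5 is
# exp(0.125) ~ 1.13, so the printed value should be roughly in that range.
rng = np.random.default_rng(0)
est = OnlinePearsonLRE()
for _ in range(2000):
    est.update(x_p=rng.normal(0.5, 1.0, size=1), x_q=rng.normal(0.0, 1.0, size=1))
print(est.predict(np.array([0.5])))
```

Note that this naive sketch adds two kernel centers per observation pair, so its per-iteration cost grows with $t$; an efficient online estimator would additionally control the size of the kernel expansion.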
