
Poster

Adaptive Convergence Rates for Log-Concave Maximum Likelihood

Gil Kur · Aditya Guntuboyina

Hall A-E 130

Abstract: We study the task of estimating a log-concave density in $\mathbb{R}^d$ using the Maximum Likelihood Estimator, known as the log-concave MLE. We show that for every $d \geq 4$, the log-concave MLE attains an \emph{adaptive rate} when the negative logarithm of the underlying density is the maximum of $k$ affine functions, meaning that the estimation error for such a density is significantly lower than the minimax rate for the class of log-concave densities. Specifically, we prove that for such densities, the risk of the log-concave MLE is of order $c(k) \cdot n^{-\frac{4}{d}}$ in terms of the squared Hellinger distance. This result complements the work of Kim et al. (AoS 2018) and Feng et al. (AoS 2021), who addressed the cases $d = 1$ and $d \in \{2,3\}$, respectively. Our proof provides a unified and relatively simple approach for all $d \geq 1$, and is based on techniques from stochastic convex geometry and empirical process theory, which may be of independent interest.
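
As a concrete illustration of the density class in the abstract (not an example from the paper itself): the standard Laplace density $f(x) = \frac{1}{2} e^{-|x|}$ satisfies $-\log f(x) = \log 2 + \max(x, -x)$, i.e., its negative logarithm is the maximum of $k = 2$ affine functions. The minimal Python sketch below encodes this density and numerically evaluates the squared Hellinger distance $H^2(f, g) = \frac{1}{2} \int (\sqrt{f} - \sqrt{g})^2$, the risk metric used in the result; the comparison density `gaussian_pdf` and all function names are purely illustrative stand-ins, not code from the authors.

```python
import numpy as np
from scipy.integrate import quad

def laplace_pdf(x):
    # f(x) = (1/2) e^{-|x|}; -log f(x) = log 2 + max(x, -x),
    # the maximum of k = 2 affine functions, so f lies in the adaptive class.
    return 0.5 * np.exp(-np.abs(x))

def gaussian_pdf(x):
    # Stand-in for a hypothetical estimate of f (purely illustrative).
    return np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)

def squared_hellinger(f, g):
    # H^2(f, g) = (1/2) * integral of (sqrt(f) - sqrt(g))^2 over the real line.
    value, _ = quad(lambda x: (np.sqrt(f(x)) - np.sqrt(g(x))) ** 2,
                    -np.inf, np.inf)
    return 0.5 * value

print(squared_hellinger(laplace_pdf, gaussian_pdf))  # approx. 0.025
```

The same quadrature helper could be used to track how the squared Hellinger error of any estimate of such a piecewise log-affine density decays with the sample size, which is the quantity the $c(k) \cdot n^{-4/d}$ bound controls.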
