

Poster

Adaptive Convergence Rates for Log-Concave Maximum Likelihood

Michael Brennan · Aditya Guntuboyina


Abstract: We study the task of estimating a log-concave density on $\mathbb{R}^d$ using the maximum likelihood estimator, known as the log-concave MLE. We show that for every $d \geq 4$, the log-concave MLE attains an \emph{adaptive rate} when the negative logarithm of the underlying density is the maximum of $k$ affine functions, meaning that the estimation error for such a density is significantly smaller than the minimax rate over the full class of log-concave densities. Specifically, we prove that for such densities, the risk of the log-concave MLE is of order $c(k)\, n^{-4/d}$ in squared Hellinger distance. This result complements the work of Kim et al. (AoS 2018) and Feng et al. (AoS 2021), who addressed the cases $d = 1$ and $d \in \{2, 3\}$, respectively. Our proof provides a unified and relatively simple approach for all $d \geq 1$, and is based on techniques from stochastic convex geometry and empirical process theory, which may be of independent interest.
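For concreteness, the adaptive regime described above can be written schematically as follows. This is a sketch in standard notation; the class label $\mathcal{P}_k$, the estimator symbol $\hat{p}_n$, and the Hellinger notation $h^2$ are our own conventions, not taken from the paper itself:

\[
\mathcal{P}_k = \Big\{ p = e^{-\varphi} \ \text{log-concave on } \mathbb{R}^d \ : \ \varphi(x) = \max_{1 \leq j \leq k} \big(a_j^\top x + b_j\big) \ \text{on the support of } p \Big\},
\]
\[
\sup_{p_0 \in \mathcal{P}_k} \mathbb{E}_{p_0}\, h^2\big(\hat{p}_n, p_0\big) \ \lesssim \ c(k)\, n^{-4/d} \qquad (d \geq 4),
\]

where $\hat{p}_n$ is the log-concave MLE computed from $n$ i.i.d. samples and $h^2(p, q) = \int \big(\sqrt{p} - \sqrt{q}\big)^2$ is the squared Hellinger distance. As a simple one-dimensional illustration of the class, the Laplace density $\tfrac{1}{2} e^{-|x|}$ satisfies $-\log p(x) = \max(x, -x) + \log 2$, a maximum of $k = 2$ affine functions.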
