Poster
Adaptive Convergence Rates for Log-Concave Maximum Likelihood
Michael Brennan · Aditya Guntuboyina
Abstract:
We study the task of estimating a log-concave density in $\mathbb{R}^d$ using the Maximum Likelihood Estimator, known as the log-concave MLE. We show that for every $d \geq 1$, the log-concave MLE attains an \emph{adaptive rate} when the negative logarithm of the underlying density is the maximum of $k$ affine functions, meaning that the estimation error for such a density is significantly lower than the minimax rate for the class of log-concave densities. Specifically, we prove that for such densities, the risk of the log-concave MLE is of order $\tilde{O}(k/n)$ in terms of the squared Hellinger distance. This result complements the work of Kim et al. (AoS 2018) and Feng et al. (AoS 2021), who addressed the cases $d = 1$ and $d \leq 3$, respectively. Our proof provides a unified and relatively simple approach for all $d$, and is based on techniques from stochastic convex geometry and empirical process theory, which may be of independent interest.
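To make the density class concrete: a simple member is the standard Laplace density $f(x) = \tfrac{1}{2} e^{-|x|}$ on $\mathbb{R}$, whose negative log-density $|x| + \log 2 = \max(x + \log 2,\; -x + \log 2)$ is the maximum of two affine functions. The following NumPy snippet (illustrative only, not part of the poster's proof) verifies this identity numerically:

```python
import numpy as np

# Standard Laplace density: f(x) = (1/2) * exp(-|x|).
def neg_log_density(x):
    return np.abs(x) + np.log(2.0)

# The same function written as a maximum of k = 2 affine pieces:
# a_1(x) = x + log 2 and a_2(x) = -x + log 2.
def max_of_affine(x):
    return np.maximum(x + np.log(2.0), -x + np.log(2.0))

x = np.linspace(-5.0, 5.0, 1001)
assert np.allclose(neg_log_density(x), max_of_affine(x))
print("Laplace negative log-density is a max of 2 affine functions")
```

Densities of this form (negative log-density a maximum of $k$ affine pieces) are exactly those for which the abstract's adaptive rate applies.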