Tyler’s M-estimator Through the Lens of Convex-Concave Programming
Abstract
Tyler’s M-estimator is a popular method for robust covariance estimation. Although Tyler proposed a widely used fixed-point iteration to compute the estimator nearly four decades ago, its theoretical properties have remained elusive. In particular, no deterministic global convergence rate has been established, despite decades of research. In this paper, we resolve this longstanding question by interpreting the fixed-point iteration as an instance of the convex-concave procedure, and analyzing it through a novel combination of relative smoothness and Riemannian optimization. Our analysis requires us to navigate two distinct geometric frameworks: one induced by the Kullback--Leibler divergence, and the other arising from the Riemannian geometry of the manifold of positive definite matrices. The interplay between these two geometries also allows us to prove that a regularized variant of the fixed-point iteration converges linearly. Beyond contributing to the theoretical understanding of Tyler’s M-estimator, our analysis demonstrates how integrating Euclidean and Riemannian perspectives can lead to new insights into the design and analysis of optimization algorithms.
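For concreteness, below is a minimal NumPy sketch of the classical fixed-point iteration of Tyler (1987) that the abstract refers to; the function name, stopping rule, and trace normalization are illustrative choices and are not prescribed by this paper.

```python
import numpy as np

def tyler_fixed_point(X, max_iter=200, tol=1e-10):
    """Tyler's fixed-point iteration for the shape matrix (illustrative sketch).

    X : (n, p) array of centered samples.
    Returns a trace-normalized positive definite estimate Sigma.
    """
    n, p = X.shape
    Sigma = np.eye(p)
    for _ in range(max_iter):
        # Weights w_i = x_i^T Sigma^{-1} x_i for each sample.
        inv_Sigma = np.linalg.inv(Sigma)
        w = np.einsum('ij,jk,ik->i', X, inv_Sigma, X)
        # Fixed-point update: Sigma <- (p/n) * sum_i x_i x_i^T / w_i.
        Sigma_new = (p / n) * (X.T / w) @ X
        # Normalize to trace p to remove the scale ambiguity.
        Sigma_new *= p / np.trace(Sigma_new)
        if np.linalg.norm(Sigma_new - Sigma, 'fro') < tol:
            return Sigma_new
        Sigma = Sigma_new
    return Sigma
```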