Abstract:
We provide a general constrained risk inequality that applies to arbitrary
non-decreasing losses, extending a result of Brown and Low
[\emph{Ann.~Stat.~1996}]. Given two distributions $P_0$ and $P_1$, we find
a lower bound for the risk of estimating a parameter $\theta(P_1)$ under
$P_1$ given an upper bound on the risk of estimating the parameter
$\theta(P_0)$ under $P_0$. The inequality is a useful pedagogical tool, as
its proof relies only on the Cauchy-Schwarz inequality, it applies to
general losses, and it transparently gives risk lower bounds on
super-efficient and adaptive estimators.
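
For concreteness, here is a minimal sketch of the squared-error special case in the spirit of Brown and Low; the symbols $\epsilon$, $\Delta$, and $\rho$ are introduced here only for illustration and are not part of the abstract's statement:
\[
  \text{if }\ \mathbb{E}_{P_0}\bigl[(\hat\theta - \theta(P_0))^2\bigr] \le \epsilon^2,
  \quad\text{then}\quad
  \mathbb{E}_{P_1}\bigl[(\hat\theta - \theta(P_1))^2\bigr] \ge \bigl(\Delta - \epsilon\rho\bigr)_+^2,
\]
where $\Delta = |\theta(P_1) - \theta(P_0)|$ and $\rho^2 = \mathbb{E}_{P_0}\bigl[(dP_1/dP_0)^2\bigr]$. The bound follows by applying the Cauchy-Schwarz inequality to $\mathbb{E}_{P_0}\bigl[(\hat\theta - \theta(P_0))\,dP_1/dP_0\bigr]$, which controls the bias of $\hat\theta$ under $P_1$, and then using that mean-squared error is at least squared bias.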