Information-theoretic error bounds for source localization in neural sensing
Leighton Barnes · Yuxin Guo · Alex Dytso · Pulkit Grover
Abstract
We formulate a point-source localization problem in $d$ dimensions, where a source inside the ball of radius $R$ emits a signal that is picked up by sensors located on the surface of the ball. For $d=3$, this can model problems in neural sensing, where a net of electroencephalography (EEG) or magnetoencephalography (MEG) sensors tries to locate the source of a distinct neural event such as a seizure. For a power law decay model with exponent $\alpha>0$ for the sensors, we obtain a lower bound on the minimax risk for localizing the source that is asymptotically $\frac{d^2\sigma^2R^{2\alpha+2}}{n\alpha^2PK}$ under mean-squared error loss, where $\sigma^2$ is the noise variance, $P$ is the signal power, $K$ is the number of sensors, and $n$ is the number of independent measurements. In the case $d\leq 2(\alpha+1)$ with uniformly distributed sensor locations, we then give a matching upper bound, including the exact constant, for the asymptotic minimax rate in a neighborhood of the origin. We show that there is a phase transition at $d=2(\alpha+2)$: above this dimension a certain Fisher information quantity is minimized at the boundary of the ball, and below it the quantity is minimized at the origin. At the critical dimension $d=2(\alpha+2)$, the Fisher information is constant throughout the entire parameter space. For the special case $d=3$, we supplement and compare this information-theoretic analysis with a simulated forward EEG model that uses a realistic head model derived from population-averaged magnetic resonance imaging data.
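As a quick numerical illustration of the asymptotic lower bound $\frac{d^2\sigma^2R^{2\alpha+2}}{n\alpha^2PK}$ stated above, the following sketch evaluates it for one hypothetical parameter setting; all numerical values below are illustrative choices, not quantities from the paper.

```python
def minimax_lower_bound(d, sigma2, R, alpha, n, P, K):
    """Asymptotic minimax lower bound d^2 * sigma^2 * R^(2a+2) / (n * a^2 * P * K)
    on the mean-squared localization error, as stated in the abstract."""
    return (d**2 * sigma2 * R ** (2 * alpha + 2)) / (n * alpha**2 * P * K)

# Hypothetical EEG-like setting: d = 3, unit noise variance and power,
# unit-radius head, decay exponent alpha = 2, K = 64 sensors, n = 100 samples.
bound = minimax_lower_bound(d=3, sigma2=1.0, R=1.0, alpha=2.0, n=100, P=1.0, K=64)
print(bound)  # 9 / (100 * 4 * 64) ≈ 3.52e-4
```

Note how the bound tightens linearly in both the number of sensors $K$ and the number of measurements $n$, while growing rapidly with the radius $R$ through the exponent $2\alpha+2$.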