Keywords: [ Theory ] [ Learning Theory ] [ Algorithms, Optimization and Computation Methods ] [ Nonconvex Optimization ]

Oral presentation:
Bandits, Reinforcement Learning / Learning Theory / Sparse Methods

Wed 14 Apr 9:15 a.m. PDT — 10:15 a.m. PDT

Thu 15 Apr 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:
We consider the problem of reconstructing an $n$-dimensional $k$-sparse signal from a set of noiseless magnitude-only measurements. Formulating the problem as an unregularized empirical risk minimization task, we study the sample complexity of gradient descent with Hadamard parametrization, which we call Hadamard Wirtinger flow (HWF). Given knowledge of the signal sparsity $k$, we prove that a single step of HWF recovers the support from $k (x^*_{\max})^{-2}$ samples (up to a logarithmic factor), where $x^*_{\max}$ is the largest component of the signal in magnitude. This support recovery procedure can be used to initialize existing reconstruction methods, yielding algorithms whose total runtime is proportional to the cost of reading the data and whose sample complexity is linear in $k$ whenever the signal contains at least one large component. We numerically investigate the performance of HWF at convergence and show that, while requiring neither explicit regularization nor knowledge of $k$, HWF adapts to the signal sparsity and reconstructs sparse signals with fewer measurements than existing gradient-based methods.
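As an illustration, below is a minimal numerical sketch of the procedure the abstract describes: gradient descent on the unregularized empirical risk $f(x) = \frac{1}{4m}\sum_i ((a_i^\top x)^2 - y_i)^2$ under the Hadamard parametrization $x = u \odot u - v \odot v$, plus a one-step support estimate. The moment-based initialization statistics, step size, small-initialization scale, and iteration budget are illustrative assumptions rather than the paper's constants, and the function names `hwf` and `support_from_one_step` are hypothetical.

```python
import numpy as np

def hwf(A, y, eta=0.1, alpha=1e-3, n_iter=5000):
    """Minimal sketch of Hadamard Wirtinger flow (HWF).

    Plain gradient descent on the unregularized empirical risk
        f(x) = (1/(4m)) * sum_i ((a_i^T x)^2 - y_i)^2,
    with x parametrized as the Hadamard difference x = u*u - v*v.
    The initialization uses standard moment statistics for Gaussian
    measurements; eta, alpha, n_iter are illustrative, not tuned.
    """
    m, n = A.shape
    # Coordinate selection: E[y_i * a_{i,j}^2] peaks at the largest
    # signal component when the a_i are standard Gaussian.
    corr = (y @ A**2) / m
    i0 = int(np.argmax(corr))
    # Moment-based magnitude estimate: E[y a_j^2] - E[y] = 2 (x*_j)^2.
    theta = np.sqrt(max((corr[i0] - y.mean()) / 2.0, alpha**2))
    u = alpha * np.ones(n)
    v = alpha * np.ones(n)
    u[i0] = np.sqrt(theta + alpha**2)  # so that x_0 = theta * e_{i0}
    for _ in range(n_iter):
        x = u * u - v * v
        Ax = A @ x
        grad = A.T @ ((Ax**2 - y) * Ax) / m  # gradient of f w.r.t. x
        # Chain rule through the parametrization (constant 2 absorbed in eta):
        u = u - eta * grad * u
        v = v + eta * grad * v
    return u * u - v * v

def support_from_one_step(A, y, k, **kwargs):
    """Support estimate from a single HWF step:
    the k largest coordinates of the first iterate in magnitude."""
    x1 = hwf(A, y, n_iter=1, **kwargs)
    return np.sort(np.argsort(np.abs(x1))[-k:])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k, m = 200, 5, 800
    supp = np.sort(rng.choice(n, size=k, replace=False))
    x_star = np.zeros(n)
    x_star[supp] = rng.choice([-1.0, 1.0], k) * (0.5 + rng.random(k))
    x_star /= np.linalg.norm(x_star)
    A = rng.standard_normal((m, n))
    y = (A @ x_star) ** 2  # noiseless magnitude-only measurements
    print("true support:     ", supp)
    print("one-step estimate:", support_from_one_step(A, y, k))
    x_hat = hwf(A, y)
    # Phase retrieval identifies the signal only up to a global sign.
    err = min(np.linalg.norm(x_hat - x_star), np.linalg.norm(x_hat + x_star))
    print("relative error (up to sign):", err)
```

The multiplicative form of the updates is what lets the parametrization adapt to sparsity without explicit regularization: coordinates started at the small value $\alpha$ stay near zero unless the gradient pushes them consistently, while support coordinates grow geometrically.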
