

Poster

Improved Generalization Bound and Learning of Sparsity Patterns for Data-Driven Low-Rank Approximation

Shinsaku Sakaue · Taihei Oki

Auditorium 1 Foyer 76

Abstract: Learning sketching matrices for fast and accurate low-rank approximation (LRA) has gained increasing attention. Recently, Bartlett, Indyk, and Wagner (COLT 2022) presented a generalization bound for learning-based LRA. Specifically, for rank-$k$ approximation using an $m \times n$ learned sketching matrix with $s$ non-zeros in each column, they proved an $Õ(nsm)$ bound on the \emph{fat shattering dimension} ($Õ$ hides logarithmic factors). We build on their work and make two contributions. (1) We present a better $Õ(nsk)$ bound ($k \le m$). En route to obtaining this result, we give a low-complexity \emph{Goldberg--Jerrum algorithm} for computing pseudo-inverse matrices, which may be of independent interest. (2) We relax the assumption of the previous study that sketching matrices have a fixed sparsity pattern. We prove that learning the positions of non-zeros increases the fat shattering dimension only by $O(ns\log n)$. In addition, experiments confirm the practical benefit of learning sparsity patterns.
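To make the setting concrete, below is a minimal sketch of the standard sketch-and-solve pipeline for rank-$k$ LRA with a column-sparse sketching matrix ($s$ non-zeros per column, learnable values and positions). This is an illustrative assumption about the setup, not the authors' implementation; the function names and the CountSketch-style random initialization are placeholders.

```python
import numpy as np

def sparse_sketch(m, n, s, rng, pattern=None, values=None):
    """Build an m x n sketching matrix with s non-zeros per column.

    If `pattern` (row indices per column) and/or `values` are given
    (e.g., learned from data), use them; otherwise draw them at random
    in CountSketch style (random rows, random +/-1 values).
    """
    S = np.zeros((m, n))
    for j in range(n):
        rows = pattern[:, j] if pattern is not None else rng.choice(m, size=s, replace=False)
        vals = values[:, j] if values is not None else rng.choice([-1.0, 1.0], size=s)
        S[rows, j] = vals
    return S

def sketch_lra(A, S, k):
    """Rank-k approximation of A (n x d) from the sketch S @ A (m x d):
    project the rows of A onto the row space of S @ A, then truncate to rank k."""
    SA = S @ A
    Q = np.linalg.qr(SA.T)[0]          # orthonormal basis of the row space of SA
    B = A @ Q                          # project A onto that subspace
    U, sig, Vt = np.linalg.svd(B, full_matrices=False)
    Bk = U[:, :k] * sig[:k] @ Vt[:k]   # best rank-k approximation within the subspace
    return Bk @ Q.T                    # lift back to the original coordinates

# Toy usage: a random sketch; a learned sketch would pass `pattern`/`values`.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 50))     # n = 100, d = 50
S = sparse_sketch(m=10, n=100, s=2, rng=rng)
A_k = sketch_lra(A, S, k=5)
err = np.linalg.norm(A - A_k, "fro")
```

In the learned setting, the non-zero values (and, per contribution (2), their positions) of `S` are optimized over training matrices; the generalization bounds in the abstract control how well such a learned `S` transfers to unseen inputs.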
