

Any-dimensional equivariant neural networks

Eitan Levin · Mateo Diaz

MR1 & MR2 - Number 91
Sat 4 May 6 a.m. PDT — 8:30 a.m. PDT


Traditional supervised learning aims to learn an unknown mapping by fitting a function to a set of input-output pairs of a fixed dimension. The fitted function is then defined only on inputs of that same dimension. However, in many settings the unknown mapping takes inputs of any dimension; examples include graph parameters defined on graphs of any size and physical quantities defined on an arbitrary number of particles. We leverage a newly discovered phenomenon in algebraic topology, called representation stability, to define equivariant neural networks that can be trained with data in a fixed dimension and then extended to accept inputs in any dimension. Our approach is black-box and user-friendly, requiring only the network architecture and the groups with respect to which it is equivariant, and it can be combined with any training procedure. We provide a simple open-source implementation of our methods and report preliminary numerical experiments.
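To make the idea concrete, here is a minimal sketch (not the authors' implementation) of the kind of dimension-extendable layer the abstract alludes to: a permutation-equivariant linear map in the DeepSets style, f(x) = a·x + b·mean(x)·1 + c·1. Its three scalar parameters do not depend on the input dimension n, so a layer fitted on inputs of one size applies verbatim to inputs of any other size. The class name and parameterization are illustrative choices, not taken from the paper.

```python
import numpy as np

class EquivariantLinear:
    """A hypothetical permutation-equivariant linear layer (DeepSets-style).

    f(x) = a * x + b * mean(x) * 1 + c * 1.
    The parameters (a, b, c) are scalars, independent of the input
    dimension, so the same trained layer accepts vectors of any length.
    """

    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c

    def __call__(self, x):
        x = np.asarray(x, dtype=float)
        return self.a * x + self.b * x.mean() + self.c

layer = EquivariantLinear(a=2.0, b=1.0, c=0.5)

# Equivariance: permuting the input permutes the output the same way.
x = np.array([1.0, 2.0, 3.0])
perm = np.array([2, 0, 1])
assert np.allclose(layer(x[perm]), layer(x)[perm])

# Extension: the identical parameters act on inputs of a new dimension.
y = np.arange(5, dtype=float)
print(layer(y).shape)  # (5,)
```

This toy layer only handles the symmetric group acting on vectors; the paper's construction is more general, producing such dimension-free parameterizations automatically from a given architecture and group.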
