

Poster

Monotone Operator Theory-Inspired Message Passing for Learning Long-Range Interaction on Graphs

Justin Baker · Qingsong Wang · Martin Berzins · Thomas Strohmer · Bao Wang

MR1 & MR2 - Number 154
Fri 3 May 8 a.m. PDT — 8:30 a.m. PDT

Abstract:

Learning long-range interactions (LRI) between distant nodes is crucial for many graph learning tasks. Predominant graph neural networks (GNNs) rely on local message passing and struggle to learn LRI. In this paper, we propose DRGNN, which learns LRI by leveraging monotone operator theory. DRGNN contains two key components: (1) we use a full node similarity matrix beyond the adjacency matrix -- drawing inspiration from the personalized PageRank matrix -- as the aggregation matrix for message passing, and (2) we implement message passing on graphs using Douglas-Rachford splitting to circumvent prohibitive matrix inversion. We demonstrate that DRGNN surpasses various advanced GNNs, including Transformer-based models, on several benchmark LRI learning tasks arising from different application domains, highlighting its efficacy in learning LRI. Code is available at https://github.com/Utah-Math-Data-Science/PR-inspired-aggregation.
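To make the second ingredient concrete, here is a minimal, generic sketch of Douglas-Rachford splitting, not the authors' DRGNN implementation. The idea the abstract relies on is that a composite problem can be solved using only the resolvent (proximal) maps of its two parts, so no explicit matrix inverse is ever formed. As a self-contained illustration we minimize f(x) + g(x) with f(x) = ½‖x − b‖² and g(x) = λ‖x‖₁, both of whose resolvents have cheap closed forms; the function names and parameters below are ours, chosen for the example.

```python
import numpy as np

def prox_f(z, b, gamma):
    # Resolvent of f(x) = 0.5*||x - b||^2:
    # argmin_x f(x) + ||x - z||^2 / (2*gamma), in closed form.
    return (z + gamma * b) / (1.0 + gamma)

def prox_g(z, lam, gamma):
    # Resolvent of g(x) = lam*||x||_1: elementwise soft-thresholding.
    return np.sign(z) * np.maximum(np.abs(z) - gamma * lam, 0.0)

def douglas_rachford(b, lam, gamma=1.0, iters=100):
    # Standard DR iteration: only resolvent evaluations, no matrix inversion.
    z = np.zeros_like(b)
    for _ in range(iters):
        x = prox_f(z, b, gamma)               # resolvent of the first operator
        y = prox_g(2.0 * x - z, lam, gamma)   # resolvent at the reflected point
        z = z + y - x                         # update the governing sequence
    return prox_f(z, b, gamma)

b = np.array([3.0, 0.5, -2.0])
x_star = douglas_rachford(b, lam=1.0)
# The known closed-form minimizer is soft-thresholding of b: [2, 0, -1]
print(np.round(x_star, 6))
```

In DRGNN the two operators would instead encode the aggregation step and the pointwise nonlinearity of an implicit message-passing layer, but the fixed-point structure of the iteration is the same.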
