

Poster

Reinforcement Learning for Adaptive Mesh Refinement

Jiachen Yang · Tarik Dzanic · Brenden Petersen · Jun Kudo · Ketan Mittal · Vladimir Tomov · Jean-Sylvain Camier · Tuo Zhao · Hongyuan Zha · Tzanio Kolev · Robert Anderson · Daniel Faissol

Auditorium 1 Foyer 10

Abstract:

Finite element simulations of physical systems governed by partial differential equations (PDEs) crucially depend on adaptive mesh refinement (AMR) to allocate computational budget to regions where higher resolution is required. Existing scalable AMR methods make heuristic refinement decisions based on instantaneous error estimation and thus do not aim for long-term optimality over an entire simulation. We propose a novel formulation of AMR as a Markov decision process and apply deep reinforcement learning (RL) to train refinement policies directly from simulation. AMR poses a new problem for RL in that both the state dimension and the available action set change at every step, which we address by proposing new policy architectures with differing generality and inductive bias. The model sizes of these policy architectures are independent of the mesh size, so they can be deployed on larger simulations than those used at training time. We demonstrate in comprehensive experiments on static function estimation and time-dependent equations that RL policies can be trained on problems without using ground truth solutions, are competitive with a widely used error estimator, and generalize to larger and unseen test problems.
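To make the key architectural idea concrete, below is a minimal sketch (not the authors' code) of one way a refinement policy can be independent of mesh size: a shared network scores each mesh element, so the parameter count is fixed while the action set (the elements available to refine) can change at every step. All names here (RefinementPolicy, element_features) are hypothetical illustrations.

    import torch
    import torch.nn as nn

    class RefinementPolicy(nn.Module):
        """Shared per-element scorer; applies to any number of elements,
        so model size does not grow with the mesh."""
        def __init__(self, feature_dim: int, hidden_dim: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(feature_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, 1),
            )

        def forward(self, element_features: torch.Tensor):
            # element_features: (num_elements, feature_dim); num_elements
            # may differ at every step as the mesh is refined.
            logits = self.net(element_features).squeeze(-1)
            return torch.distributions.Categorical(logits=logits)

    # Usage: sample one element to refine from a 12-element mesh.
    policy = RefinementPolicy(feature_dim=4)
    dist = policy(torch.randn(12, 4))   # action set has 12 choices this step
    action = dist.sample()              # index of the element to refine

Because the same network is applied to every element, a policy trained on small meshes can be deployed on larger ones, matching the generalization claim in the abstract; the specific features and architecture used in the paper may differ.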
