Poster

Benchmarking Simulation-Based Inference

Jan-Matthis Lueckmann · Jan Boelts · David Greenberg · Pedro Gonçalves · Jakob Macke

Keywords: [ Probabilistic Methods ] [ Approximate Inference ]

Thu 15 Apr 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Recent advances in probabilistic modelling have led to a large number of simulation-based inference algorithms which do not require numerical evaluation of likelihoods. However, a public benchmark with appropriate performance metrics for such 'likelihood-free' algorithms has been lacking. This has made it difficult to compare algorithms and identify their strengths and weaknesses. We set out to fill this gap: we provide a benchmark with inference tasks and suitable performance metrics, with an initial selection of algorithms including recent approaches employing neural networks and classical Approximate Bayesian Computation methods. We found that the choice of performance metric is critical, that even state-of-the-art algorithms have substantial room for improvement, and that sequential estimation improves sample efficiency. Neural network-based approaches generally exhibit better performance, but there is no uniformly best algorithm. We provide practical advice and highlight the potential of the benchmark to diagnose problems and improve algorithms. The results can be explored interactively on a companion website. All code is open source, making it possible to contribute further benchmark tasks and inference algorithms.
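To make the "likelihood-free" setting concrete, the following is a minimal sketch of rejection ABC, the simplest of the classical Approximate Bayesian Computation methods mentioned above. It is a generic illustration on a toy Gaussian task, not code from the benchmark; the task, summary statistic, and tolerance `eps` are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: infer the mean theta of a Gaussian with known std = 1.
# Prior: theta ~ N(0, 2^2). The "simulator" draws 10 samples given theta;
# we never evaluate the likelihood, only run the simulator forward.
def simulator(theta, n=10):
    return rng.normal(theta, 1.0, size=n)

x_obs = simulator(1.5)  # synthetic "observed" data

def rejection_abc(x_obs, num_sims=20_000, eps=0.1):
    """Keep prior draws whose simulated summary statistic
    (here, the sample mean) lands within eps of the observed one."""
    s_obs = x_obs.mean()
    accepted = []
    for _ in range(num_sims):
        theta = rng.normal(0.0, 2.0)       # sample from the prior
        x = simulator(theta)               # run the simulator
        if abs(x.mean() - s_obs) < eps:    # accept/reject on the summary
            accepted.append(theta)
    return np.array(accepted)

posterior_samples = rejection_abc(x_obs)
print(len(posterior_samples), posterior_samples.mean())
```

The accepted `theta` values approximate the posterior; shrinking `eps` sharpens the approximation at the cost of sample efficiency, which is exactly the trade-off that motivates the neural, sequential estimators compared in the benchmark.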
