

Oral: Applications

Auditorium 1
Fri 3 May, 6:00–6:30 a.m. PDT


Equivariant bootstrapping for uncertainty quantification in imaging inverse problems

Marcelo Pereyra · Julian Tachella

Scientific imaging problems are often severely ill-posed and hence have significant intrinsic uncertainty. Accurately quantifying the uncertainty in the solutions to such problems is therefore critical for the rigorous interpretation of experimental results, as well as for reliably using the reconstructed images as scientific evidence. Unfortunately, existing imaging methods are unable to quantify the uncertainty in the reconstructed images in a way that is robust to experiment replications. This paper presents a new uncertainty quantification methodology based on an equivariant formulation of the parametric bootstrap algorithm that leverages symmetries and invariance properties commonly encountered in imaging problems. Additionally, the proposed methodology is general and can be easily applied with any image reconstruction technique, including unsupervised strategies that are trained from observed data alone, thus enabling uncertainty quantification in situations where no ground-truth data are available. We demonstrate the proposed approach with a series of experiments and comparisons against alternative state-of-the-art uncertainty quantification strategies. In all our experiments, the proposed equivariant bootstrap delivers remarkably accurate high-dimensional confidence regions and outperforms the competing approaches in terms of estimation accuracy, uncertainty quantification accuracy, and computing time. These empirical findings are supported by a detailed theoretical analysis of the equivariant bootstrap for linear estimators.
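To illustrate the idea, here is a minimal sketch of an equivariant parametric bootstrap for a linear inverse problem y = Ax + noise. The toy forward operator, Gaussian noise level, ridge estimator, and the choice of cyclic shifts as the symmetry group are all illustrative assumptions, not the authors' exact setup: the point is only the loop structure, in which a random group action is applied to the current reconstruction before simulating new data and inverted afterwards.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 64, 0.05
A = rng.standard_normal((n, n)) / np.sqrt(n)   # toy forward operator
x_true = np.zeros(n)
x_true[10:20] = 1.0                            # toy piecewise-constant signal
y = A @ x_true + sigma * rng.standard_normal(n)

def reconstruct(y):
    # Placeholder estimator: ridge-regularized least squares.
    # Any reconstruction method (e.g., a trained network) could stand in here.
    return np.linalg.solve(A.T @ A + 0.1 * np.eye(n), A.T @ y)

x_hat = reconstruct(y)

def equivariant_bootstrap(x_hat, B=200):
    samples = []
    for _ in range(B):
        s = rng.integers(n)                    # draw a random group element
        x_g = np.roll(x_hat, s)                # apply the group action T_g
        y_star = A @ x_g + sigma * rng.standard_normal(n)  # simulate new data
        x_star = np.roll(reconstruct(y_star), -s)          # undo the action
        samples.append(x_star)
    return np.stack(samples)

samples = equivariant_bootstrap(x_hat)
# Pixel-wise 95% confidence intervals from the bootstrap distribution.
lo, hi = np.percentile(samples, [2.5, 97.5], axis=0)
```

Setting the shift to zero in every iteration recovers the standard parametric bootstrap; the randomization over the group is what distinguishes the equivariant variant sketched here.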


Mixed Models with Multiple Instance Learning

Jan Engelmann · Alessandro Palma · Jakub Tomczak · Fabian Theis · Francesco Paolo Casale

Predicting patient features from single-cell data can help identify cellular states implicated in health and disease. Linear models and average cell type expressions are typically favored for this task for their efficiency and robustness, but they overlook the rich cell heterogeneity inherent in single-cell data. To address this gap, we introduce MixMIL, a framework integrating Generalized Linear Mixed Models (GLMM) and Multiple Instance Learning (MIL), retaining the advantages of linear models while modeling cell-state heterogeneity. By leveraging predefined cell embeddings, MixMIL enhances computational efficiency and aligns with recent advancements in single-cell representation learning. Our empirical results reveal that MixMIL outperforms existing MIL models on single-cell datasets, uncovering new associations and elucidating biological mechanisms across different domains.
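As a rough illustration of the MIL side of such a model, the sketch below pools precomputed single-cell embeddings for one patient (a "bag") with attention and feeds the result to a linear head, GLM-style. The dimensions, the attention form, and the binary-outcome head are illustrative assumptions; the actual MixMIL couples instance-level pooling with a generalized linear mixed model rather than this plain linear layer.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, embed_dim=32, attn_dim=16):
        super().__init__()
        # Attention scoring of individual cells within a bag.
        self.attn = nn.Sequential(
            nn.Linear(embed_dim, attn_dim), nn.Tanh(), nn.Linear(attn_dim, 1)
        )
        self.head = nn.Linear(embed_dim, 1)   # linear predictor over the bag

    def forward(self, cells):                  # cells: (n_cells, embed_dim)
        w = torch.softmax(self.attn(cells), dim=0)  # per-cell weights, sum to 1
        bag = (w * cells).sum(dim=0)                # attention-weighted pooling
        return self.head(bag)                       # patient-level logit

model = AttentionMIL()
bag = torch.randn(500, 32)       # 500 cells with 32-dim predefined embeddings
prob = torch.sigmoid(model(bag)) # patient-level prediction
```

Because the cell embeddings are fixed rather than learned end to end, only the attention weights and the linear head are trained, which is consistent with the efficiency argument made in the abstract.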