Workshops
Causality in the Age of AI Scaling
Reasoning about interventions, the core of causality, is fundamental to solving many of modern AI's most pressing challenges, including trustworthiness, reliability, explainability, and out-of-distribution generalization. Yet recent AI breakthroughs have been overwhelmingly driven by scaling models on simple predictive objectives, such as next-word prediction for large language models or denoising prediction for diffusion models, without any explicit causal modeling. This success raises a critical question for the community: can causal abilities emerge from scale alone, and if not, what can explicit causal modeling bring that scale cannot? This workshop aims to address this question and explore the potential synergy between scaling predictive methods and formal causal modeling to build the next generation of AI.
OPTIMAL: (O)ptimisation and (P)os(T)-Bayesian (I)nference in (MA)chine (L)earning
The aim of probabilistic machine learning is to find accurate representations of our uncertain beliefs about the world and use them to make better-informed decisions. This workshop brings together post-Bayesian approaches to inference and optimisation-based perspectives on uncertainty and decision-making. Post-Bayesian methods address the limitations of classical Bayesian inference by developing alternative inferential principles that remain robust in modern machine-learning settings, where standard modelling assumptions may be violated. Complementing this view, optimisation-based approaches treat inference and decision-making as problems of optimising functionals of probability distributions, providing a unifying framework for both learning probabilistic representations and acting upon them. This workshop welcomes all theoretical and methodological work on how best to represent, find, and use probabilistic beliefs about the world.
Towards Trustworthy Predictions: Theory and Applications of Calibration for Modern AI
Calibration—the alignment between predicted probabilities and empirical frequencies—is central to reliability, decision support, and human trust in modern AI systems. Despite rapid progress in the theory and methods of calibration, from classification to generative modeling, research on calibration remains fragmented across machine learning, statistics, and theoretical computer science, as well as applied areas such as medicine. This workshop will unite these communities to clarify foundational questions, align evaluation practices, and explore practical implications for trustworthy AI. Through a tutorial, invited talks, contributed papers and posters, and open problem sessions, we aim to consolidate shared understanding and build a lasting, cross-disciplinary community around calibration.