Workshops
Towards Trustworthy Predictions: Theory and Applications of Calibration for Modern AI
Calibration—the alignment between predicted probabilities and empirical frequencies—is central to reliability, decision support, and human trust in modern AI systems. Despite rapid progress in the theory and methods of calibration, spanning classification to generative modeling, research on calibration remains fragmented across machine learning, statistics, theoretical computer science, and applied areas such as medicine. This workshop will unite these communities to clarify foundational questions, align evaluation practices, and explore practical implications for trustworthy AI. Through a tutorial, invited talks, contributed papers and posters, and open problem sessions, we aim to consolidate shared understanding and build a lasting, cross-disciplinary community around calibration.
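To make the opening definition concrete, here is a minimal sketch (not part of the workshop materials) of the standard binned Expected Calibration Error, which measures the gap between predicted confidence and empirical accuracy; the function name and binning scheme are illustrative choices, not a prescribed evaluation protocol:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: average |accuracy - confidence| gap, weighted by bin mass.

    confidences: predicted probability of the predicted class, shape (n,)
    correct:     1 if the prediction was right, else 0, shape (n,)
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # right-inclusive bins so a confidence of exactly 1.0 is counted
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece
```

A model is well calibrated when, within each confidence bin, accuracy matches confidence: e.g. predictions made at 80% confidence that are right 80% of the time contribute zero gap.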
Causality in the Age of AI Scaling
Reasoning about interventions, the core of causality, is fundamental to solving many of modern AI's most pressing challenges, including trustworthiness, reliability, explainability, and out-of-distribution generalization. Yet recent AI breakthroughs have been overwhelmingly driven by scaling models on simple predictive objectives without explicit causal modeling, such as next-word prediction for Large Language Models or denoising prediction for diffusion models. This success raises a critical question for the community: Can causal abilities emerge from scale alone, and if not, what can explicit causal modeling bring that scale cannot? This workshop aims to address this question and explore the potential synergy between scaling predictive methods and formal causal modeling to build the next generation of AI.
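The distinction between prediction and intervention can be illustrated with a toy simulation (an assumed example, not from the workshop): in a hypothetical structural causal model where a confounder Z drives both X and Y, conditioning on X gives a very different answer than intervening on it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical SCM with a confounder:
#   Z ~ Bernoulli(0.5);  X := Z (with 10% noise);  Y := Z  (X does NOT cause Y)
z = rng.random(n) < 0.5
x = np.where(rng.random(n) < 0.9, z, ~z)  # X copies Z 90% of the time
y = z                                     # Y is driven by Z alone

# Observational (predictive) quantity: P(Y=1 | X=1) is high,
# because seeing X=1 is evidence that Z=1.
p_conditional = y[x].mean()               # ~0.9

# Interventional quantity: do(X=1) severs the Z -> X edge, leaving Y
# untouched, so P(Y=1 | do(X=1)) = P(Z=1) = 0.5.
p_do = y.mean()                           # ~0.5
```

A purely predictive model trained on (X, Y) pairs would report the 0.9 figure; answering the interventional query requires knowing (or learning) the causal structure, which is the gap the workshop's question is about.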