Towards Trustworthy Predictions: Theory and Applications of Calibration for Modern AI
Abstract
Calibration, the alignment between predicted probabilities and empirical frequencies, is central to reliability, decision support, and human trust in modern AI systems. Despite rapid progress in calibration theory and methods, from classification to generative modeling, research remains fragmented across machine learning, statistics, theoretical computer science, and applied areas such as medicine. This workshop will unite these communities to clarify foundational questions, align evaluation practices, and explore practical implications for trustworthy AI. Through a tutorial, invited talks, contributed papers and posters, and open-problem sessions, we aim to consolidate shared understanding and build a lasting, cross-disciplinary community around calibration.