Timezone: Africa/Casablanca
SAT 2 MAY
7:30 AM – 5:00 PM
8:30 AM – 9:00 AM  Opening Remarks
9:00 AM – 10:00 AM  Invited Talk: Eric Xing
10:00 AM – 10:30 AM  Break
10:30 AM – 11:30 AM  Orals & Spotlights
Orals:
Rethinking Probabilistic Circuit Parameter Learning
Spotlights:
Standard Acquisition Is Sufficient for Asynchronous Bayesian Optimization
Learning Hyperparameters via a Data-Emphasized Variational Objective
On the Interplay of Priors and Overparametrization in Bayesian Neural Network Posteriors
Local Inconsistency Resolution: The Interplay between Attention and Control in Probabilistic Models
Hellinger Multimodal Variational Autoencoders
Dendrograms of Mixing Measures for Softmax-Gated Gaussian Mixture of Experts: Consistency Without Model Sweeps
Laplace approximation for Bayesian variable selection via Le Cam's one-step procedure
11:30 AM – 12:30 PM  Orals & Spotlights
Orals:
Gaussian Equivalence for Self-Attention: Asymptotic Spectral Analysis of Attention Matrix
Spotlights:
PAC-Bayesian Bounds on Constrained $f$-Entropic Risk Measures
Empirical PAC-Bayes Bounds for Markov Chains
Support Basis: Fast Attention Beyond Bounded Entries
Minimax Generalized Cross-Entropy
Near-Optimal Sample Complexities of Divergence-based S-rectangular Distributionally Robust Reinforcement Learning
Beyond Johnson-Lindenstrauss: Uniform Bounds for Sketched Bilinear Forms
High Effort, Low Gain: Fundamental Limits of Active Learning for Linear Dynamical Systems
12:30 PM – 2:00 PM  Break
2:00 PM – 3:00 PM  Orals & Spotlights
Orals:
A Proof of Learning Rate Transfer under $\mu$P
Spotlights:
Precise Dynamics of Diagonal Linear Networks: A Unifying Analysis by Dynamical Mean-Field Theory
A projection-based framework for gradient-free and parallel learning
Generalized and Optimal Straight-Through Estimators
GiVA: Gradient-Informed Bases for Vector-Based Adaptation
Tyler’s M-estimator Through the Lens Of Convex-Concave Programming
Gaussian Approximation and Multiplier Bootstrap for Stochastic Gradient Descent
Composable Coresets for Constrained Determinant Maximization and Beyond
3:00 PM – 6:00 PM  Posters
6:00 PM – 8:00 PM  Reception

SUN 3 MAY
8:30 AM – 5:00 PM
9:00 AM – 10:00 AM  Invited Talk: Emma Brunskill
10:00 AM – 10:30 AM  Break
10:30 AM – 11:30 AM  Orals & Spotlights
Orals:
Creator Incentives in Recommender Systems: A Cooperative Game-Theoretic Approach for Stable and Fair Collaboration in Multi-Agent Bandits
Spotlights:
Pure Exploration with Infinite Answers
A Modularized Framework for Piecewise-Stationary Restless Bandits
Towards Blackwell Optimality: Bellman Optimality Is All You Can Get
Tight Regret Upper and Lower Bounds for Optimistic Hedge in Two-Player Zero-Sum Games
Parameter-Free Dynamic Regret for Unconstrained Linear Bandits
An Information-Geometric Approach to Artificial Curiosity
Learning to Bid in Discriminatory Auctions with Budget Constraints
12:30 PM – 2:00 PM  Break
2:00 PM – 3:00 PM  Orals & Spotlights
Orals:
Complexity-Aware Deep Symbolic Regression with Robust Risk-Seeking Policy Gradients
Spotlights:
Representation Learning via Non-Contrastive Mutual Information
Why is prompting hard? Understanding prompts on binary sequence predictors
On the Role of Depth in the Expressivity of RNNs
Certifying Reading Comprehension in Large Language Models
In-Context Learning for Discrete Optimal Transport: Can Transformers Sort?
LAMP: Extracting Local Decision Surfaces from Large Language Models
Beyond Binning: Soft Task Reformulation for Deep Regression
3:00 PM – 6:00 PM  Posters

MON 4 MAY
8:30 AM – 5:00 PM
9:00 AM – 10:00 AM  Invited Talk: Taiji Suzuki
10:00 AM – 10:30 AM  Break
10:30 AM – 11:30 AM  Orals & Spotlights
Orals:
Beyond Real Data: Synthetic Data through the Lens of Regularization
Spotlights:
Archetypal Graph Generative Models: Explainable and Identifiable Communities via Anchor-Dominant Convex Hulls
Longitudinal Flow Matching for Trajectory Modeling
Denoising Score Matching with Random Features: Insights on Diffusion Models From Precise Learning Curves
Simplex-to-Euclidean Bijections for Categorical Flow Matching
A Continuous Time Markov Chain Framework for Insertion Language Models
Explicit Density Approximation for Neural Implicit Samplers Using a Bernstein-Based Convex Divergence
High-Performance Self-Supervised Representation Learning by Joint Training of Flow Matching and Representation Encoder
11:30 AM – 12:30 PM  Orals & Spotlights
Orals:
Orthogonal Representation Learning for Estimating Causal Quantities
Spotlights:
On the Number of Conditional Independence Tests in Constraint-based Causal Discovery
A Semi-Supervised Kernel Two-Sample Test
DP-SPRT: Differentially Private Sequential Probability Ratio Tests
On the Intrinsic Dimensions of Data in Kernel Learning
RealStats: A Rigorous Real-Only Statistical Framework for Fake Image Detection
The Good, the Bad, and the Sampled: a No-Regret Approach to Safe Online Classification
Distribution free M-estimation
12:30 PM – 2:00 PM  Break
2:00 PM – 3:00 PM  Orals & Spotlights
Orals:
Panprediction: Optimal Predictions for Any Downstream Task and Loss
Spotlights:
OEUVRE: OnlinE Unbiased Variance-Reduced loss Estimation
On the calibration of survival models with competing risks
Patch2Loc: Learning to Localize Patches for Unsupervised Brain Lesion Detection
AMRM-Pure: Semantic-Preserving Adversarial Purification
Scalable Utility-Aware Multiclass Calibration
Near Optimal Dropout-Robust Sortion
SetPINNs: Set-based Physics-informed Neural Networks
3:00 PM – 6:00 PM  Posters