Timezone: America/Los_Angeles

WED 1 MAY
11:45 p.m.
Remarks:
(ends 12:00 AM)

THU 2 MAY
midnight
Invited Talk:
Matthew D. Hoffman
(ends 1:00 AM)
1 a.m.
Break:
(ends 1:30 AM)
1:30 a.m.
Orals 1:30-2:30
[1:30] Conformal Contextual Robust Optimization
[1:30] Near-Optimal Policy Optimization for Correlated Equilibrium in General-Sum Markov Games
[1:30] Model-based Policy Optimization under Approximate Bayesian Inference
[1:30] Online Learning of Decision Trees with Thompson Sampling
(ends 2:30 AM)
3:30 a.m.
Lunch Break on your own:
(ends 5:00 AM)
5 a.m.
Orals 5:00-6:15
[5:00] The sample complexity of ERMs in stochastic convex optimization
[5:00] Stochastic Methods in Variational Inequalities: Ergodicity, Bias and Refinements
[5:00] Absence of spurious solutions far from ground truth: A low-rank analysis with high-order losses
[5:00] Learning-Based Algorithms for Graph Searching Problems
[5:00] Graph Partitioning with a Move Budget
(ends 6:15 AM)
6:15 a.m.
Break:
(ends 6:45 AM)
6:45 a.m.
Orals 6:45-8:00
[6:45] Neural McKean-Vlasov Processes: Distributional Dependence in Diffusion Processes
[6:45] Reparameterized Variational Rejection Sampling
[6:45] Intrinsic Gaussian Vector Fields on Manifolds
[6:45] Generative Flow Networks as Entropy-Regularized RL
[6:45] Robust Approximate Sampling via Stochastic Gradient Barker Dynamics
(ends 8:00 AM)
8 a.m.
Posters 8:00-8:30
(ends 8:30 AM)
9 a.m.
Affinity Event:
(ends 11:00 AM)
11 p.m.
Mentoring Event (D&I):
(ends 12:00 AM)

FRI 3 MAY
midnight
Invited Talk:
Aaditya Ramdas
(ends 1:00 AM)
1 a.m.
Break:
(ends 1:30 AM)
1:30 a.m.
Orals 1:30-2:30
[1:30] Positivity-free Policy Learning with Observational Data
[1:30] Best-of-Both-Worlds Algorithms for Linear Contextual Bandits
[1:30] Policy Learning for Localized Interventions from Observational Data
[1:30] Exploration via linearly perturbed loss minimisation
(ends 2:30 AM)
2:30 a.m.
Orals 2:30-3:30
[2:30] Membership Testing in Markov Equivalence Classes via Independence Queries
[2:30] Causal Modeling with Stationary Diffusions
[2:30] On the Misspecification of Linear Assumptions in Synthetic Controls
[2:30] General Identifiability and Achievability for Causal Representation Learning
(ends 3:30 AM)
3:30 a.m.
Lunch Break on your own:
(ends 5:00 AM)
Mentoring Event (D&I):
(ends 5:00 AM)
5 a.m.
Test Of Time:
(ends 6:00 AM)
6:15 a.m.
Break:
(ends 6:45 AM)
7 a.m.
Orals 7:00-8:00
[7:00] End-to-end Feature Selection Approach for Learning Skinny Trees
[7:00] Probabilistic Modeling for Sequences of Sets in Continuous-Time
[7:00] Learning to Defer to a Population: A Meta-Learning Approach
[7:00] An Impossibility Theorem for Node Embedding
(ends 8:00 AM)
8 a.m.
Posters 8:00-8:30
(ends 8:30 AM)
11 p.m.
Mentoring Event (D&I):
(ends 12:00 AM)

SAT 4 MAY
1 a.m.
Break:
(ends 1:30 AM)
1:30 a.m.
Orals 1:30-2:30
[1:30] Mind the GAP: Improving Robustness to Subpopulation Shifts with Group-Aware Priors
[1:30] Functional Flow Matching
[1:30] Deep Classifier Mimicry without Data Access
[1:30] Multi-Resolution Active Learning of Fourier Neural Operators
(ends 2:30 AM)
2:30 a.m.
Orals 2:30-3:30
[2:30] Transductive conformal inference with adaptive scores
[2:30] Approximate Leave-one-out Cross Validation for Regression with $\ell_1$ Regularizers
[2:30] Failures and Successes of Cross-Validation for Early-Stopped Gradient Descent
[2:30] Testing exchangeability by pairwise betting
(ends 3:30 AM)
3:30 a.m.
Lunch Break on your own:
(ends 5:00 AM)
Mentoring Event (D&I):
(ends 5:00 AM)
5 a.m.
Orals 5:00-6:00
[5:00] Efficient Data Shapley for Weighted Nearest Neighbor Algorithms
[5:00] On Counterfactual Metrics for Social Welfare: Incentives, Ranking, and Information Asymmetry
[5:00] Joint Selection: Adaptively Incorporating Public Information for Private Synthetic Data
[5:00] Is this model reliable for everyone? Testing for strong calibration
(ends 6:00 AM)
6 a.m.
Posters 6:00-8:30
(ends 8:30 AM)