Timezone: America/Los_Angeles
THU 2 MAY

Orals 1:30-2:30 AM
[1:30] Conformal Contextual Robust Optimization
[1:30] Near-Optimal Policy Optimization for Correlated Equilibrium in General-Sum Markov Games
[1:30] Model-based Policy Optimization under Approximate Bayesian Inference
[1:30] Online Learning of Decision Trees with Thompson Sampling
Orals 5:00-6:15 AM
[5:00] The sample complexity of ERMs in stochastic convex optimization
[5:00] Stochastic Methods in Variational Inequalities: Ergodicity, Bias and Refinements
[5:00] Absence of spurious solutions far from ground truth: A low-rank analysis with high-order losses
[5:00] Learning-Based Algorithms for Graph Searching Problems
[5:00] Graph Partitioning with a Move Budget
Orals 6:45-8:00 AM
[6:45] Neural McKean-Vlasov Processes: Distributional Dependence in Diffusion Processes
[6:45] Reparameterized Variational Rejection Sampling
[6:45] Intrinsic Gaussian Vector Fields on Manifolds
[6:45] Generative Flow Networks as Entropy-Regularized RL
[6:45] Robust Approximate Sampling via Stochastic Gradient Barker Dynamics
Posters 8:00-8:30 AM
FRI 3 MAY

Orals 1:30-2:30 AM
[1:30] Positivity-free Policy Learning with Observational Data
[1:30] Best-of-Both-Worlds Algorithms for Linear Contextual Bandits
[1:30] Policy Learning for Localized Interventions from Observational Data
[1:30] Exploration via linearly perturbed loss minimisation
Orals 2:30-3:30 AM
[2:30] Membership Testing in Markov Equivalence Classes via Independence Queries
[2:30] Causal Modeling with Stationary Diffusions
[2:30] On the Misspecification of Linear Assumptions in Synthetic Controls
[2:30] General Identifiability and Achievability for Causal Representation Learning
Orals 7:00-8:00 AM
[7:00] End-to-end Feature Selection Approach for Learning Skinny Trees
[7:00] Probabilistic Modeling for Sequences of Sets in Continuous-Time
[7:00] Learning to Defer to a Population: A Meta-Learning Approach
[7:00] An Impossibility Theorem for Node Embedding
SAT 4 MAY

Invited Talk 12:00-1:00 AM: Stefanie Jegelka

Orals 1:30-2:30 AM
[1:30] Mind the GAP: Improving Robustness to Subpopulation Shifts with Group-Aware Priors
[1:30] Functional Flow Matching
[1:30] Deep Classifier Mimicry without Data Access
[1:30] Multi-Resolution Active Learning of Fourier Neural Operators
Orals 2:30-3:30 AM
[2:30] Transductive conformal inference with adaptive scores
[2:30] Approximate Leave-one-out Cross Validation for Regression with $\ell_1$ Regularizers
[2:30] Failures and Successes of Cross-Validation for Early-Stopped Gradient Descent
[2:30] Testing exchangeability by pairwise betting
Orals 5:00-6:00 AM
[5:00] Efficient Data Shapley for Weighted Nearest Neighbor Algorithms
[5:00] On Counterfactual Metrics for Social Welfare: Incentives, Ranking, and Information Asymmetry
[5:00] Joint Selection: Adaptively Incorporating Public Information for Private Synthetic Data
[5:00] Is this model reliable for everyone? Testing for strong calibration
Posters 6:00-8:30 AM
Krylov Cubic Regularized Newton: A Subspace Second-Order Method with Dimension-Free Convergence Rate
On the Impact of Overparameterization on the Training of a Shallow Neural Network in High Dimensions