
85 Results (Page 1 of 8)
Poster
Tue 14:00 No-Regret Algorithms for Private Gaussian Process Bandit Optimization
Abhimanyu Dubey
Poster
Tue 14:00 Multitask Bandit Learning Through Heterogeneous Feedback Aggregation
Zhi Wang · Chicheng Zhang · Manish Kumar Singh · Laurel Riek · Kamalika Chaudhuri
Poster
Tue 14:00 An Efficient Algorithm For Generalized Linear Bandit: Online Stochastic Gradient Descent and Thompson Sampling
Qin Ding · Cho-Jui Hsieh · James Sharpnack
Poster
Tue 14:00 Problem-Complexity Adaptive Model Selection for Stochastic Linear Bandits
Avishek Ghosh · Abishek Sankararaman · Kannan Ramchandran
Poster
Tue 14:00 Differentially Private Online Submodular Maximization
Sebastian Perez-Salazar · Rachel Cummings
Oral
Tue 16:15 Federated Multi-armed Bandits with Personalization
Chengshuai Shi · Cong Shen · Jing Yang
Oral
Tue 16:45 Provably Efficient Safe Exploration via Primal-Dual Policy Optimization
Dongsheng Ding · Xiaohan Wei · Zhuoran Yang · Zhaoran Wang · Mihailo Jovanovic
Poster
Tue 18:30 Experimental Design for Regret Minimization in Linear Bandits
Andrew Wagenmaker · Julian Katz-Samuels · Kevin Jamieson
Poster
Tue 18:30 Bandit algorithms: Letting go of logarithmic regret for statistical robustness
Kumar Ashutosh · Jayakrishnan Nair · Anmol Kagrecha · Krishna Jagannathan
Poster
Tue 18:30 Corralling Stochastic Bandit Algorithms
Raman Arora · Teodor Vanislavov Marinov · Mehryar Mohri
Poster
Tue 18:30 Parametric Programming Approach for More Powerful and General Lasso Selective Inference
Vo Nguyen Le Duy · Ichiro Takeuchi
Poster
Tue 18:30 Tractable contextual bandits beyond realizability
Sanath Kumar Krishnamurthy · Vitor Hadad · Susan Athey