39 Results (Page 1 of 4)
Poster · Tue 14:00 · Training a Single Bandit Arm
Eren Ozbay · Vijay Kamble

Poster · Tue 14:00 · No-Regret Algorithms for Private Gaussian Process Bandit Optimization
Abhimanyu Dubey

Poster · Tue 14:00 · Differentially Private Online Submodular Maximization
Sebastian Perez-Salazar · Rachel Cummings

Poster · Tue 14:00 · Problem-Complexity Adaptive Model Selection for Stochastic Linear Bandits
Avishek Ghosh · Abishek Sankararaman · Kannan Ramchandran

Poster · Tue 14:00 · Multitask Bandit Learning Through Heterogeneous Feedback Aggregation
Zhi Wang · Chicheng Zhang · Manish Kumar Singh · Laurel Riek · Kamalika Chaudhuri

Oral · Tue 16:45 · Provably Efficient Safe Exploration via Primal-Dual Policy Optimization
Dongsheng Ding · Xiaohan Wei · Zhuoran Yang · Zhaoran Wang · Mihailo Jovanovic

Poster · Tue 18:30 · Unifying Clustered and Non-stationary Bandits
Chuanhao Li · Qingyun Wu · Hongning Wang

Poster · Tue 18:30 · A Parameter-Free Algorithm for Misspecified Linear Contextual Bandits
Kei Takemura · Shinji Ito · Daisuke Hatano · Hanna Sumita · Takuro Fukunaga · Naonori Kakimura · Ken-ichi Kawarabayashi

Poster · Tue 18:30 · Bandit algorithms: Letting go of logarithmic regret for statistical robustness
Kumar Ashutosh · Jayakrishnan Nair · Anmol Kagrecha · Krishna Jagannathan

Poster · Tue 18:30 · Tight Regret Bounds for Infinite-armed Linear Contextual Bandits
Yingkai Li · Yining Wang · Xi Chen · Yuan Zhou

Poster · Tue 18:30 · Experimental Design for Regret Minimization in Linear Bandits
Andrew Wagenmaker · Julian Katz-Samuels · Kevin Jamieson

Poster · Tue 18:30 · Federated Multi-armed Bandits with Personalization
Chengshuai Shi · Cong Shen · Jing Yang