

Poster

Online Learning of Decision Trees with Thompson Sampling

Ayman Chaouki · Jesse Read · Albert Bifet

MR1 & MR2 - Number 19
Student Paper Highlight
Poster session: Fri 3 May 8 a.m. PDT — 8:30 a.m. PDT
 
Oral presentation: RL & Optimization
Thu 2 May 1:30 a.m. PDT — 2:30 a.m. PDT

Abstract:

Decision Trees are prominent prediction models for interpretable Machine Learning. They have been thoroughly researched, mostly in the batch setting with a fixed labelled dataset, leading to popular algorithms such as C4.5, ID3 and CART. Unfortunately, these methods are heuristic in nature: they rely on greedy splits that offer no guarantees of global optimality and often lead to unnecessarily complex, hard-to-interpret Decision Trees. Recent breakthroughs have addressed this suboptimality issue in the batch setting, but no such work has considered the online setting, where data arrive in a stream. To this end, we devise a new Monte Carlo Tree Search algorithm, Thompson Sampling Decision Trees (TSDT), able to produce optimal Decision Trees in an online setting. We analyse our algorithm and prove its almost sure convergence to the optimal tree. Furthermore, we conduct extensive experiments to validate our findings empirically. The proposed TSDT outperforms existing algorithms on several benchmarks, all while presenting the practical advantage of being tailored to the online setting.
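The abstract does not spell out the Thompson Sampling mechanism that gives TSDT its name, so the following is only a minimal, self-contained sketch of the generic Thompson Sampling principle on a two-armed Bernoulli bandit: maintain a Beta posterior per arm, sample from each posterior, and act on the best sample. The function name `thompson_step`, the uniform Beta(1, 1) prior, and the simulated reward probabilities are all illustrative assumptions, not details of the paper's algorithm, which applies this idea inside a Monte Carlo Tree Search over candidate trees.

```python
import random

def thompson_step(successes, failures):
    """Thompson Sampling step (illustrative, not the paper's TSDT):
    sample each arm's Beta(1 + s, 1 + f) posterior (uniform prior
    assumed) and return the index of the arm with the largest sample."""
    samples = [random.betavariate(1 + s, 1 + f)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=samples.__getitem__)

# Toy simulation: arm 1 has the higher (hypothetical) reward probability.
random.seed(0)
true_p = [0.3, 0.7]
succ, fail = [0, 0], [0, 0]
for _ in range(2000):
    arm = thompson_step(succ, fail)
    if random.random() < true_p[arm]:
        succ[arm] += 1   # observed reward: update posterior success count
    else:
        fail[arm] += 1   # observed failure: update posterior failure count

# Over many rounds the sampler concentrates its pulls on the better arm.
print(succ[1] + fail[1] > succ[0] + fail[0])
```

The appeal of this posterior-sampling rule, and plausibly the reason it suits a streaming setting, is that exploration decays automatically: as counts grow, the Beta posteriors concentrate and the sampler pulls the apparently best arm almost exclusively.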
