

Poster

Approximate information maximization for bandit games

Christian Vestergaard · Alexandre Perez-Lebel · Danqi Liao


Abstract:

Entropy maximization and free energy minimization are general physics principles for modeling dynamic systems. Notable examples include modeling decision-making within the brain using the free-energy principle, optimizing the accuracy-complexity trade-off when accessing hidden variables with the information bottleneck principle (Tishby et al. 2000), and navigation in random environments using information maximization (Vergassola et al. 2007). Building on these principles, we propose a new class of bandit algorithms that maximize an approximation to the information of a key variable within the system. To this end, we develop an approximate, analytical, physics-based representation of the entropy to forecast the information gain of each action, and we greedily choose the action with the largest information gain. This method performs strongly in classical bandit settings. Motivated by its empirical success, we prove its asymptotic optimality for the multi-armed bandit problem with Gaussian rewards. Since it encapsulates the system's properties in a single, global functional, this approach can be efficiently adapted to more complex bandit settings. This calls for further investigation of information maximization approaches for multi-armed bandit problems.
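The greedy scheme described above (forecast each action's information gain, then pull the arm with the largest one) can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the paper uses an analytical entropy approximation, whereas this sketch substitutes a simple Monte Carlo estimate of the entropy of the "best arm" posterior, with independent Gaussian posteriors per arm. All function names and parameters here are hypothetical.

```python
import numpy as np

def best_arm_entropy(mu, sigma, rng, n_samples=400):
    """Monte Carlo estimate of H[P(arm i has the highest mean)] under
    independent Gaussian posteriors N(mu[i], sigma[i]^2).
    (A stand-in for the paper's analytical entropy approximation.)"""
    draws = rng.normal(mu, sigma, size=(n_samples, len(mu)))
    p = np.bincount(draws.argmax(axis=1), minlength=len(mu)) / n_samples
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def infomax_bandit(true_means, horizon=120, noise_sd=0.3, seed=0):
    """Greedy information-maximization bandit with Gaussian rewards."""
    rng = np.random.default_rng(seed)
    k = len(true_means)
    counts = np.zeros(k)
    sums = np.zeros(k)
    # Pull each arm once to initialise the posteriors.
    for a in range(k):
        counts[a] += 1
        sums[a] += rng.normal(true_means[a], noise_sd)
    for _ in range(horizon - k):
        mu = sums / counts
        sigma = noise_sd / np.sqrt(counts)  # posterior std of each mean
        h_now = best_arm_entropy(mu, sigma, rng)
        gains = np.zeros(k)
        for a in range(k):
            # Forecast the expected entropy after one extra pull of arm a,
            # averaging over a few fantasy rewards from its posterior predictive.
            fantasy = rng.normal(mu[a], np.sqrt(sigma[a]**2 + noise_sd**2), size=8)
            post = 0.0
            for r in fantasy:
                c2 = counts.copy()
                c2[a] += 1
                mu2 = mu.copy()
                mu2[a] = (sums[a] + r) / c2[a]
                post += best_arm_entropy(mu2, noise_sd / np.sqrt(c2), rng)
            gains[a] = h_now - post / len(fantasy)
        a = int(np.argmax(gains))  # greedily maximize forecast information gain
        counts[a] += 1
        sums[a] += rng.normal(true_means[a], noise_sd)
    return sums / counts, counts
```

In this sketch the "key variable" whose information is maximized is the identity of the best arm; the paper's analytical functional would replace the inner Monte Carlo loop and avoid its sampling cost.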
