

Poster

Reinforcement Learning for Mean Field Games with Strategic Complementarities

Kiyeob Lee · Desik Rengarajan · Dileep Kalathil · Srinivas Shakkottai

Keywords: [ Algorithms ] [ Theory ] [ Learning Theory ] [ Active Learning ] [ Models and Methods ] [ Multi-agent systems ]


Abstract:

Mean Field Games (MFG) are a class of games with a very large number of agents, for which the standard equilibrium concept is the Mean Field Equilibrium (MFE). Algorithms for learning MFE in dynamic MFGs are unknown in general. Our focus is on an important subclass that possesses a monotonicity property called Strategic Complementarities (MFG-SC). We introduce a natural refinement of the equilibrium concept that we call Trembling-Hand-Perfect MFE (T-MFE), which allows agents to employ a measure of randomization while accounting for the impact of such randomization on their payoffs. We propose a simple algorithm for computing T-MFE under a known model. We then introduce a model-free and a model-based approach to learning T-MFE and provide the sample complexities of both algorithms. We further develop a fully online learning scheme that obviates the need for a simulator. Finally, we empirically evaluate the performance of the proposed algorithms via examples motivated by real-world applications.
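To illustrate the fixed-point structure behind computing a T-MFE under a known model, the sketch below alternates between a softmax ("trembling-hand") best response to a fixed population distribution and an update of that distribution under the resulting policy. The tabular model, the complementarity-style reward bonus, the softmax temperature, and the damping factor are illustrative assumptions for this sketch, not the authors' exact algorithm.

# A minimal sketch of a trembling-hand mean-field fixed-point iteration on a small
# tabular MFG with a *known* model. The model, reward shape, and update rules below
# are illustrative assumptions, not the paper's exact T-MFE algorithm.
import numpy as np

n_states, n_actions = 5, 3
rng = np.random.default_rng(0)

# Known model: transition kernel P[s, a, s'] and a base reward.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
base_r = rng.uniform(size=(n_states, n_actions))

def reward(mu):
    # r(s, a, mu): base reward plus a bonus in the population mass at s,
    # a crude stand-in for strategic complementarities.
    return base_r + 0.5 * mu[:, None]

def softmax_policy(Q, tau=0.1):
    # Trembling-hand randomization: Boltzmann policy with temperature tau.
    z = (Q - Q.max(axis=1, keepdims=True)) / tau
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def best_response(mu, gamma=0.9, tau=0.1, iters=500):
    # Approximate soft value iteration against the fixed population distribution mu.
    r = reward(mu)
    Q = np.zeros((n_states, n_actions))
    for _ in range(iters):
        pi = softmax_policy(Q, tau)
        V = (pi * Q).sum(axis=1)
        Q = r + gamma * P @ V
    return softmax_policy(Q, tau)

def induced_distribution(pi, mu, steps=200):
    # Push the population forward under policy pi until approximately stationary.
    P_pi = np.einsum('sa,sat->st', pi, P)
    for _ in range(steps):
        mu = mu @ P_pi
    return mu

# Fixed-point iteration with damping: policy given mu, then mu given policy.
mu = np.full(n_states, 1.0 / n_states)
for k in range(100):
    pi = best_response(mu)
    mu_new = induced_distribution(pi, mu)
    if np.abs(mu_new - mu).max() < 1e-8:
        break
    mu = 0.5 * mu + 0.5 * mu_new

print("approximate T-MFE population distribution:", np.round(mu, 3))

The softmax temperature tau plays the role of the trembling-hand randomization: each agent randomizes over actions while the value iteration explicitly accounts for that randomization in the expected payoff.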
