Poster
Minimal Expected Regret in Linear Quadratic Control
Yassir Jedra · Alexandre Proutiere
Abstract:
We consider the problem of online learning in Linear Quadratic Control systems whose state transition and state-action transition matrices and may be initially unknown. We devise an online learning algorithm and provide guarantees on its expected regret. This regret at time is upper bounded (i) by when and are unknown, (ii) by if only is unknown, and (iii) by if only is unknown and under some mild non-degeneracy condition ( and denote the dimensions of the state and of the control input, respectively). These regret scalings are minimal in , and as they match existing lower bounds in scenario (i) when [SF20], and in scenario (ii) [Lai86]. We conjecture that our upper bounds are also optimal in scenario (iii) (there is no known lower bound in this setting).Existing online algorithms proceed in epochs of (typically exponentially) growing durations. The control policy is fixed within each epoch, which considerably simplifies the analysis of the estimation error on and and hence of the regret. Our algorithm departs from this design choice: it is a simple variant of certainty-equivalence regulators, where the estimates of and and the resulting control policy can be updated as frequently as we wish, possibly at every step. Quantifying the impact of such a constantly-varying control policy on the performance of these estimates and on the regret constitutes one of the technical challenges tackled in this paper.
Chat is not available.