

Poster

Meta Learning MDPs with linear transition models

Robert Müller · Aldo Pacchiano

Virtual

Abstract:

We study meta-learning in Markov Decision Processes (MDPs) with linear transition models in the undiscounted episodic setting. Under a task-sharedness metric based on model proximity, we study task families characterized by a distribution over models specified by a bias term and a variance component. We then propose BUC-MatrixRL, a version of the UC-MatrixRL algorithm, and show that it can meaningfully leverage a set of sampled training tasks to quickly solve a test task sampled from the same task distribution by learning an estimator of the bias parameter of the task distribution. The analysis leverages and extends results from learning-to-learn linear regression and linear bandits to the more general case of MDPs with linear transition models. We prove that, compared to learning the tasks in isolation, BUC-MatrixRL achieves significant improvements in transfer regret for high-bias, low-variance task distributions.
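To make the bias-estimation idea concrete, here is a minimal sketch in Python/NumPy. It assumes a MatrixRL-style bilinear transition model, P(s'|s,a) proportional to phi(s,a)^T M psi(s'), estimates a bias matrix by averaging per-task ridge estimates of the core matrix M, and then shrinks the test-task estimate toward that bias. All variable names, the specific regularizer, and the toy data are illustrative assumptions, not the paper's actual algorithm or analysis.

```python
import numpy as np

# Illustrative sketch only: a bias-regularized least-squares estimate of the
# core matrix M in a linear transition model, shrunk toward a bias matrix
# learned from training tasks. Not the paper's BUC-MatrixRL implementation.

rng = np.random.default_rng(0)
d, d_prime, n = 5, 4, 200            # feature dimensions and sample count

def estimate_core(Phi, Y, M_bias, lam=1.0):
    """Solve  min_M ||Phi M - Y||_F^2 + lam ||M - M_bias||_F^2.

    Closed form: (Phi^T Phi + lam I)^{-1} (Phi^T Y + lam M_bias).
    With M_bias = 0 this reduces to ordinary ridge regression,
    i.e. learning the task in isolation.
    """
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ Y + lam * M_bias)

# Simulate training tasks drawn around a shared bias matrix
# (high-bias, low-variance task distribution).
M_star_bias = rng.normal(size=(d, d_prime))
tasks = [M_star_bias + 0.1 * rng.normal(size=(d, d_prime)) for _ in range(8)]

# Estimate each training task in isolation, then average the estimates
# to obtain an estimator of the bias parameter of the task distribution.
estimates = []
for M_task in tasks:
    Phi = rng.normal(size=(n, d))                            # features phi(s, a)
    Y = Phi @ M_task + 0.05 * rng.normal(size=(n, d_prime))  # noisy targets
    estimates.append(estimate_core(Phi, Y, np.zeros((d, d_prime))))
M_bias_hat = np.mean(estimates, axis=0)

# On a new test task with scarce data, regularizing toward the learned
# bias typically beats learning the task in isolation.
M_test = M_star_bias + 0.1 * rng.normal(size=(d, d_prime))
Phi_test = rng.normal(size=(n // 10, d))
Y_test = Phi_test @ M_test + 0.05 * rng.normal(size=(n // 10, d_prime))

M_hat_meta = estimate_core(Phi_test, Y_test, M_bias_hat, lam=5.0)
M_hat_iso = estimate_core(Phi_test, Y_test, np.zeros((d, d_prime)), lam=5.0)
print("meta-regularized error:", np.linalg.norm(M_hat_meta - M_test))
print("isolated error:       ", np.linalg.norm(M_hat_iso - M_test))
```

Running the sketch, the meta-regularized estimate is typically closer to the test task's true matrix than the isolated one, mirroring the transfer-regret improvement the abstract claims for high-bias, low-variance task distributions.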
