Duality-based Residual Estimation for Fully Offline Value-based Reinforcement Learning
Kohei Miyaguchi
Abstract
Value-based reinforcement learning (RL) efficiently handles high-dimensional state spaces, but existing methods lack a principled approach to hyperparameter tuning without online interaction, limiting their use in safety-critical and data-scarce domains. We propose the Duality-based Residual Estimator (DRE), a simple offline validation metric for value-based offline RL. DRE is compatible with standard value-based off-policy evaluation (OPE) and enables automatic hyperparameter selection. Our results address a key theoretical bottleneck toward fully offline value-based RL, enabling deployment without online tuning.